#BeautyGate Explained: What iPhone XS is and isn't doing to your selfies

iPhone XS Depth Change (Image credit: Rene Ritchie / iMore)

Our protests have been heard and our long international selfie nightmare will soon be over: Apple will be "fixing" the computational photography algorithms used for selfies to eliminate the smoothing and warming that led to the misconception that a "beauty filter" was being applied to our faces.

Nilay Patel first reported the good news on The Verge:

Apple told me that the forthcoming iOS 12.1 update, currently in public beta, will address the issue of the front camera appearing to smooth out skin by picking a sharper base frame for Smart HDR, but I wasn't able to test it yet.

I've heard the same. So, if you hated the old look, get ready for a new look as soon as iOS 12.1 drops... maybe at or following Apple's October 30 event?

I've covered the new iPhone XS imaging system in both my initial review and my 3-weeks-later review, but one particularly bad bit of intel just keeps making the rounds, so I wanted to do a real explainer to lay it to rest once and for all.

So, #BeautyGate or #SmoothSelfieGate began as most quote-unquote gates do: A combination of people with legitimate questions and concerns and those who are super eager to amplify anything they can in order to get attention, even and especially if it does nothing to answer those questions or address those concerns. Add internet and… wildfire.

But, if you dig into why people are concerned and you go beyond those whose sole intent is to sensationalize, you get to something truly fascinating:

The ongoing evolution from camera system to computational photography system.

Rather watch than read? Check out the video above and subscribe for more.

Computational photography

If you're not familiar with computational photography, you're probably familiar with two of its biggest watershed moments: Google's Auto Awesome, which sucked photos up from a dizzying array of different hardware and processed them in the cloud to give you the best possible results, and Apple's iPhone 7 Plus Portrait Mode, which simulated the bokeh of big, prime-lens glass using two separate, tiny phone cameras.

In other words, using the near-limitless potential of custom silicon and machine learning to go far beyond the physical limits of sensors and lenses.

And with iPhone XS and Smart HDR, Apple is using computational photography to a far greater degree than ever before. So much so that we're seeing #BeautyGate or #SmoothSelfieGate as a result.

So, to get into it: iPhone XS has not just a new camera system this year but a whole new imaging pipeline.

The iPhone XS camera

iPhone XS (Image credit: iMore)

It starts with the wide angle on the back. It's still an f/1.8 lens with a 12-megapixel sensor, but it's got bigger pixels, up from 1.22 to 1.4 microns, to drink in more light, and deeper pixels, up from 3.1 to 3.5 microns, to keep that light from getting cross-contaminated. It's also got more focus pixels (Apple's name for phase-detection autofocus), so it can latch onto your subject twice as fast as before.

It results in a sensor that, according to John Gruber's calculations on Daring Fireball, is 30% larger and a lens that moves from a focal equivalent of 28mm to 26mm.
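As a quick back-of-envelope check (my arithmetic, not Apple's or Gruber's published math): the pixel count is unchanged at 12 megapixels, so the sensor's area should grow by roughly the square of the pixel-pitch ratio, which lands right around that figure.

```swift
import Foundation

// Back-of-envelope check on the "30% larger" figure, assuming the pixel
// count stays at 12MP and all of the gain comes from bigger pixels.
let oldPitch = 1.22  // microns, iPhone X wide-angle pixels
let newPitch = 1.40  // microns, iPhone XS wide-angle pixels

// With the pixel count fixed, sensor area scales with the square of the pitch.
let areaRatio = (newPitch / oldPitch) * (newPitch / oldPitch)
print(String(format: "Sensor area increase: ~%.0f%%", (areaRatio - 1) * 100))
// Prints: Sensor area increase: ~32%
```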

It's so different it's led to some speculation that Apple switched from Sony to Samsung as its supplier, with all the changes in characteristics that come with it.

Now, computational front-facing selfies have little to do with rear-facing hardware, but I wanted to highlight just how much and how deeply everything has changed this year.

The RGB camera in the TrueDepth array also has a new sensor on iPhone XS. Apple's only said that it's twice as fast as last year, but that's just burying the lede.

The biggest change is that Apple is tying the new 8-core neural engine in the A12 Bionic into the image signal processor to not only do more, but to do more faster.

Back to the rear camera for a moment, only because Apple has provided more detail on what it's doing there, but I don't believe the processes are that dissimilar.

From the moment you open the camera, it starts buffering so that there's zero shutter lag when you go to take the photo. Like I said in my review, that's not new. That it can buffer 4 frames now in order to better isolate and capture motion is new. At the same time, it's also capturing underexposed versions of each frame to preserve highlight details. And, once you take the shot, it's capturing a long exposure as well so you can get even greater details from the shadows.
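Apple hasn't published how Smart HDR does any of this internally, but the public AVFoundation bracketing API gives a feel for the underexposed-frames idea. Here's a minimal sketch, where the exposure biases are illustrative values I picked, not anything Apple has documented:

```swift
import AVFoundation

// Minimal sketch: ask AVFoundation for a small exposure bracket so that
// underexposed frames preserve highlight detail. This is the public
// bracketing API, not Apple's internal Smart HDR pipeline.
func captureExposureBracket(with photoOutput: AVCapturePhotoOutput,
                            delegate: AVCapturePhotoCaptureDelegate) {
    // One normal frame plus two underexposed frames; the EV biases here
    // are illustrative values, not anything Apple has published.
    let biases: [Float] = [0.0, -1.0, -2.0]
    let bracketedSettings = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings
            .autoExposureSettings(exposureTargetBias: $0)
    }

    let settings = AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,        // no RAW in this sketch
        processedFormat: nil,         // default processed format
        bracketedSettings: bracketedSettings
    )

    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```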

By the way, it's also doing something similar for 4K video at up to 30 frames per second by capturing extended dynamic range data in between each of those frames.

So, #BeautyGate...

The all-new, all-better optics, combined with the huge advance in computational photography Apple is calling Smart HDR, are what's leading to the new selfie look.

Specifically, what we're seeing with #beautygate is the result of the new noise reduction and extended dynamic range.

Matthew Panzarino, former pro photographer, current sneaker aficionado, and editor-in-chief of TechCrunch, tweeted it this way:

https://twitter.com/panzer/status/1046870351563505664

Sebastiaan de With, former Apple stitched-leather enabler, DoubleTwist Pentile Anti-Aliasing inventor, and current Leica shooter and Halide designer, did an amazing deep dive on how the new, higher dynamic range creates images very different from traditional contrast-based sharpening filters. From the Halide blog:

It's important to understand how our brains perceive sharpness, and how artists make things look sharper. It doesn't work like those comical CSI shows where detectives yell 'enhance' at a screen. You can't add detail that's already been lost. But you can fool your brain by adding small contrasty areas. Put simply, a dark or light outline adjacent to a contrasting light or a dark shape. That local contrast is what makes things look sharp.

To enhance sharpness, simply make the light area a bit lighter near the edge, and the dark area a bit darker near the edge. That's sharpness.

The iPhone XS merges exposures and reduces the brightness of the bright areas and reduces the darkness of the shadows. The detail remains, but we can perceive it as less sharp because it lost local contrast. In the photo above, the skin looks smoother simply because the light isn't as harsh.

Observant people noticed it isn't just skin that's affected. Coarse textures and particularly anything in the dark — from cats to wood grain — get a smoother look. This is noise reduction at work. iPhone XS has more aggressive noise reduction than previous iPhones.

Seb goes on to note that it's a result of just how fast iPhone XS takes photos now, with faster shutter speeds and higher ISO, and that the noise that comes with that speed requires new and different kinds of reduction.

On the rear-facing camera, with its big, bright sensor, even in low light, it's not as noticeable. On the much smaller front-facing camera sensor, it's more noticeable.
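To make the local-contrast point concrete: traditional sharpening is basically an unsharp mask, which brightens the light side of an edge and darkens the dark side. Core Image ships a filter for exactly that; here's a minimal sketch, with radius and intensity values that are arbitrary picks of mine rather than anything from the XS pipeline:

```swift
import CoreImage

// Traditional local-contrast sharpening: an unsharp mask makes the light
// side of an edge a bit lighter and the dark side a bit darker, which is
// exactly the "crunchy" look the merged-exposure pipeline trades away.
func sharpen(_ input: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIUnsharpMask") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(2.5, forKey: kCIInputRadiusKey)    // size of the edge halo (arbitrary)
    filter.setValue(0.7, forKey: kCIInputIntensityKey) // how hard to push the contrast (arbitrary)
    return filter.outputImage
}
```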

So, it all comes down to this, which is what it always comes down to: Design is compromise and engineering is trade-offs.

With iPhone XS, you get much better exposure, much better dynamic range, detail in highlights and shadows, which means fewer blow-outs and less banding, and a much, much higher tolerance for back, side, or just plain bad lighting. Which means more better selfies in more badder places. But it comes at the cost of what's traditionally been seen as edge detail and image texture.

And, yeah, it has nothing to do with beauty filters or faces — the iPhone XS camera treats all similar objects similarly with this new pipeline.

It's similar to, if not the same as, what Austin Mann mentioned in his iPhone XS camera review: that the dynamic range is now so good he's finding it nearly impossible to shoot silhouettes anymore. Every step forward, dammit, leaves something behind.

Bring the noise

The bad news is, if you hate the way selfies look on iPhone XS, since there's no beauty mode, there's no way for Apple to add an on/off switch for the smoothing. It's an integral part of the entire process. Also, if you shoot RAW, as Seb also explains in his deep dive, you're in for a world of computationally-optimized hurt as neural networks and ISPs become increasingly, inextricably coupled together.

Apple, for its part, really, truly, deeply believes the new imaging pipeline is better than the previous one and better than what anyone else is doing today. If you disagree — and when it comes to the selfie results, I personally disagree hard — or soft, or smooth, or whatever — it's important to let Apple know. A lot. Because pipelines can and will be tweaked, updated, and improved over time.

And, like I said, if they can detect and preserve fabric, rope, cloud, and other textures, why not skin texture as well?

For now, if you want to avoid it, and you want to shoot in traditional RAW, you're going to have to get a third-party app and go manual.
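If you're curious what that looks like under the hood, here's the rough shape of what a third-party app does with the public AVCapturePhotoOutput API; a minimal sketch, not any particular app's code:

```swift
import AVFoundation

// Minimal sketch of a manual RAW capture: ask the photo output for a
// Bayer RAW format and skip the processed (Smart HDR) pipeline entirely.
// A real app also needs session setup, a delegate, and error handling.
func captureRAW(with photoOutput: AVCapturePhotoOutput,
                delegate: AVCapturePhotoCaptureDelegate) {
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else {
        print("RAW capture isn't available with this device or configuration")
        return
    }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
// In the delegate, photo.fileDataRepresentation() gives you DNG data to
// write to disk or hand off to a RAW editor.
```

Shooting that way gets you the sensor data before Smart HDR's noise reduction ever touches it, at the cost of doing all the processing yourself.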
