Your second photo shows the over-processing, probably caused by Deep Fusion[*]: I can see a cartoonish effect on the face and clothes of the person on the TV. I see the same effect on people's faces on my 13 mini.
The 13 series models have larger pixels on the sensor in order to gather more light in low-light scenes. For the same sensor area, larger pixels mean fewer of them, so less real-world information is captured by the sensor = a less sharp image. Apple thinks it can outsmart this with its Deep Fusion image-enhancement algorithms, but my perception is that, overall, the 13 series cannot produce better images than older models except in low-light conditions.
I have had an authorised service provider replace the camera, and nothing changed!
At this moment, everyone affected has one or more of these options:
- return if within the return window
- sell the phone
- wait and hope for adjustments to the Deep Fusion processing in a software update, but remember you cannot beat the laws of physics (the pixels on the sensor are larger!)
- wait and hope that we are a small number of people with defective cameras (I don't remember where I read that they had a shortage of supplies for cameras made in Vietnam) *AND* that Apple acknowledges the issue and initiates a recall program.
I have personally stopped using it and will try selling my mini at an obvious loss.
[*] https://www.howtogeek.com/445014/what-is-the-deep-fusion-camera-on-the-iphone-11/
«According to Apple, the new mode uses the iPhone 11’s new A13 Bionic chip to do “pixel-by-pixel processing of photos, optimizing for texture, details, and noise in every part of the photo.” In essence, it works similarly to the iPhone camera’s Smart HDR, which takes several shots at varying exposures and combines them to maximize the clarity in the finished image. Where they differ is in the amount of information that needs to be processed.
What Deep Fusion is doing in the background is quite complicated. When the user presses the shutter button in medium light, the camera immediately takes nine pictures: four short images, four secondary images, and one long exposure photo. It fuses the long-exposure with the best among the short images. Then, the processor goes pixel by pixel and selects the best elements from both to create the most detailed photo possible. All of that takes place in one second.»
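For anyone curious what "goes pixel by pixel and selects the best elements" could mean in practice, here is a tiny illustrative sketch in Python. This is my own toy approximation, not Apple's actual pipeline: the Laplacian-based detail score and the winner-takes-all selection are assumptions standing in for whatever texture metric Deep Fusion really uses.

```python
import numpy as np

def local_detail(frame: np.ndarray) -> np.ndarray:
    """Per-pixel detail score: magnitude of a simple Laplacian.
    (A stand-in for whatever texture metric Deep Fusion really uses.)"""
    padded = np.pad(frame, 1, mode="edge")
    lap = (4 * frame
           - padded[:-2, 1:-1] - padded[2:, 1:-1]
           - padded[1:-1, :-2] - padded[1:-1, 2:])
    return np.abs(lap)

def fuse(frames: list[np.ndarray]) -> np.ndarray:
    """Pixel-by-pixel fusion: for each pixel, keep the value from
    the frame whose neighbourhood shows the most detail."""
    scores = np.stack([local_detail(f) for f in frames])  # (n, H, W)
    best = np.argmax(scores, axis=0)                      # (H, W)
    stacked = np.stack(frames)                            # (n, H, W)
    return np.take_along_axis(stacked, best[None], axis=0)[0]

# Toy example: four noisy captures of the same synthetic scene.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(4)]
result = fuse(frames)
print(result.shape)
```

The real algorithm obviously does far more (exposure alignment, noise modelling, machine-learned weighting), but the sketch shows why such processing can produce the "painted" look people complain about: hard per-pixel choices between frames can exaggerate texture unevenly.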
[Edited by Moderator]