- Deep Fusion uses advanced machine learning
- It leverages the Neural Engine on A13 Bionic chip
- The feature optimises texture, details, and noise
Deep Fusion, the image processing system that Apple showcased last month, is now available on the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max. The feature comes as an intrinsic part of the iOS 13.2 developer and public beta builds, both of which can be downloaded by all users who have enrolled in beta testing. While previewing Deep Fusion at the iPhone 11 series launch last month, Apple revealed that the new feature uses machine learning algorithms to enhance images.
By leveraging the power of the Neural Engine on the new A13 Bionic chip, Deep Fusion uses advanced machine learning to do “pixel-by-pixel processing of photos, optimising for texture, details, and noise.” Apple Senior Vice President Phil Schiller, while previewing the new feature last month, called it “computational photography mad science.”
Apple hasn’t provided any dedicated option to enable or disable Deep Fusion, and the feature isn’t visible in the EXIF data of processed images. However, you’ll notice the difference in shots that have been processed using the new technique.
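For readers who want to check their own photos, a quick way to see whether an image carries any EXIF metadata at all is to scan its JPEG segment markers. The sketch below is a minimal, illustrative Python snippet using only the standard library; it is not an Apple-documented method, and it only detects the presence of an Exif segment, not Deep Fusion itself (which, as noted, leaves no EXIF trace).

```python
# Illustrative sketch: check whether a JPEG file contains an
# Exif APP1 segment by walking its segment markers. Stdlib only.
import struct

def has_exif_segment(path):
    """Return True if the JPEG at `path` contains an Exif APP1 segment."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":        # SOI marker missing: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                 # malformed file or end of data
            if marker[1] == 0xDA:            # SOS: image data begins, no Exif found
                return False
            (length,) = struct.unpack(">H", f.read(2))
            payload = f.read(length - 2)
            if marker[1] == 0xE1 and payload.startswith(b"Exif\x00\x00"):
                return True                  # APP1 segment with Exif header found

# Hypothetical usage:
# print(has_exif_segment("IMG_0001.jpg"))
```

A full tag dump would need an EXIF parser (for example, the third-party Pillow library), but even this marker scan is enough to confirm that Deep Fusion adds no telltale metadata of its own.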