martin-marek / hdr-plus-swift

📸Night mode on any camera. Based on HDR+.
https://burst.photo
GNU General Public License v3.0

Apple ProRaw support #14

Open bmatthias opened 1 year ago

bmatthias commented 1 year ago

Unfortunately, Apple ProRaw photos taken with the iPhone 14 Pro don't work. I tried converting them with Adobe DNG Converter, but the result is the same: "Unsupported Format".

Alex-Vasile commented 1 year ago

See #7.

Might be fixed by #13?

bmatthias commented 1 year ago

It doesn't segfault, though. It just displays the "Unsupported Format" dialog, so I'm not sure it's the same issue as #7.

Let me know where to find any logs etc., which might help.

I will also try building #13 and report back. EDIT: Same problem with the build from #13. The error thrown is "ERROR: MosaicInfo is null."

That is in dng_sdk_wrapper.cpp line 64:

// get size of mosaic pattern
// - this affects how raw pixels are aligned
// - it is assumed that the pattern is square
const dng_mosaic_info* mosaic_info = negative->GetMosaicInfo();
if (mosaic_info == NULL) {
    *mosaic_pettern_width = 1;
    printf("ERROR: MosaicInfo is null.\n");
    return 1;
} else {
    dng_point mosaic_pettern_size = mosaic_info->fCFAPatternSize;
    *mosaic_pettern_width = mosaic_pettern_size.h;
}
martin-marek commented 1 year ago

I believe the issue is that ProRaw files are demosaiced, whereas Burst Photo currently only supports Bayer and Fuji X-Trans raw images (that are not yet demosaiced).
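To make that concrete, here is a rough sketch of how the dng_sdk exposes the distinction (the helper name classify_raw is made up; the accessors are the same ones used in dng_sdk_wrapper.cpp above). Bayer sensors carry a 2x2 CFA pattern, Fuji X-Trans a 6x6 one, and demosaiced files like ProRaw carry no mosaic info at all:

#include "dng_negative.h"
#include "dng_mosaic_info.h"

// Rough sketch, not actual Burst Photo code: classify a DNG by its
// CFA pattern size. Demosaiced files (e.g. ProRaw / linear DNG) have
// no CFA pattern, so GetMosaicInfo() returns NULL for them.
const char* classify_raw(const dng_negative* negative) {
    const dng_mosaic_info* mosaic_info = negative->GetMosaicInfo();
    if (mosaic_info == NULL) {
        return "demosaiced (e.g. ProRaw / linear DNG)";
    }
    const dng_point size = mosaic_info->fCFAPatternSize;
    if (size.h == 2 && size.v == 2) return "Bayer";
    if (size.h == 6 && size.v == 6) return "X-Trans";
    return "unsupported mosaic pattern";
}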

@chris-rank is currently working on a complete rewrite of the merging algorithm (making it nearly identical to Google's). I believe his implementation actually only supports Bayer raw images. For Fuji X-Trans, he still plans to use the current merging pipeline.

Adding support for demosaiced images would certainly be possible. But I don't think ProRaw files are a good use case – they've already been stacked from a burst of raw images. Or do you think it would be useful?

bmatthias commented 1 year ago

Yes, the ProRaw files are demosaiced, that's probably why GetMosaicInfo() returns NULL.

I was able to make it work by simply setting *mosaic_pettern_width = 2; instead of calling negative->GetMosaicInfo(). Some image stacks from the iPhone actually produced nicely improved results with this; others failed to align the images correctly.
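In other words, the branch in dng_sdk_wrapper.cpp roughly becomes the following (just my hack, not a proper fix):

// Workaround sketch: instead of erroring out when mosaic info is
// missing, pretend the input uses a 2x2 (Bayer-sized) pattern.
const dng_mosaic_info* mosaic_info = negative->GetMosaicInfo();
if (mosaic_info == NULL) {
    // ProRaw / linear DNG: no CFA pattern. Forcing a width of 2 lets
    // the pipeline run, but the alignment assumptions no longer hold,
    // which probably explains the stacks that fail to align.
    *mosaic_pettern_width = 2;
} else {
    dng_point mosaic_pettern_size = mosaic_info->fCFAPatternSize;
    *mosaic_pettern_width = mosaic_pettern_size.h;
}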

You're right that the ProRaw files are already the result of stacking, and Apple's built-in image processor has already performed its magic on them (the same applies to RAW files from Google, Samsung, Vivo, etc.). There's hardly any potential for improvement on the 14 Pro's main sensor; it's amazingly good. But there's still huge potential on the ultra-wide and tele sensors, which produce extremely noisy images compared to other phones with bigger sensors and, especially, better stacking algorithms. Especially on the tele sensor, Apple's optimization is very poor compared to e.g. Samsung or Vivo.

chris-rank commented 1 year ago

For me it is a bit of a question about the scope / intended use of Burst Photo. Is it a tool that brings computational photography (merging of bursts) to digital cameras, applied to mosaiced RAWs BEFORE raw processing? Or is it a more general tool that can also be applied to the demosaiced images of smartphone cameras, and to RGB images in general, AFTER raw processing?

From my point of view, the latter purpose could be added, as the alignment and merging methods could also be applied to already-demosaiced images. However, it would be quite some work to adapt the pipeline for that use case. It is interesting to read that the probably much more sophisticated burst pipeline of the iPhone 14 Pro is not yet good enough, and that an additional application of Burst Photo may potentially improve results.

However, "RAW" files from smartphones may lead to several issues, since in reality these files have heavy processing applied and are closer to RGB images than to RAWs. As an example, I would expect that the images are already tone-mapped, and as a consequence noise models based on pixel intensity would no longer work correctly.
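To make the tone-mapping point concrete, here is a toy sketch (first-order error propagation under an assumed gamma curve; none of this is Burst Photo code, and the constants are arbitrary):

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Toy illustration: a Poisson-Gaussian noise model is affine in the
// *linear* sensor signal x, i.e. var(x) = a*x + b.
static double linear_variance(double x, double a, double b) {
    return a * x + b;
}

int main() {
    const double a = 0.01, b = 1e-4, gamma = 1.0 / 2.2;
    for (double x : {0.01, 0.1, 0.5, 1.0}) {
        // A simple gamma curve stands in for the (unknown) tone mapping.
        double y  = std::pow(x, gamma);              // stored pixel value
        double fp = gamma * std::pow(x, gamma - 1);  // derivative f'(x)
        // First-order error propagation: var(f(x)) ~= f'(x)^2 * var(x).
        double var_mapped = fp * fp * linear_variance(x, a, b);
        // A model fitted directly on the stored values would need
        // var = a*y + b; the two columns below diverge, so an
        // intensity-based noise model breaks on tone-mapped data.
        printf("y = %.3f   propagated var = %.2e   affine-in-y guess = %.2e\n",
               y, var_mapped, a * y + b);
    }
    return 0;
}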

bmatthias commented 1 year ago

For me the main use cases would be:

  1. Bringing HDR+ to mirrorless/full-frame cameras because smartphones are so much better at this.
  2. Bringing super-resolution and denoising to smartphone cameras that support RAW.

You already implemented the first one; the second one is what I would like to add :-)

The “Handheld Multi-Frame Super-Resolution” paper by Wronski et al. from Google Research (https://doi.org/10.1145/3306346.3323024) is some kind of magic. Using GCam on an Android phone with the upscaling feature enabled, I can produce more detailed images from an 8 MP 1/4"-sized smartphone sensor than from a 24 MP APS-C sensor. That is real detail created by the multi-frame stacking algorithm, not some fake AI-produced detail that usually just looks awful.
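The core of the paper, stripped of all its machinery, is surprisingly simple: subpixel-aligned raw samples from the burst are splatted onto a finer output grid with kernel weights. A heavily simplified sketch of that accumulation step (isotropic Gaussian kernels instead of the paper's anisotropic ones, and no robustness weights; all names here are illustrative):

#include <cmath>
#include <vector>

// Sketch of the accumulation core of handheld multi-frame
// super-resolution (Wronski et al. 2019), heavily simplified.
struct Sample { double x, y, value; };  // subpixel-aligned position + raw value

void accumulate(const std::vector<Sample>& samples,
                int out_w, int out_h, double sigma,
                std::vector<double>& num, std::vector<double>& den) {
    num.assign(out_w * out_h, 0.0);
    den.assign(out_w * out_h, 0.0);
    for (const Sample& s : samples) {
        // Splat each sample into a small neighborhood of output pixels.
        int cx = (int)std::lround(s.x), cy = (int)std::lround(s.y);
        for (int dy = -1; dy <= 1; ++dy) {
            for (int dx = -1; dx <= 1; ++dx) {
                int px = cx + dx, py = cy + dy;
                if (px < 0 || px >= out_w || py < 0 || py >= out_h) continue;
                double d2 = (px - s.x) * (px - s.x) + (py - s.y) * (py - s.y);
                double w = std::exp(-d2 / (2.0 * sigma * sigma));
                num[py * out_w + px] += w * s.value;
                den[py * out_w + px] += w;
            }
        }
    }
    // Final image: num[i] / den[i] wherever den[i] > 0. With samples from
    // several handheld frames, the subpixel offsets fill in the finer grid.
}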

chris-rank commented 1 year ago

That sounds great. There is a CUDA implementation of that algorithm on GitHub: https://github.com/kunzmi/ImageStackAlignator

One thing to consider is the workflow, as this super-resolution approach would also perform demosaicing. The elegant feature of HDR+ is that it operates on mosaiced RAWs and produces a DNG that behaves virtually identically to the input file, i.e. the user can apply the same RAW workflow as before. That allows them to stay in the RAW converter of their choice without any constraints. Still, I am very impressed by these super-resolution approaches, which are well-known in other scientific fields such as medical imaging and microscopy.