yuanming-hu / exposure

Learning infinite-resolution image processing with GAN and RL from unpaired image datasets, using a differentiable photo editing model.
MIT License

is it possible to replace the commercial preprocessing by an invokable API? #36

Open Vandermode opened 5 years ago

Vandermode commented 5 years ago

Hi, very appealing work! I would like to know whether it is possible to replace the inconvenient commercial preprocessing (i.e., the operations implemented in Lightroom mentioned in your wiki) with an invokable API, since in most cases we should start from the unprocessed raw data rather than the demosaicked one.

In practice, I tried using dcraw/LibRaw to preprocess the mosaicked raw data into the ProPhoto RGB space, but unfortunately no decent results were obtained when post-processing with the 'exposure' framework.

Besides, I was also wondering how you can ensure that Adobe Lightroom behaves as intended. In my understanding, the intended preprocessing consists of three steps (see the sketch below): 1) white balance; 2) demosaicking of the (Bayer) raw data; 3) conversion from the linear camera-specific raw space into the linear ProPhoto RGB space via a 3x3 color transformation matrix.
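For concreteness, here is a minimal sketch of those three steps using rawpy (the Python bindings for LibRaw). The parameter choices are my assumptions for approximating such an export, not the authors' confirmed Lightroom settings:

```python
import rawpy
import numpy as np

def preprocess_raw(path):
    """White-balance, demosaick, and convert a raw file to linear ProPhoto RGB."""
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,                              # 1) white balance (as-shot)
            demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,  # 2) demosaick the Bayer data
            output_color=rawpy.ColorSpace.ProPhoto,          # 3) camera RGB -> ProPhoto RGB
            gamma=(1, 1),                                    # keep the output linear
            no_auto_bright=True,                             # avoid hidden exposure changes
            output_bps=16,                                   # 16-bit output
        )
    # Normalize to [0, 1] floats for downstream processing.
    return rgb.astype(np.float32) / 65535.0
```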

Nevertheless, it seems an additional step, e.g. gamma correction, is also applied by Lightroom, since I find that in your implementation a linearization step is necessary to invert the effect of the non-linear gamma correction. I also have no idea how you figured out the parameters of that gamma correction (is it just an approximation like 1/2.2?)
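For reference, the linearization speculated about here would be a one-liner; the 2.2 exponent below is only the guess from the question, not a confirmed Lightroom value (ProPhoto RGB nominally uses a gamma of 1.8 with a small linear toe):

```python
import numpy as np

def linearize(img, gamma=2.2):
    """Invert an assumed power-law encoding of 1/gamma.

    gamma=2.2 is only the approximation mentioned above; the actual
    Lightroom/ProPhoto encoding may differ.
    """
    return np.clip(img, 0.0, 1.0) ** gamma
```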

In summary, my question is: what exactly did you intend to do in Adobe Lightroom, and can it be replaced with an invokable API?

Thank you very much!

yuanming-hu commented 5 years ago

Sorry for the very long delay - I've been occupied by the upcoming SIGGRAPH deadline. I actually tried using dcraw at an early stage of this project, and it worked well. Later I switched to Adobe Lightroom to follow the "standard" way to utilize the data.
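For anyone who wants to reproduce the dcraw route, an invocation along the following lines should give a linear, camera-white-balanced ProPhoto TIFF; the flags are standard dcraw options, but treating this as equivalent to the Lightroom export is an assumption:

```python
import subprocess

def dcraw_to_linear_prophoto(raw_path):
    """Invoke dcraw to produce a linear 16-bit ProPhoto RGB TIFF.

    -w   : use the camera's as-shot white balance
    -o 4 : output colorspace 4 = ProPhoto RGB
    -4   : linear 16-bit output (no gamma, no auto-brightening)
    -T   : write a TIFF instead of a PPM
    """
    subprocess.run(["dcraw", "-w", "-o", "4", "-4", "-T", raw_path], check=True)
```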

In fact, there is no guarantee that the gamma is correctly restored and that we start from linear RGB images. The colorspace relationship between the input and the "recovered" output is actually very complicated (given the complexity of ProPhoto RGB, the closed-source Lightroom processing pipeline, etc.), and based on the previous work I know of, people simply assume that what we are doing is a good enough approximation.