mermerico / filmulator

A film emulator with all of the positives and none of the negatives

Integration of filmulator into a separate project #1

Closed · aferrero2707 closed this issue 9 years ago

aferrero2707 commented 9 years ago

First of all, congratulations on this excellent tool!

I'm developing a new layer-based FOSS photo editor (https://github.com/aferrero2707/PhotoFlow) and I would like to integrate your code, if possible. My program already provides its own RAW processing engine (based on libraw and code derived from RawTherapee), and I'm wondering what would be the right point in the pipeline to introduce your film emulation. The processing steps are basically the following:

  1. RAW decoding
  2. exposure compensation
  3. white balance
  4. demosaicing (amaze and igv are implemented at the moment)
  5. color processing (application of camera input profile and conversion to working colorspace)

Where should I introduce the film emulation? After step 4, the data is still in linear RGB format.
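For clarity, here is a minimal sketch of that stage order; the types and function names are placeholders, not actual PhotoFlow code:

```cpp
// Placeholder sketch of the pipeline stages described above.
#include <vector>

struct Image { std::vector<float> pixels; int width = 0, height = 0; };

Image decode_raw(const Image& in)          { return in; } // 1. RAW decoding
Image exposure_compensate(const Image& in) { return in; } // 2. exposure compensation
Image white_balance(const Image& in)       { return in; } // 3. white balance
Image demosaic(const Image& in)            { return in; } // 4. demosaicing (amaze/igv)
Image color_process(const Image& in)       { return in; } // 5. camera profile -> working space

Image develop(const Image& raw) {
    Image img = decode_raw(raw);
    img = exposure_compensate(img);
    img = white_balance(img);
    img = demosaic(img);       // data is still linear RGB here
    img = color_process(img);
    // ...where should the film emulation go?
    return img;
}
```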

Another question is whether the code could be used in tile-based processing. In other words, how many pixels have to be accessed around the one being processed? Or does it need to access the whole image data all the time?

Thanks a lot!

CarVac commented 9 years ago

Thanks for the interest.

In the context of this program, the algorithm takes in linear data in the sRGB color space and outputs what is effectively tone-curved data, still in the sRGB color space. It would still work properly with a different working color space, but the wider the gamut of the working color space, the smaller the saturation-boosting effect on bright colors: the farther apart the components' values are, the more the difference gets exaggerated. Basically, it should happen after color processing (step 5), but you shouldn't apply any tone curve before it.
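To put that placement in pipeline terms, here is a minimal sketch; `filmulate()` is only a stand-in for the Filmulator core, not its real API, and the other names are placeholders:

```cpp
// Placement sketch only; filmulate() is a stand-in for the Filmulator core,
// not its actual API, and the other functions are placeholders.
#include <vector>

struct Image { std::vector<float> pixels; int width = 0, height = 0; };

Image color_process(const Image& in) { return in; } // camera profile -> working space (linear)
Image filmulate(const Image& in)     { return in; } // film simulation: linear in, tone-curved out
Image tone_curve(const Image& in)    { return in; } // user curves / output transform

Image render(const Image& linear_rgb) {
    Image img = color_process(linear_rgb); // step 5: still linear, no tone curve applied
    img = filmulate(img);                  // film emulation goes here, on linear data
    img = tone_curve(img);                 // curves, gamma, etc. come afterwards
    return img;
}
```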

The algorithm is not really tileable. There is shared global state: the developer concentration in the inactive reservoir diffuses with every individual pixel's active layer, so you cannot process any single part of the image on its own without easily visible luminance errors. Furthermore, there are very large-radius Gaussian blurs of the developer active layer (on the order of a hundred-pixel radius), and the padding they would require would waste a lot of computation if tiled.
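To illustrate why that global state matters, here is a schematic sketch (not the actual Filmulator code): every pixel exchanges developer with a single shared reservoir, so the reservoir concentration after a development step depends on the whole frame, and a tile processed in isolation would see a different reservoir.

```cpp
// Schematic illustration of the global coupling, not the real implementation.
#include <vector>

struct FilmState {
    std::vector<float> active_developer; // per-pixel developer in the active layer
    float reservoir_developer;           // one shared concentration for the whole image
    float reservoir_volume;
    float active_volume_per_pixel;
};

void diffuse_with_reservoir(FilmState& s, float exchange_rate) {
    float reservoir_total = s.reservoir_developer * s.reservoir_volume;
    for (float& d : s.active_developer) {
        // Move developer between this pixel's active layer and the shared reservoir.
        float delta = exchange_rate * (s.reservoir_developer - d);
        d += delta;
        reservoir_total -= delta * s.active_volume_per_pixel;
    }
    // The new reservoir concentration depends on every pixel in the image,
    // which is why no tile can be filmulated correctly on its own.
    s.reservoir_developer = reservoir_total / s.reservoir_volume;
}
```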

Finally, I'd suggest you use code from https://github.com/CarVac/filmulator-gui. It is more recent, with revised and more easily understood variable names, and the core pipeline folder can still stand basically alone without depending on Qt. Additionally, we are planning significant performance improvements there by replacing the C++ implementation of the algorithm with a Halide implementation, so you would be better off importing code from that repository.

You should also open this issue in CarVac/filmulator-gui instead of this repository. If you have any more questions, we'd be happy to help over in that project.