darktable-org / darktable

darktable is an open source photography workflow application and raw developer
https://www.darktable.org
GNU General Public License v3.0

rework the final-scale and make it possible to have a sharpen after downscaling #13682

Open | TurboGit opened this issue 1 year ago

TurboGit commented 1 year ago

For the final-scale, the discussion is to happen here.

For the final sharpen, the discussion is also to happen here.

I had this proposal:

This issue is to track the discussion about the implementation of those two points. Please keep this discussion focused.

EDIT: Initial discussion was in #13635.

EDIT: For reference, having finalscale do its job on linear data is a good goal for quality.

EDIT: Moving the downscale before any tone curve (Sigmoid, Filmic...) means that all modules after that will work on down-scaled images; what is the impact on quality?

wpferguson commented 1 year ago

What kind of sharpening will it be?

If we use unsharp masking, then a threshold could be introduced to avoid sharpening noise.

I implemented sharpening after export with postsharpen.lua. I exported the image, then ran it through ImageMagick's convert, usually using unsharp sharpening. I also tried just using the sharpen function of convert, but got better results using unsharp because of the noise threshold. One thing I did notice was larger file sizes after sharpening.
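
For reference, the unsharp-with-threshold idea reduced to a minimal standalone C sketch (postsharpen.lua itself just shells out to ImageMagick; the single-channel layout and the crude box blur here are simplifications for illustration only):

```c
/* Unsharp mask with a noise threshold: sharpen only where the local detail
 * exceeds the threshold, so low-amplitude noise is left alone.
 * Illustrative sketch, not darktable or postsharpen.lua code. */
#include <math.h>
#include <stdlib.h>

static void unsharp_mask(float *img, int w, int h, float amount, float threshold)
{
  float *blur = malloc((size_t)w * h * sizeof(float));
  if(!blur) return;

  // crude 3x3 box blur standing in for the Gaussian low-pass
  for(int y = 0; y < h; y++)
    for(int x = 0; x < w; x++)
    {
      float sum = 0.0f;
      int n = 0;
      for(int dy = -1; dy <= 1; dy++)
        for(int dx = -1; dx <= 1; dx++)
        {
          const int yy = y + dy, xx = x + dx;
          if(yy < 0 || yy >= h || xx < 0 || xx >= w) continue;
          sum += img[yy * w + xx];
          n++;
        }
      blur[y * w + x] = sum / n;
    }

  // add back the high-pass detail only where it exceeds the threshold
  for(int i = 0; i < w * h; i++)
  {
    const float detail = img[i] - blur[i];
    if(fabsf(detail) > threshold) img[i] += amount * detail;
  }

  free(blur);
}
```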

Marco Carrarini implemented Richardson-Lucy output sharpening in RL_out_sharp.lua. He used GMIC to do the sharpening. Since we link the gmic library, you might be able to use some of the sharpening routines in there. For that matter, one of the magick libraries is linked in, so that should be available too.

I've pretty much stopped post sharpening. Too often the images were (almost) over-detailed, especially after the introduction of Diffuse or Sharpen and the move to a higher-resolution camera.

Mark-64 commented 1 year ago

Marco Carrarini implemented Richardson-Lucy output sharpening in RL_out_sharp.lua

Yes, the RL algorithm does a really nice job and it doesn't amplify noise.
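
For reference, the Richardson-Lucy iteration reduced to a minimal 1-D standalone C sketch (RL_out_sharp.lua delegates the real work to G'MIC; the symmetric, normalized PSF and all names here are assumptions for a self-contained example):

```c
/* One-dimensional Richardson-Lucy deconvolution: repeatedly re-blur the
 * current estimate, compare it with the observed data and correct
 * multiplicatively. psf has 2*r+1 taps, assumed symmetric and normalized. */
#include <stdlib.h>

static void richardson_lucy(const float *observed, float *estimate, int n,
                            const float *psf, int r, int iterations)
{
  float *conv = malloc((size_t)n * sizeof(float));
  float *ratio = malloc((size_t)n * sizeof(float));
  if(!conv || !ratio) { free(conv); free(ratio); return; }

  for(int it = 0; it < iterations; it++)
  {
    // 1) re-blur the current estimate with the PSF (clamped borders)
    for(int i = 0; i < n; i++)
    {
      float s = 0.0f;
      for(int k = -r; k <= r; k++)
      {
        int j = i + k;
        if(j < 0) j = 0; else if(j >= n) j = n - 1;
        s += estimate[j] * psf[k + r];
      }
      conv[i] = s;
    }
    // 2) ratio of observed data to the re-blurred estimate
    for(int i = 0; i < n; i++)
      ratio[i] = observed[i] / (conv[i] + 1e-6f);
    // 3) correlate the ratio with the (symmetric) PSF, apply multiplicatively
    for(int i = 0; i < n; i++)
    {
      float c = 0.0f;
      for(int k = -r; k <= r; k++)
      {
        int j = i + k;
        if(j < 0) j = 0; else if(j >= n) j = n - 1;
        c += ratio[j] * psf[k + r];
      }
      estimate[i] *= c;
    }
  }

  free(conv);
  free(ratio);
}
```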

I've pretty much stopped post sharpening. Too often the images were (almost) over-detailed, especially after the introduction of Diffuse or Sharpen and the move to a higher-resolution camera.

I also hate oversharpened images and my D&S preset is very mild, but I still find it useful to apply some mild output sharpening after downscaling.

jenshannoschwalm commented 1 year ago

Just some first comments from my side. A reminder, if you want to understand how the pixelpipe works: -d pipe will give you all the required information. Also, if you want to analyse the processed data of a module in depth, you can use --dump-pipe a,b,c, with a, b and c being any modules like demosaic, gamma or whatever you are interested in.

Some points we have to remember:

  1. Whenever we work on an image in darkroom we use interpolated data depending on the part of the image presented in darkroom (ROI); this interpolation (crop & scale) is done either as part of demosaic for raws or as the first part of the pixelpipe for non-raws.
  2. While doing exports the above description is mostly correct in non-hq mode; with high quality we process the full data (no ROI) and apply finalscale at the end of the pixelpipe.
  3. The interpolator is chosen via a UI preference; the default is 'lanczos3', which tends to overshoot but gives some sharpness (see the kernel sketch after this list).
  4. gamma is not used while exporting.
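
For reference, the Lanczos-3 kernel that 'lanczos3' refers to, as a standalone sketch (not darktable's implementation). Its negative lobe is where both the extra perceived sharpness and the overshoot/ringing come from:

```c
/* Lanczos-3: a sinc windowed by a wider sinc, with support |x| < 3.
 * The kernel goes negative for roughly 1 < |x| < 2, which sharpens edges
 * but also overshoots around hard transitions. */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double lanczos3(double x)
{
  x = fabs(x);
  if(x < 1e-8) return 1.0;   // sinc(0) = 1
  if(x >= 3.0) return 0.0;   // outside the support
  const double pix = M_PI * x;
  return (sin(pix) / pix) * (sin(pix / 3.0) / (pix / 3.0));
}
```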

So

  1. As hq export and develop in darkroom differ on scaled/unscaled data, there can be a significant problem depending on the interpolator used (moiré effects, ...), especially on synthetic images. Easy to reproduce in darkroom with the test image provided in #13335.
  2. Point 1 explains for me why just moving finalscale to another position in the pipeline doesn't solve that problem at all. It doesn't seem to be a good "solution".
  3. The "bad color" discussed in #13335 is mostly not related to processing non-linear data.
  4. We have a lot of modules now that take the current scale into account when processing data. That sometimes leads to unavoidable differences ...

How to proceed? Not sure; some ideas:

  1. If we want some sharpening for exports, I think @TurboGit 's suggestion of a 'finalsharpen' for high-quality exports is a good idea. (It could also be done in finalscale itself, though.)
  2. Remove high-quality from the export options and always do so for true exports (not for thumbs), a) for quality and b) for more stable results.
  3. Can we have a better interpolator for pixelpipe scaling?
  4. I wouldn't be in favour of exposing finalscale to be moved around.

TurboGit commented 1 year ago

2. Remove high-quality from the export options and always do so for true exports (not for thumbs), a) for quality and b) for more stable results.

This means that exporting could be very memory hungry and slow. Some modules require large amounts of data, and in HQ mode I had some crashes (I haven't used this mode for a long time because of this). But maybe the situation has improved, as a lot of work has been done to reduce memory, speed up modules and have better tiling where needed.

TurboGit commented 1 year ago

  1. If we want some sharpening for exports, I think @TurboGit 's suggestion of a 'finalsharpen' for high-quality exports is a good idea. (It could also be done in finalscale itself, though.)

You mean just for "exports"? The goal is really to sharpen after down-scaling.

jenshannoschwalm commented 1 year ago

This means that exporting could be very memory hungry and slow. Some modules require large amounts of data, and in HQ mode I had some crashes (I haven't used this mode for a long time because of this). But maybe the situation has improved, as a lot of work has been done to reduce memory, speed up modules and have better tiling where needed.

The underlying problem with exporting is the way dt uses memory (both system RAM and OpenCL), so user experience might vary. We define how much memory may be used via a preference - many people have set that to large or unrestricted in the hope of better performance. Unfortunately, if you a) are exporting in the background while b) working in darkroom, the memory taken will likely double, resulting in a) possible OOM killing or significant swapping and b) OpenCL fallbacks, as the graphics memory requirements could not be fulfilled because the tiling calculations were wrong.

I don't see an "easy go" here if we want to keep exporting in the background as a feature. Maybe we can keep some sort of state flag (exporting under way) that will automatically halve the allowed memory, but that can easily be a shot in the foot.
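
A purely hypothetical sketch of that state-flag idea (none of these names exist in darktable, and it ignores OpenCL memory entirely):

```c
/* Hypothetical: halve the per-pipe memory budget while a background export
 * is running, so darkroom + export together stay near the configured limit. */
#include <stdatomic.h>
#include <stddef.h>

static atomic_int exports_in_flight;           // incremented around each export
static size_t configured_mem_limit_mb = 8192;  // the value from preferences

static size_t current_mem_budget_mb(void)
{
  return atomic_load(&exports_in_flight) > 0 ? configured_mem_limit_mb / 2
                                             : configured_mem_limit_mb;
}

static void export_begin(void) { atomic_fetch_add(&exports_in_flight, 1); }
static void export_end(void)   { atomic_fetch_sub(&exports_in_flight, 1); }
```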

jenshannoschwalm commented 1 year ago

You mean just for "exports"? The goal is really to sharpen after down-scaling.

Yes, that was how I understood it - add some sort of "postprocessing" after scaling, either down- or up-scaling.

jenshannoschwalm commented 1 year ago

Just a summary of the problems related to scaling & colorspace that we have:

As mentioned in #13335 and #13635, we likely use scaling operators at the wrong places in the pixelpipe, resulting in more or less subtle errors. (There is one more problem related to the initial report: we don't have any anti-aliasing/moiré filters while scaling, leading to strong artefacts in cases like the given example png file.)

We do input scaling while presenting image data in the darkroom and while exporting. This is fine for raws (we zoom in demosaic) and sraws (we zoom before rawprepare) - fine because we scale on linear data.

For other images like jpeg/tiff and friends, we scale on whatever data we get via rawprepare. So likely non-linear data (I don't know that code well, so I might be misled), and thus some errors get introduced.
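
A tiny standalone example of why scaling non-linear data introduces errors: downscaling boils down to averaging neighbouring pixels, and averaging sRGB-encoded values is not the same as averaging in linear light and re-encoding (illustrative sketch, not darktable code):

```c
/* Average a black and a white sRGB pixel, once on the encoded values and
 * once in linear light: the encoded average comes out visibly too dark. */
#include <math.h>
#include <stdio.h>

static double srgb_to_linear(double v)
{
  return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

static double linear_to_srgb(double v)
{
  return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

int main(void)
{
  const double a = 0.0, b = 1.0;        // two adjacent sRGB-encoded pixels

  const double encoded_avg = (a + b) / 2.0;
  const double linear_avg  = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2.0);

  // prints roughly 0.500 vs 0.735
  printf("encoded average: %.3f, linear-light average: %.3f\n", encoded_avg, linear_avg);
  return 0;
}
```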

We don't output-scale in the darkroom; we get data scaled by demosaic and apply the output color later in the pixelpipe. (BTW the gamma / display encoding module is not involved here, it just either passes data through or visualizes a mask.)

For exporting we have two modes. Normal quality works basically the same as darkroom mode: we scale at the demosaicer and apply output color before writing data.

High quality is somewhat special. Behind the scenes, not visible to the user, it enables the finalscale module and disables the scaling in the demosaicer. That is good, as all modules in the pipe between demosaic and finalscale work on full data. The problem here: "finalscale" comes after colorout, which is wrong in my view and introduces non-linear errors.

So my proposal would be to modify the iop-order of finalscale and move it to a fixed position in the pixelpipe just before "colorout". I wouldn't be in favour of making this user-changeable. Also, converting to linear in finalscale, doing the scaling and converting back wouldn't be nice imho.

A hidden "export tuning module" would have to go between finalscale and colorout.

kofa73 commented 1 year ago

Do all the modules that have size-related parameters process with scaled parameters to produce the preview image? For example, those related to sharpening and blurring. After all, a 3-pixel blur radius applied to a 6000x4000 pixel image and the same radius applied to the image scaled to 1500x1000 for preview will work differently.
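
Roughly what that scale-awareness has to look like, as an illustrative sketch (not darktable's actual API; names are made up):

```c
/* A radius the user specifies for the full-resolution image has to be scaled
 * by the ratio between the buffer actually processed and the full image,
 * e.g. 3 px at 6000 px width becomes 0.75 px for a 1500 px preview buffer. */
static float effective_radius(float user_radius_full_res,
                              int processed_width, int full_width)
{
  const float scale = (float)processed_width / (float)full_width;
  return user_radius_full_res * scale;
}
```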

TurboGit commented 9 months ago

@jenshannoschwalm :

So my proposal would be to modify the iop-order of finalscale and move it to a fixed position in the pixelpipe just before "colorout".

I'm not sure about this because after finalscale we will have:

TurboGit commented 9 months ago

BTW, this was planned for 4.4 so now rescheduling for 4.8.

jenshannoschwalm commented 9 months ago

I'm not sure about this because after finalscale we will have:

soften and splittoning are likely a problem; I confess I have no idea about either module, I think I never used them myself and never touched the code.

jenshannoschwalm commented 8 months ago

For the record: we might also have a look at "how do we scale". Our current algos don't work nicely with very high-frequency content and 0<->1 signal transitions as found in some synthetic images. One example from Troy Sobotka's test set: syntheticChart_rec709.01.zip
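
The textbook remedy for such content is to low-pass the source before resampling, so frequencies above the target Nyquist limit are attenuated instead of folding back as moiré. A minimal standalone sketch with an integer factor and a plain box prefilter (illustrative only, not a proposal for the actual interpolator code):

```c
/* Integer-factor downscale that averages each factor x factor block,
 * i.e. a box prefilter plus decimation, on a single-channel float image. */
#include <stdlib.h>

static float *downscale_box(const float *src, int w, int h, int factor,
                            int *out_w, int *out_h)
{
  const int ow = w / factor, oh = h / factor;
  float *dst = malloc((size_t)ow * oh * sizeof(float));
  if(!dst) return NULL;

  for(int y = 0; y < oh; y++)
    for(int x = 0; x < ow; x++)
    {
      float sum = 0.0f;
      for(int dy = 0; dy < factor; dy++)
        for(int dx = 0; dx < factor; dx++)
          sum += src[(y * factor + dy) * w + (x * factor + dx)];
      dst[y * ow + x] = sum / (float)(factor * factor);
    }

  *out_w = ow;
  *out_h = oh;
  return dst;
}
```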

todd-prior commented 7 months ago

For the record: we might also have a look at "how do we scale".

Thanks for keeping this on the radar. I just went back through much of the discussion and there are a lot of things to consider, but given that DT places such a priority on handling pixel data and color "correctly" in all the modules, I think it would be a disservice not to eventually sort this out. After some recent back and forth about ART vs DT, I almost wonder whether some people describing the ART look as cleaner can be attributed to the way things are ultimately displayed/exported in DT. Maybe sorting this all out eventually might just be worth it in terms of the final presentation of images from DT... Thanks go to you and others for your time and effort on this....

todd-prior commented 7 months ago

For the record: we might also have a look at "how do we scale". Our current algos don't work nicely with very high-frequency content and 0<->1 signal transitions as found in some synthetic images.....

That is an interesting image, and in addition to showing the scaling impact on the preview it also really shows the difference between using yes and no for HQ resampling when exporting. If you zoom on that image there are some small changes as you zoom, but at 100% there is a jump and a real change in the image, especially on the horizontal gradient: a lot of red just jumps to yellow and the black regions change a lot. When you export with HQ resampling set to yes you get that look in the exported jpg with the sRGB profile provided in DT, i.e. the 100% zoom. If set to no, you clearly get a version with the red saturation that you see at any zoom other than 100, 200, 400 or 800%. Also, exports with the ICC profiles from color.org have some wild colors that don't match the display preview, but this could be the exr image and the LUT table in those profiles vs the matrix profile of DT? Not sure on that one.