Closed — Floessie closed this issue 6 years ago
I confirm that HDR Tone Mapping leads to unexpected results: the average exposure of the output image depends on the input image. Luckily, in all cases no data is lost (as far as I can tell), and it can be brought back into view using negative exposure compensation or curves.
I haven't yet experimented with whether it's related to the DCP curve, but I saw no indication that that would be the case. It reminds me more of the issue with our Auto Levels failing on images with a dark periphery; maybe something similar is going on here.
If this is not a bug, it would be nice if there was some wizardry for the tool to base the average resulting exposure on the average input exposure.
I don't think this is related to the DCP curve per se. I agree that it would be nice if the tone mapping were more predictable; this is something I'd like to investigate more. FWIW, I personally like to start from neutral when using Fattal and add curves later if needed, otherwise it easily gets out of control.
A few years ago there was a similar problem with Tone Mapping.
After I was told it was "normal"... In fact, on some images, as soon as one exceeded 0.5 for "Strength", the image was overexposed, with a histogram strongly shifted to the right.
After many exchanges, sometimes lively, I implemented a gamma slider: the gamma is applied before calling TM, then the result is restored by the inverse gamma after TM.
Can this be useful here?
Jacques
@Desmis manipulating the gamma sounds like a reasonable workaround, worth trying at least. People should keep in mind, though, that this tone mapping is applied before the tone curves, so it sees a very "flat" image. That's why, if you also apply tone curves, you might get overexposed results. I think it's better to start from neutral if you plan to use HDR TM.
@agriggio I agree with "I think it's better to start from neutral if you plan to use hdr TM", or at least "reset auto levels"
@Floessie @Beep6581 @Desmis can you test this patch please?
diff --git a/rtengine/tmo_fattal02.cc b/rtengine/tmo_fattal02.cc
--- a/rtengine/tmo_fattal02.cc
+++ b/rtengine/tmo_fattal02.cc
@@ -1121,13 +1121,20 @@
const float min_luminance = 1.f;
TMatrix ws = ICCStore::getInstance()->workingSpaceMatrix (params->icm.working);
+ float max_Y = 0.f;
+ int max_x = 0, max_y = 0;
+
#ifdef _OPENMP
#pragma omp parallel for if (multiThread)
#endif
-
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
Yr (x, y) = std::max (luminance (rgb->r (y, x), rgb->g (y, x), rgb->b (y, x), ws), min_luminance); // clip really black pixels
+ if (Yr(x, y) > max_Y) {
+ max_Y = Yr(x, y);
+ max_x = x;
+ max_y = y;
+ }
}
}
@@ -1170,6 +1177,8 @@
const float hr = float(h2) / float(h);
const float wr = float(w2) / float(w);
+
+ const float scale = std::max(L(max_x * wr + 1, max_y * hr + 1), epsilon) * (65535.f / Yr(max_x, max_y));
#ifdef _OPENMP
#pragma omp parallel for if(multiThread)
@@ -1181,7 +1190,7 @@
int xx = x * wr + 1;
float Y = Yr (x, y);
- float l = std::max (L (xx, yy), epsilon) * (65535.f / Y);
+ float l = std::max (L (xx, yy), epsilon) * (65535.f / Y) / scale;
rgb->r (y, x) = std::max (rgb->r (y, x), 0.f) * l;
rgb->g (y, x) = std::max (rgb->g (y, x), 0.f) * l;
rgb->b (y, x) = std::max (rgb->b (y, x), 0.f) * l;
@agriggio I tested on 12 images of various types. Shadows seem to be affected as before, while now the algo is extra careful not to affect the highlights, even when increasing the amount. I like it!
Old behavior:
New behavior:
New behavior with exposure compensation to match old behavior (the value differs between images):
@Beep6581 great, thanks! BTW, this should also make the tool work well with "Auto Levels" or the DCP tone curve. (Essentially, the patch just normalizes relative to the brightest point in the input image.)
Should I just push to dev then?
EDIT: we should probably also raise the default for Amount; any suggestions?
+1 for dev, and I typically use Amount=50.
@agriggio
I am just compiling and testing :)
@agriggio @Beep6581
Same comment as DrSlony, but I think Amount=50 is generally too high; 30 seems better to me :) But don't worry, no problem if you choose 50. Jacques
OK, I'll settle on Amount=30; 50 seems a bit too high for me as well.
@agriggio May I push a correction? There is a race in the max loop
@heckflosse sure! and sorry :-)
@agriggio Should be fixed now
@agriggio Alberto, I gave it a short try, and it's MUCH better now! 🎉
Very small speedup with this patch:
diff --git a/rtengine/tmo_fattal02.cc b/rtengine/tmo_fattal02.cc
index efa9c53d..92d5c7f8 100644
--- a/rtengine/tmo_fattal02.cc
+++ b/rtengine/tmo_fattal02.cc
@@ -1131,7 +1131,7 @@ void ImProcFunctions::ToneMapFattal02 (Imagefloat *rgb)
float max_YThr = 0.f;
int max_xThr = 0, max_yThr = 0;
#ifdef _OPENMP
- #pragma omp for nowait
+ #pragma omp for schedule(dynamic,16) nowait
#endif
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
@@ -1193,10 +1193,11 @@ void ImProcFunctions::ToneMapFattal02 (Imagefloat *rgb)
const float hr = float(h2) / float(h);
const float wr = float(w2) / float(w);
- const float scale = std::max(L(max_x * wr + 1, max_y * hr + 1), epsilon) * (65535.f / Yr(max_x, max_y));
-
+ const float scale = 65535.f / std::max(L(max_x * wr + 1, max_y * hr + 1), epsilon) * (Yr(max_x, max_y) / 65535.f);
+
+
#ifdef _OPENMP
- #pragma omp parallel for if(multiThread)
+ #pragma omp parallel for schedule(dynamic,16) if(multiThread)
#endif
for (int y = 0; y < h; y++) {
int yy = y * hr + 1;
@@ -1205,7 +1206,7 @@ void ImProcFunctions::ToneMapFattal02 (Imagefloat *rgb)
int xx = x * wr + 1;
float Y = Yr (x, y);
- float l = std::max (L (xx, yy), epsilon) * (65535.f / Y) / scale;
+ float l = std::max (L (xx, yy), epsilon) * (scale / Y);
rgb->r (y, x) = std::max (rgb->r (y, x), 0.f) * l;
rgb->g (y, x) = std::max (rgb->g (y, x), 0.f) * l;
rgb->b (y, x) = std::max (rgb->b (y, x), 0.f) * l;
With the last commits changing the Fattal behaviour, there is a small bug in rendering the album preview. See this:
The left thumb is correct with regard to the file in the editor. The right one (in the film strip) is obviously wrong. It looks like the new scale parameter is not applied.
One other bug: you did not change the default "Amount" value, but the "Amount" step. The default is still 1, and when clicking +/-, the value changes in steps of 30 ;)
Hi @casta, thanks for testing!
The left thumb is correct with regard to the file in the editor. The right one (in the film strip) is obviously wrong. It looks like the new scale parameter is not applied.
Did you clear your thumbnail cache? If not, can you please try again?
One other bug: you did not change the default "Amount" value, but the "Amount" step. The default is still 1, and when clicking +/-, the value changes in steps of 30 ;)
Oops! Sorry, will fix in a minute!
I just cleared the thumbnail cache: same thing.
can you share a raw and pp3 where this happens? I'm afraid there won't be a quick fix, because the scaling is applied in both cases. The problem is that the original tone mapping code was very sensitive to the image size. We tried to mitigate this in RT as much as we could, but it could still be that you see some differences in extreme cases -- and thumbnails definitely qualify as such. (Incidentally, that's the reason why the tool is marked with the "1:1" icon). However, if I get an example of this, I can try and see if there's some workaround... :-)
Here it is: https://www.xwing.info/dl/rawtherapee/20170814_153816_cast2281.tar.xz This is not the same file as in the screenshot above (my wife would not be happy ;)), but I get the same behaviour with this one.
Tone mapping that is sensitive to image size is not very handy for evaluating the settings; seeing the full picture (i.e. zoomed out) is quite common ;) I thought it was applied to the full raw and then resized for display after that.
thanks, I'll have a look.
Tone mapping that is sensitive to image size is not very handy for evaluating the settings; seeing the full picture (i.e. zoomed out) is quite common ;)
True, that's why we spent some time in making sure that what you see in the main editor window is reasonably close to what you get in the output. For thumbnails the situation is a bit different though... do you experience the same problem in the editor window?
I thought it was applied to the full raw and then resized for display after that.
There are plans to have this as an option, but not for 5.4 (requires some major refactoring of the internals to be done properly)
do you experience the same problem in the editor window?
Not at all. When zooming out to get the same size as the film-strip thumbnail, this is still fine (just like the thumbnail in the left panel)
When zooming out to get the same size as the film-strip thumbnail, this is still fine (just like the thumbnail in the left panel)
hmmm... okay, I might need to double check the code and my memory of it then :-) Thanks, I'll let you know if I find out something.
@casta I found the culprit. This is due to highlight reconstruction with "color propagation". For some reason (performance probably) this is not applied to the thumbnail. Therefore, the thumbnail and the preview will have different max luminance levels, leading to differences in the HDR tone mapping output. To see this, just try to change the reconstruction method to e.g. "blend" and the thumbnail will be much closer to the preview.
I agree it is annoying, but it's not something that we will fix for 5.4 (or at least, not something I will do), since we have already plans for a big refactoring of the processing pipeline after 5.4 is out.
Yes indeed, thanks for digging. Anyway, it's a bit annoying, but I agree it's minor as it only affects thumbnails.
Hello again.
It seems that this new normalization causes highly underexposed results when using "color propagation" highlight reconstruction; in some cases, I must increase EV by 1.5 or more. When switching to the "Blend" method, the exposure is more correct and around EV 0 is fine.
See for example this file (2 different pp3 files bundled): https://www.xwing.info/dl/rawtherapee/20170526_163511_cast2157.raf.tar.xz Switching the highlight reconstruction method without Fattal HDR (with the last normalization changes) does not modify the exposure that much. With Fattal HDR enabled, the EV difference is just huge.
Maybe there is no data loss with such an EV change, but I do not know, and this may be worth checking ;) Anyway, I'm not sure this behaviour is really correct: switching the HL reconstruction method should not affect the image globally that much when Fattal HDR is enabled, IMHO.
@casta can you try this patch please?
diff --git a/rtengine/tmo_fattal02.cc b/rtengine/tmo_fattal02.cc
--- a/rtengine/tmo_fattal02.cc
+++ b/rtengine/tmo_fattal02.cc
@@ -1119,6 +1119,12 @@
const float epsilon = 1e-4f;
const float luminance_noise_floor = 65.535f;
const float min_luminance = 1.f;
+ const auto unclipped =
+ [=](int y, int x) -> bool
+ {
+ const float c = 65500.f;
+ return rgb->r(y, x) < c && rgb->g(y, x) < c && rgb->b(y, x) < c;
+ };
TMatrix ws = ICCStore::getInstance()->workingSpaceMatrix (params->icm.working);
float max_Y = 0.f;
@@ -1136,7 +1142,7 @@
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
Yr (x, y) = std::max (luminance (rgb->r (y, x), rgb->g (y, x), rgb->b (y, x), ws), min_luminance); // clip really black pixels
- if (Yr(x, y) > max_YThr) {
+ if (Yr(x, y) > max_YThr && unclipped(y, x)) {
max_YThr = Yr(x, y);
max_xThr = x;
max_yThr = y;
I agree with @casta that it would improve usability if the highlight reconstruction method didn't affect Fattal, for the simple reason that changing the HR mode would require exposure readjustments.
I can test the patch tonight.
For me, the behaviour with the patch is much more predictable. Images with clipped highlights become a bit underexposed, but there is no longer any exposure variation, which is much more pleasant IMHO.
@casta thanks for testing (and finding the problem)! I agree that the behaviour is much better now. I'll push later today
Merry Christmas,
Try the following steps:
It depends on the input image how badly it will be clipped. The soderasen_1.hdr.dng isn't the best example, but I don't have the full samples corpus here to try.
Now the question: Is this expected? I naïvely thought Fattal would only lift the shadows, but it also lifts the highlights (beyond repair?). Or is this only due to a bad interaction with the DCP tone curve?
Best, Flössie