darktable-org / darktable

darktable is an open source photography workflow application and raw developer
https://www.darktable.org
GNU General Public License v3.0

[DT Master 4.1.0] color balance rgb perceptual brilliance grading highlights slider creates artifacts #12442

Closed s7habo closed 9 months ago

s7habo commented 2 years ago

Describe the bug/issue

If the highlights of the photo are close to the limit of the dynamic range, the highlights slider of the perceptual brilliance grading in the color balance rgb module produces artifacts once a certain threshold is exceeded.

This only happens in the default darktable UCS (2022) color space. In the JzAzBz (2021) color space the slider behaves as expected.

To Reproduce

  1. Increase the exposure of the photo with the exposure module until the highlights are at the limit of the dynamic range.
  2. In the color balance rgb module, increase the value of perceptual brilliance in the highlights using the highlights slider.
  3. Above a certain value, artifacts - wide white spots - appear (see screenshot/video below).

Screenshots

Artifact

Screencast

Short demo:

https://user-images.githubusercontent.com/38165484/189202459-4b598eb2-263c-421e-ba1c-e04402929b3e.mp4

Platform

Additional context

I can't say for sure, but I suspect that this problem may be related to

https://github.com/darktable-org/darktable/pull/12233

since this behavior appeared after that change.

todd-prior commented 2 years ago

I think AP generates this in the last 10 minutes of his most recent video, and he comments that it is due to gamut mapping...

Mark-64 commented 2 years ago

I have this too, and I believe it is generated in filmic, maybe by reconstruction kicking in at a certain point when increasing exposure or brilliance. Reconstruction is never really disabled; the threshold is just set high instead, but it can always be activated if exposure is set high enough.

pehar1 commented 2 years ago

@Mark-64, this has already been discussed here: https://discuss.pixls.us/t/something-wrong-with-darktable-4-1-0-git272-5a1a1845-1-clipped-highlights/32540

pass712 commented 2 years ago

Can reproduce under Windows.

dt: 4.1.0+283~g5cca9d64b
OS: Win10
Mem: 16 GB
Graphics card: NVIDIA GeForce GTX 1060 6 GB
OpenCL installed: yes
OpenCL activated: yes

jenshannoschwalm commented 2 years ago

As @s7habo hinted, the problem is somewhat known and was discussed in the mentioned PR. Pinging @flannelhead here...

flannelhead commented 2 years ago

The fix that was done in https://github.com/darktable-org/darktable/pull/12233 was about avoiding divisions by zero.

Here's the result with current master:

[Screenshot from 2022-09-09 19-36-23]

With that PR reverted:

[Screenshot from 2022-09-09 19-49-55]

Notice that in the latter image there are specular highlights with black pixels. That's what the fix was meant to address. However, it seems that neither result is really acceptable here.

As others have concluded here, the large blotches are caused by filmic's reconstruction which seems to spread those specular highlights. color balance rgb's brilliance adjustment in dt UCS mode seems to push specular highlights quite far, so filmic's reconstruction picks up these very bright highlights and spreads them all around.
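The spreading mechanism is easy to demonstrate with a toy example. The sketch below is a minimal illustration, not darktable's actual reconstruction code: any blur is a diffusion, so a single unbounded pixel value gets smeared across its whole neighbourhood.

```python
def box_blur(row, radius=2):
    """Naive 1-D box blur with windows clamped at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        window = row[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

scene = [1.0] * 11
scene[5] = 1e6                 # one runaway "super-nova" pixel
blurred = box_blur(scene)
print([round(v) for v in blurred])
# -> [1, 1, 1, 200001, 200001, 200001, 200001, 200001, 1, 1, 1]
```

A Gaussian or multi-scale blur, as a real reconstruction would use, only widens the reach; the principle is the same: one unbounded value dominates every pixel whose blur window touches it.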

Contrary to what some have stated about filmic's reconstruction, at least in the case of this image it can be practically disabled: turn the threshold all the way up to 6 EV and the transition slider all the way down to 0.25. With those settings I could no longer reproduce this sort of artefact.

It might be worth investigating why color balance rgb's brilliance adjustment produces those insanely high values for specular highlights. However, I believe the fix in https://github.com/darktable-org/darktable/pull/12233 doesn't cause them by itself; it just makes the issue more visible by triggering filmic's reconstruction into generating those blotches.

jenshannoschwalm commented 2 years ago

Would there be a way to check for such an issue in either module and give a "module warning"? (BTW, I find the black pixels far worse, as they're not so easy to catch without pixel-peeping.)

flannelhead commented 2 years ago

BTW, I find the black pixels far worse, as they're not so easy to catch without pixel-peeping.

In the case of this image, yes. But it can happen in vastly larger areas (e.g. sun disc) so that's a no-no anyway (in my opinion).

Would there be a way to check in either module for such an issue and give a "module warning"?

I'm not sure. The best thing to do would probably be to fix the brilliance formula (or the clamping) so that those vast changes to bright highlights are avoided and values are clamped to some sensible range. That was already briefly discussed in https://github.com/darktable-org/darktable/issues/12163, but it would need more investigation because there didn't seem to be a viable fix yet.

todd-prior commented 2 years ago

I don't know if it offers any clues, but the artifacts do not appear the same at different zoom levels in the central preview, and they don't match the preview used for panning and zoom navigation, which seems to show briefly during panning on the center display and then disappear. None of this matches what gets exported, either. I checked and got the same result with or without OpenCL. I posted on the topic with a video and a still example. The strongest impact certainly seems to come from filmic's transition slider, but the fact that the different previews use different data might indicate something more.

herrkami commented 2 years ago

I just edited around 200 images and encountered the problem in around 20 of them. In five cases the blobs revealed themselves only after export, because of the zoom-level dependency.

da-phil commented 1 year ago

I can reproduce this issue on a couple of my images too. The "workaround" described by @flannelhead works, though.

TurboGit commented 1 year ago

@flannelhead : Do you think you'll have time to handle this for 4.2?

flannelhead commented 1 year ago

Not sure yet. If anything, this deserves a "proper" fix, such that the brilliance formula is modified to give more sensible results. As it currently stands, it's far too easy to push the luminance into nonsense territory of extremely high values.

I can try to take a look at it during November. Failing that, I would suggest just reverting the PR https://github.com/darktable-org/darktable/pull/12233 and accepting the black pixels.

da-phil commented 1 year ago

@flannelhead I hope you're not still discouraged by AP's words from some months ago. Your work is really appreciated, and I'm glad there is another sane contributor who understands the colour science magic in dt.

flannelhead commented 1 year ago

Not discouraged, I've just been kind of busy for the whole autumn.

Nilvus commented 1 year ago

For future work, @flannelhead: issue #12734 is related to this one and may add useful information.

TurboGit commented 1 year ago

I can try to take a look at it during November. Failing that, I would suggest just reverting the PR #12233 and accepting the black pixels.

Will do that.

TurboGit commented 1 year ago

@gi-man: That's a sensible view... @flannelhead, is your final view a revert? In both cases we have an issue anyway. I have no strong opinion, maybe leaning slightly toward @gi-man's option.

flannelhead commented 1 year ago

@flannelhead, is your final view a revert?

No, I would like to take a look at this during November to see if a better fix can be found. Just have to carefully watch for nasty side effects that might destroy old, proper edits.

todd-prior commented 1 year ago

Since Aurélien's original defaults essentially left this feature disabled, would there be any harm in setting those defaults to the extremes, i.e. the 6 EV threshold and the lowest transition value, as mentioned here:

https://discuss.pixls.us/t/editing-moments-with-darktable/11770/688?u=priort

As it stands, using the feature requires the user to set the slider based on the mask, so it would be little extra effort, and it might minimize the number of issues with this artifact until it can be fixed.

Just a thought. I know it is more of an avoidance than a fix, but it still might be useful in the short term.

aurelienpierre commented 1 year ago

Don't fix it if you don't understand it. But since idiocracy has become the new way, just break it some more.

It's ok, I have backups.

aurelienpierre commented 1 year ago

Seriously, this thread is a pain to read.

Color balance RGB with dt UCS 22 uses saturation $S$, chroma $C$, brightness $B$ and lightness $L$. $B = L/L_{white} * (C^{1.33} + 1)$. The brilliance adjustment multiplies $B$ by some coefficient between 0 (-100% in GUI params) and 2 (+100% in GUI params).

Problem one is that the back-transform $B$ -> $L$ -> $Y$ is stable only up to $L = 2.1$, which maps to… 3.89e+19 Cd/m². Clipping $L$ was already attempted and should not be needed, because I guarantee that if your scene had something as bright as 38900000000000000000 Cd/m², you and your sensor would not be there to report it.

Problem two is $L_{white}$, which should be user-defined in the mask options because, in a scene-referred pipeline, white is unbounded. It is meant to normalize the scene's brightest value to 1, aka a virtual medium white. But of course, people don't do that.

Here, $B$ is multiplied by 1.6 as per user request, and I bet the white value didn't get set properly; say 1.3 × 1.6, and you get your 2.08 at the output… What you are creating there is a super-nova white, as per user request.
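The blow-up can be reproduced numerically. The sketch below uses the dt UCS 22 lightness curve with constants copied from darktable's source as best recalled; treat it as illustrative rather than the authoritative implementation:

```python
# dt UCS 22 lightness curve L*(Y) and its inverse (illustrative sketch;
# constants quoted from darktable's source from memory, not authoritative).
L_STAR_MAX = 2.098883786377      # horizontal asymptote of L*(Y)
POWER = 0.631651345306265
OFFSET = 1.12426773749357

def Y_to_L(Y):
    """Forward transform: relative luminance Y -> dt UCS lightness L*."""
    y_hat = Y ** POWER
    return L_STAR_MAX * y_hat / (y_hat + OFFSET)

def L_to_Y(L):
    """Inverse transform: explodes as L approaches the ~2.0988 asymptote."""
    y_hat = L * OFFSET / (L_STAR_MAX - L)
    return y_hat ** (1.0 / POWER)

# A specular highlight at 4x scene white, with brilliance gains applied:
L = Y_to_L(4.0)
for gain in (1.0, 1.3, 1.6, 2.0):
    boosted = gain * L
    if boosted < L_STAR_MAX:
        print(f"gain {gain}: L = {boosted:.3f} -> Y = {L_to_Y(boosted):.3e}")
    else:
        print(f"gain {gain}: L = {boosted:.3f} is past the asymptote -> no finite Y")
```

With these constants, a 1.3x lightness already maps back to roughly 30x scene white, and 1.6x pushes $L$ past the asymptote where the inverse has no finite value: a tiny slider change produces an astronomically brighter highlight.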

Now, filmic's HL reconstruction does something useful: blurring the edge between clipped and non-clipped areas. The threshold was set such that it would practically disable reconstruction, only because people with weak computers complained. Blurring is a kind of diffusion. You diffuse super-nova white, and you get what you asked for: a super-nova glow.

As per fucking usual, filmic showed the issue, so you blamed it. Then defiled it with some more GUI bloat in the shape of that stupid checkbox. Great job at making things worse.

The problem here is user error. The solution is to narrow the range of the brilliance slider to -20 % / +20 %. It's a GUI problem. For fuck's sake, when children get hurt in power outlets, do you rewire your whole house to 24 V so it would be harmless to play with them, or do you make sure they can't insert things into the outlet?

But you couldn't refrain from jumping the gun and hiding the dust under the rug, to be able to call the next release clean.

See, you are everything that's wrong with open-source:

I suggest you all go back to your day job, hoping you are at least ok at it. What you are doing here is a waste of everybody's time including yours, you are only making things worse. This is not the highschool computer club.

todd-prior commented 1 year ago

Thanks for the explanation...I think I follow it for the most part.

Sorry it creates such frustration and anger. I know that you have poured countless hours into the project and are likely the person most connected to the color science, but I don't think the changes are trying to undo your work; they just put a cover on the outlet for those of us risky enough to try to stick our fingers in...

If brilliance can be pushed too far, then I think, as you say, it should be investigated and brought in line with the math.

Also, the point about adjusting the white point as you work with the masking is a good one. I think you are right that it's not being used by a lot of people, and this is a good example of where it comes into play.

I think you have so much to offer, but when you are moved to such anger it dilutes your input. You can still be strong in your convictions, inform users, and disagree with coding decisions, but what is to be gained by doing it in a barrage of insults?

You have forked dt and have your own platform to show how it "should" be done, so could you not just change that to your liking and communicate your solution? You are aware of all the boundaries that can break things. Many of us are not so intimately linked to the inner workings of the core modules, or able to digest the math as easily as you can, so there can be a case for a safety net here and there...

Moving on from that, the one thing that struck me about this issue is that it was obvious something was being pushed too hard, but the fact that the artifact could appear or not appear at various zoom levels, in lighttable previews, or at export time was also important. I believe the pipeline is managed differently for the center preview than for zoomed previews and other views, but whatever solution is adopted needs to address this aspect as well, i.e. keep the result both managed and consistent when viewing the image.

I guess an on/off switch seemed like a straightforward fix?

kofa73 commented 1 year ago

@aurelienpierre: thanks for the explanation. Could setting the white threshold be made automatic, so people wouldn't need to pick white manually (and re-pick it after adjusting exposure or using the tone equalizer)? Or, at least, made more visible? When people see the module, they see the easy-to-use-looking sliders first and have no idea they have to set the white level. I actually do set it quite often, but it is not something mentioned in the documentation. See https://docs.darktable.org/usermanual/4.0/en/module-reference/processing-modules/color-balance-rgb/#thresholds:

"Set the white point luminance in EV. This is used to normalize the power setting in the 4 ways tab."

I pretty much never use the 4-ways tab; however, based on your explanation, the documentation is wrong.

kofa73 commented 1 year ago

Note: I've just realised Aurélien mentions that the GUI should not allow setting brilliance outside +/- 20%, which is in line with the values below showing a rapid increase above 20%.

@aurelienpierre: for the image and XMP posted at https://discuss.pixls.us/t/something-wrong-with-darktable-4-1-0-git272-5a1a1845-1-clipped-highlights/32540, using the white picker sets the white fulcrum to +3.23 EV, but the highlights remain blown if the highlights brilliance is set to 22% or above. Setting the white fulcrum to +16 EV (the maximum allowed value) has no effect either.

Disabling filmic, setting a white fulcrum of +16 EV, and setting the display, output and histogram profiles all to linear Rec2020, the colour picker reports a maximum RGB of:

  highlights brilliance 0% -> 3254, 2698, 2966
  10% -> 8718, 7191, 7925
  20% -> 64181, 52675, 58209
  21% -> 96949, 79532, 87910 (with filmic's recovery threshold set to +6 EV, only the brightest parts are marked as targets for recovery)
  22% -> 167714, 137510, 152039

At 22%, blooming in filmic becomes visible if I turn the module on, even with the threshold at +6 EV. Bringing the transition down to the minimum, 0.25 EV, removes the blur.

Setting the white fulcrum has very little effect -- and it is actually increasing(!) the readings. For the 22% case (note: even with values below 20%, the numbers increase when the fulcrum is increased):

  0 EV: 153614, 125959, 139262
  3.23 EV (picked automatically): 164445, 134830, 149074
  8 EV: 167329, 137193, 151689

Using negative values reduces the numbers:

  -2 EV: 133564, 109481, 121066
  -4 EV: 94279, 77232, 85433
  -8 EV: 16484, 13422, 15162

Reducing it further triggers some kind of instability:

  -8.1 EV: 18032, 12707, 18279
  -8.2 EV: 22423, 15712, 22725
  -8.85 EV: 447814304, 348071548, 497751360
  -8.9 EV: all negative, -2^31

github-actions[bot] commented 1 year ago

This issue did not get any activity in the past 60 days and will be closed in 365 days if no update occurs. Please check if the master branch has fixed it and report again or close the issue.

kofa73 commented 1 year ago

Since the white fulcrum must be set correctly for the maths to work, I suggest to:

  - calculate the fulcrum automatically every time the module is processed (when input data changes).

github-actions[bot] commented 1 year ago

This issue has been marked as stale due to inactivity for the last 60 days. It will be automatically closed in 300 days if no update occurs. Please check if the master branch has fixed it and report again or close the issue.

Donatzsky commented 1 year ago

Yet another user got in trouble because of this: https://discuss.pixls.us/t/filmic-rgb-with-reconstruction-on-suddenly-burns-through-managed-highlights/38794

It seems to me that the solution would be to fix the UX by soft-limiting the brilliance sliders to ±20%. Then it should even be possible to re-enable filmic HL reconstruction.

Longer term, figuring out a better way to manage the white fulcrum would be worthwhile, but I'm not sure there's an elegant solution to that.

calculate the fulcrum automatically every time the module is processed (when input data changes).

Wouldn't that lead to unpredictable behaviour? If it doesn't, having an Automatic white fulcrum setting, enabling or disabling the current slider, should work.

Donatzsky commented 1 year ago

@gi-man The "fix" was to disable filmic HL reconstruct by default. So as long as you don't use that, you won't notice anything. But as you can see from that pixls thread, it's very much still capable of tripping up unaware users (which would be the majority).

jenshannoschwalm commented 1 year ago

This issue is certainly about modules interfering. The best option probably would be to soft-limit brilliance settings but that might still lead to instability of filmic highlights due to other modules "running wild on user request".


Donatzsky commented 11 months ago

I have been thinking more about it, and have also learned more about how this works, and I don't think there's any particularly neat solution.

The biggest obstacle is that the effect of the sliders is cumulative, so changing one slider may limit how far you can go with another. As such, I see three possible solutions:

  1. Soft-limit all sliders to 20%. This would limit the risk somewhat, but by no means eliminate it, and the user would still not know why things break.
  2. Dynamically limit the sliders based on the values of the other sliders. Assuming the math behind this is sound, it should work well. This has the benefit of indicating to the user, in a non-threatening way, that going too far is problematic.
  3. Leave the sliders as-is, but show some kind of warning if the cumulative value goes over a threshold. Warnings can be scary to the user, though, so care must be taken in the design.
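Option 1 could be sketched as below. This is a hypothetical toy model, not darktable's actual Bauhaus slider API: dragging is confined to a soft range, while explicitly typed values may still reach the hard limits.

```python
from dataclasses import dataclass

@dataclass
class SoftLimitedSlider:
    """Toy model of a soft-limited slider (hypothetical; not darktable's
    actual widget API). Dragging clamps to the soft range; typed values
    may exceed it, up to the hard limits."""
    soft_min: float = -0.20
    soft_max: float = 0.20
    hard_min: float = -1.00
    hard_max: float = 1.00
    value: float = 0.0

    def drag_to(self, v: float) -> None:
        # GUI interaction stays within the safe range
        self.value = min(max(v, self.soft_min), self.soft_max)

    def type_in(self, v: float) -> None:
        # deliberate keyboard entry can still reach the hard limits
        self.value = min(max(v, self.hard_min), self.hard_max)

s = SoftLimitedSlider()
s.drag_to(0.5)       # dragging clamps to +20%
print(s.value)       # -> 0.2
s.type_in(0.5)       # typing deliberately allows +50%
print(s.value)       # -> 0.5
```

The point of the split is that the risky values remain reachable for users who ask for them explicitly, while casual dragging can no longer wander into the unstable range.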

jenshannoschwalm commented 9 months ago

Fixed in master.