CarVac / librtprocess

A project to make RawTherapee's processing algorithms more readily available.

added white balance function, WB_apply() #36

Closed: butcherg closed this pull request 4 years ago

butcherg commented 5 years ago

White Balance operator on the float** rawData.
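The actual PR code isn't shown in the thread, but the operation described is small enough to sketch. The function name and signature below are my guess from the description, not the submitted code; the `cfarray[2][2]` CFA-layout convention is borrowed from librtprocess's demosaicer interfaces:

```cpp
#include <cstddef>

// Hypothetical sketch of a WB_apply-style operator: scale each photosite by
// the multiplier for its CFA colour. cfarray[row & 1][col & 1] yields the
// CFA colour index for that position (e.g. 0=R, 1=G1, 2=B, 3=G2), so the
// two greens can carry separate multipliers.
void applyChannelMultipliers(float **rawData, size_t width, size_t height,
                             const unsigned cfarray[2][2], const float mul[4])
{
    for (size_t row = 0; row < height; ++row) {
        for (size_t col = 0; col < width; ++col) {
            rawData[row][col] *= mul[cfarray[row & 1][col & 1]];
        }
    }
}
```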

heckflosse commented 5 years ago

@butcherg Wouldn't it be better to apply these factors on the fly in the demosaicers and cacorrection?

CarVac commented 5 years ago

Is this something worth doing on the raw data? How much benefit is there for demosaicing quality?

What about highlight handling, how does doing this before demosaic affect that?

heckflosse commented 5 years ago

@CarVac RT applies auto-wb before ca-correction and demosaic. For reference

IIRC Photoflow applies the user-defined WB before demosaic. @aferrero2707 correct?

aferrero2707 commented 5 years ago

@heckflosse yes, that's correct. This actually has the great advantage of allowing the RAW values to be clipped after WB and prior to demosaicing, thus removing some purple fringing in very high-contrast edges that is introduced by pixels for which the red channel goes beyond 1 due to WB multipliers.

This is only possible if the final WB multipliers are already applied before the demosaicing step...

butcherg commented 5 years ago
  1. I also want the opportunity to do WB downstream in the color correction, where the WB is baked into the camera profile.
  2. I've never been a fan of "this_AND_that" operations, even for performance. Do one thing, do it well...
  3. I need to change what I submitted to include two green multipliers... thought about that about 3 minutes after I clicked Submit... will update the push in a bit.

butcherg commented 5 years ago

Bump.

Is there something about it I need to reconsider? The only consideration I'd like to protect is to be able to choose to come out of demosaic with an un-whitebalanced image...

heckflosse commented 5 years ago

@butcherg Glenn, why not name it multiplyChannels instead of WB_apply?

butcherg commented 5 years ago

Works for me, "it's not just for white balance" makes it relevant to any reason to scale.

heckflosse commented 5 years ago

@butcherg

makes it relevant to any reason to scale

absolutely! That's why I suggested changing the name of the function.

aferrero2707 commented 5 years ago

The other aspect of WB that IMHO should be addressed is how to determine the WB coefficients, how to normalize them, and how to deal with the resulting RAW values when they exceed 1.

DT (and PhF) use the same WB mode names and coefficients as the corresponding cameras, while RT AFAIK has a fixed set of temperature/tint values independent of the camera model. Should we provide an interface for both approaches?

The other aspect is the normalization of WB coefficients and the clipping of the WB output. The simplest and safest approach is to normalize the coefficients such that the smallest one is equal to 1, and to clip all WB output values that exceed 1. This guarantees neutral highlights and reduces the purple fringing near clipped areas that is otherwise introduced by the demosaicing step. However, this removes some available dynamic range in the red and blue channels, and does not allow further HL recovery...
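A minimal sketch of that policy (my own illustration, not code from any of the projects discussed): scale the coefficients so the smallest is exactly 1, then optionally clip anything that exceeds 1.0 after multiplication. Keeping the clip optional preserves the unbounded-output use case:

```cpp
#include <algorithm>

// Normalize WB coefficients so the smallest one equals 1. This guarantees
// no channel is attenuated, at the cost of pushing the others above 1.
void normalizeCoeffs(float c[3])
{
    const float cmin = std::min({c[0], c[1], c[2]});
    for (int i = 0; i < 3; ++i) {
        c[i] /= cmin;
    }
}

// Apply one coefficient to one value; clip to 1.0 only when requested,
// so unbounded (HL-recovery-friendly) output remains possible.
float applyCoeff(float value, float coeff, bool clip)
{
    const float out = value * coeff;
    return clip ? std::min(out, 1.0f) : out;
}
```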

CarVac commented 5 years ago

Determining the white balance coefficients is indeed problematic.

In Filmulator, I apply the Camera WB in raw color prior to the conversion (matrix multiplication) to the working color space (sRGB in Filmulator), and then apply the user's selected adjustments in the working color space.

This yields good results when there's no clipping in the channels with the lowest WB multiplier, but has issues with many modern cameras (with negative coefficients in the raw -> sRGB conversion matrix) when the red or blue channels are clipped, yielding the dreaded magenta highlights.

This is a legacy thing though from working with LibRaw. I'm not sure how to set the multipliers in the raw color space, which would easily allow clipping prior to applying the color matrix; I've spent many hours lately messing around with this and failing.

butcherg commented 5 years ago

Might be my naivety, but I wrote an ApplyCameraWhiteBalance method for rawproc that just takes the three multipliers and applies them to the mosaic. rawproc will allow one to open the raw without modification, except a transform to 0.0-1.0 for internal floating point, so my 14-bit images produce a maximum float value of about 0.25. Opening my test image, the camera multipliers are 2.098, 1.9, 1.316, and with those the maximum of any value is 0.524, nowhere near clipping at 1.0. Yes, this is applying white balance before scaling...
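The arithmetic here checks out, assuming rawproc divides the raw values by the 16-bit maximum (my reading of the description, not confirmed from its source):

```cpp
// 14-bit data divided by the 16-bit maximum tops out near 0.25
// (16383 / 65535 ~= 0.2500), so the largest multiplier of 2.098
// yields a maximum of roughly 0.524, well below 1.0.
float maxAfterWB(int bitDepth, float maxMultiplier)
{
    const float rawMax = static_cast<float>((1 << bitDepth) - 1) / 65535.0f;
    return rawMax * maxMultiplier;
}
```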

All that said, the thing about this tool I consider important is to be able to not apply it. I'm still messing with camera profiles made from un-whitebalanced target shots, and for those the white balance correction comes when the image is converted from camera space to whatever working space. With a target shot from similar light as the scene, I'm finding I start with less-desaturated colors than when I use the multipliers. So, in cases where I have an appropriate camera profile, I don't want to do the multiplication thing in the raw processing, anywhere.

BTW, this week sometime I'm going to incorporate in rawproc's demosaic tool the widgets to change the parameters unique to particular demosaic algorithms. If anyone would like to try it out, I can make and post an AppImage...

KevinJW commented 5 years ago

As there are cases for demosaicing raw files for more 'data' purposes I would avoid mandatory hard clipping, the example given of highlight recovery is one. In my tool, I have both options, I default to clipping, but enable unbounded output when requested.

I say this as a potential user of the library (I want to make up for the loss of the additional demosaic algorithms in LibRaw 0.19.x).

Kevin

aferrero2707 commented 5 years ago

@CarVac I am almost certain that the only colorspace in which you can correctly apply the WB multipliers is the camera colorspace. That is, if you first convert to linear sRGB and then apply the WB multipliers to the sRGB values, you do not obtain the same output as when you apply the WB multipliers to the pixel values in camera colorspace, before converting to sRGB.

This has to do with matrices that do not commute when they are multiplied (both the colorspace conversion and the WB adjustment can be represented by 3x3 matrices that are multiplied with the RGB vectors).
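This is easy to confirm with a toy example: let D be a diagonal WB matrix and M a conversion matrix with non-zero off-diagonal terms; then DM and MD differ. The matrix values in the snippet and its usage are invented purely for the demonstration:

```cpp
// Plain 3x3 matrix multiply: out = a * b. With a diagonal WB matrix D and
// any conversion matrix M that has non-zero off-diagonal terms, D*M and
// M*D produce different results, which is the non-commutativity argument.
void matmul3(const float a[3][3], const float b[3][3], float out[3][3])
{
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k) {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
}
```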

heckflosse commented 5 years ago

@KevinJW Great that you are thinking about using librtprocess for your project. We need feedback, and I think every use case will provide it. Very much appreciated.

As there are cases for demosaicing raw files for more 'data' purposes

What do you mean by more 'data' purposes concerning demosaicing?

KevinJW commented 5 years ago

Apologies, should have been clearer.

My tool mostly treats the files as data: think of the images less as pretty pictures and more as an approximate capture of the original scene light, e.g. for HDR lighting capture or texture reflectance. As such I'm looking for processing that ideally applies purely technical adjustments, not aesthetic ones. This includes options for not clipping data, outputting camera "native" colour space, etc.

At the same time, I want options to improve quality/performance by sensibly turning on some combination of features such as white balance/exposure before demosaic, or outputting unconstrained colour space conversions, i.e. allowing unbounded floating point, negatives included, due to different encoding primaries etc.

On the topic of white balance multipliers, yes you should apply those before any colour conversion, or if you do commute the matrix you'll need to adjust the numbers to account for the change in basis.

It is like trying to remove channel specific sensor noise after you have mixed the channels up by colour space conversion - it is also more involved.

Kevin

CarVac commented 5 years ago

@aferrero2707

What makes the raw color space distinct from, well, every other color space? And what do you mean by "correctly" apply?

I apply the camera WB first, and then do a post-adjustment afterwards.

The formulas I found for the Planckian Locus (blackbody radiation) were in XYZ, not camera space or working space.

KevinJW commented 5 years ago

The multipliers coming out of the camera relate to the proportion of each of the camera channels it estimates make up white. Put another way: if you had a white card under the illumination in the scene imaged by the sensor, these multipliers are what is needed to make the card roughly equal in each channel.

Those proportions of 'red', 'green' and 'blue' only apply in the native space of the camera; if you apply them in another colour space they will in general give you a different result.

So doing WB first then colour space converting is the appropriate thing to do.

Black body radiation is defined in terms of wavelength and temperature, you can convert that to XYZ by integrating with the appropriate CMFs. Or you may have some alternate formulation that directly outputs XYZ (or chromaticities).

If you want to define a UI that allows people to input a temperature and offset from it (assuming illumination from a black body radiator) to define a custom white balance then you would need to convert those into the camera native colour space in order to correctly get the multipliers.

At least if your goal is to try to output the image the camera could have produced, had those been the result of the camera's own estimation of the illumination.
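A minimal sketch of that last step, assuming you already have an `xyz_to_cam` matrix for the sensor (the function name, the green-normalization choice, and the values in the usage below are mine, not taken from any of the tools discussed):

```cpp
// Hypothetical sketch: given a 3x3 xyz_to_cam matrix and an illuminant
// white point in XYZ (e.g. derived from a blackbody temperature), project
// the white into camera space and derive the multipliers that make it
// equal in all channels, with green normalized to 1.
void multipliersFromXYZWhite(const float xyz_to_cam[3][3],
                             const float whiteXYZ[3], float mul[3])
{
    float camWhite[3];
    for (int i = 0; i < 3; ++i) {
        camWhite[i] = 0.0f;
        for (int j = 0; j < 3; ++j) {
            camWhite[i] += xyz_to_cam[i][j] * whiteXYZ[j];
        }
    }
    for (int i = 0; i < 3; ++i) {
        mul[i] = camWhite[1] / camWhite[i];  // green channel -> 1.0
    }
}
```

With an identity `xyz_to_cam` the multipliers reduce to the reciprocal of the XYZ white point (green-normalized), which is a quick sanity check.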

Quickly looking at the code here https://github.com/CarVac/filmulator-gui/blob/1150b31dea14a3c62f42e14c92e42903a3cc9c03/filmulator-gui/core/whiteBalance.cpp#L9 it looks like that is the kind of thing you are doing, though I only read it by hand!

If your goal is more of a creative adjustment, then you can do what ever makes the image look as desired!

KevinJW commented 5 years ago

I should probably say this is only an approximation as the camera spectral sensitivities are almost certainly not a match for the CIE 1931 CMFs, i.e. the average eye and the camera are likely different in the strictest sense.