There are two "philosophical" perspectives on luminance linearization: Should the grey levels 0 to 1 map to the luminance range of the monitor (relative) or to a line through the origin (absolute)?
Currently, we implement the relative approach, which is more intuitive from a technical perspective and uses the full range of luminance resolution. However, experimentalists are most often interested in contrast differences. For example, a level change of +0.1 and -0.1 should change the luminance by the same absolute amount in $cd/m^2$. This absolute luminance calibration should be available as an additional option during calibration.
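For concreteness, one way to write the two conventions (assuming that grey level 1 maps to the maximum luminance in both cases; the symbols $L_{\min}$ and $L_{\max}$ for the monitor's minimum and maximum luminance are introduced here for illustration):

$$
L_\text{rel}(v) = L_{\min} + v \,(L_{\max} - L_{\min}),
\qquad
L_\text{abs}(v) = v \, L_{\max},
$$

where $v \in [0, 1]$ is the grey level. In the absolute case, grey levels are directly proportional to luminance, so contrasts can be read off the grey values themselves.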
The implementation is rather easy:
- Add another flag to the calibration routine
- Save this flag to the calibration file
- Respect the flag during linearization: either offset by the minimum luminance, or don't.
The solution should make use of the stored minimum luminance to calculate correct contrasts, but internally the full 8 (or 16) bit resolution should still be used on the range from minimum to maximum luminance.
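A minimal sketch of how the flag could be respected during linearization. The function and parameter names (`linearize`, `absolute`) are illustrative, not the actual API, and the calibration is assumed to be available as a monotonically increasing array of measured luminances per LUT entry:

```python
import numpy as np

def linearize(grey, lut_luminance, absolute=False):
    """Map grey levels in [0, 1] to integer LUT entries.

    grey          -- array of grey levels in [0, 1]
    lut_luminance -- measured luminance (cd/m^2) per LUT entry
                     (2**8 or 2**16 values, monotonically increasing)
    absolute      -- False: grey levels span [L_min, L_max] (relative)
                     True:  grey levels lie on a line through the origin
                            (1 -> L_max); targets below L_min are clipped
    """
    grey = np.asarray(grey, dtype=float)
    lut_luminance = np.asarray(lut_luminance, dtype=float)
    l_min, l_max = lut_luminance[0], lut_luminance[-1]

    if absolute:
        # absolute calibration: target luminance proportional to grey level
        target = np.clip(grey * l_max, l_min, l_max)
    else:
        # relative calibration: use the monitor's full luminance range
        target = l_min + grey * (l_max - l_min)

    # Invert the measured luminance curve; the full 8/16-bit resolution
    # is still used on the range [L_min, L_max].
    idx = np.interp(target, lut_luminance, np.arange(len(lut_luminance)))
    return np.round(idx).astype(np.uint16)
```

The point of the sketch is only that storing the minimum luminance in the calibration file is sufficient to support both modes; where exactly the offset is applied (during linearization or during contrast computation) remains a design choice.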
There are two "philosophical" perspectives on luminance linearization: Should the grey levels 0 to 1 map to the luminance range of the monitor (relative) or to a line through the origin (absolute)?
Currently, we implement the relative approach, which is more intuitive from a technical perspective and uses the full range of luminance resolution. However, experimentalists are most often interested in contrast differences. For example, a level change of +0.1 and -0.1 should change the luminance by the same (absolute) $cd/m^2$. This absolute luminance calibration should be available as an additional option during calibration.
The implementation is rather easy: