Open hym3242 opened 2 months ago
A visualization in 8-bit luma: [image: grayscale ramp] This grayscale ramp, after conversion, will look like this: [image: converted ramp]. Notice how the 0 in sRGB is not 0 in Rec.709. [255×255 versions of both images attached.]
The sRGB standard states that the reference display is assumed to have pure 2.2 gamma (pure power law)
Such a quote does not exist in the standard. What it says is that there is some cathode-ray (CRT) display with pure 2.2 gamma, but no one uses CRT displays anymore, and we know no CRT actually used 2.2 gamma; they were all around 2.35 to 2.4. It was a common mistake at the time, until someone measured the true gamma response.
What zimg is doing is assuming that the sRGB material is graded on a monitor with a compound function transfer curve
This is not an assumption. Modern MacBooks really do use the sRGB EOTF; they are not calibrated for 2.2 gamma, and the curve has a linear spline. They all use P3-D65 primaries with the sRGB transfer curve, which is exactly what Display P3 is.
@ValeZAA Thank you for your reply. I do not have the equipment to test, so I used this shadertoy, which shows that my MacBook is using pure power-law gamma (I used 100% scaling). I also have another question: if there is no such pure power-law gamma in the standard, why do Baselight etc. have it in the Viewing Color Space option? Baselight even warns us that ~2.2 is the sRGB encoding color space and that for scene-referred workflows we should use sRGB Display (=2.2) for display and rendering. Is FilmLight (and the colorists) wrong?
so I used this shadertoy which shows that my Macbook is using pure power law gamma
I know this test; it has been used to show that MacBooks use the sRGB curve since circa 2021. Maybe your MacBook is older.
do Baselight etc. have it in the Viewing Color Space option
Nobody uses Baselight in Hollywood; everyone uses DaVinci Resolve. It has 2.4, 2.2, 2.0, and sRGB for input, output and timeline color spaces, and sRGB there is a separate option from 2.2 gamma.
To conclude here: CRTs were never 2.2 gamma; they are all around 2.35. The GPU was gamma-correcting the data to show 2.4 gamma or sRGB on a 2.35-gamma display. In fact this is how Apple did it: the correction applied to the 709 OETF to get an EOTF-linear image (if the 1.2 end-to-end gamma is assumed) was a pure gamma of 1.2 / 2.35 = 0.51 = 1/1.9608. Apple used it this way until Display P3 devices came into existence. You can see the code for this here: https://github.com/mm2/Little-CMS/pull/69
Nowadays Apple uses more complex stuff. Also, Apple used PNG images tagged as gAMA 1.8 for quite some time, back when they were using gamma correction in the GPU to show content on CRTs.
The sRGB standard describes an OETF and an EOTF and the way to convert linear RGB to XYZ. It makes no sense if, in your example, the EOTF is never used: when you have XYZ data you convert it to RGB and encode it using the OETF, and decoding happens with the EOTF, which in sRGB is just the inverse...
Adobe RGB, for example, uses pure gamma 2.2 for both its OETF and EOTF.
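For concreteness, here is a minimal Python sketch of the curves being argued about (my own illustration using the commonly published IEC 61966-2-1 constants, not anything taken from zimg):

```python
# Sketch of the transfer functions under discussion (commonly published
# IEC 61966-2-1 constants; illustrative only, not taken from zimg).

def srgb_oetf(lin: float) -> float:
    """Linear light -> sRGB code value (piecewise curve with linear spline)."""
    if lin <= 0.0031308:
        return 12.92 * lin
    return 1.055 * lin ** (1 / 2.4) - 0.055

def srgb_eotf(v: float) -> float:
    """sRGB code value -> linear light; exactly the inverse of the OETF above."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_eotf(v: float) -> float:
    """Pure power-law 2.2 display, the 'reference display' reading of the spec."""
    return v ** 2.2

if __name__ == "__main__":
    # The piecewise EOTF really is just the inverse of the OETF ...
    assert all(abs(srgb_eotf(srgb_oetf(x)) - x) < 1e-12 for x in (0.001, 0.01, 0.2, 0.8))
    # ... and it differs most from a pure 2.2 power law near black.
    for v in (0.004, 0.02, 0.1, 0.5, 0.9):
        print(f"V={v:5.3f}  piecewise={srgb_eotf(v):.6f}  pure 2.2={gamma22_eotf(v):.6f}")
```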
Also, you are proposing decoding the data with 2.2 gamma instead of the EOTF. I do NOT understand where, in your opinion, the EOTF is used then. Also, do you even know what EOTF stands for? It literally means electro-optical, which proves displays must use the EOTF. It is just that they kind of did not, until the new MacBooks.
so I used this shadertoy which shows that my Macbook is using pure power law gamma
I know this test; it has been used to show that MacBooks use the sRGB curve since circa 2021. Maybe your MacBook is older.
do Baselight etc. have it in the Viewing Color Space option
Nobody uses Baselight in Hollywood; everyone uses DaVinci Resolve. It has 2.4, 2.2, 2.0, and sRGB for input, output and timeline color spaces, and sRGB there is a separate option from 2.2 gamma.
My MacBook is an M2 Max MBP from 2022. It really does use =2.2 gamma. I used Baselight as an example because it is more for "professional" colorists compared with Resolve, which, in my opinion, does not even have proper, consistent naming for its color spaces/gammas. Resolve using a compound curve for its "sRGB" output gamma is only a sign that there are so many misconceptions in the industry. Also, Baselight is often mentioned in color grading societies and is sometimes even called an industry standard.
I unfortunately do not have the sRGB standard at hand; I would like to see where the standard states that the EOTF is the compound curve. I thought it is like Rec.709/BT.1886, where the EOTF is defined by the reference display gamma. If I remember correctly, sRGB, being a scene-referred system like Rec.709, defines the EOTF as Lc = OOTF(OETF^-1(V)). If the OOTF is identity, the EOTF is just the inverse OETF and there is no need to specify it separately.
I don't know if you have noticed, but the Rec.709 OETF has an approximate gamma of 1.96. The equation you used to derive 1.96 does not actually mean much, since the end-to-end (OOTF) gamma of 1.2 is derived from the reference OETF and the reference display characteristics, not vice versa. macOS AVFoundation uses the Rec.709 camera OETF without the linear part (a pure power law) to linearize the signal.
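To make the ~1.96 figure concrete, here is a small sketch comparing the BT.709 camera OETF with a pure power of 1/1.96 (my own illustration; the constants are the published BT.709 OETF values, and 1.96 is just the commonly quoted equivalent exponent):

```python
# Rec.709 / BT.709 camera OETF vs. an equivalent pure power law of ~1/1.96.
# Illustrative sketch only.

def bt709_oetf(lin: float) -> float:
    """Scene-linear light -> BT.709 code value (with the 4.5x linear toe)."""
    if lin < 0.018:
        return 4.5 * lin
    return 1.099 * lin ** 0.45 - 0.099

def pure_power_oetf(lin: float, gamma: float = 1.96) -> float:
    """Pure power-law encode with exponent 1/gamma (no linear segment)."""
    return lin ** (1.0 / gamma)

if __name__ == "__main__":
    # The two curves agree closely in the mid-tones and highlights and
    # diverge toward the toe, which is where "709 is roughly gamma 1.96" comes from.
    for lin in (0.05, 0.18, 0.5, 0.9):
        print(f"L={lin:4.2f}  bt709={bt709_oetf(lin):.4f}  pow(1/1.96)={pure_power_oetf(lin):.4f}")
```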
@hym3242 I was unable to find the full IEC 61966-2-1 standard for free, only an earlier draft (with offset 0.0):
https://personalpages.manchester.ac.uk/staff/d.h.foster/Tutorial_HSI2RGB/IEC_61966-2-1.pdf
and an even earlier draft with a different offset:
https://ftp.osuosl.org/.1/libpng/documents/proposals/history/sRGB-iec6196621cd1.pdf
I'll ask around for the full text :)
Internet research has not shown a consensus regarding the debate between gamma and piecewise-gamma. However, ZIMG_TRANSFER_BT470_M is available for those desiring a pure-power function with exponent 2.2.
@sekrit-twc Thank you for your reply. BT470M -> BT709 indeed gets the result I expected. However, considering that Apple uses pure power-law gamma according to the result of this shadertoy, and that Baselight specifically tells us the sRGB display gamma is a power-law gamma, I think the consensus is quite clear. At least on Apple devices, to make sure a video looks the same before and after gamma conversion, we should use power-law gamma. I think the standard also has a word on it, but I didn't manage to get a copy. I don't think you will ever find a consensus by a simple internet search, since not everyone is a colorist or familiar with the standards/implementations.
However, considering that Apple uses pure power-law gamma according to the result of this shadertoy, and
Since recently, Apple MacBooks now use sRGB and not pure gamma 2.2000.
No, they are not, from my experiments. In my experiments, it only follows the compound ~2.2 gamma when the monitor preset is set to presets other than Apple XDR Display or Apple Display (which are the presets most people use). This is true both for the sRGB-tagged shadertoy canvas and for an sRGB PNG in Preview.app (but surprisingly not in Quick Look, which follows the piecewise curve). Please stop making baseless and vague claims (since when? in what preset? what model? does the software use ColorSync?) without any proof. What the monitors are calibrated to does not matter, because that is a measurement of the monitor itself; what matters is the true presentation through the color management pipeline when a framebuffer is tagged IEC 61966-2-1. Your argument itself is flawed.
Also, there is a colorist from liftgammagain who confirms my argument.
Such a quote does not exist in the standard.
I think the sRGB OETF/EOTF issue is similar to the BT.709 one, and for that, an inverse pure gamma 2.4 is pretty much universally used nowadays as the OETF, instead of the OETF from the actual specification, which was supposed to lighten dark areas in analog live TV transmissions. I think using an OETF that is not the inverse of the EOTF is just really stupid with the processing options we have today. However, for sRGB there are actually displays out there that use the inverse sRGB OETF as their EOTF, although the majority definitely uses pure gamma 2.2, at least nowadays. It is easy to tell the difference by looking at a natural picture with dark areas when you know which OETF it was encoded with.
Btw, if any change to pure gamma 2.2 should be made, it should definitely happen to both encoding and decoding of sRGB; otherwise a lot of scripts out there would produce unintended, bad-looking results. The downside of that would be that I couldn't fix sRGB images encoded with the sRGB OETF using a zimg implementation anymore...
Such a quote does not exist in the standard.
That is assuming you use a CRT, and CRTs are no longer in use. Otherwise you just use the EOTF. CRTs never used 2.2 gamma; that was not well understood at the time the standard was written. CRTs are actually 2.35, and the first LED displays were 2.4.
Again, EOTF means electro-optical transfer function. Unless you are saying the optical photons are "VIRTUAL" and not ideal values, you must use the EOTF to decode sRGB pixels.
When you encode linear data you use the OETF, which is curved so that black details are not lost at low 8-bit precision. Decoding should be done with the inverse OETF.
Yes, it is a fact that most LED displays on the planet used 2.2000 gamma, so there is a mastering issue. But in practice this is no different from BT.1886: while it says 2.4 gamma, it has a CRT-emulation behaviour that depends on how deep the black is, so it is a perfect 2.4 only on OLED.
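For reference, a sketch of the BT.1886 reference EOTF, which only collapses to a pure 2.4 power when the display black level is exactly zero (my own illustration; the `lw`/`lb` values are just example numbers):

```python
# BT.1886 reference EOTF: only a pure 2.4 power law when display black Lb == 0
# (e.g. OLED). With a real LCD black level the shadows are lifted.
# Illustrative sketch using the published BT.1886 formula.

def bt1886_eotf(v: float, lw: float = 100.0, lb: float = 0.1, gamma: float = 2.4) -> float:
    """Code value V in [0,1] -> luminance in cd/m^2 for a display with
    white luminance lw and black luminance lb."""
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

if __name__ == "__main__":
    for v in (0.0, 0.05, 0.1, 0.5, 1.0):
        oled = bt1886_eotf(v, lb=0.0)   # perfect black: exactly 100 * v**2.4
        lcd = bt1886_eotf(v, lb=0.1)    # 0.1 cd/m^2 black: shadows are lifted
        print(f"V={v:4.2f}  OLED={oled:8.4f}  LCD={lcd:8.4f}")
```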
That is assuming you use a CRT, and CRTs are no longer in use.
It's the only mention of a precisely defined reference display in official sRGB specs.
Otherwise you just use the EOTF.
There is no official sRGB EOTF, only an OETF, just like with BT.709, where zimg likewise uses the inverse of the BT.709 reference display EOTF defined in BT.1886 as the OETF.
I'm not saying this is inherently the right approach, but the industry has definitely moved to targeting the reference displays when using OETFs, mostly because transfer-curve color management is very rarely done, even nowadays, unless PQ has to be tone-mapped. There is also a compression benefit when encoding to gamma 2.2 instead of the sRGB OETF, as pure gamma 2.2 is more perceptually uniform. And pure gamma 2.2 decoded using inverse sRGB definitely looks nicer than the other way around.
There is no official sRGB EOTF, only an OETF
There is always an EOTF. From the standard:
like with BT.709
There is an official EOTF for BT.709, it is specified in BT.1886.
There is also a compression benefit when encoding to Gamma 2.2 instead of sRGB OETF
That is because any pure gamma just erases a lot of data near black.
pure gamma 2.2 is more perceptually uniform
No, it is not, as it loses data near black, WHERE OUR EYES ARE MOST SENSITIVE. PQ is more perceptually uniform; the perceptual quantizer is literally what our eyes do, as Barten's contrast sensitivity formula describes.
Hmm, I did some math, and it seems that below ~0.216% luminance (linear), the gamma 2.2 OETF preserves more information, and above that, up to ~41.33% luminance, the sRGB OETF preserves more information.
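A rough numeric check of those crossover points (my own sketch; "preserves more information" is read here as "has the larger OETF slope, i.e. spends more code values per unit of linear light"):

```python
# Numeric check of the crossover claims: compare the slopes of the sRGB OETF
# and a pure 1/2.2 OETF. Wherever a curve has the larger slope, it spends
# more code values per unit of linear light there. Sketch only.

def srgb_oetf_slope(lin: float) -> float:
    if lin <= 0.0031308:
        return 12.92
    return (1.055 / 2.4) * lin ** (1 / 2.4 - 1)

def gamma22_oetf_slope(lin: float) -> float:
    return (1 / 2.2) * lin ** (1 / 2.2 - 1)

def find_crossover(lo: float, hi: float, steps: int = 200_000) -> float:
    """Scan for the point where the two slopes are equal (coarse linear scan)."""
    prev = srgb_oetf_slope(lo) - gamma22_oetf_slope(lo)
    for i in range(1, steps + 1):
        lin = lo + (hi - lo) * i / steps
        cur = srgb_oetf_slope(lin) - gamma22_oetf_slope(lin)
        if prev * cur <= 0:
            return lin
        prev = cur
    raise ValueError("no crossover in range")

if __name__ == "__main__":
    print(f"low crossover  ~{find_crossover(1e-5, 0.0031308) * 100:.3f}% linear")  # ~0.216%
    print(f"high crossover ~{find_crossover(0.004, 0.9) * 100:.2f}% linear")       # ~41.3%
```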
I think ValeZAA is talking about the crushing of black regions when we encode with ~2.2 and then display with =2.2 gamma. And precisely because of that, we should just use =2.2 gamma when encoding from linear, so everyone will be happy. No matter what the standard says, as long as we all use the same gamma when displaying, the problem would be easy to solve. Unfortunately, that is precisely the problem we are facing: even Apple does not handle the same sRGB image the same way across its built-in software.
Hmm, I did some math, and it seems that below ~0.216% luminance (linear), the gamma 2.2 OETF preserves more information, and above that, up to ~41.33% luminance, the sRGB OETF preserves more information.
It does not if you use 8-bit quantisation. The linear spline of sRGB was created only for 8 bit, and even now most images are 8 bit. I am talking about R'G'B' values near 0, 0, 0, like 1, 1, 1; 2, 2, 2; 3, 3, 3.
But it's the transfer curve that determines where, and how finely, the lowest 8-bit values are resolved. With the sRGB OETF, an 8-bit R'G'B' value of 1, 1, 1 is ~0.03035% linear luminance; with a gamma 2.2 OETF it is an extremely low ~0.000508% linear luminance, and for 100-nit PQ it is even less, ~0.0001354%. The actual compression advantage of the linear part that I see is that fewer bits are wasted on shades that would land below the darkest values the monitor can actually display, or that would become invisible due to bright elements in the image or the surround.
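For anyone who wants to reproduce those percentages, the calculation is tiny (my own sketch; PQ is left out because it needs an absolute-nits assumption that isn't spelled out here):

```python
# Linear light represented by the lowest 8-bit code values under the sRGB
# piecewise curve vs. a pure 2.2 power law. Sketch to reproduce the
# percentages quoted above.

def srgb_eotf(v: float) -> float:
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_eotf(v: float) -> float:
    return v ** 2.2

if __name__ == "__main__":
    for code in (1, 2, 3):
        v = code / 255.0
        print(f"code {code}: sRGB {srgb_eotf(v) * 100:.5f}%  "
              f"gamma 2.2 {gamma22_eotf(v) * 100:.6f}% of linear white")
```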
Gamma 2.2 has the disadvantage that the tone transfer curve effectively "lies flat" along the value axis near black: several shades between zero and the deep shadows are not distinguishable, so some shades are effectively lost for image encoding, and instead of 256 values in 8 bits we have only about 250-252 values when encoding with gamma 2.2. This is also clearly visible when comparing the histograms of the same image encoded with the sRGB gamma and with the simplified gamma 2.2 in the same gamut:
At the bottom left of the histograms of the same image with the two different gammas, you can see that pixels with some of the shadow values are simply missing; they have all merged into zero.
Here is somebody saying that the Apple Pro Display XDR is true sRGB, not 2.2 gamma. That is what I was quoting.
I understand what you mean, but still, we are not here to discuss the advantages and disadvantages, but what the standard says and which curve is the one we should all use. We are not, and should not be, making new standards or new interpretations of old standards. Which one is better is simply out of our scope.
Can your "somebody" please post how their experiment was carried out? As I said, the result can be very different across different settings and software/frameworks. I would like to reproduce their experiment, and maybe we should file a radar with Apple...
There is an EOTF spec in the standard (how you get the photons back); why are you not addressing this issue? Often sRGB is encoded by Photoshop as actual sRGB, the 1D LUT used by Microsoft is not 2.2 gamma but sRGB, and the simplified sRGB that is 2.2 is not even in use. If you use a reference display with 2.2 gamma, then the EOTF is never used at all, so why is it even there?
Note that it is different with BT.709: ALMOST no camera used the BT.709 pure 2.0 gamma with a linear spline to encode.
This problem is similar to #61.
The sRGB standard states that the reference display is assumed to have pure 2.2 gamma (pure power law). But the current implementation of sRGB->bt709 (transfer_characteristics) seems to use the sRGB piecewise curve as the EOTF, then applies BT.1886's EOTF^(-1), which is pure 2.4 gamma. What should be done is to apply Lc=V^2.2 and then V=Lc^(1/2.4) to match the two reference displays. What zimg is doing is assuming that the sRGB material is graded on a monitor with a compound function transfer curve. But judging from this video, most people in the industry assume a monitor to have pure 2.2 gamma when it is said to be sRGB.
When the conversion is done at 8-bit depth, this also causes the inverse problem of crushed shadows: the black level becomes too high and true black can no longer be reached after the conversion.
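To make the proposal concrete, here is a small sketch of the two 8-bit conversion chains (my own illustration of the math, not zimg code; the "current" chain is what this issue claims zimg does, and the "proposed" chain is roughly what the BT470M -> BT709 workaround mentioned above in the thread selects):

```python
# Sketch of the two 8-bit sRGB -> BT.709/BT.1886 conversion chains discussed
# in this issue. Not zimg code; just the math, to show the near-black behaviour.

def srgb_piecewise_eotf(v: float) -> float:
    """Decode with the IEC 61966-2-1 piecewise curve (what the issue says zimg does)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def proposed_chain(code: int) -> int:
    """Proposed: Lc = V**2.2, then V' = Lc**(1/2.4)."""
    lin = (code / 255.0) ** 2.2
    return round(lin ** (1 / 2.4) * 255.0)

def current_chain(code: int) -> int:
    """Reported current behaviour: piecewise sRGB decode, then 1/2.4 encode."""
    lin = srgb_piecewise_eotf(code / 255.0)
    return round(lin ** (1 / 2.4) * 255.0)

if __name__ == "__main__":
    # Near black, the piecewise decode lifts the shadow codes much more strongly.
    for code in (0, 1, 2, 4, 8, 16):
        print(f"in {code:3d}  proposed {proposed_chain(code):3d}  current {current_chain(code):3d}")
```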
I would like to apologize if I turn out to be wrong, in which case please kindly point out my mistake!
Edit: @sekrit-twc does not seem to be active, though, so this issue will act as a reminder for any color-sensitive pipeline using ffmpeg, which includes large video hosting sites. Do your conversions/LUTs locally in Resolve/Baselight etc.; do not let YouTube etc. do it.