Are you sure? I admit that I've not bought the official spec (or yet had a rummage to see whether my current employers have it), but:
https://www.w3.org/Graphics/Color/sRGB
(which is the original proposal document) is pretty specific that it's talking about an EOTF, and it talks about it in the context of the BT.709 OETF being part of the system. My understanding is that sRGB is effectively doing for computer monitors what BT.1886 was intended to do for televisions - with the latter appearing later.
The section initially cited in the freedesktop.org link notes that the reference display uses a pure gamma function, but that the sRGB transform is an approximation which includes the linear term to allow for invertibility. As far as I'm aware it still defines an EOTF, though. In particular, it incorporates a deliberate OOTF relative to an expected BT.709 input, to allow for different viewing conditions, as BT.1886 does. I'll try to join in on freedesktop.org.
Sorry, I've now (in case something had radically changed) had a chance to look at the official specification, which says:
This sRGB standard essentially defines the second part of this transformation between the reference RGB display space and the display CIEXYZ tristimulus values in a dim viewing environment.
That is, it's an EOTF (electro->optical transfer function), not an OETF (optical->electrical transfer function), which only describes how a camera converts captured light from the scene into an electrical representation. Being designed for computer displays, sRGB doesn't have a "camera" or a concept of a real-world scene.
An internal representation claiming to be BT.709, for example, may be "scene-referred" in that it is defined in reference to what a real-world scene would look like. Televisions were expected to do whatever was appropriate to make that real-world scene look palatable to the user allowing for variations in size, brightness and expected viewing conditions, until BT.1886 defined such a mapping for a reference display.
sRGB is display-referred (like BT.1886), and describes the relationship between the internal representation and a reference monitor. That monitor may have a nominal gamma function with a 2.2 exponent, but the sRGB representation - mostly for reasons of invertibility - incorporates the linear segment while approximating this curve.
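For concreteness, here's a rough sketch of the two curves I mean (my own illustration using the commonly published sRGB constants; the function names are just placeholders, not anything from the spec):

```python
def srgb_piecewise_decode(v):
    """Two-part sRGB transfer function (code value -> linear), including the
    linear segment near black that keeps the curve cleanly invertible."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def pure_gamma_decode(v, gamma=2.2):
    """Nominal reference-display response: a pure power function."""
    return v ** gamma

# The two agree closely over most of the range but diverge near black:
for code in (0.01, 0.05, 0.2, 0.5, 1.0):
    print(code, srgb_piecewise_decode(code), pure_gamma_decode(code))
```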
So while I concur with the freedesktop thread that BT.709 defines only an OETF and not an EOTF, I don't believe any part of sRGB defines an OETF; it defines only an EOTF. Its only reference to an OETF is to mention BT.709 by example, in the context of the total OOTF for the reference system.
@swick - are you able to persuade me otherwise? I'm very much prepared to be wrong, but everything I've read says sRGB defines only an EOTF (or two, if you consider the reference monitor gamma to be a separate EOTF)...
Took a while to come back to this.
I think you managed to convince me that the EOTF is not a pure 2.2 power function. The spec really never talks about an OETF. The pure 2.2 gamma of the reference display and the actual two-part encoding do not have an intentional mismatch. The reference display just has pure gamma because it's a CRT and that's how CRTs behave, and the EOTF is as close to that pure gamma as possible while still being practical to encode. The intentional mismatch the spec is talking about seems to be the one between the BT.709 OETF and the sRGB EOTF.
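To illustrate what I mean by that mismatch, here's a rough sketch using the usual published BT.709 and sRGB constants (my own illustration; the function names are just mine):

```python
def bt709_oetf(scene):
    """BT.709 scene OETF (linear scene light -> code value)."""
    if scene < 0.018:
        return 4.5 * scene
    return 1.099 * scene ** 0.45 - 0.099

def srgb_piecewise_decode(code):
    """Two-part sRGB transfer function (code value -> linear display light)."""
    if code <= 0.04045:
        return code / 12.92
    return ((code + 0.055) / 1.055) ** 2.4

# End-to-end system (the OOTF): BT.709 encode followed by the sRGB decode.
# The result is visibly not the identity, which is the intentional mismatch.
for scene in (0.01, 0.1, 0.5, 1.0):
    print(scene, srgb_piecewise_decode(bt709_oetf(scene)))
```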
Thanks for investigating!
You might want to actually read the specification more closely, or even speak with one of the original authors. Optionally, speak with anyone responsible for display colourimetry or mastering.
It is rather clear:
There is even an entire annex that speaks of BT.709's failure to address the EOTF context, and of how sRGB can be used compatibly with it.
(@sobotka - I assume this means you're still unconvinced.)
Yes. It talks about the display (EOTF), characteristics of the display (EOTF) and reference display (EOTF). At no point (except in reference to what BT.709 defines) does it talk about an OETF - sRGB contains no translation from scene data to an electrical representation, because it's got nothing to do with cameras, only an internal digital representation and the expected reference display. I've only worked with the Samsung team who worked on the TV standards (and cameras) and with the standards groups involved in implementing sRGB and colour management in GPUs (and for a few manufacturers of them), and I'll mention that nobody else has ever asserted to me that sRGB is scene-referred - but colour science is complicated and it's always possible that nobody else was paying attention. I concede that I've not directly spoken to the authors of the sRGB document.
I see the discussion moved on - I've skimmed the Berns/Katoh paper and it does indeed briefly mention an OETF in the context of CRTs - but I'd put reasonable money on that being a typo. I've absolutely written this backwards in the past even knowing the difference (there are plenty of other typos in the KDFS that I'm in the process of fixing) - and it's really not clear to me how a display is ever supposed to translate light into an electrical representation, so a display having an OETF really doesn't seem plausible.
I remain willing to eat my words, and obviously I want the KDFS to be correct. I'll see whether I can find a way to reach out to the sRGB draft authors. It's good to know that Dr Poynton is still active - last time I attempted to query some typos in formulae on his website I didn't get a response. :-) I'll report back if I manage to reach them.
At no point (except in reference to what BT.709 defines) does it talk about an OETF - sRGB contains no translation from scene data to an electrical representation, because it's got nothing to do with cameras
This has nothing to do with the brain wormed nonsense of “scene”. It has to do with integer stability, as referenced in other documents.
so a display having an OETF really doesn't seem plausible.
Displays have EOTFs. sRGB is an OECF / OETF, built atop the notion of implicit management chains.
This has nothing to do with the brain wormed nonsense of “scene”. It has to do with integer stability, as referenced in other documents.
I am very confused. "Opto-electrical transfer function" is a way of translating between optical (scene/light) information and an electrical representation, which is what a camera does. "Electro-optical transfer function" is a way of translating from the electrical representation to the light emitted by the display. That's what those terms mean. I don't see how sRGB could ever be defining an OETF. It does refer to the BT.709 OETF, and that was certainly under consideration when mapping to the output display, but it doesn't replace it.
Displays have EOTFs. sRGB is an OECF / OETF, built atop the notion of implicit management chains.
Figure 3 in https://www.w3.org/Graphics/Color/sRGB.html (also in the sRGB spec), annotated "This sRGB standard essentially defines the second part of this transformation between the reference RGB display space and the display CIEXYZ tristimulus values in a dim viewing environment.", seems pretty clear to me that the sRGB transfer function is intended to apply to the mapping between the digital representation and what the output display does. It's certainly not representative of the OOTF you get by combining it with BT.709's OETF. Like BT.1886, it defines a reference display and associated EOTF intended to be compatible with the OETF of BT.709, applying a suitable OOTF for mapping from the agreed scene-referred BT.709 production representation to the (darker) standard viewing conditions.
From the perspective of common computer graphics, the intent is that if you perform linear-light image processing operations (such as antialiasing filtering), the result needs to have the sRGB inverse EOTF applied in order to match the internal image representation; that representation is then re-linearised by the EOTF applied (implicitly) by the display path. As you say, it's not normal (in 8bpp) to have a linear-light frame buffer, because of the nonlinear perceptual response to light and the visibility of quantisation that follows from it. If you don't get a good match to the linearity of the display output with this transform, you get nonlinearities as shown in 13.1 of the KDFS. The sRGB formulae in the KDFS have been universally used to translate linear light values for texturing and filtering since sRGB came into existence; I find it a bit hard to believe that they're not expected to represent this mapping, or that nobody would have raised the misuse by now.
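As a rough sketch of the workflow I mean (hypothetical values, using the commonly published sRGB constants; this is my own illustration, not code from the KDFS):

```python
def srgb_decode(code):
    """Two-part sRGB function, code value -> linear light."""
    return code / 12.92 if code <= 0.04045 else ((code + 0.055) / 1.055) ** 2.4

def srgb_encode(linear):
    """Inverse of the two-part function, linear light -> code value."""
    return linear * 12.92 if linear <= 0.0031308 else 1.055 * linear ** (1 / 2.4) - 0.055

# Hypothetical antialiasing filter: averaging a black and a white sample.
a, b = 0.0, 1.0
naive   = (a + b) / 2                               # filtering code values directly
linear  = (srgb_decode(a) + srgb_decode(b)) / 2     # filtering in linear light
correct = srgb_encode(linear)                       # re-encode for the frame buffer
print(naive, correct)                               # 0.5 vs ~0.735
```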
If there's a failure in my understanding of these terms and I'm misdocumenting them, I of course want to know that. They're certainly open to confusion, and I don't want to propagate that. But I struggle to find any reference of sRGB describing an OETF anywhere, except possibly in https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/pixels_color.md which presumably triggered this issue. Sebastian seems to have been persuaded that this was just a misconception?
I'll continue to rummage through my contacts and see whether I can find a way to contact the sRGB draft authors.
But I struggle to find any reference of sRGB describing an OETF anywhere, except possibly in https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/pixels_color.md which presumably triggered this issue. Sebastian seems to have been persuaded that this was just a misconception?
I adjusted the document when @sobotka raised the issue and it convinced me. I intend to revert the commit if it is wrong (which I now have good reason to believe).
I'll continue to rummage through my contacts and see whether I can find a way to contact the sRGB draft authors.
That would be amazing. Thanks a lot!
Thanks, @swick. This is a confusing enough topic that I'm always prepared to look stupid about it, but I think my mental model is internally consistent, which beats a lot of films I've watched...
I'm poking my contacts who might know the right people in Microsoft and HP now, since their time zone is becoming more compatible with being conscious.
I am very confused. "Opto-electrical transfer function" is a way of translating between optical (scene/light) information and an electrical representation, which is what a camera does.
There is a long and deep rabbit hole here, but let me be very clear; no camera has ever presented the colourimetry fit to its sensor catches.
There is no “scene” in the Mona Lisa; the picture, as a picture, exists as a closed domain arrangement of colourimetry.
Happy to try and delve into the rabbit hole, but the point above stands, as well as notions of “scene” being a distraction here. The question is about quantised signals, typically expressed in an integer encoding, failing to quantise well in the lower end. There is zero need to bring “scene” into the equation, and it is a red herring.
"This sRGB standard essentially defines the second part of this transformation between the reference RGB display space and the display CIEXYZ tristimulus values in a dim viewing environment.", seems pretty clear to me that the sRGB transfer function is intended to apply to the mapping between the digital representation and what the output display does.
Note the RGB part, which is tied to representations as code values.
From the perspective of common computer graphics, the intent is that if you perform linear-light image processing operations (such as antialiasing filtering)
Case in point, uniform tristimulus is the wrong domain for constructive approaches, as well documented in glyph rendering over the years.
RGB is not light. It’s tristimulus. It is 100% photometry all the way down.
If you don't get a good match to the linearity of the display output with this transform, you get nonlinearities as shown in 13.1 of the KDFS.
I have no idea what the KDFS is, but if the above statement about AA is any indication, I suspect there is room for caution here. Constructive approaches are attempting to build a signal that never existed, as though a photometric observer were to sense the geometry. Hence sampling from uniform tristimulus is the wrong domain.
The whole reason I bring up the discrepancy between the sRGB OETF and the prescribed EOTF is to appreciate that there is a historical gulf here that must be bridged. ICC protocols cannot account for this, given that they effectively negate the display transfer characteristic to get to a uniform-tristimulus no-op from code value to exit stimulus in terms of radiation.
There are merits to this, but we must also accept that the historical approach had implicit management at work, which also accounted (or at least attempted to account) for viewing conditions. Applying a discrepancy between an OETF and an EOTF does more than simply what folks think of as “contrast”. It changes the colourimetry fundamentally. It changes chromaticity purity, as well as chromaticity angles, and luminance output. This is the “happy accident” part of the implicit approach; low overhead, and acceptable results.
Again, for 99% of cases, all of this is bunko nonsense. It is worth, however, noting how the historical facets worked at the mechanical level in order to fully appreciate that we cannot simply gloss over the importance of the discrepancy, even in full ICC chains; there is still a lack of accounting for surround and veiling glare, as well as hardware colourimetry in terms of absolute luminance minimum and maximum. All of these facets play a role in the formulation of colour within our visual systems, and as such, the oldschool implicit mechanic is worth studying more than in passing!
The sRGB formulae in the KDFS have been universally used to translate linear light values for texturing and filtering since sRGB came into existence; I find it a bit hard to believe that they're not expected to represent this mapping or that nobody would have raised the misuse by now.
There is a subtly different set of contexts here.
In cases where texturing is important, the photometric colourimetry being arranged as uniform tristimulus can be beneficial. In this case, an “idealized” inverse of the encoding applies. With that said, a majority of folks are using random textures and frankly, none of it matters. And we won’t get into the nuances of pseudo-albedos anchored in photometry…
The subtle nuance is that a texture is not a picture in the sense that a surface-presented-as-a-picture for audience reading is. It is an incredibly important distinction.
I'll continue to rummage through my contacts and see whether I can find a way to contact the sRGB draft authors.
Mr. Motta might be at a large search company. Mr. Stokes can also be tracked down. Dr. Poynton is well worth reaching out to.
The point I would make here is not to distract or belabour things over what amounts to effectively nonsense, including the whole heap!
The point here is to glean what we can from the historical approach of implicit management, and its critical role in picture appearance equivalency. We can plausibly do better, but a comprehensive grasp of the issues at hand, including fully understanding how discrepancies between an OETF and EOTF shape our picture reading, is critical.
It probably cascades into a much deeper hole with respect to what a picture is, and why cameras never can, or should, represent the colourimetry that is fit to their quantal catches. This is a point worth noting as there is often an assumption that a picture is nothing more than as-measured values. We don’t have even the faintest idea of how pictures work, but we can indeed focus on the appearance facets and learn from the historical implicit processing.
I should point out that, as a matter of protocol, close to 100% of computing usage sees the classic sRGB-prescribed two-part OETF paired with a pure 2.2 power function EOTF, given that the vast majority of displays conform to a pure 2.2 power function EOTF under the default macOS and Windows installations, within an ICC-managed situation.
That is, by default, the entire reference specification outlined is followed to precision in the majority of desktop installations. As an added side note, the colourimetry only changes, rendering the discrepancy moot, in a characterized setting.
I believe everyone here understands the implicit management of appearance equivalency just fine. What we seem to disagree on is whether the sRGB spec defines a two-piece OETF and a 2.2 gamma EOTF, or only a two-piece EOTF.
The argument for the first case is that the spec says the reference display has 2.2 gamma, and that this, together with the two-piece function, forms some implicit appearance adjustment.
The case against it is that the two-piece function is merely a technical trick to make encoding for a 2.2 gamma display possible and the goal is not implicit appearance adjustment. The appearance adjustment the spec talks about is instead about the BT.709 OETF and the two-piece EOTF which approximates 2.2 gamma.
Good summary, @swick. @sobotka - sorry for the delay replying, I was reading through your colour blogs to confirm where there's a disconnect and make sure I was replying accurately. I'm still going, and I'll try to respond soon.
Just for clarity:
I have no idea what the KDFS is
Sorry, the "Khronos Data Format Specification" (the spec this repo belongs to). It strikes me that that acronym may not have been made particularly public, even though it's been called by that name from the beginning. I should fix that...
Any news?
Any news?
Sorry, haven't forgotten, and was still doing background reading, although I'm standing by my interpretation that sRGB was only talking about EOTFs, pending evidence to the contrary; however, I'd like to prove it in a way that satisfies you (or know I'm wrong). I had asked after the original authors; I'll chase my contacts again. I'm swamped, but I'm still trying to devote time to this soon.
Thanks, and no worries, this is not time critical. I was just curious.
I had asked after the original authors
Did any reply? ;)
I had asked after the original authors
Did any reply? ;)
The person I asked who would have a route to the right contacts didn't reply. But I'll poke him again.
Stumbled over this issue again. Did you eventually get a response?
Funny you should mention it - I've run into the relevant contact this week, and he's promised to introduce me to one of the sRGB authors. Hopefully I'll have an answer very soon.
Primary author is Michael Stokes.
In the mean time, I wrote https://gitlab.freedesktop.org/pq/color-and-hdr/-/merge_requests/40 promoting the "display is gamma 2.2" case.
I have also heard from Steam Deck developers that gamma 2.2 is the decoding that makes SDR games look as intended, as opposed to the piecewise function, when re-rendered to a non-sRGB display, e.g. BT.2100/PQ.
However, if all you do with sRGB is to decode textures for game rendering purposes, only to re-encode with the exact inverse function, it doesn't matter too much which function you use as long as decoding and re-encoding cancel out, and the space in between is sufficiently light-linear.
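Something like this, as a sketch (hypothetical helper names, using the usual published constants):

```python
def decode_piecewise(v):
    """Two-part sRGB function, code value -> linear."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def encode_piecewise(l):
    """Exact inverse of the two-part function, linear -> code value."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# As long as decode and re-encode are exact inverses of each other, the choice
# of curve cancels out; only the linearity of the space in between matters.
for decode, encode in (
    (decode_piecewise, encode_piecewise),
    (lambda v: v ** 2.2, lambda l: l ** (1 / 2.2)),   # pure gamma 2.2 pair
):
    v = 0.25
    assert abs(encode(decode(v)) - v) < 1e-9
```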
FWIW, I've managed to make contact with Ricardo Motta, one of the original sRGB authors. He's on holiday, but promises me an accurate response on his return. Hopefully this will move forward soon. And I'll try to get myself back up to speed.
I didn't know the Steam Deck (at least, the LCD one) could make anything look as intended. :-)
Steam Deck with external monitors, as I mentioned BT.2100/PQ. But they have done lots of work to make the internal display (old and new) to look as good as they can as well.
Yes, sorry for the snark about the old LCD, cheap shot and I don't mean to disparage the developers. I'll get myself back up to speed on the transfer functions.
He's on holiday
He was at CIC.
Sorry for being annoying here, but any news?
Sorry, been blocked. I'll progress asap.
The definition of the sRGB EOTF given in this document is the sRGB inverse OETF. The sRGB EOTF, however, is a pure 2.2 power function.
Related discussion: https://gitlab.freedesktop.org/pq/color-and-hdr/-/issues/12