w3c / csswg-drafts

CSS Working Group Editor Drafts
https://drafts.csswg.org/

[css-color-4] Conversion precision for hue in LCH model creates false results #5309

Closed snigo closed 4 years ago

snigo commented 4 years ago

One thing I have noticed with sRGB to Lab/LCH conversion is that it produces chaotic and very incorrect hue values in the LCH color model. Converting any shade of gray to Lab/LCH results in a and b components very close to zero, but still not zero. That is absolutely fine for calculating chroma, since the square root of those numbers is still very close to zero, but calculating hue with Math.atan2() gives a very wide range of (false) hue values whenever there is any difference between a and b, which there always is!

Math.atan2(0.0000000005, -0.0000000001) * 180 / Math.PI; // 101.30993247402021
Math.atan2(0.0000000005, 0.0000000001) * 180 / Math.PI; // 78.69006752597979
Math.atan2(0.0000000005, 0.0000000003) * 180 / Math.PI; // 59.03624346792648

Rounding a and b to 3 decimal places solves the problem for any shade of gray, resulting in the correct value of 0, but it creates incorrect results for some highly desaturated colors like hsl(0 1% 1%) or hsl(0 1% 99%). In my opinion that is still better than incorrect results for every shade of gray (including white and black).
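A minimal sketch of that rounding workaround (hypothetical helper, not from any particular library):

    // Round a and b to 3 decimal places before computing the hue:
    function lchHue(a, b) {
        const a3 = Math.round(a * 1000) / 1000;
        const b3 = Math.round(b * 1000) / 1000;
        return Math.atan2(b3, a3) * 180 / Math.PI;
    }

    lchHue(0.0000000001, 0.0000000005); // 0 — grays now get hue 0 instead of 78.69…
    lchHue(0.0014, 0.0006);             // 45 — but both inputs round to 0.001, while
                                        // the unrounded hue would be ≈ 23.2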

snigo commented 4 years ago

In the library I work on I've set the precision such that converting lab(100% 0 0) to XYZ gives exactly the D50 white point ([0.96422, 1, 0.82521]) and converting rgb(100% 100% 100%) to XYZ gives exactly the D65 white point ([0.95047, 1, 1.08883])*, and this alone solved all the conversion inconsistencies I had.

*I've normalized D65 to have the same precision as D50.
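A sanity check along those lines might look like this (hypothetical function names, assuming one's own labToXYZ / rgbToXYZ conversions):

    const D50 = [0.96422, 1, 0.82521];
    const D65 = [0.95047, 1, 1.08883];

    // White must round-trip exactly at the chosen precision:
    console.assert(labToXYZ([100, 0, 0]).every((v, i) => v === D50[i]));
    console.assert(rgbToXYZ([1, 1, 1]).every((v, i) => v === D65[i]));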

svgeesus commented 4 years ago

Yes this is why some libraries represent such poorly-defined hues as NaN.

svgeesus commented 4 years ago

See also https://github.com/w3c/csswg-drafts/issues/4928

svgeesus commented 4 years ago

So this requires three spec changes:

  1. Define that <hue> can include the value NaN. Currently, hue is defined as part of HSL, and LCH points to that, which is problematic because the angles map differently in the two systems. I plan to break out hue into a separate section which both HSL and LCH can point to; the general material goes there, and the per-colorspace specifics stay associated with the colorspace they belong to.

  2. Define how NaN hues get serialized. This will be part of the new Color OM section, moved from CSS OM.

  3. Define how interpolation works if one or both hues are NaN. This would fit well in the existing CSS Color 5 section on hue interpolation.

Myndex commented 3 years ago

@snigo said: One thing I have noticed with sRGB to Lab/LCH conversion is that it produces chaotic and very incorrect hue values in the LCH color model. Converting any shade of gray to Lab/LCH results in a and b components very close to zero, but still not zero. That is absolutely fine for calculating chroma, since the square root of those numbers is still very close to zero, but calculating hue with Math.atan2() gives a very wide range of (false) hue values whenever there is any difference between a and b

Hi Igor @snigo

Here's the solution I'm using in SeeLab:

  // Send either a*,b* of Lab or u*,v* of Luv to create LCh.
  // piDiv (1/π) is defined with the pre-calculated constants below.

  function processLCh(au = 0.0, bv = 0.0) {
      var Cabuv = Math.pow(au * au + bv * bv, 0.5);
      // If Cabuv is less than 0.01, set hue to 360, 180, NaN, or whatever you need.
      // Here it's set to 0 because I wanted to return a number that is also falsy.
      var habuv = (Cabuv < 0.01) ? 0.0 : 180.0 * Math.atan2(bv, au) * piDiv;
      habuv = (habuv < 0.0) ? habuv + 360.0 : habuv;
      return [Cabuv, habuv];
  }
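For example, with the near-neutral a/b values from the first comment (illustrative calls; chroma shown approximately):

    processLCh(-0.0000000001, 0.0000000005); // [~5.1e-10, 0] — C < 0.01, hue clamped
    processLCh(50.0, 50.0);                  // [~70.71, 45]  — chromatic, hue kept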

I'm using the Chroma value to determine if the hue should be clamped, and C < 0.01 is well below the 8-bit quantization level. I don't have to round a, b, or C at all this way, though of course when sending C to a string for display I'll add a .toPrecision(4) or toFixed(), etc. The input params default to 0.0, so the return should never be undefined.

ALSO: I stay in D65 because I am not doing anything related to print, CMYK, ProPhoto, or comparing to D50, etc. I'm only working with RGB image, color, or display spaces that are D65, so that's all that's needed, which helps reduce noise/errors. In addition, I've pre-calculated all the constants and rounded them to 20 places, which had a great effect on reducing the noise for gray sRGB colors; the pre-calcs improve performance too.

   // Lab/Luv constant pre-calcs to 20 places:
const CIEe = 0.0088564516790356308172;      // 216.0 / 24389.0
const CIEk = 903.2962962962962963;          // 24389.0 / 27.0
const CIEkdiv = 0.0011070564598794538521;   // 1.0 / (24389.0 / 27.0)
const CIEke = 8.0;
const CIE116 = 116.0;
const CIE116div = 0.0086206896551724137931; // 1.0 / CIE116
const pi180 = 0.017453292519943295769;      // Math.PI / 180 (pi divided by 180)
const piDiv = 0.31830988618379067154;       // 1/pi to use n*piDiv instead of n/Math.PI
const cubeRoot = 0.33333333333333333333;    // Math.pow(n, cubeRoot)
                                            // Instead of Math.cbrt()

You can see that CIEk * CIEe equals exactly 8.0, as it should: (24389/27) × (216/24389) = 216/27 = 8. But if k were rounded incorrectly or to fewer places, an error would be introduced (especially if CIEke were calculated at runtime instead of being a static constant of 8.0).

I think I could get even lower noise in C if I recalc the sRGB -> XYZ matrix to a higher precision too.

svgeesus commented 3 years ago

Here is the solution we are using in color.js:

    from: {
        lab (Lab) {
            // Convert to polar form
            let [L, a, b] = Lab;
            let hue;
            const ε = 0.0005;

            if (Math.abs(a) < ε && Math.abs(b) < ε) {
                hue = NaN;
            }
            else {
                hue = Math.atan2(b, a) * 180 / Math.PI;
            }

            return [
                L, // L is still L
                Math.sqrt(a ** 2 + b ** 2), // Chroma
                angles.constrain(hue) // Hue, in degrees [0 to 360)
            ];
        }
    },
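So a neutral color such as lab(50% 0 0) comes back with chroma 0 and a NaN hue (assuming angles.constrain passes NaN through unchanged).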

I'm using the Chroma value to determine if the hue should be clamped, and C < 0.01 is well below the 8-bit quantization level.

Hmm, we are doing an epsilon test on a and b, which gives a square area around the neutral axis. Using chroma gives a circular area and would be better. We return NaN for the hue angle of these indeterminate-hue colors. Our epsilon is smaller than yours; ideally we want it below the 12-bit level in the Rec. BT.2020 colorspace, and I should probably verify that it is.
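A sketch of that chroma-based test (illustrative only, not the current color.js code):

    function labToLCH([L, a, b], ε = 0.0005) {
        const C = Math.sqrt(a ** 2 + b ** 2);
        // Testing chroma itself gives a circular neutral zone around the
        // achromatic axis, rather than the square zone from testing a and b.
        const h = (C < ε) ? NaN : Math.atan2(b, a) * 180 / Math.PI;
        return [L, C, h];
    }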

ALSO: I stay in D65 because I am not doing anything related to print, CMYK, ProPhoto, or comparing to D50, etc.

Okay, but that means the Lab values you calculate will differ from published Lab measurements, or from the Lab results returned by a commercial spectroradiometer. I wasn't sure about this either, but some expert guidance, plus a desire to be compatible with ICC workflows, plus my own experiments on the round-trip error produced by a Bradford CAT from D65 to D50 and back to D65, convinced me this was not a significant source of error.
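A rough sketch of that round-trip experiment, assuming the commonly published Bradford matrix and its rounded inverse (values as given by Lindbloom; verify before relying on them):

    const BRADFORD = [
        [ 0.8951,  0.2664, -0.1614],
        [-0.7502,  1.7135,  0.0367],
        [ 0.0389, -0.0685,  1.0296],
    ];
    const BRADFORD_INV = [
        [ 0.9869929, -0.1470543,  0.1599627],
        [ 0.4323053,  0.5183603,  0.0492912],
        [-0.0085287,  0.0400428,  0.9684867],
    ];
    const mul = (M, v) => M.map(r => r[0] * v[0] + r[1] * v[1] + r[2] * v[2]);

    function adapt(XYZ, srcWhite, dstWhite) {
        // Scale the cone-like responses by the ratio of the two white points.
        const s = mul(BRADFORD, srcWhite);
        const d = mul(BRADFORD, dstWhite);
        const c = mul(BRADFORD, XYZ);
        return mul(BRADFORD_INV, c.map((v, i) => v * d[i] / s[i]));
    }

    const D50 = [0.96422, 1, 0.82521];
    const D65 = [0.95047, 1, 1.08883];
    const xyz = [0.2, 0.3, 0.4];
    const back = adapt(adapt(xyz, D65, D50), D50, D65);
    // back differs from xyz only at roughly the 7th decimal place,
    // far below visual significance.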

In addition, I've pre-calculated all the constants and rounded them to 20 places, which had a great effect on reducing the noise for gray sRGB colors; the pre-calcs improve performance too.

Yes, early rounding has often been a source of trouble. That is why the sRGB specification changed the transfer function during standardization: the early testing was done at high precision, while the published proposal was rounded to an insufficient number of significant digits, so the linear and curved portions of the transfer function didn't actually meet! (Not that this had any visible effect below 10 bits per component.)
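For illustration, with the final published sRGB constants the two pieces do meet at the breakpoint, to well below one 10-bit quantization step (~1e-3):

    const linearPart = x => 12.92 * x;
    const curvedPart = x => 1.055 * x ** (1 / 2.4) - 0.055;
    Math.abs(linearPart(0.0031308) - curvedPart(0.0031308)); // well under 1e-6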

Likewise, CIE Publication 15 was revised to give the constants as rational numbers rather than rounded-off floating point values. Color.js also uses those:

    ε: 216/24389,  // 6^3/29^3
    κ: 24389/27,   // 29^3/3^3

and again for Jzazbz:

    b: 1.15,
    g: 0.66,
    n: 2610 / (2 ** 14),
    ninv: (2 ** 14) / 2610,
    c1: 3424 / (2 ** 12),
    c2: 2413 / (2 ** 7),
    c3: 2392 / (2 ** 7),
    p: 1.7 * 2523 / (2 ** 5),
    pinv: (2 ** 5) / (1.7 * 2523),

I think I could get even lower noise in C if I recalc the sRGB -> XYZ matrix to a higher precision too.

Perhaps, although the precision limit is that the defining chromaticities for most colorspaces are only given to 2 or 3 significant figures.

It isn't really noise (in the sense of measurement noise, although that can certainly be a factor, especially for spectroradiometric measurements of dark colors, unless care is taken to specify a longer integration time) but simply numerical instability as a and b tend to zero.

Returning NaN in such cases allows handling to be deferred to later processing. Sometimes it is appropriate to treat it as zero; in other cases (such as perceptually uniform interpolation in LCH) it is better to substitute the hue angle of the other color being interpolated, as the sketch below illustrates.
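A minimal sketch of that substitution rule (illustrative only, not the CSS Color 5 algorithm text):

    function interpolateHue(h1, h2, t) {
        // A NaN hue means "no hue": take the other color's hue; if both
        // are NaN the result stays NaN and the color is achromatic anyway.
        if (Number.isNaN(h1)) h1 = h2;
        if (Number.isNaN(h2)) h2 = h1;
        return h1 + (h2 - h1) * t;
    }

    interpolateHue(NaN, 120, 0.5); // 120 — a gray endpoint takes the other hue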

svgeesus commented 3 years ago

In the library I work on I've set the precision such that converting lab(100% 0 0) to XYZ gives exactly the D50 white point ([0.96422, 1, 0.82521]) and converting rgb(100% 100% 100%) to XYZ gives exactly the D65 white point ([0.95047, 1, 1.08883])*, and this alone solved all the conversion inconsistencies I had.

We have the same in color.js:

    whites: {
        D50: [0.96422, 1.00000, 0.82521],
        D65: [0.95047, 1.00000, 1.08883],
    },

The sample code in the specification is intended to be clear, simple and easy to read. Production code needs more error checking and handling of corner cases.

I should really add the NaN on hue angle though, since the specification now mentions it.