color-js / color.js

Color conversion & manipulation library by the editors of the CSS Color specifications
https://colorjs.io
MIT License

Higher precision for srgb to linear-srgb #596


ntkme commented 1 week ago

Quote from https://en.wikipedia.org/wiki/SRGB:

The values A = 0.055 and Γ = 2.4 were chosen so the curve closely resembled the gamma-2.2 curve. This gives X ≈ 0.0392857, Φ ≈ 12.9232102. These values, rounded to X = 0.03928, Φ = 12.92321 sometimes describe sRGB conversion.

Draft publications by sRGB's creators further rounded Φ = 12.92, resulting in a small discontinuity in the curve. Some authors adopted these incorrect values, in part because the draft paper was freely available and the official IEC standard is behind a paywall. For the standard, the rounded value of Φ was kept and X was recomputed as 0.04045 to make the curve continuous, resulting in a slope discontinuity from 1/12.92 below the intersection to 1/12.70 above.

https://entropymine.com/imageworsener/srgbformula/ - As this linked article says, we might not want to change 12.92, given that it is somewhat fundamental to the published standard (even if it is already inaccurate). However, I do think we could use the points 0.04044823627710784 and 0.003130668442500607 instead of 0.04045 and 0.0031308 to make the conversion slightly more accurate going back and forth.
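
As an illustration (my own sketch, not from the article): the derivative of the gamma segment ((x + 0.055) / 1.055) ** 2.4 is (2.4 / 1.055) * ((x + 0.055) / 1.055) ** 1.4, so both the slope discontinuity and the benefit of the intersection point are easy to verify numerically:

// The two segments of the sRGB → linear transfer function
const linearSegment = x => x / 12.92;
const gammaSegment = x => ((x + 0.055) / 1.055) ** 2.4;
const gammaSlope = x => (2.4 / 1.055) * ((x + 0.055) / 1.055) ** 1.4;

// Slope discontinuity at the standard cutoff X = 0.04045:
console.log(1 / 12.92);           // ≈ 0.0774 (slope below the cutoff)
console.log(gammaSlope(0.04045)); // ≈ 0.0787 ≈ 1/12.70 (slope above it)

// The segments do not quite meet at the rounded cutoff...
console.log(gammaSegment(0.04045) - linearSegment(0.04045));
// ...but they do meet at the proposed intersection point:
console.log(gammaSegment(0.04044823627710784) - linearSegment(0.04044823627710784));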

ntkme commented 1 week ago

Or, if we want to be fully accurate and honor only the original input constants A = 0.055 and Γ = 2.4, ignoring the "standard" rounded Φ = 12.92, we can use Φ = 12.923210180787853 with the cutoff points 0.03928571428571429 and 0.003039934639778432 as high-precision values.

facelessuser commented 1 week ago

As far as the International Electrotechnical Commission's IEC 61966-2-1:1999 is concerned, those are the values for the transfer function: https://cdn.standards.iteh.ai/samples/10795/ae461684569b40bbbb2d9a22b1047f05/IEC-61966-2-1-1999-AMD1-2003.pdf.

Those are also the values most people expect when comparing sRGB behavior everywhere else. While using the described values might be "more accurate", I'm not sure it would translate to any noticeable difference in actual use. But maybe I'm wrong 🤷🏻.

ntkme commented 1 week ago

Agreed that it's probably not enough for a human to notice.

However, it is significant enough for a computer to notice. For example, in Sass we use 11-digit precision when comparing whether colors' values are the same or not. The transformation matrix in dart-sass has 17 digits and the one in color.js has 16 digits, so after a space conversion the result is still accurate enough after rounding to 11 digits in Sass. However, if the conversion only has 4 digits of precision to begin with and we're trying to compare at 11-digit precision, we get lots of inaccuracy.

facelessuser commented 1 week ago

Can you provide a minimal example? I'm not sure I understand the scenario.

ntkme commented 1 week ago

See the test code and results below. When inputs are around the cutoff point and differ by about 1e-16, you can see that the current implementation (v1) generates outputs that differ by more than 1e-10, whereas with the improved v2 (using Φ ≈ 12.92 from the standard) or v3 (using Φ = 12.923210180787853), the output difference around the cutoff point is much smaller.

---
v1.to   difference: 2.3295073188142612e-9
v1.from difference: 2.851730858399737e-8
v1.to   difference: 8.673617379884035e-19
v1.from difference: -1.3183898417423734e-16
v1.to   difference: 7.806255641895632e-18
v1.from difference: -1.3183898417423734e-16
---
v2.to   difference: 5.637851296924623e-18
v2.from difference: -1.27675647831893e-15
v2.to   difference: 8.673617379884035e-19
v2.from difference: -1.1796119636642288e-16
v2.to   difference: 7.806255641895632e-18
v2.from difference: -1.3183898417423734e-16
---
v3.to   difference: 5.637851296924623e-18
v3.from difference: -1.27675647831893e-15
v3.to   difference: 1.3010426069826053e-18
v3.from difference: -1.249000902703301e-16
v3.to   difference: 6.938893903907228e-18
v3.from difference: -1.249000902703301e-16

// v1: the current implementation (cutoffs 0.04045 / 0.0031308)
const v1 = {
  // linear-light → gamma-encoded sRGB
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs > 0.0031308) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }

    return 12.92 * val;
  },
  // gamma-encoded sRGB → linear-light
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs <= 0.04045) {
      return val / 12.92;
    }

    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}

console.log('---')
console.log('v1.to   difference:', v1.to(0.0404500000000001) - v1.to(0.0404500000000000))
console.log('v1.from difference:', v1.from(0.0031308000000000) - v1.from(0.0031308000000001))
console.log('v1.to   difference:', v1.to(0.04044823627710784) - v1.to(0.04044823627710783))
console.log('v1.from difference:', v1.from(0.00313066844250060) - v1.from(0.00313066844250061))
console.log('v1.to   difference:', v1.to(0.0392857142857143) - v1.to(0.0392857142857142))
console.log('v1.from difference:', v1.from(0.00303993463977844) - v1.from(0.00303993463977845))

// v2: Φ = 12.92 kept, cutoffs moved to the true intersection points
const v2 = {
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs > 0.003130668442500607) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }

    return 12.92 * val;
  },
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs <= 0.04044823627710784) {
      return val / 12.92;
    }

    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}

console.log('---')
console.log('v2.to   difference:', v2.to(0.0404500000000001) - v2.to(0.0404500000000000))
console.log('v2.from difference:', v2.from(0.0031308000000000) - v2.from(0.0031308000000001))
console.log('v2.to   difference:', v2.to(0.04044823627710784) - v2.to(0.04044823627710783))
console.log('v2.from difference:', v2.from(0.00313066844250060) - v2.from(0.00313066844250061))
console.log('v2.to   difference:', v2.to(0.0392857142857143) - v2.to(0.0392857142857142))
console.log('v2.from difference:', v2.from(0.00303993463977844) - v2.from(0.00303993463977845))

// v3: unrounded Φ = 12.923210180787853 with the single tangent cutoff
const v3 = {
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs > 0.003039934639778432) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }

    return 12.923210180787853 * val;
  },
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;

    if (abs <= 0.03928571428571429) {
      return val / 12.923210180787853;
    }

    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}

console.log('---')
console.log('v3.to   difference:', v3.to(0.0404500000000001) - v3.to(0.0404500000000000))
console.log('v3.from difference:', v3.from(0.0031308000000000) - v3.from(0.0031308000000001))
console.log('v3.to   difference:', v3.to(0.04044823627710784) - v3.to(0.04044823627710783))
console.log('v3.from difference:', v3.from(0.00313066844250060) - v3.from(0.00313066844250061))
console.log('v3.to   difference:', v3.to(0.0392857142857143) - v3.to(0.0392857142857142))
console.log('v3.from difference:', v3.from(0.00303993463977844) - v3.from(0.00303993463977845))

ntkme commented 1 week ago

The sass-embedded npm package uses color.js internally. Sass considers two colors "fuzzy equal" if the difference between their channel values is less than 1e-11 after rounding, as with the example inputs in the test code, which differ by about 1e-16. However, it is currently possible that after a space conversion Sass suddenly considers the converted colors different, because the channel value difference has grown to more than 1e-10.
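
As a concrete sketch of that failure mode (my own illustration; fuzzyEquals is a simplified stand-in for Sass's comparison, not its actual implementation, and sign handling is omitted):

// Simplified 1e-11 tolerance comparison, as described above
const fuzzyEquals = (a, b) => Math.abs(a - b) < 1e-11;

// gamma-encoded sRGB → linear-light, same logic as v1.to above
const toLinear = val =>
  val <= 0.04045 ? val / 12.92 : ((val + 0.055) / 1.055) ** 2.4;

const a = 0.0404500000000001;
const b = 0.0404500000000000;
console.log(fuzzyEquals(a, b));                     // true  (inputs differ by ~1e-16)
console.log(fuzzyEquals(toLinear(a), toLinear(b))); // false (outputs differ by ~2.3e-9)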

facelessuser commented 1 week ago

You mention that dart-sass uses 11 decimal places, but why? With the current EOTF, in this one very specific area, we are at worst still at 32-bit float precision (7-8 decimal places), which is probably plenty in all practical cases. In most cases the precision will be much higher. I think the CSS spec requires only a minimum of 10 or 12 bits of precision.

From a practical standpoint, I don't think the average user would ever notice a problem. With that said, I don't think the change would cause significant differences in results either, so it wouldn't technically break anything people would notice. The edge case for this discontinuity is pretty narrow.

If it were me, I'd probably prefer to keep to the spec, accepting this as a quirk of the space, but I can see the appeal of trying to "fix" the space. I'll let others comment with their opinions.

ntkme commented 1 week ago

I think the CSS spec requires only a minimum of 10 or 12 bits of precision.

Sass serializes to CSS at 10 digits of precision but internally fuzzy-compares at 11 digits. It is indeed a little bit strange: https://github.com/sass/sass/issues/3953

With the current EOTF, in this one very specific area, we are at worst still at 32-bit float precision (7-8 decimal places).

I noticed it too when porting Sass Color 4 support from Dart/JS to Ruby. Sass has recently released support for Color Level 4; the sass npm package uses a Dart-based color implementation, while the sass-embedded package uses color.js. Some of our color conversion tests only pass at a very low 4 decimal places when comparing the output of the two implementations - we don't even get 7-8 digits of precision, and I'm reviewing places where things might be off.

The edge case for this discontinuity is pretty narrow.

Indeed. I don't think it will have any real user impact as long as we keep Φ ≈ 12.92, but it will fix a few narrow edge cases.

facelessuser commented 1 week ago

I noticed it too when porting Sass Color 4 support from Dart/JS to Ruby. Sass has recently released support for Color Level 4; the sass npm package uses a Dart-based color implementation, while the sass-embedded package uses color.js. Some of our color conversion tests only pass at a very low 4 decimal places when comparing the output of the two implementations - we don't even get 7-8 digits of precision, and I'm reviewing places where things might be off.

I'm not entirely sure what you are testing, though. Are your tests testing what you think they are testing? Are you pushing results past reasonable floating-point error? Does this actually affect round-tripping?

I am skeptical that the current EOTF actually affects round-trip conversions or shows any meaningful difference in the context of normal conversions with normal floating-point error.

Can you demonstrate a real-world practical case that would benefit from this change?

facelessuser commented 1 week ago

A more compelling case is probably the round trip of 0.04045 (which I imagine would be the worst offender 🤔), which gives us 0.040449970408122, while your suggested values give 0.04045000000000001. This has less to do with whether 0.04045 is the most correct value and more to do with the fact that the inverse cutoff is clipped to 0.0031308 rather than the true inverse, which would be 0.0031308049535603713 (calculating a new full-precision 0.04045 from 0.0031308 would have the same effect).

The unaltered values preserve precision to 5 significant digits, assuming 0.040450 is taken as the precise value, but that is clearly not going to support accuracy to 10 decimal places.

I guess the argument would be: if you have to recalculate the inverse for a better round trip anyway, why not recalculate both values to better align the curve at the same time.
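
For reference, a minimal reproduction of that round trip (my sketch, reusing the v1 logic from the test code above, sign handling omitted):

// Current cutoffs:
const to = val => val <= 0.04045 ? val / 12.92 : ((val + 0.055) / 1.055) ** 2.4;
const from = val => val > 0.0031308 ? 1.055 * val ** (1 / 2.4) - 0.055 : 12.92 * val;
console.log(to(0.04045));       // 0.0031308049535603713, the true inverse noted above
console.log(from(to(0.04045))); // 0.040449970408122..., the round trip misses

// Intersection-point cutoffs:
const to2 = val => val <= 0.04044823627710784 ? val / 12.92 : ((val + 0.055) / 1.055) ** 2.4;
const from2 = val => val > 0.003130668442500607 ? 1.055 * val ** (1 / 2.4) - 0.055 : 12.92 * val;
console.log(from2(to2(0.04045))); // 0.04045000000000001, round trips cleanly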

facelessuser commented 1 week ago

I'm actually not sure that 0.0404482362771082 and 0.00313066844250063 are any more "accurate" than something like 0.04045 and 0.0031308049535603713. I think we are guessing at the exact intent of the original numbers and at what is most correct. One solution just calculates the inverse assuming the original value is right; the other adjusts both values so that they intersect. That doesn't mean either is the true intent, just that both solve the precision problem by using the same precision in the forward and reverse directions.

facelessuser commented 1 week ago

Never mind, I ran through the experiment myself, and I see now why 0.0404482362771082 and 0.00313066844250063 were selected.

They essentially graphed the logic above the cutoff and the logic below the cutoff and found the intersection of those two methods, one being a parabola, the other a line.

That is why those two points were chosen; that makes more sense now that I have run through it. The points are more accurate, and maintaining the same precision between them improves the round trip.

ntkme commented 1 week ago

They essentially graphed the logic above the cutoff and the logic below the cutoff and found the intersection of those two methods, one being a parabola, the other a line.

Correct, they are basically better approximations of the roots (the intersections of the two curves, one a parabola and one a line) of the following formula:

((x + 0.055) / 1.055) ** 2.4 = x / 12.92

Note that with Φ = 12.92 there are technically two roots that sit slightly apart from each other. As long as we pick the right root pair from the two formulas, we get a good round trip.

With the unrounded value of Φ = 12.923210180787853, the formula has a single root instead of two:

((x + 0.055) / 1.055) ** 2.4 = x / 12.923210180787853

Note: Wolfram Alpha may still show two roots due to float precision issues, but mathematically there is only one root.
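
A quick numeric check (my own, not part of the original discussion) that with the unrounded Φ the line is tangent to the curve at that single root - both the values and the slopes of the two sides match there, which is why the two roots collapse into one:

const phi = 12.923210180787853;
const root = 0.03928571428571429; // = 0.055 / (2.4 - 1)

// The two sides of the equation agree at the root...
console.log(((root + 0.055) / 1.055) ** 2.4); // ≈ 0.003039934639778...
console.log(root / phi);                      // ≈ 0.003039934639778...

// ...and so do their derivatives (tangency):
console.log((2.4 / 1.055) * ((root + 0.055) / 1.055) ** 1.4); // ≈ 0.07738...
console.log(1 / phi);                                         // ≈ 0.07738...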


First, I think using better approximations for the roots (the intersection points) is almost a no-brainer.

On the other hand, we have the question of using Φ ≈ 12.92 vs. Φ = 12.923210180787853. The number itself is calculated as (((1 + 0.055)**2.4)*((2.4-1)**(2.4-1)))/((0.055**(2.4-1))*(2.4**2.4)). The single root can be calculated as 0.055/(2.4-1) => 0.03928571428571429, and the root of the inverted curve as (0.055/(2.4-1))/((((1 + 0.055)**2.4)*((2.4-1)**(2.4-1)))/((0.055**(2.4-1))*(2.4**2.4))) => 0.003039934639778432.
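
In code (my sketch of the same arithmetic):

// Derive Φ and both cutoff points from the primitive constants A and Γ
const A = 0.055;
const G = 2.4;

const phi = ((1 + A) ** G * (G - 1) ** (G - 1)) / (A ** (G - 1) * G ** G);
console.log(phi); // 12.923210180787853

const srgbCutoff = A / (G - 1);        // 0.03928571428571429
const linearCutoff = srgbCutoff / phi; // 0.003039934639778432
console.log(srgbCutoff, linearCutoff);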

This would be the most precise option, as it follows the original mathematical intent with no rounding at all (other than a tiny loss from float precision). The only concern is that it creates a drift from the IEC standard of about 0.003210180787853 * x for x <= 0.003039934639778432 (going to sRGB) and about 0.0000192263 * x for x <= 0.03928571428571429 (going to linear). In other words, even if we go with the more accurate Φ = 12.923210180787853, the actual drift from the "standard" is expected to be less than 0.00001 in the worst case.
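
A quick sanity check of that worst-case bound (my arithmetic, using the figures above):

// linear → sRGB: outputs differ by (Φ - 12.92) · x, largest at the linear-side cutoff
console.log((12.923210180787853 - 12.92) * 0.003039934639778432); // ≈ 9.8e-6

// sRGB → linear: outputs differ by (1/12.92 - 1/Φ) · x, largest at the sRGB-side cutoff
console.log((1 / 12.92 - 1 / 12.923210180787853) * 0.03928571428571429); // ≈ 7.6e-7

Both stay under the 0.00001 worst case mentioned above.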

My personal take is that the original IEC standard is old enough that people probably did not care that much about precision at the time - as you can see, the drift after rounding is not noticeable to the human eye. However, when converting colors, sometimes even round-tripping multiple times between different color spaces, I think we do want to be as accurate as possible.

facelessuser commented 1 week ago

Yep, I'm convinced the results are better, in the sense of more accurately approaching what I think the true intention was for how the cutoff was supposed to work. I wasn't sure the proposed points were doing what I intuitively thought they should, but after analyzing the low-light round trip and physically plotting it out, I see they are indeed aligned.

I think keeping 12.92 as-is would be the best route if a change were to be made, as it keeps the 0.04045 cutoff pretty much the same (when rounded) and provides a more accurate inverse cutoff that actually round-trips. So, I like 0.0404482362771082 and 0.00313066844250063.

The question is whether Color.js wants to be more accurate, or accurate to the spec as it is defined. I think @svgeesus would be the one to comment on the direction to take here; I know this library is meant to align with the CSS spec. I don't think results would be meaningfully different from what the CSS spec produces as defined now, and it would certainly yield cleaner results, but it would also stray from the official sRGB spec, if that matters.

facelessuser commented 1 week ago

For anyone interested, here are the results reproduced as a sanity check of the claim:

from scipy.optimize import fsolve

def equations(p):
    x, y = p
    return (y - ((x + 0.055) / 1.055) ** 2.4, y - x / 12.92)

# Find first root
print(fsolve(equations, (0.0, 5.0)).tolist())

# Find second root
print(fsolve(equations, (5.0, 0.0)).tolist())
[0.03815479871331798, 0.00295315779514845]
[0.04044823627710784, 0.003130668442500607]

ntkme commented 1 week ago

I think your values of 0.04044823627710784 and 0.003130668442500607 are slightly better than mine. I will update the previous comments.

svgeesus commented 1 week ago

If it were me, I'd probably prefer to keep to the spec, accepting this as a quirk of the space, but I can see the appeal of trying to "fix" the space. I'll let others comment with their opinions.

sRGB has been:

It doesn't seem worthwhile to make a new, almost-exactly-sRGB colorspace that re-fixes it.

svgeesus commented 1 week ago

I don't think results would be meaningfully different from what the CSS spec produces as defined now, and it would certainly yield cleaner results, but it would also stray from the official sRGB spec, if that matters.

I do think that matters, yes.

facelessuser commented 1 week ago

I do think that matters, yes.

Yep, that is certainly my main argument. Currently we follow sRGB, but any change would be only sRGB-adjacent. CSS does not ask anyone to support 32-bit+ accuracy, only 10-bit accuracy for sRGB. Any other desired constraints are self-imposed by the library.