Closed: @LeaVerou closed this issue 1 year ago.
To take one example: in The Science of Color & Design, James O'Leary discusses their hybrid color model HCT, which uses the L* axis from CIE Lab (Tone) and the Hue and Chroma axes from CAM16 (Hue, Colorfulness). Contrast is then the difference in Tone:
The HCT color system makes meeting accessibility standards much easier. Instead of using the unintuitive measure of a contrast ratio, the system converts those same requirements to a simple difference in tone, HCT’s measure of lightness. Contrast is guaranteed simply by picking colors whose tone values are far enough apart—no complex calculations required.
For example, to meet WCAG contrast requirements, smaller elements (less than ¼” or 40 dp) require a tone difference of 50 with their background, larger elements require a tone difference of 40. This principle works consistently for any pair of colors.
Note that the minimum contrasts for small and large text (50 and 40) are different from the thresholds for WCAG 2.1 and for APCA; thresholds are algorithm-specific.
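The quoted rule is simple enough to encode directly. A minimal sketch of the stated thresholds (the tone values themselves are assumed to come from an HCT implementation elsewhere; `meetsHctContrast` is a hypothetical helper name, not part of any library):

```javascript
// Encodes the quoted HCT guidance: small elements (< 1/4" or 40 dp) need a
// tone difference of 50 with their background; larger elements need 40.
// HCT tone is CIELAB L*, so both values range 0..100.
function meetsHctContrast(tone1, tone2, { smallElement = true } = {}) {
  const needed = smallElement ? 50 : 40;
  return Math.abs(tone1 - tone2) >= needed;
}

console.log(meetsHctContrast(100, 40)); // true: delta of 60 >= 50
console.log(meetsHctContrast(80, 45, { smallElement: false })); // false: 35 < 40
```

This is only the threshold check from the quote; it deliberately says nothing about whether tone difference is a good contrast predictor, which is what the rest of the thread debates.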
color-contrast(white vs yellow, blue to wcag3(AA) hct(40) wcag2(AA))
is starting to be a mouthful, but could be mitigated with custom properties and supports queries.
@supports (color: color-contrast(#F00 vs #00F, #0F0 to wcag3(AA))) {
  :root {
    --target-ratio: wcag3(AA);
  }
}

.custom-label {
  background: var(--some-bg);
  color: color-contrast(var(--some-bg) vs #111, #eee to var(--target-ratio, wcag2(AA)));
}
E.g. we know that WCAG 2.1 contrast is severely broken,
I think that is overblown, the article says (several times) "if the ‘APCA Lightness Contrast’ is more accurate...". If it isn't, the conclusions are not applicable.
When I've run usability testing with people with low vision, there has been good correlation between colours they struggled with, and WCAG 2 fails.
I've been following the work (as a non-colour expert) and I think APCA is probably a better formula, and we should continue to work incorporating a better formula into WCAG 3.
I'm just requesting that people in W3C don't use language like "severely broken", when the overall impact of the current guideline is still a net positive.
Also from that article:
the range of possible background colours for black text reduces by approximately 47% (for these particular thresholds).
the range of possible background colours for white text increases by approximately 63% (for these particular thresholds).
47% false positives and 63% false negatives with a sample size of 8000 tested color pairs does not fill one with confidence. "Broken" seems applicable.
"if the ‘APCA Lightness Contrast’ is more accurate..."
In the cited article, on a calibrated screen and with my normal color vision (slight age-related macular yellowing) the APCA was more accurate in all cases.
I agree that more testing is needed.
Hi Lea @LeaVerou
.... as well as to satisfy legal constraints....WCAG 2.1 contrast ... is legally mandated that websites pass it.
Legally mandated is a strong term for the narrow areas where there is actual codification into law.
In the USA, the ADA does not mandate WCAG 2 contrast (indeed, the native ADA signage regulations were gutted of any specific contrast guidance regarding architectural signage some time ago). For government sites and government procurement, the 508 rules do specify WCAG 2 contrast, but with two big exception clauses:
1. Commercially available: if something is needed but no commercially available solution is WCAG 2 compliant, it does not have to comply.
2. Alternate method: an alternate method can be used so long as it provides equivalent or better accessibility.
As for case law, the 11th circuit vacated Winn-Dixie in February, so that is moot.
Above is the federal level. At the state level there are mainly New York and California. I don't have LexisNexis access at the moment, but from what I've seen, no cases relating to contrast have gone to trial and been won on the merits. Most are out-of-court settlements, and I'd guess many of those were relying on Winn-Dixie.
For other nations it's a grab bag, but in nearly all cases the specification of WCAG 2 is limited to governmentally controlled entities or sites. An exception is Finland. In Australia it extends to non-governmental sites, but last I checked it was level A only, so contrast is not included. In Canada, there is some case law, but a number of exclusions.
....we may want color-contrast() to find us a color pair that both satisfies WCAG 2.1, as well as the new improved algorithm.
I'd like to introduce you to Bridge-PCA. It is fully backwards compatible to WCAG 2 but using APCA technology, it fixes the problem of false passes. What is lost is the greater design flexibility of the full APCA guidelines. Bridge-PCA was created specifically to answer the question of "meeting legal obligations by the absolute letter of WCAG 2, regardless of actual veracity".
I do not suggest Bridge PCA as a permanent solution—it is specifically a stop-gap, stepping-stone to address various concerns. It does a much better job calculating for dark mode for instance, and also has enhanced conformance levels. The npm package is:
npm i bridge-pca
And the demo tool is https://www.myndex.com/BPCA/
.... HCT which uses the L axis from CIE Lab (Tone) .... Note that the minimum contrasts for small and large text (50 and 40) are different from the thresholds for WCAG 2.1 and for APCA; thresholds are algorithm-specific.
I've been watching James' developments with interest. I'm a little surprised at the use of L* instead of J.
∆L has an interesting attribute in that it sort of lines up with WCAG 2 — the implication is that contrast using a plain L* suffers the same issues as WCAG 2.
Digging up some of the early comparison tables, here's one with ∆L*:
A simple L* difference, when the lightest color is white, lines up with a little less than 40 = WCAG 3:1, 50 = 4.5:1, and 62 = 7:1... and still, as colors get darker, contrast is over-reported. I've also tried this with multiple offsets, but that is still not polarity sensitive.
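The lineup of numbers described here can be checked directly: invert the WCAG 2.x ratio against a white background to a luminance, convert to CIE L*, and take the difference from white's L* of 100. A quick sketch using the standard CIE constants (white background assumed; `deltaLstarVsWhite` is an illustrative name):

```javascript
// CIE Y (0..1) to L* (0..100), standard piecewise form.
const yToLstar = y =>
  y > 216 / 24389 ? 116 * Math.cbrt(y) - 16 : (24389 / 27) * y;

// Delta L* from white implied by a given WCAG 2.x contrast ratio:
// solve (1 + 0.05) / (Y + 0.05) = ratio for Y, then compare L* values.
function deltaLstarVsWhite(ratio) {
  const yDark = 1.05 / ratio - 0.05; // white has Y = 1
  return 100 - yToLstar(yDark);
}

console.log(deltaLstarVsWhite(3));   // ≈ 38.3, the "a little less than 40"
console.log(deltaLstarVsWhite(4.5)); // ≈ 50.1
console.log(deltaLstarVsWhite(7));   // ≈ 62.2
```

Note these are the best-case values against white; as the thread points out, the correspondence drifts as both colors get darker.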
L* is based on Munsell value, so it is perceptually based.... on diffuse reflected light using large color patches, when the observer is in the defined ambient/adaptation conditions.
Viewing text on a self-illuminated monitor is a different matter, and a different perception. "Middle contrast" for text and other high-spatial-frequency stimuli isn't at 18%; it's up between 34% and 42% (ish).
As such, ∆L* without some offsets and massaging is not much different from WCAG 2, as this chart shows:
I agree that more testing is needed.
Me too. And I realize I need to start publishing sooner rather than later.
I've been watching James' developments with interest. I'm a little surprised at the use of L* instead of J.
Yes. Going to all the trouble of taking viewing conditions into account, to calculate the hue and chroma from CIECAM16, while ignoring them to calculate tone (L*), is odd, and not well explained in the article.
What is SmrsModWbr? I assume it is a modified Weber, got a reference?
What is SmrsModWbr? I assume it is a modified Weber, got a reference?
"Somers Modified Weber" is a further offset and scaling following an idea from Peli/Hwang's Modified Weber; as I indicated, it came from a series of evaluations of various contrast maths in 2019. I abandoned it, as it does not track the full range very well, and reverse polarity is also sketchy.
The thing with Weber and Michelson contrasts is that they track at very low, threshold contrasts, but they do not predict what happens at supra-threshold readability levels, and the difference is significant.
Maureen Stone (PARC, NIST) and Larry Arend (NASA) had written about using ∆L* for luminance contrast, and in some experiments adding in scaling and offsets; that avenue began to indicate the shape of the perception. This led to greater consideration of CAMs and perceptual models, notably CAM02, R-Lab, Barten's model, and Hunt's model. Because viewing a self-illuminated display can be reduced to luminance (the parameter essential for readability), a subset of CAM input conditions applies, permitting a reasonable simplification to determine luminance- and stimulus-size-based readability contrast via perceptual lightness/darkness difference.
Hi @svgeesus
And to add: I covered the Peli/Hwang modified Weber in thread #695 back in 2019. It is based on the essential idea behind the WCAG 2 contrast math, but makes the "flare" component asymmetrical. My iteration keeps the asymmetry, but changes the amplitude and incorporates additional scalings. But as I indicated, it was a dead end: linear scalings or offsets don't accurately model supra-threshold perception.
Stevens and others pointed out the inaccurate nature of the 180-ish year old Weber, and Stevens indicated the different perception curve shapes varied based on spatial frequency related issues.
Michelson is sensitive to spatial frequency, but is not uniform in terms of position relative to adaptation.
TL;DR: The source for the delta L measure is contrast ratio. They are equivalent. And APCA can be measured that way too
The conversations around contrast tend to assume too much and move too quickly; something very simple and tremendously helpful has been missed:
The intellectual hole there is the rule of thumb we give designers: a contrast ratio of 3.0:1 is an L* delta of 40... but the actual maximum is 38.3, and it's as low as ~31. This is the effect Andrew mentions in the message that starts with "I've been watching James' developments with interest."
I'd prefer to use J or something more advanced, but CAM16 J is dependent on more than luminance, and I haven't seen anything remotely convincing that says we can count on non-luminance contrast
(I know Andrew is thinking about / working towards a measure that includes hue/chroma in the contrast measure, but in a pure a11y context I'm not sure it's relevant: you'd need to know whether a user had a CVD, which one, and how severe it was to make it relevant for a11y. And that's before we even start talking about how perceptually weak chroma is compared to luminance.)
Here's a Google Sheet for visualizing this. You should be able to edit contrast ratio/SAPC values and see the graphs updated https://docs.google.com/spreadsheets/d/1G6rZInfua3Y8125avr2_OczDfZ7eWdqhFywRpvbdS1U/edit?usp=sharing
In the discussion re: charged language about WCAG 2.1 being broken, there's a category error occurring. It's far, far from being severely broken.
Contrast measures are for a11y, guaranteeing legibility for the population.
Both the article and Chris mention which is more legible for them, while noting they have normal color vision.
In that case, the exercise is "which has more contrast for viewers with full vision" This is very different from the goal of an a11y standard and contrast measurement: covering the population.
To do that, you need a story for how you're handling the ~10% of users with skews in perception of hue and chroma (some can't see it at all!) who aren't going to see as much difference as you are. Additionally, this falls far short of the standard scientific approach I've seen for measuring this: reading speed.
This, in addition to the last leg of "severely broken" being "one has a larger gamut of dark colors that pass, the other has a larger gamut of light colors passing"[1], is worrisome to me. There seems to be a significant gap here: these things are simple to understand, but hard to talk about.
[1] This is a weak argument because it is a relative judgement, and frankly, the fact that APCA neither explicitly models flare, nor shows in its delta L behavior that white can't get lighter but black can in the presence of flare, makes APCA the one with worrisome behavior if I had to pick one. Even though I love it and can't wait for it to be a standard!
Hi @jpohhhh
This material is so much easier to discuss live with examples, instead of text, but here's a bunch ahead of the call.
First, I do want to mention, as kindly as I possibly can, that your take on APCA does not reflect the reality or underlying theory. So I want to ask: have you read any of the documentation or white papers? My concern is that I must not have explained things correctly, which is apparently my Achilles' heel! The canonical documentation is at the main repo, and I'll list links in order of preferred reading later in this post.
James, I am wondering where you read or came to the following opinion:
...and frankly, the fact that APCA neither explicitly models flare, nor shows in its delta L behavior that white can't get lighter but black can in the presence of flare...
I am very concerned that such a line of thought is out there—did you read this somewhere? What prompted this?
APCA as documented for WCAG 3 draft
Which draft? I looked at the spreadsheet, but the functions are hidden so I can't examine the math or method... It does not look right. See the readme at the main repo, or at the apca-w3 npm page. Do not use anything labeled SAPC.
Contrast measures are for a11y, guaranteeing legibility for the population.
Contrast is important for 100% of sighted users. Human vision is a spectrum as wide as the human experience, and our vision changes over our lives: we're essentially blind at birth, and it takes 20 years to develop peak contrast sensitivity... and then we hit our 40s and presbyopia sets in. At 60, it's all downhill from there!
how you're handling the ~10% of users with skews in perception of hue and chroma (some can't see it entirely!)
The short answer is that the APCA guidelines are directly following the long established, peer reviewed scientific consensus of modern readability research, especially for low vision, and particularly Dr. Lovie-Kitchin, Bailey, Whittaker, et alia, and also Legge, Chung, etc.....
Okay, we need to define a couple things:
1) Readability Contrast—Prime Focus of APCA
2) Discernibility Contrast—A Different Beast, processed differently in the brain.
3) The Term "Contrast":
This last point is the most critical to understand: not only are perceptual lightness estimation power curves curved, but the perception of contrast resulting from the distance between two different lightnesses is ALSO curved. By that I mean the contrast change from threshold to supra-threshold is also very non-linear relative to the distance (difference). And the shape of THAT curve is ALSO dependent on spatial frequency.
Take a look at this image:
The two yellow dots are EXACTLY the same. As far as XYZ or LAB or the sRGB values being sent to the monitor are concerned, both yellow dots are identical.
And yet the two look distinctly different.
The photometric difference between two light or dark "things" does not define contrast perception. It is not contrast.
And neither the WCAG 2 ratio nor ∆L* is uniform to perception. Historically this has not been a problem BECAUSE, in physical PRINT, it is almost always black ink on white paper. And even when not, there was a designer there to lock it in place.
The WEB is dynamic content, not locked into place as printed words on paper are, and today there is a desire for automatic properties in CSS and automation for things like auto dark mode or auto high contrast.
AUTOMATION of colors is only realistically possible if you have perceptually uniform methods.
While L* may be more or less uniform to the perception of low-spatial-frequency diffuse surfaces under ideal illumination conditions, I can tell you that L* does not predict the lightness perception of high-spatial-frequency elements (text) on a self-illuminated monitor.
I'll fill you in on this this afternoon, but in short:
Now a few things to set the record straight, particularly for anyone reading at home....
There is this misunderstanding out on the web, and I am not sure of the source, but I see it a lot in the context of accessibility. I believe it may be in conjunction with the WCAG 2.0 understanding documents. Here are the facts.
It is peer reviewed scientific consensus that individuals with color vision deficiency have standard or better contrast sensitivity and visual function.
It is peer reviewed scientific consensus that achromatic luminance contrast is what is most important for readability, and also for fine details.[6][7]
I've been working hard to clear up these misconceptions, so I've been trying to organize the documentation in a logical manner. The links listed in this section are placed in an order that starts with the plain-language overview before getting into the minutiae.
These are all a bit rough or in draft form, and are much deeper dives into the underlying theories.
That's it for this post, thank you for reading.
—Andy
1. Spatial visual function in anomalous trichromats: Is less more?
2. Eye-Tracker Analysis of the Contrast Sensitivity of Anomalous and Normal Trichromats: A Loglinear Examination with Landolt-C Figures
3. Contrast sensitivity of patients with congenital color vision deficiency
4. Evaluation of contrast sensitivity and color vision in lead and zinc mine workers
5. Effects of Contrast Sensitivity on Colour Vision Testing
6. Effects of luminance contrast and character size on reading speed
7. Luminance and chromatic contrast effects on reading and object recognition in low vision
8. What's Red & Black & Also Not Read?
9. Chromostereopsis
Footnotes
(a) Nevertheless, I am removing the non-standard linearization, using the standard(s) per CSS 4/5, and handling protan compensation with a separate protan color module shortly...
Any observations I made are off this code; I believe we confirmed this is the latest and greatest: https://github.com/Myndex/apca-w3/blob/master/src/apca-w3.js
1) Let's drop anything I said about flare: that was a side point to a side point, namely that it doesn't make sense to look at two asymmetrical gamuts and say one is "right". I have changed my mind and believe APCA intends to model flare, as you describe.
2) ...to the main point, that it doesn't make sense to describe WCAG 2.0 as severely broken: that's very charged language, and from your perspective, I agree. Having that perspective is what motivates one to work on problems like this.
However, once other people, in a more formal setting, justify that perspective via a couple of people with standard color vision noticing that the magnitude of luminance difference is different from the magnitude of color difference... that's... not good. At all.
3) The L* stuff combats a claim that isn't being made. Neither I nor anyone else claims that the delta L required for a constant contrast value with a given Lx is itself a constant value. I think you're seeing L, or delta L, and assuming I'm proposing yet another contrast algorithm that uses delta L. No.
My claim:
I'm very surprised this wouldn't be embraced, given how beneficial it is for designers who must work with these algorithms. With that, it becomes trivial to understand what range of colors contrasts with a given color, as well as trivial to alter a color to meet contrast via Lab*/LCH/HCT.
4) I know about red/black, WCAG 2.x has called that out explicitly.
5) SAPC/APCA simply does not have a dependency on hue or chroma: it is a function of two luminances. I imagine you chose values for constants in the formula that accommodate observations about hue, but it obscures things to say there is a hue/chroma dependency. If there were, that would raise a whole bunch more questions along the lines of "Why does someone with standard color vision think their vision describes contrast, much less contrast that is accessible?"
Hi @jpohhhh
- ..._to the main point that it doesn't make sense to describe WCAG 2.0 as severely broken, that's very charged language, and from your perspective, I agree. having that perspective is what motivates one to work on problems like this._
Just to point out, this is not new (I started this project with issue #695 circa April 2019), and I am not the only one; this issue has been widely criticized, including back in 2007 when, for instance, objections from IBM were ignored. I lay out the basis of the problems in the 44,000-word thread #695 from 2019.
However, once other people, in a more formal setting, justify that perspective via a couple of people with standard color vision noticing that the magnitude of luminance difference is different from the magnitude of color difference... that's... not good. At all.
First, this is not true on the face of it. I do not have standard vision. I WAS legally blind due to severe early-onset cataracts, and now, six surgeries later, I have low vision. Yay.
But the conflation of visual function and color insensitivity is a spurious one. For readability, only achromatic luminance contrast is critical. Color is useful for discrimination of objects, but not for reading. These are two completely separate visual functions.
- I know about red/black, WCAG 2.x has called that out explicitly.
It is only mentioned in the "understanding document" but is NOT considered in the algorithm. In APCA, the algorithm specifically derates red and fails it as part of the math. And the protan compensator does so even more strongly.
The contrast algorithms we've seen, whether WCAG 2.1 or APCA, take two Ys as input
Not exactly. WCAG 2.1 takes two luminances (CIE Y) as input. APCA uses a non-standard transfer function for linearizing sRGB, and thus its Ys term is not the same as CIE Y and cannot be computed from it; it requires the individual non-linear color component values as starting values. I reported that here, and the conclusion was that this is intentional, not an approximation error.
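The divergence is easy to see per channel. This sketch compares the standard piecewise sRGB transfer function (per IEC sRGB / CSS Color 4) with a simple 2.4 power curve of the kind APCA's docs describe; the exact apca-w3 constants are not reproduced here, so treat the power curve as an assumed stand-in, not a verbatim copy:

```javascript
// Standard piecewise sRGB-to-linear for one 8-bit channel value.
const piecewise = c8 => {
  const v = c8 / 255;
  return v <= 0.04045 ? v / 12.92 : ((v + 0.055) / 1.055) ** 2.4;
};

// Simple power-curve linearization (assumed APCA-style exponent of 2.4).
const simplePow = c8 => (c8 / 255) ** 2.4;

// Near black the two curves diverge by several times, which is why a Ys
// built from one cannot be recovered from a Y built from the other.
console.log(piecewise(10)); // ≈ 0.00304
console.log(simplePow(10)); // ≈ 0.00042

// For bright values they nearly agree.
console.log(piecewise(200), simplePow(200)); // ≈ 0.578 vs ≈ 0.558
```

The practical consequence matches the comment above: any code feeding APCA needs the individual non-linear channel values, not a precomputed CIE Y.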
Y and L* are the same physical quantity.
No.
They are pure functions of each other. They can be converted to and from each other with no other inputs.
Yes.
Thus, given an L* and a contrast measure, we can find the L* that contrasts
Yes. We convert L* to Y, compute the other Y, and then, if we want, convert that back to L*.
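That round trip can be sketched with the standard CIE conversions (the WCAG 2.x ratio definition with its 0.05 flare term assumed; `contrastingLstar` is a hypothetical helper name for illustration):

```javascript
// Standard CIE piecewise conversions between Y (0..1) and L* (0..100).
const yToLstar = y =>
  y > 216 / 24389 ? 116 * Math.cbrt(y) - 16 : (24389 / 27) * y;
const lstarToY = L =>
  L > 8 ? ((L + 16) / 116) ** 3 : L * (27 / 24389);

// Darker L* that contrasts with a lighter background L* at WCAG ratio R:
// L* -> Y, solve (Ybg + 0.05) / (Y + 0.05) = R, back to L*.
function contrastingLstar(bgLstar, R) {
  const yDark = (lstarToY(bgLstar) + 0.05) / R - 0.05;
  return yToLstar(Math.max(0, yDark));
}

console.log(contrastingLstar(100, 3)); // ≈ 61.65, i.e. a delta L* of ≈ 38.3
```

As Chris notes next, this merely replicates the WCAG 2.1 algorithm in L* clothing; the delta it yields is not constant across backgrounds.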
Given that, i.e. the L* and the L* needed for contrast, we can calculate the delta L* required for contrast
For that color, yes, if we want to replicate the WCAG 2.1 algorithm. Doing so seems pointless, because the problem with WCAG 2.1 is that it uses a non-perceptually-uniform measure (CIE Luminance) instead of a perceptually uniform measure (CIE Lightness).
And just to be clear:
I am going to change the input section so that the linearization transform matches the "standard" as defined in CSS 4/5. However, the lightness predictions will still not be a standard CIE 1931 Y, as discussed below.
Each color input to APCA will be the separate, normalized, linear R, G, and B values, plus a flag indicating the colorspace.
A key reason for this is the need to compensate for protan (red-insensitive) vision, and possibly for (potential, still under review) halation/glare compensation, as these features work by adjusting/offsetting the RGB-to-Y coefficients in a minimally invasive way; and also for certain automated color-contrast functions, which need to identify hue and chroma.
The fact that there was no perceptually uniform contrast metric prior to APCA simply reflects that it was less important when a designer was ultimately making the color choices. BUT:
The overwhelming reason that perceptually uniform contrast is needed today is the need for AUTOMATED color and contrast adjustments. It is not possible to make "good" color or contrast adjustments without the eye of a designer, unless there is a perceptually uniform model with which to do so.
In both cases, these can be linearized RGB values, and in all cases the color-space must be known.
I have been looking at Judd/Vos as a potentially more appropriate path to emulating display response Y (something Sony has been working with as a means to correct metameric issues with narrow band primaries).
However, recent research relating to CIE Technical Committee 1-98, "A Roadmap Toward Basing CIE Colorimetry on Cone Fundamentals," indicates a potential shift in colorimetry, with likely important implications for display technology.
For more background on this, see R.W.Pridmore. "A new transformation of cone responses to opponent color responses"
So I've been slow to make changes until I have a chance to review some of this further.
At present, APCA is being demonstrated with SDR sRGB, as this is the web default. The coming multi-colorspace web content landscape means the above issues will only increase in importance.
Some questions we need to be asking are:
These questions lead to the following key question:
All alpha blending or compositing must be done prior to sending to the APCA inputs, as in most cases the alpha blend happens in a gamma-encoded space.
To summarize: there are at least four contrast related factors that involve the independent RGB channels, and/or coefficients used to transform RGB into a photometric luminous intensity, as applied to the purpose of improving readability of text on self-illuminated monitors.
And
In preparation for discussing the various contrast algorithms, the color.js documentation on contrast may be helpful.
Note: Weber and Michelson are broken in color.js currently, investigating.
We resolved to add <contrast-algo>+ to contrast-color(), where <contrast-algo> represents a contrast function.
Note: Weber and Michelson are broken in color.js currently, investigating.
I fixed that bug, which was a stupid typo of = for === /facepalm
Note: Weber and Michelson are broken in color.js currently, investigating.
I fixed that bug, which was a stupid typo of = for === /facepalm
They still seem broken here, unless they really are that bad.
By broken I mean that they were returning +Inf as the contrast ratio, regardless of inputs. A test that was intended to trap division by zero actually assigned zero to the denominator :)
Yes, they are both not very good, especially for light text on dark backgrounds.
I'm curious to know: are there going to be specific formulas supported by color-contrast()? Will it only support WCAG 2.x relative luminance and APCA, or are there plans to support more (and if so, which ones and why)? This is based on some questions and concerns I mentioned here.
Hi @LeaVerou and @svgeesus
...unless they really are that bad.
In the context of the black/white page, both the unmodified Weber and Michelson have a polarity-sensitivity issue, as neither is perceptually uniform. While both are useful in research relating to the JND, neither is useful for practical design guidance at supra-threshold contrasts.
I evaluated these and all other available contrast models in 2019, along with many variants (some of which I mention later in this post).
As for the black and white flip page:
Weber: (lighter - darker) / darker
Michelson: (lighter - darker) / (lighter + darker)
So if white is 1 and black is 0, we can see why both of these fail to define a useful "flip point".
Weber: (1.0 - color) / color
Michelson: (1.0 - color) / (1.0 + color)
Weber: (color - 0) / 0
Michelson: (color - 0) / (color + 0)
So, as we can see, for black, Weber produces infinity and Michelson produces 1; in both cases, white vs. any color will never score higher than black vs. that color.
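A tiny sketch makes the failure mode concrete: for any background luminance, black text scores at least as high as white text under both formulas, so a max-contrast chooser never flips to white (plain Weber and Michelson per the definitions above; Y in 0..1):

```javascript
// Plain Weber and Michelson, lighter luminance first.
const weber = (yLight, yDark) =>
  yDark === 0 ? Infinity : (yLight - yDark) / yDark;
const michelson = (yLight, yDark) => (yLight - yDark) / (yLight + yDark);

// Black text against background y is weber(y, 0) = Infinity and
// michelson(y, 0) = 1, while white text is always finite / below 1.
for (const y of [0.1, 0.5, 0.9]) {
  console.log(weber(y, 0) >= weber(1, y));         // true
  console.log(michelson(y, 0) >= michelson(1, y)); // true
}
```

This is exactly the "no useful flip point" behavior Andrew describes: the comparison is degenerate at black, not merely inaccurate.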
in Weber.js, there is:
return Y2 === 0 ? 0 : (Y1 - Y2) / Y2;
To fix divide-by-zero, which would be infinity. But in returning 0, it hides that the actual result should be a maximum. As a result, on the black/white page, Weber shows white text when in reality it should show black text in all cases (similar to Michelson), due to the nature of these algorithms.
If I may suggest to consider instead:
return Y2 === 0 ? 50000 : (Y1 - Y2) / Y2;
The reason: the darkest sRGB color above black is #000001, and this produces a plain Weber contrast of ~45647. So setting the divide-by-zero result to 50000 is a reasonable max clamp for the plain Weber.
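Putting the suggestion together, a runnable sketch (standard sRGB linearization assumed; 50000 is the clamp value proposed above, and the exact near-black figure varies slightly with the luminance coefficients used):

```javascript
// Plain Weber contrast with the suggested divide-by-zero clamp.
// Y1 is the lighter relative luminance, Y2 the darker, both in 0..1.
function weberContrast(Y1, Y2) {
  return Y2 === 0 ? 50000 : (Y1 - Y2) / Y2;
}

// Standard piecewise sRGB-to-linear for one 8-bit channel.
function srgbToLinear(c8) {
  const c = c8 / 255;
  return c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
}

// #000001: only the blue channel is nonzero, so Y = 0.0722 * linear(1).
const yDarkest = 0.0722 * srgbToLinear(1);

console.log(weberContrast(1, 0));        // 50000, the clamp
console.log(weberContrast(1, yDarkest)); // ≈ 4.6e4, near the ~45647 cited
```

So 50000 sits just above the largest value the formula can actually produce for a non-black sRGB color, which is the point of the clamp.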
I don't know if you want to play with these, but there are other variants, some interesting, and we evaluated all of them in 2019. Among the variants are a couple of modified Webers where a brute-forced offset is added to the denominator. Sometimes this is claimed to be a "flare" component, but in reality it is, in effect, a "push" to a supra-threshold level.
These assume Y is 0.0 to 1.0:
hwangPeli = (Y1 - Y2) / (Y2 + 0.05);
somersB = ((Y1 - Y2) / (Y2 + 0.1)) * 0.9;
somersE = (Y1 - Y2) / (Y2 + 0.35);
However these do not track polarity changes particularly well, and have a mid-range "bump".
A better and interesting modification is this delta L* variant we created on the path toward SACAM (and APCA).
Here, create Lstar from the piecewise sRGB->Y and L* per the standard CIE math, then:
deltaPhiStar = Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618;
This mainly works for "Light Mode" but does not track dark mode quite as well. Also, while this is close to parity with light mode APCA at Lc +90, lower contrasts are over-reported, and it does not match in dark mode. Some of this can be addressed with scales and offsets.
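For reference, here is the variant above as a self-contained function. One notable property of the chosen exponents: 1.618 × 0.618 ≈ 1.0, so a full white/black pair lands near 100, the same scale as L* itself (L* inputs assumed computed per standard CIE math, as described above):

```javascript
// Delta Phi* variant: golden-ratio exponent difference of two L* values.
const deltaPhiStar = (bgLstar, txLstar) =>
  Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618;

console.log(deltaPhiStar(100, 0)); // ≈ 99.97, nearly the full L* scale
console.log(deltaPhiStar(0, 100)); // identical: Math.abs makes it symmetric,
                                   // consistent with the dark-mode caveat above
```

The symmetry from `Math.abs` is worth noting: the raw number cannot distinguish polarity, which lines up with the comment that this variant does not track dark mode as well.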
Nevertheless, I thought you might find these variants interesting.
APCA builds on these early experiments, but has added value in terms of polarity sensitivity and wider range for better guideline thresholds.
Regarding the simple concept of a black/white flip, I have this interactive demo page, FlipForColor, which includes a brief discussion.
For a deeper dive, there is a CodePen, a Repo, and a Gist that discuss this and related issues, including font size and weight as they relate to flipping.
Thank you for reading
Regarding the infinity fix:
in Weber.js, there is:
return Y2 === 0 ? 0 : (Y1 - Y2) / Y2;
To fix divide-by-zero, which would be infinity. But in returning 0, it hides that the actual result should be a maximum. As a result, on the black/white page, Weber shows white text when in reality it should show black text in all cases (similar to Michelson), due to the nature of these algorithms.
If I may suggest to consider instead:
return Y2 === 0 ? 50000 : (Y1 - Y2) / Y2;
The reason: the darkest sRGB color above black is #000001, and this produces a plain Weber contrast of ~45647. So setting the divide-by-zero result to 50000 is a reasonable max clamp for the plain Weber.
Pull request?
Pull request?
Oh, that would have been a good idea, but it's already done.
Closing this, as @fantasai and I made the prose edits; however, we currently only have one algorithm.
Assuming we can specify the contrast algorithm at all (see https://github.com/w3c/csswg-drafts/issues/7356), we should be able to specify multiple of them as a safeguard for algorithm bugs, as well as to satisfy legal constraints.
E.g. we know that WCAG 2.1 contrast is severely broken, yet websites are legally mandated to pass it. Once we have a better contrast algorithm, we may want color-contrast() to find us a color pair that satisfies both WCAG 2.1 and the new, improved algorithm. Syntax could just be a space-separated list of contrast algorithms.
(Issue filed following breakout discussions between @svgeesus, @fantasai, @argyleink and myself)