googlefonts / colr-gradients-spec


Define how to interpolate colors #27

Closed rsheeter closed 3 years ago

rsheeter commented 4 years ago

https://github.com/googlefonts/colr-gradients-spec/blob/master/colr-gradients-spec.md#color-palette-variation currently reads "Colors expressed in sRGB r/g/b channels cannot be easily interpolated. Another solution is needed, perhaps involving transforming to a linear color space and back."

Courtesy of Romain Guy, I have a concrete suggestion on how to interpolate in linear space: "apply the color space's EOTF before interpolating, then apply the OETF." I hope I'm not the only one who had to look up the terms :D

Android implements this for animated colors and users seem satisfied. https://android.googlesource.com/platform/frameworks/base/+/master/core/java/android/animation/ArgbEvaluator.java. Related, https://developer.android.com/reference/android/graphics/ColorSpace.Named gives names and equations [in source] for a bunch more color spaces.
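For concreteness, a minimal TypeScript sketch of that evaluator's approach (this is an illustration rather than the Android source; it uses a gamma-2.2 approximation of the sRGB curve, and the function names are mine):

// Interpolate two 0xAARRGGBB colors at fraction t, in (approximately) linear light.
function lerpArgb(start: number, end: number, t: number): number {
  const toLinear = (c: number) => Math.pow(c / 255, 2.2);                 // "de-gamma"
  const toGamma = (l: number) => Math.round(Math.pow(l, 1 / 2.2) * 255);  // "re-gamma"
  const ch = (argb: number, shift: number) => (argb >>> shift) & 0xff;

  // Alpha is already linear, so it is interpolated directly.
  const a = Math.round(ch(start, 24) + t * (ch(end, 24) - ch(start, 24)));
  const mix = (shift: number) => {
    const s = toLinear(ch(start, shift));
    const e = toLinear(ch(end, shift));
    return toGamma(s + t * (e - s));
  };
  return (((a << 24) | (mix(16) << 16) | (mix(8) << 8) | mix(0)) >>> 0);
}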

I suggest we might consider directly adopting this approach; it's been used in real world animation and has a well defined implementation :)

https://www.colour-science.org/posts/the-importance-of-terminology-and-srgb-uncertainty/

rsheeter commented 4 years ago

We might also consider allowing the font to specify how to linearize and interpolate colors from a fixed set of choices. Potentially in the next spec rev we define only one choice but leave the door open to add more in future.

PeterConstable commented 4 years ago

Well, I was already familiar with "OETF" and "EOTF", but it still wasn't clear what was intended.

An electrical-optical transfer function (EOTF) maps electrical (analog or digital) signals into an optical result. In displays "gamma" refers to an EOTF (it's a power function with a particular gamma exponent). But most usage of EOTF and OETF I've seen (in the context of imaging sources and displays) is dealing with luminance / intensity levels. Not color blending.

And not mapping between color spaces. But I really don't think that's what's meant.

The colour spaces used for imaging signals are usually non-linear with respect to luminance. This might have originated in the natural response of phosphors in CRTs, but there's also a correlation with human vision. We're more sensitive to power differences at lower luminance levels than high luminance levels. As a result, we get better utilization of available bits in a digital signal by distributing them non-linearly on an optical scale.

sRGB is an example of a signal color space that is defined with a non-linear OETF. It uses colour primaries defined in the ITU BT.709 spec, but with a particular OETF (a hybrid that's linear at very low levels, then a "gamma" power function).

But non-linear signal colour spaces are not good for colour-transform operations. I'm pretty sure what was meant by "apply the EOTF first" was linearization of the signal.

The scRGB space is an example of a linear colour space. (In the list of named colour spaces linked above, this would be LINEAR_EXTENDED_SRGB.) In addition to being linear, it supports a larger gamut: it uses the same primaries as sRGB / BT.709, but it allows for RGB values outside the range [0, 1] (including negative values), which allows for a much wider gamut, as well as much greater dynamic (luminance) ranges.

CIE XYZ is another linear colour space. It's very commonly used as a reference for defining other colour spaces, and as a pivot between spaces or as a transitional space for applying colour transforms. (E.g., for the night light and adaptive colour features in Windows, the desktop image would be mapped to XYZ, and then a transform for the filter expressed in terms of XYZ would be applied, and then the image is mapped back to whatever is the target wire format space.) But there is one aspect of XYZ that could be limiting for some applications: the X and Y members specify colours in absolute physical terms derived from specific spectral power functions related to human visual response, but the Z member for luminance is relative, not absolute.

I know that composition in Windows is done by converting image signals first into scRGB. That works for non-HDR, sRGB imaging in which luminances are relative; but it also works for HDR imaging with BT.2020 (or any other) colours and absolute ("scene-referred") luminances.

If we want to make CPAL colours interpolatable, conversion to a linear signal before interpolation makes sense based on what I learned while working on HDR. But if we want to add that kind of enhancement to CPAL, then at the same time we probably should also be considering supporting something beyond sRGB, both in terms of gamut but also dynamic range (i.e., more than typical SDR, display-relative luminances). scRGB might be a good candidate.

But is this something we want to do at the same time as COLR v1?

rsheeter commented 4 years ago

To your point, the Android implementation indeed linearizes then moves a fraction of the way between start/end. This appears to give a concrete solution (granted, one of many possible solutions that might be contextually more/less appropriate) to the problem posed by the current spec proposal.

IMHO it would be nice to have >0 options for interpolating between CPAL entries. If we can relatively quickly ship support for at least one means of interpolation, with very clearly defined expectations, and the door open (in that you specify something like a color space identifier to use when interpolating) for adding more in future, perhaps that works out well?

But is this something we want to do at the same time as COLR v1?

Can I give two answers?

behdad commented 4 years ago

To me it is clear that luminance and saturation can be interpolated. I agree with Peter that luminance should probably be interpolated in a log-scale, not linear space. The elephant in the room is what to do with hue, since that's a circular space. Taking the shorter path makes sense but leaves colors of exactly opposite hue undefined.

I'll check the Android implementation.

behdad commented 4 years ago

Okay, the Android impl just interpolates each channel separately in a linear space. That's the simplest acceptable way, but it has the luminance issue that Peter pointed out, among other problems.

I believe @raphlinus, and either @RoelN or @Pomax, don't remember which, have insight to offer.

behdad commented 4 years ago

Here's the hue question: what's the color mid-way between blue and yellow? The Android approach gives gray, which from a color-theory point of view is wrong, since blue and yellow are fully saturated but gray is fully unsaturated; the interpolation is not convex in saturation.

PeterConstable commented 4 years ago

Sorry, what problem are we trying to solve? Is it what colors to interpolate between stops on a colour line? Or how to make CPAL entries variable (i.e., responsive to fvar coordinates)?

PeterConstable commented 4 years ago

Why do you say "from color theory point of view [it's] wrong" that gray is midway between blue and yellow? By "color theory", do you mean the practice and guidance around use of color that is taught in design schools? Or do you mean colour science / colorimetry—the stuff that's used in designing imaging systems (cameras, displays, imaging workflows, etc.)?

For gray to be midway between blue and yellow actually makes complete sense to me. But that's from a colour science perspective, and thinking of trichromatic additive color modeling with "RGB" primaries.

It's also the behaviour I see in various apps:

Illustrator: [screenshot of a blue-to-yellow gradient passing through gray]

PowerPoint: [screenshot of a blue-to-yellow gradient passing through gray]

It appears you're thinking that the interpolation should preserve luminance and saturation, and only interpolate hue. But HSL is only one way to model color, and it's not obvious that hue-only interpolation, when assuming hue is modeled as a circle, can be well defined: should midway between blue and yellow be green or magenta? That choice is arbitrary.

Again, what problem are we trying to solve? If it's what colors to interpolate between stops on a gradient, I'd want to ask first what all of the 2D graphics libraries out there are doing.

Pomax commented 4 years ago

If this issue is about the actual phrasing "Colors expressed in sRGB r/g/b channels cannot be easily interpolated. Another solution is needed, perhaps involving transforming to a linear color space and back." then I'd say that's simply incorrect and should be omitted. Colors expressed in sRGB r/g/b channels are easily interpolated.

And if an algorithmic description is required, a text can be added that explains that RGB interpolation requires first taking the square of each channel, interpolating those values, then square rooting each channel result to get the final value. Done, we have interpolated RGB. (Minute Physics did a nice clear short on that a few years ago over on https://youtu.be/LKnqECcg6Gw?t=154).

For the blue-yellow gradient, that would result in what Peter's PowerPoint graphic shows.
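As a sketch, that recipe for a single channel in [0, 1] (a gamma-2.0 approximation; illustrative code, not spec text):

// Square each channel ("de-gamma" with exponent 2), interpolate, then square-root back.
function lerpChannelGamma2(c0: number, c1: number, t: number): number {
  const s0 = c0 * c0;
  const s1 = c1 * c1;
  return Math.sqrt(s0 + t * (s1 - s0));
}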

behdad commented 4 years ago

Yeah nevermind. I take everything I said back. CPAL colors should be interpolated per-channel the same way that the gradients we are introducing interpolate colors. Which I think is specified as being done in linear space?

So yes, I think we can resolve this item and add CPAL variation to the proposal bundle.

PeterConstable commented 4 years ago

From the Minute Physics video: "The actual camera values were square rooted for better data storage."

Well, that's both not always true and generally not relevant: The OETF used in cameras can depend on the image format used for capture, and the color space used for that format. (Color spaces involve OETF/EOTF as well as gamut.) But when blending colors, what matters is the color space of the data buffer(s) for the content being blended.

I think a more appropriate statement would have been that the colors should be mapped to a linear color space. Or coming back to what Rod quoted, "apply the color space's EOTF before interpolating, then apply the OETF."

behdad commented 4 years ago

The reason I was hesitant to do things in per-channel ARGB is this:

Linear interpolation of spatial coordinates is well-defined and independent of coordinate space / frame of reference, and consistent with how humans experience space. The same cannot be said about linear interpolation of per-channel ARGB in linear color-space, for two reasons:

Anyway, just my mental model for why I was hesitant to interpolate colors without further thinking. But I completely missed that gradients do that already... :))

PeterConstable commented 4 years ago

Understood.

Actually, the RGB model for color representation isn't arbitrary but is derived from how the human visual system works. Going back to the mid-19th c., Maxwell demonstrated that full colour images could be produced using red, green and blue filters. In the early 20th c., several experimental color matching studies were done that showed that the human visual system (normally) has three distinct colour receptors that have bell-shaped spectral response curves sensitive to different parts of the visible spectrum. It's these three ρ/γ/β response curves for cones in our retinas that led to colour imaging being done in terms of R, G and B values.

Later studies found that there is additional signal processing that occurs in the eye before images get transmitted on the optic nerve, with the ρ/γ/β signals being transformed into a different representation that uses differences. That is what led to the CIE L*a*b* representation model (L* being lightness, and a*, b* carrying chromaticity information on red-green and blue-yellow scales).

behdad commented 4 years ago

Actually, the RGB model for color representation isn't arbitrary

I almost wrote that but removed it. Even if that is how the visual system works, it's not how colors are perceived. As you pointed out, the red-green + blue-yellow axes are closer to how perception works; e.g. humans can't perceive reddish-green, or bluish-yellow. Then again, I know you know all this.

thanks all.

RoelN commented 4 years ago

Dumb question perhaps, but will the box and the letter A in this example get the exact same gradient? I assume there won't be competing ways to interpret the gradient between a background and text color?

:root {
    --my-gradient: linear-gradient(to left, blue, yellow);
}

.box {
    width: 100px;
    height: 100px;
    background: var(--my-gradient);
}
.letter {
    font-size: 100px;
    color: var(--my-gradient);
}
<div class="box"></div>

<div class="letter">A</div>

Maybe this is answered in the discussion above, but my brain wasn't able to parse that out :-)

PeterConstable commented 4 years ago

From MDN:

"Because <gradient>s belong to the <image> data type, they can only be used where <image>s can be used. For this reason, linear-gradient() won't work on background-color and other properties that use the <color> data type."

davelab6 commented 4 years ago

@behdad wrote

I think we can resolve this item and add CPAL variation to the proposal bundle.

What are the advantages and disadvantages of decoupling CPAL variation to/from COLR gradients?

behdad commented 4 years ago

I think we can resolve this item and add CPAL variation to the proposal bundle.

What are the advantages and disadvantages of decoupling CPAL variation to/from COLR gradients?

Even this update to the COLR table refers to CPAL for the actual sRGB colors. They have always been decoupled.

RoelN commented 4 years ago

From MDN:

"Because s belong to the data type, they can only be used where s can be used. For this reason, linear-gradient() won't work on background-color and other properties that use the data type."

Is the proposed syntax for CPAL gradients on the roadmap or being discussed somewhere? Should we expect a different syntax to define linear gradients for override-color?

PeterConstable commented 4 years ago

Roel: At the moment, a proposed extension to the COLR table to support gradient fills for glyphs (rather than simply flat colour fills) is being discussed here. I have no information about anything regarding CSS.

davelab6 commented 3 years ago

It seems to me that we should consider part of "defining how to interpolate [v1] colors" to include recommendations for how authors can set them. I expect that this would be similar to the axis registry recommendations.

rsheeter commented 3 years ago

Marking v2 because we can (and hopefully will) ship a high value v1 w/o this.

PeterConstable commented 3 years ago

I think we've concluded that the key issue is that blending of sRGB colours requires transforming into a "linear" colour space, at which point linear interpolation can be used. People working on graphic display stacks are very familiar with these "de-gamma" and "re-gamma" operations before and after color operations are performed. I'm pretty sure that in Direct2D everything is done internally in scRGB, which is a linear space; I'm guessing that would be true of many graphics libraries. That would be appropriate to add (along with a normative reference to IEC 61966-2-1:1999, which is the formal specification of sRGB).

PeterConstable commented 3 years ago

Here are draft additions for the CPAL table.

in the introduction:

Palettes are defined by a set of color records. Each color record specifies a color in the sRGB color space using 8-bit BGRA (blue, green, red, alpha) representation. The sRGB color space is specified in IEC 61966-2-1:1999 Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB. See Blending and Interpolation of Colors, below, for related information.

Then, a new section at the end. Feel free to tell me if this is too much background detail (there's more background info than the key point regarding color operations); I should also get this reviewed by a real colour expert for complete accuracy.

Blending and Interpolation of Colors

Color spaces in graphics and imagery are derived from models of the human visual system. In general, color spaces are understood to have chromaticity and luminosity components, and are fundamentally defined in terms of tri-color chromaticity coordinates and white point in some reference color space. This fundamental definition is an optical definition, independent of a digital representation. Typically, the reference space used is the CIE 1931 XYZ space, now standardized as ISO/CIE 11664-1 Colorimetry — Part 1: CIE Standard Colorimetric Observers. Two important characteristics of these optically-defined color spaces are that conversion between spaces can be done using linear transformations, and that human-perceived chromaticity is a function of the linear proportions of the tri-color components rather than absolute power of the light stimulus.

While the optical definition of a colour space is independent of digital representation, use of colors in graphical data requires that a digital representation also be defined. This definition is often included in the definition of a color space, and is referred to as a quantized optical-electrical transfer function (OETF); the inverse is referred to as the electrical-optical transfer function (EOTF). For efficient data representation, the mapping from optical levels to electrical signal is typically non-linear—often referred to as a “gamma” function.

The sRGB color space specifies a non-linear OETF that combines a linear mapping at very low levels with a power function using a gamma exponent of 2.4 elsewhere. (The combination is sometimes approximated by a gamma of 2.2.) It also specifies a default digital quantization of 8-bit depth.

An important caveat for non-linear color representations is that color operations cannot be performed as linear operations with good results, as is possible in linear, optically-defined representations. When performing color operations, such as blending colors from multiple elements (with some transparency) or interpolating colors between stops in a gradient, these operations should be done in a linear color space. For sRGB colors defined in the CPAL table, the sRGB EOTF should be applied to the sRGB color values (“de-gamma”), or the colors otherwise mapped into some other linear color space, before the color operations are performed. Once colors are represented in a linear color space, interpolation between colors can be done by linear interpolation of values in individual color channels. Alpha values are separate from the color specification and can be interpolated directly without requiring any pre-processing transform.
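For illustration only (not proposed spec text), a TypeScript sketch of the de-gamma / interpolate / re-gamma step described above, using the exact sRGB transfer functions from IEC 61966-2-1 with channel values normalized to [0, 1]:

// Electrical -> optical ("de-gamma"): linear segment near black, power 2.4 elsewhere.
function srgbEotf(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Optical -> electrical ("re-gamma"): the inverse of the EOTF.
function srgbOetf(l: number): number {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Interpolate two sRGB colors ([r, g, b], each channel in [0, 1]) at position t.
function lerpSrgb(c0: number[], c1: number[], t: number): number[] {
  return c0.map((c, i) =>
    srgbOetf(srgbEotf(c) + t * (srgbEotf(c1[i]) - srgbEotf(c))));
}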

svgeesus commented 3 years ago

Folks, sorry I came late to this conversation. I only discovered this repo existed yesterday!

I have a concrete suggestion on how to interpolate in linear space: "apply the color space's EOTF before interpolating, then apply the OETF." I hope I'm not the only one who had to look up the terms :D

Yes, that is exactly what is done to operate on linear-light. You take the sRGB value, undo gamma-encoding (correctly, not using an approximation like 2.2) and now have linear-light values which are additive. Once you have done whatever calculation on those values, apply gamma encoding to get a displayable sRGB color.

It is also possible to further transform from linear-light sRGB values to CIE XYZ which is a device-independent linear-light space (the Y component is the luminance). But since this is a 3x3 matrix transform of linear-light sRGB and since all your inputs are in the same colorspace there is no benefit in doing so.

(The combination is sometimes approximated by a gamma of 2.2.)

Don't. There should be a linear segment in the sRGB OETF, which limits the gain near zero. Note that some software (notably from Adobe) has non-compliant ICC profiles for sRGB, ProPhoto etc without the linear segments. Note too that this is an interoperability trap because the Adobe software silently adds linear segments, overriding the OETF and EOTF in the ICC profile, while other software will not. So just use the correct transfer functions so everyone gets the same result.

svgeesus commented 3 years ago

I should also get this reviewed by a real colour expert for complete accuracy.

Working on it

svgeesus commented 3 years ago

chromaticity and luminosity components

No. Luminosity is something else entirely. If you are defining in terms of chromaticity (x,y or u',v'), then the extra information you need to get back to CIE XYZ colorspace is luminance which is the Y component of XYZ.

svgeesus commented 3 years ago

This definition is often included in the definition of a color space, and is referred to as a quantized optical-electrical transfer function (OETF);

The OETF and EOTF are typically continuous functions. Quantizing to a given number of bits-per-component is a separate step, and typically the same OETF is used for 8-, 10-, and 12-bit representations; only the quantization step differs. (For some colorspaces used in video, the full range is not used for color information; this is to allow undershoot and overshoot headroom, and because the values 0 and 255/1023/4095 are used for signalling purposes, not for color. That doesn't apply to sRGB, though.)

svgeesus commented 3 years ago

An important caveat for non-linear color representations is that color operations cannot be performed as linear operations with good results, as is possible in linear, optically-defined representations.

This could be clearer, because "linear operations" makes it sound as though they are operating on linear values. Also several things can be linear, so it is worth clarifying that linear-light is meant.

"An important caveat for non-linear color representations is that color operations such as addition and multiplication will give the wrong results; too dark and with color bleeding, compared to the correct result when these operations are carried out in linear-light, optically-defined representations."

svgeesus commented 3 years ago

or the colors otherwise mapped into some other linear color space,

I see no reason for such weasel-wording. If something specific is meant, say so; otherwise, delete

svgeesus commented 3 years ago

Alpha values are separate from the color specification and can be interpolated directly without requiring any pre-processing transform.

No.

Alpha values do need to be linearly interpolated, true; and they do not need any special EOTF or OETF because they are inherently linear, and correspond to the degree of occlusion of the background by the foreground.

However, when interpolating linear-light values with alpha, it is necessary to pre-multiply the linear-light RGB components by the alpha values. This retains the physically-based interpretation of light intensity. To get back to colors for the calculated result, the computed values are divided by the interpolated alpha value. Then, the OETF is applied to get a displayable color.

Consider a group of 2x2 pixels. One has full opacity, and is producing light with particular linear-light intensities r, g, and b. The other three are fully transparent and produce no foreground light.

Now let's combine this group into one pixel of the same size (2x2 subsampling). It has an alpha of 0.25, and the amount of light emitted is r/4, g/4, b/4.
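A sketch of that premultiplied interpolation (illustrative names; linear-light [r, g, b] and alpha values in [0, 1]):

// Interpolate premultiplied linear-light colors; the OETF is applied afterwards for display.
function lerpPremultiplied(
  rgb0: number[], a0: number,
  rgb1: number[], a1: number,
  t: number
): { rgb: number[]; alpha: number } {
  // Pre-multiply the linear-light components by alpha before interpolating.
  const p0 = rgb0.map(c => c * a0);
  const p1 = rgb1.map(c => c * a1);
  const alpha = a0 + t * (a1 - a0); // alpha is inherently linear
  const p = p0.map((c, i) => c + t * (p1[i] - c));
  // Un-premultiply (divide by the interpolated alpha) to recover the color.
  const rgb = alpha === 0 ? [0, 0, 0] : p.map(c => c / alpha);
  return { rgb, alpha };
}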

svgeesus commented 3 years ago

If a reference is needed, I suggest Porter, Thomas; Duff, Tom (July 1984). Compositing Digital Images. SIGGRAPH Computer Graphics 18 (3) pp253–259. doi:10.1145/800031.808606. ISBN 9780897911382.

svgeesus commented 3 years ago

experimental color matching studies were done that showed that the human visual system (normally) has three distinct colour receptors that have bell-shaped spectral response curves sensitive to different parts of the visible spectrum. It's these three ρ/γ/β response curves for cones in our retinas that led to colour imaging being done in terms of R, G and B values.

Kind of. There were two phases of experiments: the ones in the 1920s and 30s, where an arbitrary color stimulus was matched by mixing three colored lights (which led to the realization that some colors could not be matched and needed negative amounts of one of the lights), and again in the latter part of the 20th century, where the absorbance properties of chemically extracted cone pigments were studied.

The key issue here is that a) the three cone responses overlap very significantly, and b) the peak sensitivities are yellow, yellow-green, and blue-violet. So calling them red green and blue receptors (or coyly Greekifying that as ρ/γ/β) is inaccurate. Modern work tends to call these the long, medium and short wavelength cones (LMS). Conversion from XYZ to LMS is commonly done, for example as the first step in color adaptation (calculating how colors look when the spectrum of what we see as white changes) or modern wide color gamut, high dynamic range colorspaces such as ICtCp.

svgeesus commented 3 years ago

Luminance is not experienced linearly, but exponentially.

Lightness (as in CIE Lab, for example) is computed from Luminance with a power function. Luminance is linear-light. Lightness is perceptually uniform, i.e. appears to be evenly spaced. L=50 is a mid grey that is visually in between black (L=0) and white (L=100). A luminance of 50% of white looks very different. This is why a "mid grey" photographic target has a reflectance of 18%, not 50%.
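For concreteness, a sketch of the CIE Lightness function (a cube root above a small linear segment near black; illustrative code):

// L* from relative luminance Y, normalized so that media white has Y = 1.
function lightness(y: number): number {
  const epsilon = Math.pow(6 / 29, 3); // ~0.008856
  const f = y > epsilon ? Math.cbrt(y) : y / (3 * Math.pow(6 / 29, 2)) + 4 / 29;
  return 116 * f - 16;
}

lightness(0.18); // ~49.5: an 18% reflectance "mid grey" sits near L = 50
lightness(0.5);  // ~76.1: 50% luminance looks much lighter than mid grey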

svgeesus commented 3 years ago

the X and Y members specify colours in absolute physical terms derived from specific spectral power functions related to human visual response, but the Z member for luminance is relative, not absolute.

That is incorrect. All three are defined by spectral power distributions defined by the CIE in publication 15, and all three are absolute. However, it is common to use normalized XYZ where the luminance of the media white is given as 100 (or 1) and the rest scaled to match.

I suspect the source of confusion in this statement is that, in addition to X, Y and Z, there exist chromaticity coordinates x,y, and z. The way these are defined:

let sum = X + Y + Z;
let x = X / sum;
let y = Y / sum;
let z = Z / sum;

Means that x + y + z = 1.0, and so z can be omitted, which gives the conveniently two-dimensional representation x,y as used on the 1931 chromaticity diagram.

To get from x,y back to XYZ requires having retained the luminance, Y. Sometimes colors are expressed as x,y,Y although that is less common nowadays.
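A sketch of that inversion (assuming y > 0; illustrative code):

// Recover XYZ from chromaticity (x, y) plus the retained luminance Y.
function xyY_to_XYZ(x: number, y: number, Y: number): [number, number, number] {
  const X = (x / y) * Y;
  const Z = ((1 - x - y) / y) * Y; // z = 1 - x - y
  return [X, Y, Z];
}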

svgeesus commented 3 years ago

If we assume we can rev the spec more frequently than has been true in the past I'd vote to do COLRv1 first, followed fairly quickly by a rev to define interpolation on CPAL.

I strongly suggest not doing this. Shipping an undefined color interpolation method, at the same time as shipping color gradients (which require color interpolation) is asking for trouble.

svgeesus commented 3 years ago

Okay, the Android impl just interpolates each channel separately in a linear space. That's the simplest acceptable way, but it has the luminance issue that Peter pointed out, among other problems.

Interpolating in linear-light space is the correct thing to do. It gives the correct result, in that if you have two physical lights, one with measured color X1 Y1 Z1 and the other with X2 Y2 Z2, and mix them one to one, the measured color of the mixture will indeed be exactly (X1+X2) (Y1+Y2) (Z1+Z2), and the second (Y) term is the measured luminance.

svgeesus commented 3 years ago

It appears you're thinking that the interpolation should preserve luminance and saturation, and only interpolate hue. But HSL is only one way to model color, and it's not obvious that hue-only interpolation, when assuming hue is modeled as a circle, can be well defined: should midway between blue and yellow be green or magenta? That choice is arbitrary.

If perceptual uniformity is a goal, then mixing in Lab will achieve that. If Chroma-preserving is also a goal, then mixing in the polar form of Lab (LCH) will work. This is the approach taken in CSS Color 5, for example, for the color-mix() function which gives perceptually uniform results. And yes, that approach requires defining whether hue interpolates on the shorter or longer arc, and what happens when the hue difference is exactly 180 degrees.
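A sketch of shorter-arc hue interpolation in degrees (illustrative code; the 180-degree tie is resolved arbitrarily here, which is exactly the choice a spec would have to pin down):

// Interpolate hue along the shorter arc of the circle.
function lerpHueShorter(h0: number, h1: number, t: number): number {
  let d = (h1 - h0) % 360;
  if (d > 180) d -= 360;        // shorter to go the other way around
  else if (d < -180) d += 360;
  return (h0 + t * d + 360) % 360;
}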

HSL, however, is a totally different beast. For example, in HSL the lightness of #FFFF00 yellow and of #0000FF blue is the same, while their actual lightness and luminance are clearly not the same. Changing the hue of an HSL color also changes the (real, visual) Lightness but does not change the L component of HSL. Also, the hues are not evenly distributed in HSL.

svgeesus commented 3 years ago

And if an algorithmic description is required, a text can be added that explains that RGB interpolation requires first taking the square of each channel, interpolating those values, then square rooting each channel result to get the final value. Done, we have interpolated RGB.

That (a gamma of 2.0) is an approximation, made to ease explaining something in a couple of minutes in a YouTube video. It correctly makes the point that you interpolate in linear-light. But the actual transfer function is approximately a gamma of 2.2, and exactly a linear portion near black plus a shifted power function with an exponent of 2.4.

svgeesus commented 3 years ago

If we want to make CPAL colours interpolatable, conversion to a linear signal before interpolation makes sense based on what I learned while working on HDR.

Agreed.

But if we want to add that kind of enhancement to CPAL, then at the same time we probably should also be considering supporting something beyond sRGB, both in terms of gamut but also dynamic range (i.e., more than typical SDR, display-relative luminances). scRGB might be a good candidate.

Note that the approach which has been hashed out here of interpolating in alpha-premultiplied, linear-light space (and sorry for the numerous small corrections) will still be correct if wider-gamut colorspaces and higher dynamic range are added later. The only thing missing is how to mix SDR (luminance relative to media white) and ITU Rec. BT.2100 HDR/PQ (absolute luminance), which is already defined by the ITU in report BT.2408.

PeterConstable commented 3 years ago

chromaticity and luminosity components

No. Luminosity is something else entirely.

Right: I overlooked or forgot that "luminosity" is used to mean a perceived attribute of light (= brightness?).

PeterConstable commented 3 years ago

If a reference is needed, I suggest Porter, Thomas; Duff, Tom (July 1984). Compositing Digital Images. SIGGRAPH Computer Graphics 18 (3) pp253–259. doi:10.1145/800031.808606. ISBN 9780897911382.

https://dl.acm.org/doi/10.1145/964965.808606

PeterConstable commented 3 years ago

The only thing missing is how to mix SDR (luminance relative to media white) and ITU Rec. BT.2100 HDR/PQ (absolute luminance), which is already defined by the ITU in report BT.2408

https://www.itu.int/pub/R-REP-BT.2408

svgeesus commented 3 years ago

@PeterConstable are my comments sufficient for you to make a revision of your earlier text? Please let me know if anything is unclear; also if you would like me to suggest specific replacement text for any portion.

PeterConstable commented 3 years ago

@svgeesus Thanks for the comments. I've been focusing on other content but will come back to this.

PeterConstable commented 3 years ago

In SVG 2.0, the color-interpolation property allows the content to declare whether interpolation of colors (e.g., for a gradient) is done using non-linear sRGB or linearRGB interpolation.

https://www.w3.org/TR/SVG2/painting.html#ColorInterpolation

PeterConstable commented 3 years ago

@svgeesus : I notice that color-interpolation and color-interpolation-filters were both part of SVG 1.1

SVG 1.1 properties index

You've mentioned that interpolation should be done on linear colour representations, which matches the impression I had gained elsewhere. But why is it that the default value for the color-interpolation property is sRGB, not linearRGB?

And, curiously, why is it that color-interpolation and color-interpolation-filters have opposite defaults?

svgeesus commented 3 years ago

And, curiously, why is it that color-interpolation and color-interpolation-filters have opposite defaults?

Two reasons, both relevant in the late 1990s:

So doing the right thing was opt-out for filters and opt-in for the rest.

Of course, the performance cost of indirecting through a 256-element lookup table is pretty minor. And in the meantime, graphics libraries have tended to do the right thing, at least as an option, often as the default.

PeterConstable commented 3 years ago

@svgeesus : You provided this link as a CR for CSS Color Level 4: <CSS Color Module Level 4>. But the title on that page says "Working Draft". Is there a CR?

PeterConstable commented 3 years ago

@svgeesus Btw, the Compositing and Blending Level 1 CR does not specify for any case whether blending should be done using linear or non-linear scales.

For SVG, of course, the feBlend behaviour is controlled by the 'color-interpolation-filters' property, and this is spelled out in 15.7.1 of SVG 1.1 2nd edn (though in 15.7.2, the comments regarding SourceGraphic are a bit confusing since they seem to suggest that the raster data is always linear).

For CSS gradients, CSS Images Module Level 3 or Level 4 WD does specify using pre-multiplied-alpha values but is silent on the linear/non-linear issue.

Similarly, HTML Canvas is silent about linear/non-linear. But it's explicit in stating "without premultiplying the alpha value".