linebender / vello

A GPU compute-centric 2D renderer.
http://linebender.org/vello/
Apache License 2.0

Blending #151

Closed raphlinus closed 1 year ago

raphlinus commented 2 years ago

Now that the changes to clip are landing, it's a good time to consider blending. This will very much build on top of the clip infrastructure and, at least in the first iteration, will actually be a specialization of clip.

Blending is not supported in the piet API. For a first cut, we will extend piet-gpu directly. Whether we diverge from piet or keep them in sync is a deeper question; I'm leaning towards diverging.

The blend modes are defined in the Compositing and Blending Level 1 spec from the W3C. The immediate goal is to support COLRv1 - if there's anything that's complex or tricky, the deciding factor is whether it's needed in COLRv1. I haven't carefully gone through the spec, but from my current understanding, the main things are as follows.

In the initial implementation, blends are additional parameters added to clip. Each clip still has an associated path, which can just be a rectangle. (We can consider relaxing this and allowing blends without a path, as discussed briefly in #119, but that would require some architectural changes.) BeginClip is annotated with a flag indicating whether it's a pure Porter-Duff over with unity alpha; if not, the "all-1" optimization is disabled, since a layer still needs to be pushed for further compositing. BeginClip also carries a flag for isolated, indicating the initial contents of the newly pushed layer.

EndClip is annotated with the remaining info: the blend enum, compositing enum, and alpha value.
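To pin down the shape of these annotations, here is a minimal sketch on the Rust side; all of the names below (Mix, Compose, the BeginClip/EndClip fields) are hypothetical, chosen only to illustrate the split just described, not the actual encoding:

```rust
/// Blend modes from the W3C Compositing and Blending Level 1 spec.
#[derive(Clone, Copy, PartialEq, Eq)]
enum Mix {
    Normal,
    Multiply,
    Screen,
    Overlay,
    // ... the remaining separable and non-separable modes
}

/// Porter-Duff compositing operators.
#[derive(Clone, Copy, PartialEq, Eq)]
enum Compose {
    SrcOver,
    DestOver,
    SrcIn,
    // ... the remaining operators
}

/// Annotations on BeginClip.
struct BeginClip {
    /// True for a pure Porter-Duff over with unity alpha; enables the
    /// "all-1" optimization, which otherwise must still push a layer.
    is_pure_over: bool,
    /// Initial contents of the newly pushed layer: transparent if
    /// isolated, a copy of the backdrop otherwise.
    isolated: bool,
}

/// Annotations on EndClip: how the popped layer combines with its backdrop.
struct EndClip {
    mix: Mix,
    compose: Compose,
    alpha: f32,
}
```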

I reread the COLRv1 spec and believe it falls into the above framework: a PaintComposite node is translated into drawing the backdrop node, doing a BeginClip, drawing the src node, then doing an EndClip. The glyph viewport may be used as the clip path. It's possible I'm missing something, as this stuff can be a bit subtle.
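A hedged sketch of that translation, reusing the hypothetical Mix and Compose enums from the previous snippet; every other type and method here is likewise assumed for illustration and is not actual piet-gpu API:

```rust
struct SceneBuilder;
struct Paint; // a COLRv1 paint graph node
struct Rect;  // the glyph viewport, used as the clip path

impl SceneBuilder {
    fn begin_clip(&mut self, _path: &Rect, _isolated: bool) { /* ... */ }
    fn end_clip(&mut self, _mix: Mix, _compose: Compose, _alpha: f32) { /* ... */ }
}

fn encode_paint(_scene: &mut SceneBuilder, _node: &Paint) { /* recurse into the paint graph */ }

/// PaintComposite: draw the backdrop, push an isolated layer, draw the
/// source, then pop the layer with the node's blend/compose mode.
fn encode_paint_composite(
    scene: &mut SceneBuilder,
    backdrop: &Paint,
    src: &Paint,
    mix: Mix,
    compose: Compose,
    viewport: &Rect,
) {
    encode_paint(scene, backdrop);
    scene.begin_clip(viewport, true);
    encode_paint(scene, src);
    scene.end_clip(mix, compose, 1.0);
}
```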

Also, it seems to me compositing should happen in sRGB color space. Right now, alpha compositing happens in linear sRGB, which can give superior antialiasing but may not be compatible with content authored assuming sRGB-space compositing. We probably need a mode to select this, either at compile time or at run time.

eliasnaur commented 2 years ago

> Also, it seems to me compositing should happen in sRGB color space. Right now, alpha compositing happens in linear sRGB, which can give superior antialiasing but may not be compatible with content authored assuming sRGB-space compositing. We probably need a mode to select this, either at compile time or at run time.

Compositing in sRGB is a disappointing part of the spec: not only anti-aliasing but also the colors themselves can suffer. I find https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/ compelling, in particular its color-blending examples.
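To make the difference concrete, here is a minimal self-contained sketch (my example, not the blog's): a 50/50 mix of black and white computed directly on sRGB-encoded values versus in linear light, using the standard sRGB transfer functions.

```rust
// Standard sRGB transfer functions (per IEC 61966-2-1).
fn srgb_to_linear(c: f32) -> f32 {
    if c <= 0.04045 { c / 12.92 } else { ((c + 0.055) / 1.055).powf(2.4) }
}

fn linear_to_srgb(c: f32) -> f32 {
    if c <= 0.0031308 { c * 12.92 } else { 1.055 * c.powf(1.0 / 2.4) - 0.055 }
}

fn main() {
    let (black, white) = (0.0_f32, 1.0_f32);
    // Mixing the encoded values directly, as sRGB-space compositing does:
    let encoded_mix = 0.5 * black + 0.5 * white; // 0.5
    // Mixing in linear light, then re-encoding for display:
    let linear_mix =
        linear_to_srgb(0.5 * srgb_to_linear(black) + 0.5 * srgb_to_linear(white)); // ~0.735
    println!("sRGB-space mix: {encoded_mix}, linear-space mix: {linear_mix}");
}
```

The two results differ substantially (the linear-light mix comes out much lighter once re-encoded), which is exactly the blending discrepancy the blog post illustrates.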

Maybe I'm naive, but I don't understand why color standards don't specify linear sRGB. Even upcoming standards seem to lack enthusiasm for it: https://github.com/google/iconvg/issues/37.

raphlinus commented 2 years ago

I agree, this is something I'd like to see addressed better. But for now I think the best approach is to take what we're given and render it as best we can. When the client is in control of the scene, we can provide more options.

I should note that work since the RAVG paper (and continuing through MPVG) touts a supersampling approach, where compositing is computed in whatever color space the scene description requires (say, sRGB), while the supersampled AA resolve happens in a linear space. I think we might want to go for that as a long-term goal, though of course it does demand some changes in fine rasterization.
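As a rough sketch of that split (my illustration, not the RAVG/MPVG code): each supersample is composited in the scene's color space, and only the downsampling filter runs in linear light.

```rust
// Reuses the srgb_to_linear / linear_to_srgb helpers from the earlier
// snippet. Each entry in `samples` is one supersample of a single color
// channel, already composited in sRGB per the scene semantics.
fn resolve_pixel(samples: &[f32]) -> f32 {
    // Box-filter the samples in linear light, then re-encode the result.
    let avg_linear =
        samples.iter().map(|&s| srgb_to_linear(s)).sum::<f32>() / samples.len() as f32;
    linear_to_srgb(avg_linear)
}
```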

dfrg commented 2 years ago

This is an interesting take. It looks like neither the web nor COLRv1 supports selecting both a blend mode and a Porter-Duff compositing mode for a single operation; the blend modes seem to imply source-over. I do think offering both is forward-looking and the correct path for piet-gpu.

I have the GLSL for these mostly done. My simple experiments seem to work well, so my next step is to plug it into EndClip with constants for the modes and test there.
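For reference, the separable blend functions in the W3C spec are simple per-channel formulas. A few of them, transcribed into Rust here purely for illustration (the implementation under discussion is GLSL):

```rust
// Separable blend functions B(Cb, Cs) from the W3C Compositing and
// Blending Level 1 spec, operating per channel on values in [0, 1].
fn blend_multiply(cb: f32, cs: f32) -> f32 {
    cb * cs
}

fn blend_screen(cb: f32, cs: f32) -> f32 {
    cb + cs - cb * cs
}

// Hard-light is defined by the spec in terms of multiply and screen.
fn blend_hard_light(cb: f32, cs: f32) -> f32 {
    if cs <= 0.5 {
        blend_multiply(cb, 2.0 * cs)
    } else {
        blend_screen(cb, 2.0 * cs - 1.0)
    }
}
```

Per the spec, the blended value is then mixed with the unblended source by the backdrop alpha, Cs' = (1 - ab) * Cs + ab * B(Cb, Cs), before the Porter-Duff compositing step.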

raphlinus commented 2 years ago

I actually wasn't sure about the details. I was thinking about how to represent "three-way" blends - backdrop, src, and mask - and I imagined various combinations of blend and composite. It's possible you can get most of the effects from doing just one or the other (i.e., assuming "over" for the blends). At the back of my mind I'm imagining some algebraic simplification before doing fine raster. I guess where I've landed is: do both if it's straightforward, but if restricting to web/COLRv1 semantics brings a simplification or a performance bump, that's also fine.

DJMcNab commented 1 year ago

Blending is currently supported. According to @raphlinus, there may still be some follow-up needed, but this issue doesn't really capture that anyway.