samreid opened 9 years ago
Yes, this will be handled by a similar mechanism internally, but we'll need to use framebuffers to temporarily render content with opacity.
In the future, we could add detection for the case where opacity is added ONLY on a single item AND the item can be rendered with a single opacity (e.g. typically not stroked). In that case we could skip the framebuffer approach and apply the opacity directly in a shader.
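For that fast path, the idea would be to multiply the fragment's alpha by a per-node opacity uniform. A minimal sketch (the uniform/varying names are made up for illustration, not actual Scenery code):

// Hypothetical fast-path fragment shader: apply the node's opacity directly,
// instead of compositing through a framebuffer.
var fragmentShaderSource = [
  'precision mediump float;',
  'uniform float uOpacity;', // the node's opacity, 0..1
  'varying vec4 vColor;',    // the fill color for this fragment
  'void main() {',
  '  vec4 color = vColor;',
  '  color.a *= uOpacity;',  // only valid if a single opacity covers the whole item',
  '  gl_FragColor = color;', // premultiply here instead if the blend setup expects premultiplied alpha',
  '}'
].join( '\n' );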
Are you sure we will need to resort to framebuffers for this? Isn't there some combination of awesome maths + WebGL blending settings that will do what we need?
The tricky part is applying opacity after blending, and it would be great if there is a solution. Here's an example test case that would need to be solved, which can be reproduced in the playground:
scene.addChild( new scenery.Node( { children: [
  new scenery.Circle( 40, { fill: 'red' } ),
  new scenery.Node( { children: [
    new scenery.Circle( 40, { fill: 'green' } ),
    new scenery.Node( { children: [
      new scenery.Circle( 40, { fill: 'blue' } ),
      new scenery.Circle( 40, { fill: 'black', x: 20 } )
    ], opacity: 0.5, x: 20 } )
  ], opacity: 0.75, x: 20 } )
], leftTop: dot( 0, 0 ) } ) );
A black circle is rendered partially on top of a blue circle (obscuring that part completely). They are then blended partially onto a green circle (with opacity), and that result is then blended partially onto a red circle (with opacity).
Note how the blue circle doesn't appear anywhere inside the black circle, and note the color of the blue-green-red intersection.
I'm pretty sure you'd need some sort of stack of pixel information for each opacity operation (e.g. in the example, your stack would be [red] => [red,green] => [red,green,blue] =>(overwrite) [red,green,black] =>(opacity) [red, black-green] =>(opacity) [dark-brown]).
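To make the ordering concrete, here's a tiny sketch of the "flatten first, then apply opacity" math for the region where all four circles overlap in the test case above (plain source-over blending, channels in 0..1; the helper is made up for illustration):

// over( src, dst, alpha ): composite src onto dst at the given opacity (source-over).
function over( src, dst, alpha ) {
  return src.map( function( channel, i ) {
    return alpha * channel + ( 1 - alpha ) * dst[ i ];
  } );
}

var red = [ 1, 0, 0 ];
var green = [ 0, 1, 0 ];
var black = [ 0, 0, 0 ];

var inner = black;                        // black overwrites blue within the inner group
var middle = over( inner, green, 0.5 );   // inner group at opacity 0.5 over green => [ 0, 0.5, 0 ]
var result = over( middle, red, 0.75 );   // outer group at opacity 0.75 over red => [ 0.25, 0.375, 0 ]

Applying the opacities per-circle instead (before flattening) would let the blue show through the black, which is exactly the incorrect result described above.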
If you're willing to sacrifice bit-depth to emulate the stack (in this case, say 3 bits for the first spot, 3 bits for the 2nd and 2 bits for the third), there might be some jazzy way of writing only to a certain sub-range of the result (and don't overflow!) and using glBlendEquation and glBlendFunc to perform bitwise operations.
So I haven't ruled it out yet, but realistically I'd prefer not to (a) reduce bit-depth like crazy, (b) add unmaintainable hacks, (c) do many, many draw calls, and (d) rely on edge-case rounding/carry behavior of the graphics drivers.
Rendering things into a framebuffer (easy) and then drawing the framebuffer into either the main buffer or another parent framebuffer (easy) may not be quite as fast as being able to directly render into the main buffer, but it still seems like the strongly preferred solution.
I'd love to be proven wrong though!
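For reference, here's a minimal sketch of the two-pass framebuffer approach (not actual Scenery code; renderSubtree and drawTexturedQuad are hypothetical helpers, and error checking is omitted):

// Pass 1: render the subtree into an offscreen framebuffer.
// Pass 2: draw that texture into the parent target, multiplying by the node's opacity once.
function createOpacityTarget( gl, width, height ) {
  var texture = gl.createTexture();
  gl.bindTexture( gl.TEXTURE_2D, texture );
  gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null );
  gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR );
  gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
  gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );

  var framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer( gl.FRAMEBUFFER, framebuffer );
  gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0 );
  gl.bindFramebuffer( gl.FRAMEBUFFER, null );

  return { framebuffer: framebuffer, texture: texture };
}

var target = createOpacityTarget( gl, canvas.width, canvas.height );
gl.bindFramebuffer( gl.FRAMEBUFFER, target.framebuffer );
gl.clearColor( 0, 0, 0, 0 );
gl.clear( gl.COLOR_BUFFER_BIT );
renderSubtree( node );                             // children render normally at full opacity
gl.bindFramebuffer( gl.FRAMEBUFFER, null );        // or bind a parent node's framebuffer instead
drawTexturedQuad( target.texture, node.opacity );  // the quad's shader multiplies by the opacity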
Also, since you are writing to the main buffer, recall that you can't read its current value. So if X is currently in the buffer, you can only combine with it using WebGL's fixed blend equations and functions. This means designing a combination of blend functions that correctly combines bits 0-2 and 3-5 (also taking into account alpha) for every combination of values those bits could be in.
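To illustrate the constraint: the incoming fragment and the existing buffer value can only be combined through the fixed blend state, e.g. the usual premultiplied-alpha source-over setup:

// The only way to "use" the destination value X is through the fixed blend pipeline:
//   result = src * srcFactor (op) dst * dstFactor
gl.enable( gl.BLEND );
gl.blendEquation( gl.FUNC_ADD );                 // or FUNC_SUBTRACT / FUNC_REVERSE_SUBTRACT
gl.blendFunc( gl.ONE, gl.ONE_MINUS_SRC_ALPHA );  // standard premultiplied-alpha source-over
// The fragment shader itself never gets to read X.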
Didn't we deduce that pixi is doing this properly? How is their transparency implemented?
Referenced in other issues. Tentatively planned to implement this with framebuffers, in a similar time frame as the Canvas implementation (the two approaches will be similar).
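The Canvas version of the same idea is the natural reference point: render the subtree into a scratch canvas at full opacity, then composite it once with globalAlpha. A rough sketch (renderSubtree is a hypothetical helper, not actual Scenery code):

var scratch = document.createElement( 'canvas' );
scratch.width = mainCanvas.width;
scratch.height = mainCanvas.height;
var scratchContext = scratch.getContext( '2d' );

renderSubtree( scratchContext, node );     // children drawn normally at full opacity

var mainContext = mainCanvas.getContext( '2d' );
mainContext.save();
mainContext.globalAlpha = node.opacity;    // opacity applied once to the flattened result
mainContext.drawImage( scratch, 0, 0 );
mainContext.restore();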
Possibly related to #293. When running many simulations with ?rootRenderer=webgl or ?rootRenderer=pixi, the simulation crashes with this error:
When an opacity other than 1 is used, Scenery creates new backbone divs to apply the opacity to. This means that during development of WebGL, I often have to go through all my code and put opacity: 1 everywhere (including in sun). Is there a way to fix this?