w3c / svgwg

SVG Working Group specifications

Photoreal gradienting: Alternatives to mesh gradient #257

Open Bablakeluke opened 8 years ago

Bablakeluke commented 8 years ago

Hello! As both a frequent SVG author and an SVG renderer implementor, the mesh gradient element appears to have limited use and an awkward recursive rendering algorithm that would severely limit shader-based/GPU rendering of SVGs. So, I'd like to open a discussion for alternatives, starting with lots of noise!

Disclaimer: I haven't been following the SVG 2 specs too closely, but after some searching I was unable to find any matches, so apologies if this has come up before or is being discussed elsewhere at the moment.

Procedural texture engines - a (very noisy) source of inspiration

If photoreal gradienting is the name of the game then I'd like to draw some attention to Filter Forge, a program for creating procedural textures as used in movies and games. Internally, Filter Forge uses an XML format to represent these "filters", and the results are often wonderfully realistic. The following images are each entirely represented by a tiny XML file (a "filter") and are resolution-free:

Classic floor tiles - Created in Filter Forge

Bricks - Created in Filter Forge

Knitting Patterns - Created in Filter Forge

Each filter (a tiny XML file) has input settings which can be changed by the user, yielding an enormous variety of images from a single filter. Some of these "variations" are pre-defined in the XML.

They're constructed by combining relatively few noise functions in a node-based editor. This is of course a totally different approach from SVG's compositing process; however, it shows that XML can successfully represent photorealistic, resolution-free imagery, and that good support for noise functions is vital for achieving that. A bonus is that it can be generated entirely on the GPU using shaders too.

(Another disclaimer! I'm not in any way related to Filter Forge; however, I essentially built an experimental engine for generating noise-based images at runtime using shaders, and created an importer for filters from Filter Forge.)

Triangle meshes

Diving into another alternative, coming from the wonderful world of 3D graphics: triangle meshes are remarkably versatile and trivial to render on the GPU. I would assume this has been discussed a lot already, so I won't cover it too much; however, a triangle mesh primitive could allow for much more expressive gradients. For example, from an author's perspective, the creation process could be like so:

  1. Draw a closed path
  2. Click somewhere within the filled path to create a gradient node: a point combined with a colour.
  3. The renderer triangulates the path combined with the gradient node(s)
  4. The renderer applies the gradient node colours to the vertex colours of the triangle mesh

The above could be far more intuitive than the current mesh gradient proposal as well as being trivial to implement (as under the hood it's using triangles).
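The steps above can be sketched in a few lines. The following is purely illustrative (the function names and the single-node, convex-polygon restriction are my own simplifications, not anything from a spec): a single interior gradient node fan-triangulates a convex path, the node vertex gets the node's colour, and the boundary vertices keep the fill colour.

```python
# Hypothetical sketch of steps 3-4: fan-triangulate a convex polygon around
# one interior "gradient node" and assign per-vertex colours.

def fan_triangulate(polygon, node_xy, node_color, fill_color):
    """polygon: list of (x, y) boundary points of a convex path.
    Returns a list of triangles, each a list of (x, y, colour) vertices."""
    triangles = []
    n = len(polygon)
    for i in range(n):
        a = polygon[i]
        b = polygon[(i + 1) % n]
        triangles.append([
            (node_xy[0], node_xy[1], node_color),  # the gradient node vertex
            (a[0], a[1], fill_color),              # boundary keeps the fill colour
            (b[0], b[1], fill_color),
        ])
    return triangles

square = [(0, 0), (100, 0), (100, 100), (0, 100)]
tris = fan_triangulate(square, (50, 50), "blue", "red")
# One triangle per polygon edge; the GPU then interpolates the vertex colours.
```

Multiple gradient nodes would need a proper constrained triangulation rather than a simple fan, but the per-vertex colour assignment stays the same.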

SVG itself could declare one of two things (or both):

That's enough for today I think!

Bablakeluke commented 8 years ago

Here's a rough visual overview of a "gradient node" - the filled path is representative of any random path:

gradient-node

The resulting triangulation:

gradient-triangulated

Using multiple gradient nodes:

two-gradient-nodes

Their resulting triangulation:

two-gradient-nodes-triangulated

AmeliaBR commented 8 years ago

Fascinating examples, Luke.

How closely related are these to the shaders used by WebGL? I suspect we'd have an easier time getting browsers on board if any new features were just an XML interface to rendering code they already implement.

AmeliaBR commented 8 years ago

Another thing to think of: You mention that the XML shaders use a node-based processing method, rather than a compositing method. That in itself is not a problem. SVG filter effects use a node-based processing method, but work with rendered pixel data instead of initial vector data.

The original SVG 1.2 vector-effects proposal would have provided a way to create a node-based tree of vector manipulations to control rendering. Work on that proposal stopped because browsers didn't think many of the manipulations would be easy to implement with the underlying 2D graphics libraries that they use. However, if the manipulations could be mapped to another underlying graphics library (i.e., WebGL), that might make it more feasible.

Bablakeluke commented 8 years ago

Relationship with WebGL - Triangle meshes

This is fortunately about as simple as it gets: triangles are "the" primitive for 3D rendering, and vertex colours are universal too. In order to draw an SVG path on the GPU at the moment, the path is typically triangulated first anyway (except where specific path-rendering extensions, such as Nvidia's NV_path_rendering, are available on the underlying hardware), so GPU-based implementers have the functionality needed already - i.e. drop in additional vertices during the triangulation stage, or directly pipe through a raw triangle mesh.

As a quick example, Skia, the graphics library used in both Chrome and Firefox for SVG rendering, can already draw triangle meshes with both the CPU and GPU. The implementers who would need to do the most work are other CPU renderers; they'd need to interpolate vertex colours across a triangle (barycentric interpolation), but fortunately that's something which is extensively documented due to its ubiquity in 3D graphics.
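The per-pixel interpolation a CPU renderer would need can be sketched as follows. This is a minimal, generic barycentric-coordinate implementation (the function names are mine, not from Skia or any spec):

```python
# Sketch: interpolating vertex colours across a triangle via barycentric
# coordinates -- what a CPU rasteriser would do for each covered pixel.

def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w_a = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w_b = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w_a, w_b, 1.0 - w_a - w_b

def shade(p, tri):
    """tri: three ((x, y), (r, g, b)) vertices; returns the colour at p."""
    (a, ca), (b, cb), (c, cc) = tri
    wa, wb, wc = barycentric(p, a, b, c)
    return tuple(wa * ca[i] + wb * cb[i] + wc * cc[i] for i in range(3))

tri = (((0, 0), (255, 0, 0)), ((100, 0), (0, 255, 0)), ((0, 100), (0, 0, 255)))
# At a vertex, the interpolated colour is exactly that vertex's colour:
# shade((0, 0), tri) -> (255.0, 0.0, 0.0)
```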

Noise-based textures

For some clarity, "shader" here refers to a GPU shader: a user-written program (in WebGL, they're the ones that go in script tags). Noise functions are very commonly implemented in shaders and, as a result, hardware is starting to get native implementations too. Some of the underlying noise functions (from the experimental library), rendered as shaders on a sphere, look like this:

noise-functions

Each node has its own shader (Skia would implement these in HLSL and GLSL; mine are in Unity3D's ShaderLab language). Rendering is simply a case of drawing a quad (two triangles as a square) for each node using its associated shader. Some nodes, such as a mathematical 'add', require two sources - these require secondary off-screen render contexts (also common in WebGL). Importantly, as the quads are just ordinary triangle arrays, they are also compatible with triangulated SVG paths (as used by GPU implementers). To help with visualising this, here's a quick example:

3levelnoise (Image from http://www.neilblevins.com/cg_education/fractal_noise/fractal_noise.html)

GPU implementation:

  1. Draw a quad (or a triangulated path) using the perlin shader in the "target" render context
  2. Draw the second quad using the perlin shader to a second render context*
  3. Draw an 'add' quad in the target context, referencing the second context. 'Add' reads from the context it's being drawn to, adds the referenced one, and outputs the result
  4. The second render context is now junk; reuse it for the next draw
  5. Draw the third quad using the perlin shader to the second render context
  6. Draw a second 'add' quad over the first, referencing the second context (as before)
    • In this example, a second render context isn't actually necessary - we could draw everything in just the one context and alpha blending would do the addition "for us". However, in more complex examples like the tiles above, noise often isn't directly combined; whole trees of operations are. In short, that requires each branch to be drawn to its own context first.
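The six steps above can be emulated on the CPU in a few lines: render contexts become flat pixel buffers, and each "shader" is a per-pixel function. The three constant "noise" functions here are arbitrary stand-ins (real Perlin octaves would vary per pixel), so this only illustrates the data flow:

```python
# CPU emulation of the six-step GPU "stacking" recipe. Buffers stand in for
# render contexts; per-pixel functions stand in for shaders.

WIDTH, HEIGHT = 4, 4

def draw(context, shader):
    """'Draw a quad': run the shader for every pixel, overwriting the buffer."""
    for i in range(len(context)):
        context[i] = shader(i % WIDTH, i // WIDTH)

def draw_add(target, source):
    """The 'add' quad: read the target, add in the referenced context."""
    for i in range(len(target)):
        target[i] += source[i]

noise1 = lambda x, y: 0.5    # stand-ins for three perlin octaves
noise2 = lambda x, y: 0.25
noise3 = lambda x, y: 0.125

target = [0.0] * (WIDTH * HEIGHT)
second = [0.0] * (WIDTH * HEIGHT)

draw(target, noise1)        # step 1
draw(second, noise2)        # step 2
draw_add(target, second)    # step 3
draw(second, noise3)        # steps 4-5: reuse the junk context
draw_add(target, second)    # step 6
# Every pixel now holds 0.5 + 0.25 + 0.125 = 0.875
```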

This "stacking" approach has some distinctly nice properties:

AmeliaBR commented 8 years ago

Thanks for that explanation.

Have you played around with the existing noise functions available as part of SVG filters? You can get some of these effects. Here are some demos I put together of paper/wood textures.

Screenshot of the textures created by the linked CodePen, including one where the texture has been applied on a multi-color checkerboard pattern

The main limitation right now is performance: because SVG filters are applied as manipulations of the rendered bitmap version of the shape, any change to the underlying shape or its position causes the entire filter (including noise functions) to be re-calculated. So noise filters + animation are a big problem for performance.
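For reference, the existing primitive being discussed here is feTurbulence. A minimal paper-style texture filter in that vein looks roughly like this (a generic sketch with arbitrary parameter values, not the linked CodePen's actual code):

```xml
<filter id="paper">
  <feTurbulence type="fractalNoise" baseFrequency="0.04" numOctaves="5" result="noise"/>
  <feDiffuseLighting in="noise" lighting-color="#fffaf0" surfaceScale="2">
    <feDistantLight azimuth="45" elevation="60"/>
  </feDiffuseLighting>
</filter>
<rect width="200" height="200" filter="url(#paper)"/>
```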

nikosandronikos commented 8 years ago

As a general comment, this is a direction I would love to go with SVG in the future. I think procedural textures in particular would be very useful and popular.

My company has been looking at Diffusion Curves as a method for rendering advanced gradients. See http://andronikos.id.au/pub/ for some talks, and http://nikosandronikos.github.io/svg-adv-grad/ for a very draft proposal.

Diffusion curves are a nice example of a higher-level abstraction for gradients, but they're expensive to render. The gradient meshes described in SVG 2 can be rendered via triangulation, so they should be able to be accelerated via the GPU - though perhaps with a one-off initial expense. The triangle mesh format described by Boyé et al. in "A vectorial solver for free-form vector gradients" is a nice format that can be GPU accelerated and provides a lot of power (e.g. a lot of other gradient representations can be converted into that format for rendering).

There are a lot of cool things we could do, but I think it is important to focus on GPU-implementable algorithms, and we either need to keep things at a high enough level that authors can understand them, or provide a higher level of abstraction in addition (though this could also be achieved via libraries). Filter Effects is an example of something that is not very author-friendly.

Bablakeluke commented 8 years ago

Hey guys, apologies for the delay - I've been playing around with a few concepts amongst other projects; the above comments are awesome! I'm really looking forward to seeing where SVG is taken in the future.

@AmeliaBR Yep, I've used SVG filters, but they are very limited in comparison to what Filter Forge (and similar) can create, and I also agree that from an author's point of view they're quite difficult to use in their current form. I.e. wood starts looking more like this:

Image Image

The major node categories are ~16 noise options, gradients, graphs (very useful for "tone mapping"), mathematical operators, and repetition. SVG does kind-of cover these categories, but it's the current lack of diversity in the noise functions that sticks out the most. Performance is certainly a sticking point (Filter Forge itself renders on the CPU only, and it's very noticeable), so it's awesome to see there's a potential GPU focus; from my own experiments, images like the above are easily generated in realtime on the GPU.

As far as usability is concerned, I would imagine the major issue is that SVG filters represent a graph with a much more complex flow, instead of the very familiar tree structure used everywhere else in SVG. I.e. a path adds to whatever is before it; a blend does not. So maybe it would help to represent the graph of nodes as a tree as much as possible; that would somewhat bring it into alignment with the rest of SVG, and based on the above stacking approach, it would also make implementation simpler. There's an interesting range of options here:

<filter id='Wood'>
    <noise type="perlin" baseFrequency="0.3 0.1" numOctaves="4"/>
    <!-- Perlin noise is now on 'this' stack -->
    <blend operator="add" in2="anotherFilterID"/>
    <!-- Here we've now got noise + whatever anotherFilterID outputs -->
    <toneMap graph="graphID"/>
     <!-- It's now been tonemapped -->
</filter>

To me, the above is nice and easy to follow through. A nice bonus is that ordinary SVG nodes can be used too - i.e. dropping in a path node would be understandable. It still uses some ID references, but the flow is much better defined. To avoid ID references entirely, it could make use of nested inputs, essentially flipping the flow in reverse:

<!-- The filter node represents the one and only output, so start from that and work downwards -->
<filter id='Wood'>
    <toneMap>
        <graph>
            <!-- An inline tone mapping graph -->
        </graph>
        <blend operator="add">
            <!-- Adding noise to some other noise -->
            <noise type="perlin" baseFrequency="0.3 0.1" numOctaves="4"/>
            <noise type="voronoi" function="manhattan" baseFrequency="0.3"/>
        </blend>
    </toneMap>
</filter>

Lots of interesting combinations are available with the above. I like this form, but something feels slightly "off": its flow is upside down in comparison to the rest of SVG. For example, introducing multiple path nodes into the mix would therefore be a very odd experience indeed:

<filter id='Wood'>
    <path d=".."/>
     <!-- Who renders first..? -->
    <path d=".."/>
</filter>

However, at least it has a flow, and it can easily represent any number of inputs to each node.

Flipping back to the second part of this, triangle meshes, I can see a lot more advantages now if the gradient meshes can be GPU accelerated. I think it would still be quite convenient to introduce raw triangle meshes, possibly defined in the COLLADA format for easy compatibility, as it potentially allows for a polyfill option when introducing new features (i.e. if triangle meshes were available in an earlier iteration of SVG, a JS library could potentially polyfill gradient meshes by triangulating them and dropping the resulting triangles into the SVG's document).

SebastianZ commented 8 years ago

+1 to nesting filters, which would make them much more readable than having to use the in and in2 attributes. That's independent of this proposal, though, so I've created issue #258 for it.

Sebastian

sdwarwick commented 7 years ago

I'd be interested in some feedback on the following concept:

  1. It appears that the SVG 2 momentum has seriously slowed, if not halted.
  2. All browsers have now exposed WebGL's direct-rendering capabilities through JavaScript. The kinds of renderings possible with this capability far outstrip what seems possible in SVG.
  3. Given the structure of an SVG document, it doesn't seem to require any changes from the browser developers to have an SVG file that integrates a WebGL-based script, or an associated self-contained JavaScript "helper" library and script.

Many things open up from here:

  1. A purely JavaScript alternative to the declarative SVG language, embedded in an SVG document structure.
  2. A WebGL-based SVG interpreter/converter that could optionally be carried inside the SVG structure, enabling all kinds of enhancements to declarative SVG without changes to the browsers.
  3. An alternative declarative structure, as discussed above, also with a built-in interpreter and carried in the SVG structure for historical consistency.

Seeing the following changed my opinion of SVG forever:

https://threejs.org/examples/#webgl_animation_cloth

http://webglplayground.net/share/fluid-simulation2?gallery=1&fullscreen=0&width=800&height=600&header=1

http://webglplayground.net/share/traveling-wave-fronts?gallery=1&fullscreen=0&width=800&height=600&header=1

http://webglplayground.net/share/lsystem-tree-fractal?

The source code in JavaScript is so short..

Bablakeluke commented 7 years ago

@sdwarwick I can't comment on point #1 but:

In short, different technologies serving different purposes :) WebGL is for raw graphics rendering; SVG is for an abstract representation of a (scalable) graphic. For example, a texture on that cloth could be an SVG - using both SVG and WebGL together.

2: SVG isn't just for browsers - there are lots of SVG library implementations that aren't inside a browser. Ultimately, for a standard to be widely adopted, you want to minimize the dependencies; a JS engine and WebGL together represent a huge dependency vs. just an XML parser.

3: There are a few of these libraries out there already; search e.g. "SVG Polyfill". JS implementations of SVG will always run slower than a native one, but there's certainly nothing stopping you from using both.

4-6: SVG already has a JS API which can be used to build up an SVG that way.