trusktr opened 7 years ago
What's really cool about this idea is that we can leverage libraries like http://snapsvg.io to manipulate the SVG elements in the DOM, and they will simply render to WebGL!
Notes:

- ~~Two.js depends on the `canvas` package, which fails to build if the system does not have cairo installed natively. So it is not currently possible to import Two in a reasonable manner: in any environment (e.g. Meteor, Webpack, etc.) the module will not be found, or `npm install` will fail.~~ Two.js can interpret and draw an SVG (similar to Pixi.js + pixi-svg).
- Have the `WebComponent` class implement `childConnectedCallback` and `childDisconnectedCallback` methods. In fact, we can probably reuse logic from there. Once we have that, we can map observations to the drawing backend (Two.js or Pixi.js).

Joe, congratulations!
After this post https://github.com/pocketsvg/PocketSVG/issues/94#issuecomment-315638684 I've seen examples of your passion project. WOW! 👍 🥇
I came to your work and this thread because I am looking for ways to accelerate SVG graphics.
I am also curating a passion project: a discovery engine, a platform to search for connections between things, to help brainstorm and learn about the context of things we don't know yet.
I wonder if Infamous can help the browser speed up the rendering and compositing of SVG: from what you write in this thread, it seems so.
Conceptually, could you help me understand how Infamous handles rendering DOM elements as WebGL? And how did this idea cross your mind? :)
My use case is like this:
I am expanding a network layout on demand: I click on a node, the graph layout fetches new nodes (represented as SVG groups) and places them in the DOM. Coordinates are moved around with translate properties, but when there are many elements, things get sloow. I would like to delegate the rendering of the SVG elements to a WebGL engine, while ideally keeping the SVG elements and all their CSS styling.
I also tested a network layout using PIXI.js, and it is nice; however, I believe that the readability (SVG fonts are always crisp) and the possibility to directly interact with SVG elements would open a lot of opportunities for applications that are not games, but could be flavoured with a spice of "gamification" while keeping the content of a normal web page document. I mean, instead of making a canvas app with a nice look and feel, one could leverage an HTML web page for utility purposes, with an outstandingly engaging look, feel, and responsiveness.
I see a lot of potential for data-driven things... for "public audiences"... :]
The interest in using SVG is also because I invested a lot in tuning the UX and parameters for displaying and laying out elements. Moving SVG elements around gets slooow quite quickly, especially on mobile handsets; if we could leverage the browser to move SVG elements around fst (so fast that the 'a' disappears), we could combine the utility of the web with the blazing experiences of WebGL (typically reserved for games).
Would you be interested in taking a look at my use case, telling me whether using Infamous to render the SVG would be appropriate, and helping estimate the performance and complexity of delegating SVG rendering to WebGL?
The layout structure I animate makes heavy use of groups like the one below to represent a node of a connected graph. I would like to speed up the rendering and layout responsiveness when moving elements around.
<g id="data-3bGJprg6D4eROBYv" transform="translate(-5789,-3116)">
<!-- move the node according to force-directed layout or other layout engines -->
<title>Bottlenose dolphin</title>
<circle fill="white" r="120" cx="0" cy="30" stroke-width="30" stroke="lime" id="3bGJprg6D4eROBYv" fill-opacity="1"></circle>
<ellipse fill="#ffffff" rx="128" ry="128" cx="0" cy="32" stroke-width="20px" stroke="#ffc107" fill-opacity="0" class="currentNodeMarker"></ellipse>
<text xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" x="0" y="0" text-anchor="middle" id="text-3bGJprg6D4eROBYv">
<tspan>Bottlenose dolphin</tspan></text>
<clipPath id="image-3bGJprg6D4eROBYv">
<circle fill="#ffffff" r="120" cx="0" cy="30"></circle></clipPath>
<image id="296994" clip-path="url(#image-3bGJprg6D4eROBYv)" width="480" height="480" x="-240" y="-194.4" xlink:href="https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Tursiops_truncatus_01.jpg/100px-Tursiops_truncatus_01.jpg"></image>
</g>
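For context on how such a group is animated: each layout tick rewrites the group's `transform` attribute. A minimal sketch of that pattern (the helper name is mine, not from the thread):

```javascript
// Hypothetical helper: format a node's layout position as the SVG
// transform string used by the <g> element above.
function translateAttr(x, y) {
  return `translate(${Math.round(x)},${Math.round(y)})`;
}

// In the browser, each layout tick would then do something like:
//   group.setAttribute('transform', translateAttr(node.x, node.y));
// Batching all such writes inside one requestAnimationFrame callback helps,
// but with many nodes the SVG re-rendering itself becomes the bottleneck.
```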
Hi Luigi! Sorry for the late reply, I sometimes lose the notifications in gmail, but I eventually come back to check the issues on GitHub. Thanks for the interest!
(Side note, infamous will be renamed and have a new website soon. 😄)
> I wonder if Infamous can help the browser speed up the rendering and compositing of SVG
Definitely possible, but at the moment I haven't had a business need for the WebGL SVGs anymore, so that part is on pause in an old commit that could be resurrected when needed. It needs lots of work though, and I only have time for so much as one person working on this.
> Conceptually, could you help me understand how Infamous handles rendering DOM elements as WebGL? And how did this idea cross your mind? :)
For SVG DOM, I was handing the SVG over to Pixi.js (and similar libs) and exposing the output through infamous. That's about as far as I got, but I didn't have time to optimize it more.
For regular DOM (e.g. `<button>` elements), I'm not actually rendering those in WebGL. I came up with a trick (and this works with `<svg>` elements too): I render the DOM like normal in one layer, then I render the WebGL in a second layer on top of the DOM.
For example, see this codepen: HTML Buttons with Real Shadow
If you right-click on the scene and then inspect it, you'll see that inside the `#shadow-root` of the `<i-scene>` element there are two layers. If you delete one of them, you can see what it looks like with only DOM rendering or only WebGL rendering.
The trick is to align the WebGL 3D object transforms with the DOM CSS3D transforms, so that they are "in the same space". Then in the WebGL I have a `Plane` geometry to represent the flat DOM elements, so that I can use them for rendering shadows, etc.
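The alignment boils down to a coordinate conversion (an illustration of the idea, not infamous's actual code): DOM rects use a top-left origin with y growing downward, while a typical WebGL scene uses a centered origin with y growing upward, so the plane for each element is placed like this:

```javascript
// Convert a DOM element's rect (top-left origin, y down) into centered,
// y-up coordinates for the WebGL plane that stands in for the element,
// so both layers end up "in the same space".
function domRectToGL(rect, sceneWidth, sceneHeight) {
  return {
    x: rect.x + rect.width / 2 - sceneWidth / 2,     // center-origin x
    y: sceneHeight / 2 - (rect.y + rect.height / 2), // flipped y axis
    width: rect.width,
    height: rect.height,
  };
}
```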
But there are some caveats currently, for example if a DOM element is transparent, then an intersecting Mesh is not rendered on the other side. For example: https://codepen.io/trusktr/pen/NXbVOr
I've thought about ways to fix this, but that's on the back burner for later. I'm working to release detailed documentation next (including the caveats), then go from there.
Here's another example that isn't as nice looking, but you can see that the DOM elements appear to intersect with the WebGL objects. This is all thanks to the two-layer trick:
Try clicking in the pink square and editing the text! It works because it is just regular DOM with `contenteditable`, and I'm not rendering that part with WebGL. The WebGL layer on top adds the Cube and Sphere geometries and the lighting/shadow.
VR mode is in the works: https://forums.infamous.io/t/vr-mode-initial-experimental-api/932/1
> I would like to delegate the render of SVG elements to webGL engine, but possibly keeping the SVG elements and all their CSS styling.
As I'm not focused on the SVG stuff at the moment, I would recommend a couple of things for you to try:
`pixi-svg` is a tool for interpreting SVG into Pixi.js objects, but it doesn't support CSS styling: you'd have to move those styles to attributes, and avoid using the `transform` attribute. But Pixi.js itself has a 2D WebGL scene graph with transforms, so it wouldn't be difficult to traverse your `<svg>` and collect all the transforms to apply. Once you've got the SVG shapes as individual Pixi Graphics objects, animating them around in 2D will be very fast compared to SVG.

The feature here in infamous had the same caveats as those libs, because I was experimenting with those libs in order to bring the SVGs into the 3D space in infamous.
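The traversal idea can be sketched like this (hypothetical code that handles only `translate()` transforms; the node objects stand in for DOM elements so the logic is visible without a browser):

```javascript
// Parse a translate(x,y) transform attribute; anything else maps to (0,0).
function parseTranslate(attr) {
  const m = /translate\(\s*(-?[\d.]+)[,\s]+(-?[\d.]+)\s*\)/.exec(attr || '');
  return m ? { x: +m[1], y: +m[2] } : { x: 0, y: 0 };
}

// Walk the tree, accumulating translations down from the root, and emit
// each shape with its absolute position, ready to create Pixi Graphics.
function collectShapes(node, ox = 0, oy = 0, out = []) {
  const t = parseTranslate(node.transform);
  const x = ox + t.x, y = oy + t.y;
  if (node.tag !== 'g' && node.tag !== 'svg') out.push({ tag: node.tag, x, y });
  for (const child of node.children || []) collectShapes(child, x, y, out);
  return out;
}
```

In a real page you would read `el.getAttribute('transform')` while walking the `<svg>` subtree, and also handle rotate/scale (or parse the full transform list).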
> I also tested a network layout using PIXI.js, and it is nice; however I believe that readability (SVG fonts are always crispy) and possibilities to directly interact with SVG elements would let a lot of open opportunities for applications that are not games
Oh you already tried Pixi. I'm replying as I go. :) Text rendering can be crisp if you render it just the right way. After all it's pixels on the screen which you're in control of, though by default Pixi may not be giving you the same setup as SVG.
I think Pixi.js scenes can be as interactive as SVG DOM. The only thing missing is text selection. Pixi has the tools for detecting mouse over, mouse out, etc. What else are you looking for there? Just the text selection? Someone would have to implement that in WebGL (one of my pipe-dream features for Infamous).
> I mean, instead of making an app with canvas with nice look and feel, one can leverage an HTML web page for utility purpose and with outstandingly engaging look and feel and responsiveness.
That's my goal with Infamous. 😄 It just takes time.
> if we could leverage the browser to move around SVG elements fst (so fast that the 'a' disappear) we could combine utility of web + blazing experiences of webGL (typically for games purpose).
I'm with you on that one! That's one of the reasons I started Infamous. :) It continues to be my free-time project as I work towards that.
> Would you be interested in taking a look at my use case, tell if using Infamous to render the SVG could be appropriate and help in estimating performance and complexity to delegate rendering of SVG to webGL?
In the end, the feature wasn't any better than the Pixi or Two examples, except for the fact that you could have an SVG DOM to interact with. Maybe I'll be able to resurrect it at some point. What I was doing was observing an `<svg>` element with `MutationObserver`, then updating the WebGL whenever the DOM changed. It also had performance problems.
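The observation step can be sketched like this (illustrative only; the records are plain objects shaped like real `MutationRecord`s so the mapping logic runs without a DOM):

```javascript
// Turn mutation records into update operations for the WebGL backend.
function recordsToOps(records) {
  const ops = [];
  for (const r of records) {
    if (r.type === 'attributes') {
      // e.g. a transform or fill changed: re-sync that one node.
      ops.push({ op: 'update', node: r.target, attr: r.attributeName });
    } else if (r.type === 'childList') {
      for (const n of r.addedNodes || []) ops.push({ op: 'add', node: n });
      for (const n of r.removedNodes || []) ops.push({ op: 'remove', node: n });
    }
  }
  return ops;
}

// In the browser, a real observer would feed this function
// (applyToBackend is a hypothetical sync function):
//   new MutationObserver(recs => applyToBackend(recordsToOps(recs)))
//     .observe(svgEl, { attributes: true, childList: true, subtree: true });
```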
Ideally, I'd set the `<svg>` to `display: none`, then use my newer `element-behaviors` to effectively make all of the SVG elements behave like custom elements, and use the life-cycle hooks for more fine-grained observation of the changes that happen to the SVG DOM, which would make the observation much faster.
At the moment I've been focused on touching up the API and making it more developer-friendly, followed by documentation for the upcoming website. I've also focused on refactoring the WebGL renderer and adding VR mode and Mixed Mode (regular DOM + WebGL mixed together), which will be the focus of the new docs.
Another thing you could try is placing the SVG groups inside individual `<svg>` elements, and placing those inside my `<i-node>` elements. This would result in pieces of SVG that are animated in 3D space with accelerated CSS 3D transforms. It may not be as fast as WebGL, but it may be faster than plain SVG (not sure though; it'd need to be tested).

Plus then you could add shine, shadow, etc. to the SVG surfaces (think of them as rectangular surfaces; in the future I'll have more shapes).
Basically, it would be like this:
<i-scene>
  <i-node position="30 40 50" rotation="20 30 40" size="100 100">
    <svg>
      ...
    </svg>
  </i-node>
</i-scene>
Then it would be possible to move the nodes around in 3D space using the `position` and `rotation` attributes, etc. The idea is similar to the Button demo above, but instead of putting a `button` element in the scene you'd put an `svg` element in there.
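Driving that from script would amount to reading and rewriting those attributes; a small hypothetical helper (the attribute format is taken from the markup above, the helper name is mine):

```javascript
// Parse a space-separated attribute value like position="30 40 50".
function parseTriplet(value) {
  return value.trim().split(/\s+/).map(Number);
}

// In the browser, nudging a node each frame could look like:
//   const [x, y, z] = parseTriplet(node.getAttribute('position'));
//   node.setAttribute('position', `${x + dx} ${y} ${z}`);
```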
Here's a simple example of that, placing an SVG inside an `i-node` element and rotating it in 3D space: https://codepen.io/trusktr/pen/EObVZj

In the future, the WebGL rendering will work as I've shown in that demo, with almost the same markup (but instead of `i-node` it may be `i-svg-node`, in order to opt in to the WebGL rendering instead of DOM rendering of the SVG content).
We can take pixi.js and one of the following existing SVG-to-pixi tools in the community, and add a feature to infamous where we can place `<svg>` elements (and all the child drawing elements like `<line>`, `<path>`, `<circle>`, etc.) inside of a `<motor-node>` element, and infamous can render it in WebGL (not just DOM+CSS3D like currently) for super fast performance.

For example, we can write HTML like:
(This may not be the actual final API, it isn't implemented yet)
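The example markup itself doesn't appear here, but judging from the `<i-node>` example shown earlier in the thread, it presumably looked something like the following (a guess at the then-unimplemented API; `<motor-scene>` in particular is my assumption, as only `<motor-node>` is named in the text):

```html
<motor-scene>
  <motor-node position="30 40 50" size="100 100">
    <svg>
      <circle r="120" cx="0" cy="30" fill="white" stroke="lime"></circle>
    </svg>
  </motor-node>
</motor-scene>
```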
We can leverage one of the following (or make our own based on the techniques they use):