servo / pathfinder

A fast, practical GPU rasterizer for fonts and vector graphics
Apache License 2.0

Write a Godot/Unity/etc. plugin #147

Open pcwalton opened 5 years ago

pcwalton commented 5 years ago

Several people have expressed interest in an HTML canvas-like API for Unity and so forth, and Pathfinder would be a great solution for this.

toolness commented 5 years ago

Okay, I think I am going to try giving this a shot (for Unity at least). Will report back in a few days!

pcwalton commented 5 years ago

Awesome! Let me know if you have any questions!

pcwalton commented 5 years ago

I think you will probably want to first make a C API binding for Pathfinder's canvas API (#12).

toolness commented 5 years ago

Ok, sounds good!

I am not super familiar with Pathfinder itself, so I am learning as I go. After looking at examples/canvas_minimal, it looks like the CanvasRenderingContext2D is largely independent of the Pathfinder renderer; this seems to be the only place any "hand-off" occurs between the canvas and the renderer:

https://github.com/pcwalton/pathfinder/blob/a5d373cb91a6f49a7f0019e1f4a4b9180d840bda/examples/canvas_minimal/src/main.rs#L77-L78

So I'm curious how this hand-off should work in a potential C API. You mentioned on Twitter that I need to wire up RenderCommand to Unity's CommandBuffer abstraction, and it looks like RenderCommand is defined here:

https://github.com/pcwalton/pathfinder/blob/a5d373cb91a6f49a7f0019e1f4a4b9180d840bda/renderer/src/gpu_data.rs#L29-L37

I'm also noticing that the SceneProxy created in the minimal canvas example has all kinds of methods to output RenderCommands.

Would a C API then have some kind of function that serializes the state of a CanvasRenderingContext2D into RenderCommands that client code gobbles up? The Unity plugin would wire these up into Unity's CommandBuffer abstraction, while other clients would do other things with them?
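One shape that "serialize commands, let the client gobble them up" could take is a callback-based drain. The sketch below is purely illustrative: the enum variants, `PFCommandCallback`, `pf_scene_proxy_build`, and `collect_commands` are all assumptions, not Pathfinder's real C API (the real RenderCommand lives in renderer/src/gpu_data.rs):

```rust
use std::ffi::c_void;

// Toy stand-in for the renderer's command stream.
#[derive(Clone, Debug, PartialEq)]
pub enum RenderCommand {
    Start { path_count: usize }, // begin a frame
    Finish,                      // end of the command stream
}

// C-style callback the client supplies; it is invoked once per command
// so the client can translate each one into its own abstraction
// (a Unity CommandBuffer, a GL call list, ...).
pub type PFCommandCallback =
    extern "C" fn(cmd: *const RenderCommand, userdata: *mut c_void);

// Hypothetical C export: flush the scene into commands and hand them
// to the client one at a time.
#[no_mangle]
pub extern "C" fn pf_scene_proxy_build(cb: PFCommandCallback, userdata: *mut c_void) {
    // A real binding would pull these from SceneProxy; fake two commands here.
    for cmd in &[RenderCommand::Start { path_count: 1 }, RenderCommand::Finish] {
        cb(cmd, userdata);
    }
}

// Safe demo wrapper: gather the streamed commands into a Vec.
pub fn collect_commands() -> Vec<RenderCommand> {
    extern "C" fn push(cmd: *const RenderCommand, userdata: *mut c_void) {
        let out = unsafe { &mut *(userdata as *mut Vec<RenderCommand>) };
        out.push(unsafe { (*cmd).clone() });
    }
    let mut out: Vec<RenderCommand> = Vec::new();
    pf_scene_proxy_build(push, &mut out as *mut _ as *mut c_void);
    out
}
```

The callback-plus-userdata pattern keeps the boundary C-compatible while letting each client own its command translation.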

I thought it might be neat to have a basic proof-of-concept C program that uses the C API to render a canvas drawing to a window via SDL2. I thought doing that might be a nice stepping-stone to building the Unity plugin, but I'm not sure.

Anyways, let me know what you think!

Update: I watched your pathfinder talk from the Bay Area Rust meetup on March 20, 2018 (which I can't permalink to because something very bad has happened to Air Mozilla), as well as your Pathfinder 1 talk from March 2017 and now I understand things better.

toolness commented 5 years ago

Ok, here are some more questions:

  1. In your March 2017 talk (which I think describes Pathfinder 1), you describe a delta coverage step followed by an accumulation step (the latter of which requires a compute shader). Is that still how Pathfinder 3 works at any point? I notice the architecture document's GPU pipeline steps describe an alpha coverage framebuffer, but I'm not sure whether it bears any resemblance to a delta coverage framebuffer.

  2. In your March 2018 talk you describe Pathfinder 2, and outline two different renderers that can be used: a stencil-and-cover one, and a mesh-based one. You also mention a tool that can be used to create the meshes so that non-Rust programs can just run Pathfinder's GPU-based code without having to deal with running any CPU-bound Rust code, and that a Rocket-based web server also generates meshes and delivers them to web-based clients that just run the shaders on the meshes in WebGL. Does this distinction between stencil-and-cover vs. mesh-based still exist?

  3. You mention something called "tiling" in https://github.com/pcwalton/pathfinder/issues/107#issuecomment-469326554. Is this the same thing as the tessellation you describe in your Pathfinder 1 talk, where you tessellate the edges of shapes into small lines and then expand them to their bounding boxes? Or is it something different?

Anyhow, I realize that this information isn't really needed for me to do the plugin work, but since it seems like I'm going to have to call Pathfinder's shaders from client code and pass all the data stored in the command buffers to the GPU, I'd like to know the basics of what they're actually doing... unless I'm completely misinterpreting what I need to do here, which is also entirely possible. 😳

pcwalton commented 5 years ago

So there are two approaches you might consider for Unity. One would be to write an implementation that processes RenderCommands. The other would be to hook up Unity's CommandBuffer to the Device trait. I'm not sure off the top of my head which is better—it depends on how well Unity works with external rendering code—but it will be easier to get started with the second one.
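The second approach, in miniature, is just trait dispatch: the renderer issues abstract GPU calls and the backend records them. The sketch below uses a toy two-method trait (the real Device trait in Pathfinder's GPU crate has many more methods) and a mock "UnityCommandBuffer"; none of these signatures are the real ones:

```rust
// Toy stand-in for Pathfinder's Device trait.
trait Device {
    fn clear(&mut self, rgba: [f32; 4]);
    fn draw_elements(&mut self, index_count: u32);
}

// Mock of Unity's CommandBuffer: records calls for later replay.
#[derive(Default)]
struct UnityCommandBuffer {
    recorded: Vec<String>,
}

impl Device for UnityCommandBuffer {
    fn clear(&mut self, rgba: [f32; 4]) {
        self.recorded.push(format!("ClearRenderTarget({:?})", rgba));
    }
    fn draw_elements(&mut self, index_count: u32) {
        self.recorded.push(format!("DrawProcedural(indices={})", index_count));
    }
}

// Stand-in for what the renderer does through the trait: it issues
// abstract GPU calls without knowing which backend receives them.
fn render_frame(device: &mut dyn Device) {
    device.clear([1.0, 1.0, 1.0, 1.0]);
    device.draw_elements(6);
}
```

The appeal of this route is that the renderer stays unchanged; all Unity-specific knowledge lives in the trait impl.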

Regarding how Pathfinder 3 works, Nical is writing up a document that should hopefully answer your questions: https://docs.google.com/document/d/146WIsAu7aYC_uvinpCURLS1K8TTSVREAorBi0GCIAMw/edit#heading=h.fr0f8bu4ga2h

toolness commented 5 years ago

Oh awesome! Yes, when I was looking at the code I actually thought it might be easier to bridge things at the Device trait level, so I'm glad you think that's an option too.

I will get started with that and report back!

And the Pathfinder 3 google doc looks extremely useful--thanks for the link.

toolness commented 5 years ago

Okay, here's an update on my progress so far.

I investigated Unity's CommandBuffer API, and the main challenge with using it--or any other scriptable API in Unity, really--appears to be that it's quite hard to dynamically compile a shader in Unity in the way that Pathfinder's Device trait expects. Unity's documentation also discourages the use of GLSL: writing shaders in Cg/HLSL lets Unity automatically convert them to GLSL for any platforms that require it, while the opposite conversion doesn't seem to exist. There are ways for me to work around those limitations, I think, but they all seem to involve a lot of work.

But then I researched more about Unity's low-level native plug-in interface, and specifically about the ability for such plug-ins to directly communicate with whatever low-level graphics back-end Unity is currently using. This means that if Unity is using OpenGL, we should be able to simply use Pathfinder's existing OpenGL-based machinery to render things directly, without having to deal directly with connecting RenderCommands to CommandBuffers or implementing our own Device trait for Unity. The downside will be that the plugin will only work with whatever graphics back-ends Unity and Pathfinder have in common (i.e., just OpenGL for now), but it should also involve a lot less work and be easier to maintain.
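For context, Unity's documented render-event mechanism has C# fetch a native callback pointer and schedule it with GL.IssuePluginEvent, and Unity then invokes that callback on its render thread, where GL calls against the engine's context are legal. A minimal sketch of that FFI shape in Rust, with the actual Pathfinder rendering elided (the frame counter is just a placeholder so the hook does something observable):

```rust
use std::os::raw::c_int;
use std::sync::atomic::{AtomicU64, Ordering};

// Placeholder state standing in for the Pathfinder renderer.
static FRAMES_RENDERED: AtomicU64 = AtomicU64::new(0);

// Runs on Unity's render thread with the engine's GL context current;
// this is where the Pathfinder OpenGL renderer would be driven.
extern "system" fn on_render_event(_event_id: c_int) {
    FRAMES_RENDERED.fetch_add(1, Ordering::SeqCst);
}

// Exported so C# can fetch the callback via DllImport and pass it
// to GL.IssuePluginEvent.
#[allow(non_snake_case)]
#[no_mangle]
pub extern "system" fn GetRenderEventFunc() -> extern "system" fn(c_int) {
    on_render_event
}

// Helper so the host side can observe the placeholder state.
pub fn frames_rendered() -> u64 {
    FRAMES_RENDERED.load(Ordering::SeqCst)
}
```

On the C# side this pairs with something like `GL.IssuePluginEvent(GetRenderEventFunc(), 1)` issued each frame.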

That's my current hunch, at least. I've found a nice Native code (C++) rendering plugin example for Unity that has examples of all the sorts of Unity-Native Code communication we'd need to try out such a strategy, and I'm currently working on a very simple Rust-based plugin that I've got talking to Unity and getting information about the current renderer from it. My next step will be to have it actually render Pathfinder's house example to Unity. 🤞

Does that sound reasonable? Or is only initially supporting OpenGL-based Unity projects a no-go?

I also had a question about the original use case: what kinds of things do users want to do with a Pathfinder Unity plugin? For instance, do they want to be able to render a canvas into a texture that can then be used in Unity? Or do they want to render the canvas directly onto objects, bypassing any kind of textures, like a sort of decal system? Or do they simply want to take over the whole framebuffer with a 2D canvas? Or something else entirely?

pcwalton commented 5 years ago

I think it sounds like a good plan to start with OpenGL. We can always do an HLSL port later.

I'm not sure what people are going to need, but I was thinking that allowing drawing HUD or UI objects with HTML canvas would be an interesting thing to start with.

quentinbaradat commented 5 years ago

Wouldn't Pathfinder be suitable for 2D vector graphics animation?

Sometimes the game environment is in 3D and the characters are in 2D.

pcwalton commented 5 years ago

@quentinbaradat Pathfinder certainly should work for that use case! I'm open to whatever people think is useful. We just need to start somewhere :)

toolness commented 5 years ago

Okay I made some progress!

[screenshot: the house example rendered inside Unity]

The above screenshot shows the plugin rendering the house example during OnPostRender. The house is drawn using the same Rust code in the canvas_minimal example, i.e. it's "hard-coded" into the plugin because I've solely been focusing on getting the Unity/OpenGL/Pathfinder communication working for now.

Some notes:

Anyways, the code is at toolness/pathfinder-unity-fun if you want to try it out yourself. I think the next steps are to figure out what's behind some of the bugs I mentioned above, and to start exposing the C canvas APIs you added in e04cc273eec352e3a5da1a96b3f026cbfdc676b6 so the house can be drawn in C# instead of hard-coded into the plugin.

pcwalton commented 5 years ago

Sweet! This is great!

quentinbaradat commented 5 years ago

This is awesome!

I would be very interested in using this work and contributing to the #159 Flash front-end enhancement, to draw and animate shapes and sprites (from SWF files) in Unity.

I also noticed the same "weird visual artifact" when I run the canvas_minimal demo on Windows.

pcwalton commented 5 years ago

> I also noticed the same "weird visual artifact" when I run the canvas_minimal demo on Windows.

I can't reproduce this—could you file an issue? It's likely something similar to #134.

quentinbaradat commented 5 years ago

This artifact no longer occurs on my side. I just did a pull; it was fixed between 639a8f3 and 4327d75.

[screenshot with the artifact]

And now

[screenshot after the fix]

@toolness this also happens with an old commit; a pull command would fix it :-)

toolness commented 5 years ago

Awesome, thanks @quentinbaradat, I just pulled the latest master and everything works great now!

I also just opened #167 to fix the resizing problem, and @pcwalton merged it, so that's taken care of.

Now I'm working on porting the Canvas API, which I'm doing in a WIP PR over at https://github.com/toolness/pathfinder-unity-fun/pull/1. I'm writing notes there about the various technical decisions I'm making, if anyone wants to read them. Advice is welcome!

pcwalton commented 5 years ago

@toolness That's cool! I was thinking about demoing a simple HUD in Pathfinder: something like this perhaps.

The fact that paths are consumed when filled/stroked is bad. I should just fix that :)

pcwalton commented 5 years ago

@toolness So I looked at taking paths by reference. Unfortunately, that would result in more copying in some cases. So I think the best thing to do is to actually copy the paths on your side (i.e. in the C# bindings) when they're filled or stroked.

toolness commented 5 years ago

Ok cool! I've modified PFPath to clone internally as you suggested. Any suggestions for what to do about PFCanvas, though? It doesn't seem to be cloneable, even in the Rust API... I could do what I was originally doing with PFPath and make PFCanvas remember whether it's been consumed. Or I could create a new canvas with the same size as the previous one whenever it gets consumed by a scene, but that feels weird, since its internal state would be reset... any other ideas?
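The "remember if consumed" idea could be sketched as wrapping the canvas in an Option and take()-ing it on consumption, turning any later use into a clean error instead of undefined behavior. All names below are illustrative stand-ins, not the real binding:

```rust
// Toy stand-in for the underlying canvas.
struct Canvas {
    width: u32,
    height: u32,
}

// Hypothetical binding-side wrapper tracking whether the canvas
// has already been consumed by a scene.
struct PFCanvas {
    inner: Option<Canvas>,
}

impl PFCanvas {
    fn new(width: u32, height: u32) -> Self {
        PFCanvas { inner: Some(Canvas { width, height }) }
    }

    // Consumes the underlying canvas, as scene-building does in the
    // Rust API; a second call reports the error instead of crashing.
    fn into_scene(&mut self) -> Result<(u32, u32), &'static str> {
        match self.inner.take() {
            Some(c) => Ok((c.width, c.height)), // stand-in for building the scene
            None => Err("canvas already consumed"),
        }
    }
}
```

The upside of this pattern is that misuse surfaces as a recoverable error on the C#/C side rather than a use-after-move in Rust.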

I'm also happy to build the example out into a simple HUD like the screenshot you posted, that would be fun!

toolness commented 5 years ago

Okay, I've got a very basic Unity Pathfinder API running now--the demo still looks the same, but all the canvas/path drawing is occurring in C# rather than Rust.

I also modified rendering so it always checks to see if the current framebuffer ID has changed--which Unity seems to do sometimes, even if the window size stays the same--and if so, it updates the Pathfinder renderer to target it. This appears to fix the OPENGL NATIVE PLUG-IN ERROR messages that were showing up in the Unity status bar and console.
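The per-frame check described above amounts to caching the last framebuffer ID and retargeting only on change. A sketch with the GL query and renderer call mocked out (in the plugin the ID would come from something like glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, ...)):

```rust
// Tracks the framebuffer Unity last handed us.
struct RenderTargetTracker {
    last_fbo: Option<u32>,
    retargets: u32, // counts how often we had to retarget (for the sketch)
}

impl RenderTargetTracker {
    fn new() -> Self {
        RenderTargetTracker { last_fbo: None, retargets: 0 }
    }

    // Call once per frame with the currently bound framebuffer ID;
    // retarget the renderer only when it differs from last frame.
    fn check(&mut self, current_fbo: u32) {
        if self.last_fbo != Some(current_fbo) {
            self.last_fbo = Some(current_fbo);
            // Stand-in for pointing the Pathfinder renderer at the new
            // destination framebuffer.
            self.retargets += 1;
        }
    }
}
```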

toolness commented 5 years ago

Ok, here's what I've got so far:

Update: see my tweet for an animated version of this.

The instructions centered at the top are rendered by Pathfinder (it's hard to see in the screenshot, but those are smart quotes around the "p", so Unicode is working too).

I've also got the moire demo (albeit with a very low circle count of 2) rendering into a Unity Render Texture which is being mapped onto the surface of a rotating cube.

Some other notes:

If anyone wants a development build of this demo app, or if they'd like to try out the plugin in Unity, let me know and I'll publish something. Just keep in mind that this currently only works on Windows, and the Unity project has to be configured to use OpenGL.

pcwalton commented 5 years ago

Cool! I'm working on some changes to make this use case significantly easier. Specifically, what I want to do is to composite tiles directly onto 3D surfaces, rendering tiles at different scales and caching them from frame to frame. They can then be exposed as RenderTextures so that you can compose them easily into a Unity scene. Also, I expect this to improve performance quite a bit due to the caching. CPU usage should drop dramatically; in my test branch I've seen amortized CPU usage drops of 90% or more.

This is requiring some significant restructuring, but the results should be well worth it.
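The caching idea can be pictured as a map of rasterized tiles keyed by position and scale that survives across frames, so a steady scene re-renders nothing. This is only a sketch of the concept, not Pathfinder's actual implementation:

```rust
use std::collections::HashMap;

// (tile x, tile y, scale level)
type TileKey = (i32, i32, u32);

#[derive(Default)]
struct TileCache {
    tiles: HashMap<TileKey, Vec<u8>>, // toy stand-in for rasterized tile pixels
    rendered_this_frame: usize,
}

impl TileCache {
    fn begin_frame(&mut self) {
        self.rendered_this_frame = 0;
    }

    // Fetch a tile, rasterizing only on a cache miss; hits cost nothing
    // on the CPU, which is where the amortized savings come from.
    fn get_or_render(&mut self, key: TileKey) -> &Vec<u8> {
        if !self.tiles.contains_key(&key) {
            self.rendered_this_frame += 1;
            self.tiles.insert(key, vec![0u8; 16 * 16]); // stand-in raster
        }
        self.tiles.get(&key).unwrap()
    }
}
```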

pcwalton commented 5 years ago

BTW, if you haven't created an optimized build yet, it will certainly be very slow. :)

toolness commented 5 years ago

Whoa that's awesome!!!! Looking forward to that work :)

I will also try creating an optimized build and report back.

By the way, I noticed there are some features of HTML5 Canvas that aren't directly implemented by Pathfinder, such as saving/restoring the rendering state and transformation matrices. Are these features you want implemented in Pathfinder, or should I implement them on the plugin/C# side? If you want them in Pathfinder, I could submit a PR.

pcwalton commented 5 years ago

Feel free to implement any and all features you need upstream.