servo / pathfinder

A fast, practical GPU rasterizer for fonts and vector graphics
Apache License 2.0

WebGPU backend #306

Open tangmi opened 4 years ago

tangmi commented 4 years ago

Hi, I've been playing around with implementing a WebGPU backend using wgpu-rs. I'm basing my current work largely on the Metal backend (in my mind, WebGPU is a very slightly more spartan version of Metal...).

I have some questions:

My current progress is here: https://github.com/servo/pathfinder/compare/master...tangmi:webgpu-backend. I've run into some issues, such as: WebGPU doesn't support some of the vertex attribute types pathfinder uses (e.g. Char1, Short1, etc.), and it needs SPIR-V (WGSL, maybe?) shaders along with reflection info (which either needs a runtime solution or a way to pack that info into the compiled shader resources).

Thanks for your time and for working on this awesome project!

s3bk commented 4 years ago

WebGPU would definitely be nice!

pcwalton commented 4 years ago

Great work! Yes, it'd be great to have this.

Is there any chance we could get those attribute types added to WebGPU? I can work around it if necessary, but I'd like to see if there's a reason for not supporting these.

tangmi commented 4 years ago

~~It looks like some of the texture formats should be supported by the spec (https://gpuweb.github.io/gpuweb/#texture-formats), but e.g. WebGPU doesn't seem to have support for many 3-component formats, like rgb8~~

Edit: whoops, I had a brain fart, the vertex attribute formats are here: https://gpuweb.github.io/gpuweb/#vertex-formats

I always forget what's allowed or not in shaders, but if we (unsafely) declare these vertex attributes as bigger than they really are, so that they "overlap" the subsequent attribute, and the shader only reads the "safe" components of each attribute, we may be fine?
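
To make that concrete, here's a rough sketch of what the overlap trick might look like against a recent wgpu API (the instance layout and names here are hypothetical):

// Hypothetical sketch of the overlap idea against a recent wgpu API.
// Assumed per-instance layout: [from_px: u8][to_px: u8][tile_index: u16].
// Each u8 is declared as Uint8x2 (the narrowest u8 vertex format), so its
// second component overlaps the next field and the shader only reads .x.
let attributes = [
    wgpu::VertexAttribute {
        format: wgpu::VertexFormat::Uint8x2, // really a u8; .y overlaps `to_px`
        offset: 0,
        shader_location: 0,
    },
    wgpu::VertexAttribute {
        format: wgpu::VertexFormat::Uint8x2, // really a u8; .y overlaps `tile_index`
        offset: 1,
        shader_location: 1,
    },
    wgpu::VertexAttribute {
        format: wgpu::VertexFormat::Uint16x2, // really a u16; .y reads padding bytes
        offset: 2,
        shader_location: 2,
    },
];
// WebGPU requires offset + format size <= array_stride, so widening the trailing
// u16 to Uint16x2 only validates if the stride is padded (4 -> 6 bytes here).
let overlapped = wgpu::VertexBufferLayout {
    array_stride: 6, // 4 bytes of real data + 2 bytes of padding per instance
    step_mode: wgpu::VertexStepMode::Instance,
    attributes: &attributes,
};

The padding changes the buffer layout anyway, though, so it may not win much over just widening the types.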

cart commented 4 years ago

I'm currently working on a custom pathfinder backend for my project, which also uses wgpu under the hood. I don't use the wgpu APIs directly, but I'm hoping that my work can feed directly into this effort. I think I'm pretty close.

The biggest issues I've encountered so far:

  1. Vulkan shader layout requires explicit bindings, and these cannot be generated from the existing GLSL shaders. My fix here is to define explicit bindings in the "source" GLSL shaders. The other, non-explicit shaders can then be generated using this approach: GLSL (explicit bindings) --glslangValidator--> SPIR-V --spirv-cross--> GLSL 330/430
  2. Char1/Short1. As mentioned above, wgpu does not support these types, nor can it, according to @kvark. The workaround here is to add a cargo feature and some additional logic to pathfinder that uses u32 instead of u8/u16 when the feature is set.
  3. Combined sampler/textures are not supported by wgpu, but they are used everywhere in pathfinder. The fix here is to define the textures and samplers separately in the source shaders (see the sketch after this list). Platforms that require combined samplers (e.g. GLSL 330) will automatically have them recombined by the compilation pipeline outlined in (1).
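
For (3), here's roughly what a separate texture + sampler ends up looking like on the wgpu side (a sketch against a recent wgpu API; the binding numbers and names are illustrative, not pathfinder's actual layout):

// Sketch: one pathfinder combined sampler2D (e.g. uAreaLUT) becomes two
// separate bind group layout entries in wgpu. `wgpu_device` is a wgpu::Device.
let bind_group_layout = wgpu_device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
    label: Some("fill bind group layout (sketch)"),
    entries: &[
        wgpu::BindGroupLayoutEntry {
            binding: 0, // the texture itself
            visibility: wgpu::ShaderStages::FRAGMENT,
            ty: wgpu::BindingType::Texture {
                sample_type: wgpu::TextureSampleType::Float { filterable: true },
                view_dimension: wgpu::TextureViewDimension::D2,
                multisampled: false,
            },
            count: None,
        },
        wgpu::BindGroupLayoutEntry {
            binding: 1, // its sampler, as a separate binding
            visibility: wgpu::ShaderStages::FRAGMENT,
            ty: wgpu::BindingType::Sampler(wgpu::SamplerBindingType::Filtering),
            count: None,
        },
    ],
});

The corresponding Vulkan-style GLSL declares the texture2D and the sampler separately with explicit set/binding qualifiers.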

I am currently wrapping up a branch for (2) and I will submit it for review in the near future. The changes would be non-breaking for the existing backends and wouldn't change the data layout. I think the most controversial change will be how vertex attributes are defined. For my own sanity I decided to automatically calculate vertex attribute descriptor offsets / strides. I think this is significantly more legible and protects against multiple classes of errors.

// excerpt from FillVertexArray
let fill_attrs = &[
    device.get_vertex_attr(&fill_program.program, "FromSubpx").unwrap(),
    device.get_vertex_attr(&fill_program.program, "ToSubpx").unwrap(),
    device.get_vertex_attr(&fill_program.program, "FromPx").unwrap(),
    device.get_vertex_attr(&fill_program.program, "ToPx").unwrap(),
    device.get_vertex_attr(&fill_program.program, "TileIndex").unwrap(),
];

// `Lazy` here is assumed to come from the once_cell crate (or equivalent)
use once_cell::sync::Lazy;

static FILL_BUFFER: Lazy<VertexBufferDescriptor> = Lazy::new(|| {
    let mut descriptor = VertexBufferDescriptor {
        index: 1,
        divisor: 1,
        // field name assumed; holds the per-attribute class/type/size info
        attrs: vec![
            VertexAttrDescriptor::datatype_only(VertexAttrClass::FloatNorm, VertexAttrType::U8, 2),
            VertexAttrDescriptor::datatype_only(VertexAttrClass::FloatNorm, VertexAttrType::U8, 2),
            VertexAttrDescriptor::datatype_only(VertexAttrClass::Int, VertexAttrType::U8, 1),
            VertexAttrDescriptor::datatype_only(VertexAttrClass::Int, VertexAttrType::U8, 1),
            VertexAttrDescriptor::datatype_only(VertexAttrClass::Int, VertexAttrType::U16, 1),
        ],
    };
    // stride and per-attribute offsets are derived from the attribute list above
    descriptor.calc_stride_and_offset();
    descriptor
});

FILL_BUFFER.configure_vertex_attrs(device, &vertex_array, fill_attrs);
device.bind_buffer(&vertex_array, quad_vertex_indices_buffer, BufferTarget::Index);

@pcwalton: do these changes sound acceptable to you? anything you would change?

tangmi commented 4 years ago

@cart how did you end up dealing with "non-opaque uniforms outside a block"? Making all the uniforms into interface blocks will be a breaking change for the gl/webgl backends, even with spirv_cross' emit_uniform_buffer_as_plain_uniforms option. Additionally, many of the vertex and fragment shaders don't share the exact same set of uniforms, causing linking errors (e.g. "struct type mismatch between shaders for uniform (named globals)")

Apologies if this is pretty basic GL knowledge--most of my experience is in HLSL (and up-to-date GLSL literature seems hard for me to find :grin:)

cart commented 4 years ago

I used "vulkan style" shaders, compiled them to spirv, then used spirv-cross with the following options: https://github.com/cart/pathfinder/blob/df6982e0f9733b59aa1249176ce4ec773b5d9097/shaders/Makefile#L58

The GL backend worked just fine. However, my custom backend is still only partially operational. I was burning too much time on this and my short-term motivation was text rendering for my engine, so I decided to cut my losses and just integrate skribo in the short term. However, once I've finished up some more pressing concerns (doing an initial release of the engine) I'd love to get this working if I can.

pr for context: https://github.com/servo/pathfinder/pull/333

Moxinilian commented 4 years ago

Has there been progress on this since? I'm also interested in using pathfinder in a WebGPU context, but I'm not exactly sure what is going on or what I should do to get it working.

pcwalton commented 4 years ago

For char1/short1 I would just change the attributes to u32 and unpack manually in the vertex shader for all platforms.
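
Roughly, the packing side could look like this (names are hypothetical), with the vertex shader undoing it via shifts and masks:

// Hypothetical sketch: pack the small per-instance fields into one u32 slot;
// the vertex shader unpacks them with shifts/masks, e.g. `(packed >> 8) & 0xffu`.
fn pack_fill_instance(from_px: u8, to_px: u8, tile_index: u16) -> u32 {
    (from_px as u32) | ((to_px as u32) << 8) | ((tile_index as u32) << 16)
}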

cart commented 4 years ago

My engine-specific backend is still on hold because I still have some other projects that are higher priority for me. But if anyone wants to look at my backend code for ideas on what a wgpu backend might look like, let me know. The engine isn't open source yet, but I'm happy to share a gist. Keep in mind that I never got it fully working.

tangmi commented 4 years ago

I've been hacking on this when I find the time, but no big progress to report. I'm still mostly stuck on the shaders -- @pcwalton, would you consider using uniform buffers for pathfinder (as opposed to separate uniforms)? I believe that while most APIs support free-standing uniforms/constants, many prefer (at least in their documentation) the buffer-based approach. (I believe there's a perf argument as well, but I'll admit I don't know more than having been told "that's how GPUs work".)

Unfortunately, WebGPU doesn't seem to support individual uniforms at all. If we stick with the current API, we'd have to modify all the shaders with boilerplate to wrap each uniform in an interface block and rely on something like spirv-cross to flatten the buffers back out for the non-WebGPU backends... I don't personally like any attempt I've made at this, so I'd love your thoughts on the best way to make the shaders work for WebGPU.
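
For reference, the Rust side of the uniform-buffer route would look roughly like this (a sketch against a recent wgpu API; the grouping and field names are illustrative), with the GLSL side wrapping the same fields in a single uniform block:

// Sketch: loose uniforms (e.g. uTransform, uTileSize) grouped into one
// std140-compatible struct and uploaded as a single uniform buffer.
// `device` here is a wgpu::Device.
#[repr(C)]
struct Globals {
    transform: [[f32; 4]; 4], // e.g. uTransform
    tile_size: [f32; 2],      // e.g. uTileSize
    _pad: [f32; 2],           // pad the struct to a 16-byte multiple (std140)
}

let globals_buffer = device.create_buffer(&wgpu::BufferDescriptor {
    label: Some("globals (sketch)"),
    size: std::mem::size_of::<Globals>() as u64,
    usage: wgpu::BufferUsages::UNIFORM | wgpu::BufferUsages::COPY_DST,
    mapped_at_creation: false,
});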

As a mostly unimportant aside: uniform blocks/constant buffers are technically a D3D10-level feature (well, SM4, I guess), so they could still fit with the Device::feature_level() reporting!

rsms commented 3 years ago

It would be nice if the WebGPU backend could interface with any standard WebGPU implementation, including dawn-wire. I'm working on a project where we separate the renderer from the client drawing (similar to a web browser) across a network connection. For that we use Dawn's "wire" implementation on the client, which exposes a standard WebGPU API but doesn't actually allocate devices etc. on the local machine (instead, that is done by the remote-controlled renderer).

kvark commented 3 years ago

@rsms the portable native WebGPU API is the C header (https://github.com/webgpu-native/webgpu-headers), not the "wire", FYI. wgpu-rs currently targets the Web and wgpu-core, but we fully intend to make it target any webgpu-native implementation as well (i.e. Dawn or wgpu-native); it just hasn't been a big priority so far.

rsms commented 3 years ago

@kvark Lovely! Looking forward to that whenever you find the time :-)

nickdbush commented 2 years ago

Wondering if anyone is still looking into this, and what would be needed as wgpu matures. I'd be happy to take a preliminary look into this if we feel it would be conducive to the overall direction of the project.

s3bk commented 2 years ago

My feeling is that Pathfinder is nearing the need for a rewrite.

The existing code works reasonably well, but has a few problems:

So I would actually propose that, in the not-too-distant future, a working group is formed to make a proposal for Pathfinder 4. Then sponsors would need to be found so that this quite large project can be financed.

nickdbush commented 2 years ago

@s3bk I'd love to help out with these efforts. Perhaps we could start by more concretely identifying the lessons learned from the current version of the project, and from there the (business) case for supporting such a rewrite.

Keavon commented 2 years ago

I would be very interested in getting involved in discussions about a rewrite, since my open source Rust project Graphite will need to either write a custom high-performance vector renderer or find one that's suitable. If we could be involved in discussions about the design or capabilities of a Pathfinder rewrite, I'm sure we'd learn a lot at a minimum and, ideally, be able to use such a future version of Pathfinder in Graphite if it can support our needs. (I don't actually have a concrete idea yet of what our technical requirements for a vector renderer will be; that's something I'll need to explore, possibly consulting other knowledgeable folks. But it will need to integrate with the rest of our node-graph-driven raster render engine.)

nickdbush commented 2 years ago

Hi @Keavon, I've put a doc together in the Matrix room for Pathfinder outlining some of my ideas so that we can start to hash something out. The use case I've set out sounds like it maps strongly onto your requirements for Graphite, so I hope it will be of interest!

https://docs.google.com/document/d/13eZUpofJCirEBNInFGWZry4GHMXpSdwk_rKkbj7oA24/edit?usp=sharing

siriux commented 2 years ago

This seems really interesting to me. For my use case, one important feature would be the ability to integrate and extend it within a larger engine. In particular (apart from the things already mentioned in the document), the possibility of getting low-level access to the geometry and other properties to allow real-time animation of a scene through different types of transformations (affine, deform, ...), and also being able to update various properties (like stroke) in real time, reusing as much as possible from previous frames.

I'm not sure whether this is well aligned with the goals for Pathfinder 4, but if it is, I could try to find some time to help with development and testing.

Also, about the rewrite for pathfinder 4, will it use the same core rasterizing ideas as pathfinder 3 or are you thinking about any improvements?

s3bk commented 2 years ago

@siriux For the scenegraph, I think one can take SVG animations as a starting point.

Since I don't know a faster rasterizer, I would start with the same algorithm.

siriux commented 2 years ago

I was thinking more about game-like animations based on deforming the mesh, rather than animating paths directly, mainly for performance reasons. But I'll have another look at SVG animations, as there might be something I'm missing.
