So far, we've taken an incremental approach to moving our existing rendering algorithm into Bevy:
- In #960, we ported the renderer to run in a `ViewNode`, which gave us full low-level control over a `wgpu` render pass.
- To integrate more closely with Bevy, #964 uses Bevy's "mid-level" render apis to express things in terms of render phase items. However, this has some friction outlined on the PR: we write data into a single `Draw` instance, and our renderer assumes we are drawing from a single set of matching gpu buffers.
While the approach taken in #964 works, we're interested in ditching our rendering code entirely and writing our vertex data directly into Bevy meshes. Concretely, each `PrimitiveRender` would result in a new logical mesh with a matching Bevy `StandardMaterial` that could be used for texturing and, in future api extensions, other material-y things like emissives, pbr, etc.
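As a rough sketch of that shape, a primitive's vertex data could end up as an ordinary mesh entity. The helper name, the `unlit` choice, and the use of `PbrBundle` are assumptions here (a recent Bevy where `PbrBundle` still pairs a mesh handle with a `StandardMaterial` handle), not settled api:

```rust
use bevy::prelude::*;

/// Hypothetical helper: turn one primitive's mesh data into a mesh entity
/// paired with a `StandardMaterial`. Texturing maps onto `base_color_texture`;
/// emissives, pbr, etc. would just be more fields on the material.
fn spawn_primitive_mesh(
    commands: &mut Commands,
    meshes: &mut Assets<Mesh>,
    materials: &mut Assets<StandardMaterial>,
    mesh: Mesh,
    texture: Option<Handle<Image>>,
) -> Entity {
    let material = StandardMaterial {
        base_color_texture: texture,
        // Nannou's renderer is unlit today; lighting/pbr would be opt-in later.
        unlit: true,
        ..default()
    };
    commands
        .spawn(PbrBundle {
            mesh: meshes.add(mesh),
            material: materials.add(material),
            ..default()
        })
        .id()
}
```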
Such a mesh should be understood as "logical" because our immediate-mode api presents some difficulties here: Bevy's assets are asynchronous, with the assumption that you'll pay the cost of uploading a persistent mesh (a sophisticated 3d model, etc.) up front and work with that.
As such, we'll need to experiment with caching mesh assets to dynamically provision for Nannou's render primitives. There are a few approaches we might take here:
- Allocate new meshes as needed, rendering new primitives into the first available mesh. This has the benefit of being very simple and performing well when a sketch is relatively stable, but likely has poor worst-case characteristics in terms of memory use and performance.
- Sort primitives by the size of their vertex data to try to match them with equivalently sized existing gpu resources.
- Derive some kind of ad hoc persistence identifier (e.g. a hash of the draw data used to create a primitive) and optimistically associate primitives with meshes. This is likely error prone and could run into pathological scenarios that are hard to handle.
While starting with a growable cache should be fine, we'll also need to figure out a long-term strategy for cache eviction, e.g. dropping a mesh that hasn't been used in N frames. A rough sketch combining the first approach with that eviction policy follows.
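Everything in this sketch is illustrative rather than existing Nannou code; it assumes a recent Bevy where `Assets::insert` takes an `AssetId` and where re-inserting an asset marks it modified for re-upload. The cache hands out the first handle not yet touched this frame and drops anything idle for too long:

```rust
use bevy::prelude::*;

/// Illustrative constant: how many frames a mesh may sit unused before eviction.
const MAX_IDLE_FRAMES: u64 = 60;

#[derive(Resource, Default)]
struct MeshCache {
    entries: Vec<CacheEntry>,
    frame: u64,
}

struct CacheEntry {
    handle: Handle<Mesh>,
    last_used_frame: u64,
}

impl MeshCache {
    /// Write a primitive's mesh into the first cached asset not yet used this
    /// frame, or allocate a new asset if every cached mesh is already taken.
    fn write(&mut self, meshes: &mut Assets<Mesh>, mesh: Mesh) -> Handle<Mesh> {
        let frame = self.frame;
        if let Some(entry) = self.entries.iter_mut().find(|e| e.last_used_frame < frame) {
            entry.last_used_frame = frame;
            // Overwrite the existing asset in place; Bevy re-uploads it on change.
            meshes.insert(entry.handle.id(), mesh);
            return entry.handle.clone();
        }
        let handle = meshes.add(mesh);
        self.entries.push(CacheEntry {
            handle: handle.clone(),
            last_used_frame: frame,
        });
        handle
    }

    /// Run once per frame: advance the clock and drop handles idle for too long.
    /// Dropping the last strong handle lets Bevy free the underlying gpu buffers.
    fn end_frame(&mut self) {
        self.frame += 1;
        let frame = self.frame;
        self.entries
            .retain(|e| frame - e.last_used_frame <= MAX_IDLE_FRAMES);
    }
}
```

The worst-case concern from the first bullet shows up here directly: a spike in primitive count grows `entries`, and the eviction policy is the only thing that shrinks it again.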
TODO:
- Consider any performance pitfalls. We're mostly thinking about re-allocated buffers for vertex data, but is changing material uniforms problematic?
- Understand the asynchronous characteristics of assets better. Even with caching, we probably still want blocking apis?
- Bevy's meshes don't have great apis for "building up" a mesh like we do; will this be a problem? (One possible workaround is sketched below.)
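On the last point, one way to sidestep the lack of incremental-building apis is to accumulate vertex data on our side and only hand Bevy a finished mesh per primitive. A minimal sketch, assuming Bevy 0.13+ (`Mesh::new` taking `RenderAssetUsages`, `insert_indices`) and an illustrative `Vertex` type that stands in for whatever the draw api produces:

```rust
use bevy::prelude::*;
use bevy::render::mesh::Indices;
use bevy::render::render_asset::RenderAssetUsages;
use bevy::render::render_resource::PrimitiveTopology;

/// Illustrative vertex layout; not Nannou's actual type.
struct Vertex {
    position: [f32; 3],
    color: [f32; 4],
    uv: [f32; 2],
}

/// Build a complete triangle-list mesh from accumulated vertex data in one go.
fn mesh_from_vertices(vertices: &[Vertex], indices: Vec<u32>) -> Mesh {
    let mut mesh = Mesh::new(
        PrimitiveTopology::TriangleList,
        RenderAssetUsages::RENDER_WORLD,
    );
    mesh.insert_attribute(
        Mesh::ATTRIBUTE_POSITION,
        vertices.iter().map(|v| v.position).collect::<Vec<_>>(),
    );
    mesh.insert_attribute(
        Mesh::ATTRIBUTE_COLOR,
        vertices.iter().map(|v| v.color).collect::<Vec<_>>(),
    );
    mesh.insert_attribute(
        Mesh::ATTRIBUTE_UV_0,
        vertices.iter().map(|v| v.uv).collect::<Vec<_>>(),
    );
    mesh.insert_indices(Indices::U32(indices));
    mesh
}
```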