Doing a little more research — seems like we'll also need to implement a node in the render graph so we can manage some features of the render pass itself. There's an example of that here, but the main 3d pipeline source in the engine itself may be a better resource.
Have started doing a poc of directly porting the existing rendering algorithm into bevy's mid-level render api. Most of the existing code maps pretty directly to the different pieces of bevy's api, but there's a bit of complexity when it comes to rendering the view within bevy's existing render graph. Namely, it requires using bevy's camera system (i.e. camera = view).
In my previous comment (https://github.com/nannou-org/nannou/issues/954#issuecomment-1897902549) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer, and I'm concerned it will make it more difficult to take advantage of features like windowing. It may also lead to strange interactions if users want to use both our draw api and bevy's mesh api.
One option I'm exploring is to just use bevy's orthographic camera and hook into their view uniform for our shaders. This is pretty straightforward, but may mean we also need to do things like spawn lights, etc.
Another alternative is to explore just using bevy's existing pbr mesh pipeline. A simple example of what this might look like:
```rust
use bevy::prelude::*;
use bevy::render::mesh::Indices;
use bevy::render::render_resource::PrimitiveTopology;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.insert_resource(AmbientLight {
        color: Color::WHITE,
        brightness: 1.0,
    });
    // Orthographic camera looking back at the origin, standing in for nannou's default view.
    commands.spawn(Camera3dBundle {
        transform: Transform::from_xyz(0.0, 0.0, -10.0).looking_at(Vec3::ZERO, Vec3::Z),
        projection: OrthographicProjection::default().into(),
        ..Default::default()
    });
    // A single triangle with per-vertex colors, roughly what our draw api would tessellate into.
    let tris = vec![
        Vec3::new(-5.0, -5.0, 0.0).to_array(),
        Vec3::new(-5.0, 5.0, 0.0).to_array(),
        Vec3::new(5.0, 5.0, 0.0).to_array(),
    ];
    let indices = vec![0, 1, 2];
    let colors = vec![
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
    ];
    let uvs = vec![Vec2::new(1.0, 0.0); 3];
    let mesh = Mesh::new(PrimitiveTopology::TriangleList)
        .with_inserted_attribute(Mesh::ATTRIBUTE_POSITION, tris)
        .with_inserted_attribute(Mesh::ATTRIBUTE_COLOR, colors)
        .with_inserted_attribute(Mesh::ATTRIBUTE_UV_0, uvs)
        .with_indices(Some(Indices::U32(indices)));
    println!("{:?}", mesh);
    let mesh_handle = meshes.add(mesh);
    commands.spawn(PbrBundle {
        mesh: mesh_handle,
        // the pbr shader will multiply our vertex color by this, so we just want white
        material: materials.add(Color::rgb(1.0, 1.0, 1.0).into()),
        transform: Transform::from_xyz(0.0, 0.0, 0.0),
        ..default()
    });
}
```
The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal, but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.
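If we did go the clear-every-frame route, it might look something like the sketch below (the `DrawMesh` resource and `rebuild_draw_mesh` system are hypothetical, just to illustrate the shape of it):

```rust
use bevy::prelude::*;

// Hypothetical resource holding the handle of the mesh our draw output targets.
#[derive(Resource)]
struct DrawMesh(Handle<Mesh>);

// Sketch of the "clear it every frame" option: overwrite the mesh's attribute data
// each frame instead of caching geometry across frames. Bevy re-uploads the asset
// whenever it changes.
fn rebuild_draw_mesh(draw_mesh: Res<DrawMesh>, mut meshes: ResMut<Assets<Mesh>>) {
    if let Some(mesh) = meshes.get_mut(&draw_mesh.0) {
        // Placeholder data standing in for this frame's tessellated geometry.
        let positions: Vec<[f32; 3]> = vec![[0.0, 0.0, 0.0]; 3];
        mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, positions);
    }
}
```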
I think it's worth trying to complete an as-is port of the renderer to the mid-level bevy api just to see what it looks like, but my experience so far is definitely generating more questions. Ultimately, seeing actual code will probably help clarify!
TL;DR:
> In my previous comment (https://github.com/nannou-org/nannou/issues/954#issuecomment-1897902549) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer,
I love the idea of attempting to use bevy's camera and fitting the Draw API in at the highest level possible in order to work nicely alongside other bevy code, but I wouldn't be too surprised if it turns out we do need to target some mid or lower level API instead, due to the way that `Draw` kind of builds a list of "commands" that translate to fairly low-level GPU commands (e.g. switching pipelines depending on blend modes, setting different bind groups, changing the scissor, etc).
> and I'm concerned it will make it more difficult to take advantage of features like windowing.
True, one thing that comes to mind is that today by default we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly) where the idea is that we can re-use the intermediary texture between frames 1. for that processing-style experience of drawing onto the same canvas and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin :thinking:
> The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal, but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.
Yeah currently I think our draw API just reconstructs meshes each frame anyways, but I think we do re-use buffers where we can, but maybe not so crazy to reconstruct meshes each frame? Hopefully this turns out to gel OK with bevy :pray:
Looking forward to seeing where your bevy spelunking takes this!!
@mitchmindtree Some more notes from my research.
> True, one thing that comes to mind is that today by default we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly) where the idea is that we can re-use the intermediary texture between frames 1. for that processing-style experience of drawing onto the same canvas and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin 🤔
Bevy's view logic uses the same intermediate texture pattern, maintaining two internal buffers in order to prevent tearing, etc. You can disable the clear color to get the sketch-like behavior.
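For reference, a minimal sketch of that (assuming Bevy 0.12-era APIs, where the clear behavior lives on the `Camera3d` component):

```rust
use bevy::core_pipeline::clear_color::ClearColorConfig;
use bevy::prelude::*;

// Sketch: a camera that never clears its render target, so previous frames
// persist for the processing-style "keep drawing on the same canvas" effect.
fn spawn_non_clearing_camera(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        camera_3d: Camera3d {
            clear_color: ClearColorConfig::None,
            ..default()
        },
        ..default()
    });
}
```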
Color depth isn't configurable, but using an HDR camera provides the same bit depth as our default (`Rgba16Float`). Otherwise, bevy uses `Rgba8UnormSrgb`. Maybe they'd accept a contribution here, although I'd bet these two options work for a great majority of users.
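Opting into the `Rgba16Float` view target is just a flag on the camera (again a sketch, assuming current bevy):

```rust
use bevy::prelude::*;

// Sketch: with `hdr: true`, bevy renders the view into an Rgba16Float
// intermediate texture instead of the default Rgba8UnormSrgb.
fn spawn_hdr_camera(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        camera: Camera {
            hdr: true,
            ..default()
        },
        ..default()
    });
}
```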
They don't support MSAA 16x, not sure why.
In terms of pipeline invalidation, you can see all the options that would cause a pipeline switch in bevy's mesh pipeline. Basically, the key is generated and used to fetch the pipeline, so if the key changes, a new pipeline is created. I believe this supports everything we track: topology, blend state, and msaa.
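To make the idea concrete, here's a conceptual sketch of that key-to-pipeline caching pattern, using plain wgpu types and a hypothetical `PipelineKey` rather than bevy's actual mesh pipeline key:

```rust
use std::collections::HashMap;

// Conceptual key: anything that forces a different render pipeline. Bevy's mesh
// pipeline key covers (at least) the state we currently track per draw command.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PipelineKey {
    topology: wgpu::PrimitiveTopology,
    blend: Option<wgpu::BlendState>,
    msaa_samples: u32,
}

#[derive(Default)]
struct Pipelines {
    cache: HashMap<PipelineKey, wgpu::RenderPipeline>,
}

impl Pipelines {
    // Fetch the pipeline for `key`, building it only the first time the key is
    // seen; an unchanged key never triggers pipeline creation.
    fn get_or_create(
        &mut self,
        key: PipelineKey,
        build: impl FnOnce(&PipelineKey) -> wgpu::RenderPipeline,
    ) -> &wgpu::RenderPipeline {
        self.cache.entry(key).or_insert_with_key(build)
    }
}
```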
Scissoring seems to be the main thing that isn't supported by default in the mesh pipeline. I think it might be simple to implement as a custom node in the render graph, though? Definitely need to do more investigation here. It's supported in their render pass abstraction; it just isn't used in the engine or in any examples.
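As a rough sketch of the piece we'd add (the `Scissor` type is a placeholder, not an existing nannou or bevy type), the underlying wgpu call is already there on the render pass:

```rust
// Hypothetical per-view scissor rect in physical pixels, mirroring what our
// draw commands currently track.
struct Scissor {
    left: u32,
    bottom: u32,
    width: u32,
    height: u32,
}

// Sketch: inside a custom render graph node, the scissor would be applied to
// the pass before replaying our draw commands.
fn apply_scissor(pass: &mut wgpu::RenderPass<'_>, scissor: &Scissor) {
    pass.set_scissor_rect(scissor.left, scissor.bottom, scissor.width, scissor.height);
}
```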
I'm like... 70% of the way through implementing our existing rendering logic, but as I read through the bevy source in doing so, I'm continually like, oh they're doing the exact same thing already.
> Yeah currently I think our draw API just reconstructs meshes each frame anyways, but I think we do re-use buffers where we can, but maybe not so crazy to reconstruct meshes each frame?
Yeah, I don't think the performance would be worse than our existing pattern, so this is likely totally fine.
Hmm. 🤔 Much to consider. I'm definitely enjoying getting into the fiddly wgpu bits of the renderer, but it would also be great to reduce the amount of custom rendering code we need to maintain as that's kinda the whole point of this refactor.
It lives!
Will push my PoC to a branch in a bit. Here are some details about what I've done:

Our renderer is implemented as a `ViewNode`, which means we hook into Bevy's windowing. So we attach our nannou-specific components to a view and are able to target that (a small sketch of this is below, after the TLDR). This works really well and integrates cleanly with the renderer. This render sits at the end of bevy's core 3d pass. Still need to experiment more with mixing in bevy meshes just to see what happens, but it potentially "just works", which would be so cool.

There are a few outstanding issues to deal with in my PoC:

- `ViewNode` 👍.

TLDR: Sans some outstanding questions about feature parity, this approach is working surprisingly well, and while it still requires us to manage some wgpu stuff, the surface area is reduced a lot and improved by some patterns bevy offers. It would still be really interesting to explore hooking into bevy's pbr mesh stuff completely, but this is definitely a viable approach that demonstrates some of the benefits of our refactor.
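Here's the kind of thing I mean by attaching nannou-specific components to a view (the `NannouDraw` marker is hypothetical, just to show the shape of it):

```rust
use bevy::prelude::*;

// Hypothetical marker component; our ViewNode only renders for camera/view
// entities that carry it.
#[derive(Component, Default)]
struct NannouDraw;

fn spawn_nannou_view(mut commands: Commands) {
    commands.spawn((Camera3dBundle::default(), NannouDraw));
}
```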
The bevy asset system is actually incredibly helpful for getting user-uploaded textures to work. When a user uploads a texture, bevy by default creates a `Sampler`, `Texture`, `TextureView`, etc. This means that we can just import these already instantiated into our render pipeline. Configuration (i.e. for the sampler) is handled by bevy, so we may need to figure out how to manage additional configuration options there. One thing to note is that assets upload asynchronously, so there's a bit of additional complexity there.
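For example, a hedged sketch of pulling those prepared resources out of the render world (assuming bevy's `RenderAssets<Image>` / `GpuImage` types; the helper name is made up):

```rust
use bevy::prelude::*;
use bevy::render::render_asset::RenderAssets;
use bevy::render::render_resource::{Sampler, TextureView};

// Sketch: once bevy has prepared a user texture, its GPU-side view and sampler
// can be looked up in the render world and bound in our pipeline directly.
fn gpu_texture_parts<'a>(
    images: &'a RenderAssets<Image>,
    handle: &Handle<Image>,
) -> Option<(&'a TextureView, &'a Sampler)> {
    // Returns `None` until the asynchronous upload has completed.
    let gpu_image = images.get(handle)?;
    Some((&gpu_image.texture_view, &gpu_image.sampler))
}
```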
Closing this and opening a new ticket to move us to the "mid level" render APIs.
As a first step in getting the draw api working for #953, we need to define nannou's wgpu infrastructure in terms of bevy's mid-level render api. The closest examples are the 3d gizmos pipeline or the manual 2d mesh example.
Our goal for this ticket should be to submit some raw vertex data with attributes to the bevy render world to be drawn. We won't worry about cameras/windowing/etc just yet except to get an example working.
Many of the wgpu utilities will need to be refactored to target bevy's wgpu wrapper types, but should otherwise convert mostly in place.
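As a small example of what that retargeting looks like (a sketch; the helper is hypothetical), bevy's `RenderDevice` exposes the same creation methods as `wgpu::Device`:

```rust
use bevy::render::render_resource::{Buffer, BufferDescriptor, BufferUsages};
use bevy::render::renderer::RenderDevice;

// Sketch: a nannou-style buffer helper rewritten against bevy's wgpu wrappers.
fn create_vertex_buffer(device: &RenderDevice, size: u64) -> Buffer {
    device.create_buffer(&BufferDescriptor {
        label: Some("nannou-draw-vertex-buffer"),
        size,
        usage: BufferUsages::VERTEX | BufferUsages::COPY_DST,
        mapped_at_creation: false,
    })
}
```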
The mid-level render api is mostly an ecs dependency-injectified version of our existing render code. We should be able to use a lot of the boilerplate and the existing shaders as is.