bevyengine / bevy

A refreshingly simple data-driven game engine built in Rust
https://bevyengine.org
Apache License 2.0
35.04k stars · 3.44k forks

Document new renderer #3999

Open djeedai opened 2 years ago

djeedai commented 2 years ago

I'm opening this issue to collate all the new renderer (from v0.6) questions I have, which I don't think are currently documented.

I started adding a custom render node by copying some code from the sprite renderer, but I've reached a point where that code is too simple and doesn't cover things I want to do. It was also recently refactored to use render commands and is now even more difficult to understand for a novice.

Hopefully someone with more insight can comment on some of these. I'm happy to make PRs once I get to a point where I understand those things myself. Thanks! šŸ™

Views

Several aspects of the render graph refer to views, but this concept is not documented as far as I know.

  1. How are views produced? I assume there should be one view per "point of view to render": typically the main camera, additional effect cameras (mirror and water reflections, portal effects, ...), shadow-casting lights, etc. However, adding a second PerspectiveCameraBundle (for example) currently doesn't do what one would expect, because there's a single active camera in an app world and only that one renders, effectively "shadowing" all the others (see https://github.com/bevyengine/bevy/pull/3528). There's also no way to specify which camera(s) an entity/mesh/... renders to. I believe there should be a single primary camera (the one that eventually blits to the window/screen) but multiple active ones (all the ones actively rendering something).

  2. What is the view entity passed to Draw::draw()? Using a single camera I always get entity 0v0 (id: 0, gen: 0). Adding a second camera I always get 1v0, and never 0v0 anymore.

  3. Is RenderStage::Queue view-dependent? Are previous ones (Extract and Prepare) view-independent? I thought I read something about this but the doc for RenderStage doesn't mention it.

  4. How to handle multiple views? I believe in the "queue" stage a custom render node should iterate over the RenderPhase<Transparent3d> (for example) to get all views? I know I can get the current camera's parameters via ViewUniformOffset and that view entity mentioned above, but whatever I do I seem to have only a single view, so I cannot confidently test any view-dependent code.

Render graph

  1. What is Transparent3d::distance (and same on others)? How can my custom render node provide a relevant value for this? For now I always set it to distance: 0.0 and everything seems to work fine.

  2. What is Transparent3d::entity? I have no idea what I'm supposed to store there nor why.

  3. What is the VIEW_ENTITY input of the 2D/3D graphs? The 2D and 3D render graphs declare a VIEW_ENTITY input that doesn't seem to be connected/used anywhere. What's the point? Is that related to the views section above in any way?

Concepts explanations / tutorials

  1. Explain main app vs. sub-app / render app?

  2. How to extend the renderer with a new Node?

  3. What are the Draw trait and draw functions? Why should they be used? The docs only mention the difference with RenderCommands, which makes things even more confusing (what is a render command in the first place? And why are there 2 concepts for the same thing?).

StarArawn commented 2 years ago

Hi! I think documenting this stuff is a good idea. In the meantime, here are a few answers based on my own knowledge:

What is Transparent3d::distance (and same on others)? How can my custom render node provide a relevant value for this? For now I always set it to distance: 0.0 and everything seems to work fine.

Distance is used to sort draw calls so that transparency is rendered correctly. Normally this is the distance between the object and the camera. This type of sorting is called the painter's algorithm: things farthest from the camera are rendered first, and things closest to the camera are rendered last, which avoids the blending artifacts that would otherwise occur. You can read more about that here: https://www.khronos.org/opengl/wiki/Transparency_Sorting
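To make the sort order concrete, here is a minimal sketch in plain Rust (not Bevy's actual phase-item types; `PhaseItem` and its fields are hypothetical stand-ins) of sorting transparent draw items back-to-front by their distance key:

```rust
// Hypothetical stand-in for a transparent-phase item; in Bevy the real type
// carries more data (pipeline, draw function, entity, ...).
#[derive(Debug)]
struct PhaseItem {
    name: &'static str,
    distance: f32, // approximate distance from the camera
}

// Painter's algorithm ordering: farthest items draw first, so blending
// accumulates correctly from back to front.
fn sort_back_to_front(items: &mut Vec<PhaseItem>) {
    items.sort_by(|a, b| b.distance.partial_cmp(&a.distance).unwrap());
}

fn main() {
    let mut items = vec![
        PhaseItem { name: "near", distance: 1.0 },
        PhaseItem { name: "far", distance: 10.0 },
        PhaseItem { name: "mid", distance: 5.0 },
    ];
    sort_back_to_front(&mut items);
    let order: Vec<_> = items.iter().map(|i| i.name).collect();
    assert_eq!(order, ["far", "mid", "near"]);
    println!("{:?}", order);
}
```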

What is Transparent3d::entity? I have no idea what I'm supposed to store there nor why.

The entity is typically the render-world entity associated with that draw call. In some cases this is the entity from the game world, when one game entity corresponds to one render entity. In the case of sprites, batches are created and a new entity represents each batch of sprites. It's useful because later on we can look up specific entity data like this: let sprite_batch = query_batch.get(item.entity()).unwrap();
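The idea can be sketched in plain Rust (not Bevy's ECS; `Entity`, `SpriteBatch`, and `TransparentItem` here are simplified stand-ins): the entity stored in the phase item is just a key used later to fetch the per-draw data.

```rust
use std::collections::HashMap;

// Simplified stand-in for a render-world entity id.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Entity(u32);

// Per-batch data prepared during the queue stage.
struct SpriteBatch {
    range: std::ops::Range<u32>, // vertex range covered by this batch
}

// Simplified phase item: records which entity owns the batch data.
struct TransparentItem {
    entity: Entity,
}

fn main() {
    // Queue stage: create a batch entity and record it in the phase item.
    let batch_entity = Entity(42);
    let mut batches: HashMap<Entity, SpriteBatch> = HashMap::new();
    batches.insert(batch_entity, SpriteBatch { range: 0..6 });
    let item = TransparentItem { entity: batch_entity };

    // Draw stage: use item.entity to fetch the batch, much like
    // `query_batch.get(item.entity())` does in Bevy.
    let batch = &batches[&item.entity];
    assert_eq!(batch.range, 0..6);
}
```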

Explain main app vs. sub-app / render app?

In Bevy, the main app is where users create entities in the game world. Bevy also runs a secondary app with a per-frame world: the render world can be considered a representation of everything being rendered for the current frame, and it is cleared out after the frame is rendered. Bevy has a special Extract stage which lets you read from the game world and write into the render world. This architecture was modeled after this paper: https://advances.realtimerendering.com/destiny/gdc_2015/Tatarchuk_GDC_2015__Destiny_Renderer_web.pdf
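The split can be illustrated with a tiny plain-Rust sketch (the types and stage names are simplified stand-ins, not Bevy's real `World` or schedule): the render world is rebuilt every frame from the data extracted out of the game world.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Transform { x: f32, y: f32 }

// Lives across frames: the game world the user's systems mutate.
struct MainWorld { sprites: Vec<Transform> }

// Rebuilt every frame: only holds what the renderer needs right now.
struct RenderWorld { extracted: Vec<Transform> }

// Extract stage: read from the game world, write into the render world.
fn extract(main: &MainWorld, render: &mut RenderWorld) {
    render.extracted = main.sprites.clone();
}

fn render_frame(main: &MainWorld, render: &mut RenderWorld) {
    extract(main, render);
    // ... prepare / queue / draw would run here against `render` ...
    render.extracted.clear(); // the render world is per-frame
}

fn main() {
    let main_world = MainWorld { sprites: vec![Transform { x: 1.0, y: 2.0 }] };
    let mut render_world = RenderWorld { extracted: Vec::new() };
    extract(&main_world, &mut render_world);
    assert_eq!(render_world.extracted.len(), 1);
    render_frame(&main_world, &mut render_world);
    assert!(render_world.extracted.is_empty());
}
```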

What's Draw and draw functions? Why should they be used? The docs only mention the difference with RenderCommands, which makes things even more confusing (what is a render command in the first place? why are there 2 concepts for the same thing?).

The Draw trait represents a draw call. In the past we used to bundle other commands together inside the draw call, but now we have separate RenderCommands for that functionality. I think it's mostly in place so you can reuse render commands across different pipelines/draw calls. :) A render command is anything that interacts directly with the render pass, e.g. set_render_pipeline.
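A rough plain-Rust sketch of the reuse argument (all names here are hypothetical; `MockPass` stands in for a real wgpu render pass): each small command touches the pass in one way, and a full draw call is just a sequence of them.

```rust
// Records what was issued, standing in for a real render pass.
struct MockPass { log: Vec<String> }

// Simplified version of the render-command idea: one small, reusable
// operation against the render pass.
trait RenderCommand {
    fn render(&self, pass: &mut MockPass);
}

struct SetPipeline(&'static str);
impl RenderCommand for SetPipeline {
    fn render(&self, pass: &mut MockPass) {
        pass.log.push(format!("set_render_pipeline({})", self.0));
    }
}

struct DrawMesh { vertices: u32 }
impl RenderCommand for DrawMesh {
    fn render(&self, pass: &mut MockPass) {
        pass.log.push(format!("draw(0..{})", self.vertices));
    }
}

fn main() {
    // A "draw call" is a sequence of commands; DrawMesh could be reused
    // unchanged with a different pipeline.
    let commands: Vec<Box<dyn RenderCommand>> = vec![
        Box::new(SetPipeline("sprite")),
        Box::new(DrawMesh { vertices: 6 }),
    ];
    let mut pass = MockPass { log: Vec::new() };
    for c in &commands {
        c.render(&mut pass);
    }
    assert_eq!(pass.log, ["set_render_pipeline(sprite)", "draw(0..6)"]);
}
```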

djeedai commented 2 years ago

Thanks a lot @StarArawn for your detailed answer. Here are a few follow-up questions/remarks:

Distance is used to sort each draw call so that transparency is rendered correctly. Normally this is the distance between the object and the camera.

That makes sense for convex objects, but here this is generic over a "draw call". What happens if one draw call renders the entire terrain, visible from as close and as far as one can see? Or if you have 2 spheres centered at the same point with different radii? They're at the same distance from the camera. So isn't this rather a best-effort optimization for depth-buffer rejection (and so the distance is an approximation)?

Also, the painter's algorithm is back-to-front, for hidden surface removal. With a depth buffer, you'd rather render opaque surfaces front-to-back to leverage depth rejection and avoid overdraw. And in the case of Transparent3d I believe they're sorted back-to-front instead, for proper opacity (alpha) accumulation. But the latter is just a guess.

Could we define Transparent3d::distance like this?

/// Approximate distance from the camera to the primitive(s) being drawn, for the purpose of sorting draw calls.
///
/// The distance provides an estimation of how far the primitive(s) rendered by this draw call are, to enable sorting
/// draw calls within the current view. For `Transparent3d`, draw calls are ordered back-to-front to blend transparent
/// objects correctly.

Entity is typically the render world entity associated with that draw call.

Ok, but if I store it in Transparent3d::entity, where do I read it back from? Because Draw::draw() gives the view entity, and that's a different one. What is item in your example's item.entity()?

Bevy has a special Extract stage which lets you read from the game world and write into the render world

Yes I know this paper and Natalya's work, and the extract stage makes sense. What I struggle with are the other stages, and the details about them, especially regarding view-dependent vs. view-independent. The one-liner of what they do is already documented, but that's not enough for me to understand how to use them properly, and especially how to handle multiple views correctly, which is needed if only to make shadows work I believe.

in the past we used to bundle other commands together inside the draw call, but now we have separate RenderCommands for that functionality.

Unfortunately I don't understand; that raises more questions than it answers for me. In Draw::draw() I have access to everything I need, and in particular the render pass, which I can use to set buffers/pipelines/bind groups and generate draw calls. Why would I write 20 lines to define a struct just to be able to call pass.set_vertex_buffer()? And then make more of those and define a tuple, essentially to call 5-6 methods on the render pass? I don't see the advantage; it looks overly complicated and abstracted to the novice eye. Is that for re-usability? Is Draw::draw() somehow deprecated in favor of render commands? Or do they fit different usages?

thefakeplace commented 6 months ago

w.r.t. the VIEW_ENTITY input not being used anywhere: I'm porting some code to the new renderer and noticed the same thing. VIEW_ENTITY now seems to be acquired via a dedicated method, RenderGraphContext::view_entity.