rerun-io / rerun

Visualize streams of multimodal data. Fast, easy to use, and simple to integrate. Built in Rust using egui.
https://rerun.io/

Treat camera projected primitives in 3D space more like 2D objects #1025

Open Wumpf opened 1 year ago

Wumpf commented 1 year ago

Consider a camera projection (typically perspective, but orthographic shouldn't make a difference) in 3D space.

[image]

Today, the renderer renders all shapes that are part of the camera projection (not only images, but also lines, points, or even complex meshes) just like anything else in the scene, i.e. the user is expected to set the respective projection matrix as the world matrix for all these objects.
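To make this concrete, here is a minimal sketch of this kind of transform composition using glam (which rerun builds on); the function and its conventions are illustrative, not rerun's actual API:

```rust
use glam::{Mat4, Vec3};

/// Illustrative sketch (not rerun's actual API): build the world matrix for
/// primitives that live in a camera's projected 2D space, so that their
/// coordinates end up at the right place inside the frustum in the 3D scene.
fn world_from_projected_2d(
    world_from_camera: Mat4, // the camera's pose in the 3D scene
    projection: Mat4,        // the camera's projection matrix
) -> Mat4 {
    // The inverse projection maps clip space back into camera space;
    // composing with the camera pose places the 2D content in the world.
    world_from_camera * projection.inverse()
}

fn main() {
    let projection = Mat4::perspective_rh(90f32.to_radians(), 4.0 / 3.0, 0.1, 100.0);
    let world_from_camera = Mat4::from_translation(Vec3::new(0.0, 0.0, 5.0));
    let world = world_from_projected_2d(world_from_camera, projection);

    // A point given in the camera's normalized device coordinates...
    let ndc_point = Vec3::new(0.5, 0.5, 0.9);
    // ...lands at a concrete position in the 3D scene.
    println!("{:?}", world.project_point3(ndc_point));
}
```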

This works nicely for the most part, but there are some things we need to fix (the actual tasks of this issue!):

Open questions:


Note that the seemingly straightforward solution would be to have a literal 2D surface that we render in 3D space. This comes with a whole lot of drawbacks as well, though:

Wumpf commented 1 year ago

Since we're also hitting issues when compositing pure 2D views, we need a more radical approach:

There are quite a few open questions about how the 3D->2D transition works exactly. Since there can be many 2D views in a 3D scene, we need to avoid allocating a new render target for each of them; instead, we should try to render them directly in the main color pass, without repeating the currently defined render phases.

The major challenge here is that we'd like to use a different depth range in each 2D view. We can restart the depth buffer, but we'd need to stencil out the part of the 2D plane that is actually visible in the 3D scene. A literal stencil buffer would solve this nicely but conflicts with our current depth-buffer setup. Otherwise, we might need to manipulate the depth buffer directly based on the 3D depth buffer. This is feasible but likely to run into compatibility issues (treating depth as color and vice versa?) and is potentially slow, since we'd need to handle MSAA samples on both depth buffers.
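For illustration only, a stencil-based approach could look roughly like the following wgpu sketch; the texture format and stencil ops here are assumptions, and as noted it would conflict with our current depth-buffer setup:

```rust
use wgpu::*;

/// Hypothetical pass 1: render the 2D plane's footprint into the 3D scene,
/// depth-tested against the 3D depth buffer, writing a reference value into
/// the stencil wherever the plane is actually visible
/// (the reference is set via `RenderPass::set_stencil_reference(1)` at draw time).
fn stencil_write_state() -> DepthStencilState {
    DepthStencilState {
        // Would force the main pass onto a combined depth/stencil format.
        format: TextureFormat::Depth24PlusStencil8,
        depth_write_enabled: false,
        depth_compare: CompareFunction::Less,
        stencil: StencilState {
            front: StencilFaceState {
                compare: CompareFunction::Always,
                fail_op: StencilOperation::Keep,
                depth_fail_op: StencilOperation::Keep, // occluded parts stay unmarked
                pass_op: StencilOperation::Replace,    // mark visible parts
            },
            back: StencilFaceState::IGNORE,
            read_mask: 0xFF,
            write_mask: 0xFF,
        },
        bias: DepthBiasState::default(),
    }
}

/// Hypothetical pass 2: render the 2D view's content with a restarted depth
/// range, but only where the stencil marks the plane as visible.
fn stencil_test_state() -> DepthStencilState {
    DepthStencilState {
        format: TextureFormat::Depth24PlusStencil8,
        depth_write_enabled: true,
        depth_compare: CompareFunction::Less,
        stencil: StencilState {
            front: StencilFaceState {
                compare: CompareFunction::Equal, // pass only where stencil == reference
                fail_op: StencilOperation::Keep,
                depth_fail_op: StencilOperation::Keep,
                pass_op: StencilOperation::Keep,
            },
            back: StencilFaceState::IGNORE,
            read_mask: 0xFF,
            write_mask: 0x00, // stencil is read-only in this pass
        },
        bias: DepthBiasState::default(),
    }
}
```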

As long as we stick with depth offsetting, we don't need to worry too much about the depth-buffering issues above. We still need to define the dual-camera setup as described in the original post, though. It seems advantageous to still use an explicit camera setup; it's just that our compositing abilities would be more limited, forcing the use of the same color & depth buffer throughout the entire process (apart maybe from hard clears?).
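One fixed-function way depth offsetting can be realized (a sketch assuming wgpu's depth bias; not necessarily how our shaders do it) is to bias each 2D layer slightly towards the camera per pipeline:

```rust
/// Illustrative only: give each 2D "layer" a small constant depth bias so it
/// wins the depth test against the plane it sits on, without restarting the
/// depth buffer. `layer` is a hypothetical per-draw layer index.
fn biased_depth_state(layer: i32) -> wgpu::DepthStencilState {
    wgpu::DepthStencilState {
        format: wgpu::TextureFormat::Depth32Float,
        depth_write_enabled: true,
        // Assuming a reverse-z setup, where larger depth values are nearer.
        depth_compare: wgpu::CompareFunction::Greater,
        stencil: wgpu::StencilState::default(),
        bias: wgpu::DepthBiasState {
            // With reverse-z, a positive constant bias nudges the primitive
            // towards the camera, in units of the smallest resolvable depth step.
            constant: layer,
            slope_scale: 0.0,
            clamp: 0.0,
        },
    }
}
```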

Wumpf commented 1 year ago

Via https://discord.com/channels/1062300748202921994/1151053465989165076/1151091958970843136

One slightly different representation of 2D in 3D is in the form of a projective texture:

[image]

This seems to be another case that is best solved with a render-to-texture indirection. Maybe this is the path forward, paired with special optimizations for the many-images case?
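For reference, the core of the projective-texture math is to project a world-space point through the 2D view's camera and remap the result from NDC to texture coordinates. A minimal sketch with glam (illustrative, not rerun code):

```rust
use glam::{Mat4, Vec2, Vec3};

/// Compute the texture coordinate at which a world-space point samples the
/// projected 2D view, or `None` if the point falls outside the projection.
fn projective_uv(clip_from_world: Mat4, world_pos: Vec3) -> Option<Vec2> {
    let clip = clip_from_world * world_pos.extend(1.0);
    if clip.w <= 0.0 {
        return None; // behind the projecting camera
    }
    let ndc = clip.truncate() / clip.w;
    // Remap x/y from [-1, 1] to [0, 1], flipping y since NDC y points up
    // while texture v typically points down.
    let uv = Vec2::new(ndc.x * 0.5 + 0.5, 0.5 - ndc.y * 0.5);
    ((0.0..=1.0).contains(&uv.x) && (0.0..=1.0).contains(&uv.y)).then_some(uv)
}
```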