rerun-io / rerun

Visualize streams of multimodal data. Fast, easy to use, and simple to integrate. Built in Rust using egui.
https://rerun.io/
Apache License 2.0

Depth -> PointCloud visualization slows down whole computer #3713

Open pd-nisse opened 8 months ago

pd-nisse commented 8 months ago

Is your feature request related to a problem? Please describe.
We log depth data for several cameras. We also have a 3D world view with a world map. rerun automatically projects each camera into world space, including turning the depth image from each camera into a point cloud.
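For context, here is a minimal sketch of this kind of setup using a recent version of rerun's Python SDK (the entity paths, intrinsics, and flat synthetic depth values are placeholders, not our actual pipeline):

```python
import numpy as np
import rerun as rr

rr.init("multi_camera_depth", spawn=True)

width, height, focal_px = 640, 480, 500.0
for i in range(4):
    cam = f"world/camera_{i}"
    # Position each camera in world space (placeholder poses)...
    rr.log(cam, rr.Transform3D(translation=[float(i), 0.0, 0.0]))
    # ...declare its pinhole intrinsics...
    rr.log(f"{cam}/depth", rr.Pinhole(focal_length=focal_px, width=width, height=height))
    # ...and log a depth image; the viewer backprojects it into a point cloud.
    depth_m = np.full((height, width), 2.0, dtype=np.float32)
    rr.log(f"{cam}/depth", rr.DepthImage(depth_m, meter=1.0))
```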

That feature is pretty neat, but it seems to cost a lot of resources. On macOS Sonoma 14.0, the OS becomes unresponsive with 4+ cameras and the mouse moves at 1 fps or less.

Describe the solution you'd like
Ideally, the depth -> point cloud rendering becomes very fast and doesn't eat a lot of resources. If that is not possible, I'd like controls so that we can disable this projection by default. Users could then turn it on by clicking the "eye" icon if they want to use it anyway.

emilk commented 8 months ago

Does it help to look away from the points? If so, you are likely GPU-bound; otherwise, CPU-bound.

Does zooming in help?

What is the resolution of the cameras?

What do you see if you open the profiler (Ctrl+Shift+P)?

pd-nisse commented 8 months ago

Hey @emilk, I tested it: when moving the points outside the 3D camera view, performance gradually improves until everything is out of view, at which point I get the "normal" performance I'm used to. Does this mean I'm GPU-bound? I am on an M1 MBP. Zooming in also helps.

Wumpf commented 8 months ago

This means it is GPU-bound, but in a way that will bring every GPU to its knees eventually 😞, at least until we find and implement a solution for the underlying issue: excessive overdraw. When you zoom out on a dense point cloud, it can easily happen that many particles fall on the same handful of pixels while still being large enough that they aren't discarded. The GPU then has to evaluate and write all of these pixels in (even strictly ordered!) series, making it impossible to parallelize anything. This is a very common problem for any kind of particle-like rendering, whether you zoom too far out (lots of overdraw on a single pixel) or too far in with large/constant-size particles (less overdraw per pixel, but many pixels are affected; the GPU runs out of parallel workloads).
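To put rough numbers on the zoomed-out case (illustrative only, not measurements from this setup): four 640×480 depth cameras produce ~1.2M points, and if the zoomed-out cloud covers only a ~200×200 pixel region on screen, each covered pixel gets written ~30 times on average, and those writes are serialized.

```python
# Back-of-envelope overdraw estimate (illustrative numbers, not measurements).
def overdraw_factor(num_cameras: int, cam_w: int, cam_h: int,
                    covered_w: int, covered_h: int) -> float:
    """Average number of point writes per covered screen pixel."""
    num_points = num_cameras * cam_w * cam_h
    covered_pixels = covered_w * covered_h
    return num_points / covered_pixels

# 4 depth cameras at 640x480, zoomed out so the cloud covers ~200x200 px:
print(overdraw_factor(4, 640, 480, 200, 200))  # ~30.7 writes per pixel
```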

The general case is very hard to solve (game engines usually resort to reducing detail), but (!) for depth point clouds we should be able to predict this problem ahead of time and draw fewer & larger particles before it becomes rampant.
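A sketch of that prediction step (a simplification of the idea, not what rerun actually ships; the pinhole viewer model and all names here are assumptions): estimate how many screen pixels separate adjacent depth samples, and if they land much closer than a pixel apart, subsample the depth image and grow the point radius to compensate.

```python
import math

def choose_depth_stride(focal_px: float, median_depth: float,
                        view_distance: float, viewport_w: int) -> int:
    """Pick a subsampling stride for a depth image so adjacent points land
    roughly >= 1 screen pixel apart (hypothetical model: pinhole depth
    camera, viewer watching the cloud from `view_distance` meters away)."""
    # World-space spacing between adjacent depth samples at the median depth.
    world_spacing = median_depth / focal_px
    # Approximate screen-space spacing, treating the viewer as a pinhole
    # camera with a focal length on the order of the viewport width.
    screen_spacing = world_spacing * viewport_w / max(view_distance, 1e-6)
    if screen_spacing >= 1.0:
        return 1  # already sparse enough on screen; no decimation needed
    # Skip samples so spacing comes back up to ~1 px; the renderer would
    # scale the point radii up by the same factor to avoid holes.
    return max(1, math.ceil(1.0 / screen_spacing))

# Example: 500px focal length, points ~2m from the camera, viewer 50m away,
# 1280px-wide viewport -> stride 10, i.e. ~100x fewer points.
print(choose_depth_stride(500.0, 2.0, 50.0, 1280))
```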

(so this is indeed a duplicate of #1730, but I'll keep it open since it points out the immediate band-aid of being able to disable the depth cloud feature ahead of time)

Wumpf commented 8 months ago

For future reference: a big source of slowdown in our point rendering is that we use alpha testing (discarding fragments in the shader, which disables the GPU's early depth-test optimizations). We should explore rendering "raw geometry" instead (maybe also something we can do when we know the particles are small enough? The difficulty is that this usually isn't a statement we can make for all particles at once).