[Open] emilk opened this issue 1 year ago
related to
I have been tackling point cloud rendering for some time; here are some resources I've used to speed it up:
Only partially upload the point cloud and render it progressively using random or weighted subsets of the cloud: https://publik.tuwien.ac.at/files/publik_282669.pdf
Using spatial hashing, the cloud could be batched into smaller subsections to which various LOD techniques could then be applied: https://dl.acm.org/doi/pdf/10.1145/3543863 with related http://www.crs4.it/vic/data/papers/spbg04-lpc.pdf
While a bit specific to Unreal Engine 4, this master's thesis project ( https://cgvr.cs.uni-bremen.de/theses/finishedtheses/point_cloud_rendering_in_unreal/thesis_FINAL_WEB.pdf ) offers some insights about point ordering.
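The spatial hashing idea mentioned above can be sketched with a simple uniform grid. This is only an illustration under assumptions: the `Point` type and `spatial_batch` helper are hypothetical and not the actual renderer's data layout.

```rust
use std::collections::HashMap;

/// Hypothetical point type; the real renderer also stores radius, color, index.
#[derive(Clone, Copy)]
struct Point {
    pos: [f32; 3],
}

/// Bucket points into uniform grid cells keyed by their integer cell coordinates,
/// so each cell can later get its own LOD or visibility decision.
fn spatial_batch(points: &[Point], cell_size: f32) -> HashMap<(i32, i32, i32), Vec<Point>> {
    let mut cells: HashMap<(i32, i32, i32), Vec<Point>> = HashMap::new();
    for &p in points {
        let key = (
            (p.pos[0] / cell_size).floor() as i32,
            (p.pos[1] / cell_size).floor() as i32,
            (p.pos[2] / cell_size).floor() as i32,
        );
        cells.entry(key).or_default().push(p);
    }
    cells
}

fn main() {
    let points = vec![
        Point { pos: [0.1, 0.2, 0.3] },
        Point { pos: [0.4, 0.1, 0.2] },
        Point { pos: [5.0, 5.0, 5.0] },
    ];
    let cells = spatial_batch(&points, 1.0);
    // The first two points share cell (0, 0, 0); the third lands alone in (5, 5, 5).
    assert_eq!(cells.len(), 2);
    assert_eq!(cells[&(0, 0, 0)].len(), 2);
    println!("{} cells", cells.len());
}
```

Each bucket could then be decimated, culled, or streamed independently, which is the core of the batched-LOD approach in the papers linked above.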
We should be a bit more specific about the "huge" part. From what we have learned so far, most users are more than happy below ~8 million points, which seems to be within the range where we don't need acceleration structures for rendering.
We typically need 24 bytes per point right now (position, radius, color, index; we could compress away the index!), so 8 million points would be merely ~184 MiB. Given that upload bandwidth can be assumed to be at the very least 16 GiB/s (that's PCIe 3.0 x16; PCIe 5.0 is common and offers ~64 GiB/s, and integrated GPUs are off the chart there), we get a budget of ~273 MiB per frame at 60 fps. So even if we upload every frame, we should still hit at least 30 fps at that size if we do everything correctly.
Right now we have a hardcoded limit of 4 million points; the point cloud renderer should handle this dynamically!
potree does large point clouds:
The current state is around 1.5M points @30 fps on my M1 MacBook Pro.
We are bounded primarily by
A rough roadmap:
From the same authors as potree, a drastically different approach that accumulates data across frames, removing the need for hierarchical data structures, LOD, caching, etc.: https://github.com/m-schuetz/Skye
Benchmarking examples/python/open_photogrammetry_format/main.py, showing just the 3D view, on --release, averaged over a few frames, MacBook Pro M1:
0.8.2: ~47 ms/frame
0.9.0: 15.3 ms/frame
What about just rasterizing it on the server side and serving it as a video stream?
Pixel rendering on the cloud is something we'd like to do in the future, but it's orthogonal to the issue at hand.
The Rerun Viewer is currently limited to displaying only ~two million points at a time, and the frame rate already suffers badly at 1 million points.
The reason for this is that we upload the point cloud every frame. This is very simple, but obviously very wasteful.
A solution is to detect whether the same points are being rendered this frame as in the previous one, and if so, skip the re-upload. This is similar to how we already handle tensors.
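A minimal sketch of that change-detection idea, using a content hash as the change marker. The `PointCloudCache` type and its API are hypothetical, not Rerun's actual code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical GPU buffer cache: re-upload only when the point data changed.
struct PointCloudCache {
    last_hash: Option<u64>,
}

impl PointCloudCache {
    fn new() -> Self {
        Self { last_hash: None }
    }

    /// Returns true if an upload was (notionally) performed this frame.
    fn upload_if_changed(&mut self, points: &[[u32; 6]]) -> bool {
        let mut hasher = DefaultHasher::new();
        points.hash(&mut hasher);
        let hash = hasher.finish();
        if self.last_hash == Some(hash) {
            return false; // same points as last frame: skip the upload
        }
        self.last_hash = Some(hash);
        // ... here the real renderer would copy `points` into a GPU buffer ...
        true
    }
}

fn main() {
    let mut cache = PointCloudCache::new();
    let frame_a = vec![[0u32; 6]; 4];
    assert!(cache.upload_if_changed(&frame_a)); // first frame: upload
    assert!(!cache.upload_if_changed(&frame_a)); // unchanged: skip
    let frame_b = vec![[1u32; 6]; 4];
    assert!(cache.upload_if_changed(&frame_b)); // changed: upload again
}
```

In practice, hashing millions of points every frame is itself expensive, so a cheaper change marker (e.g. a store generation counter, as with tensors) would likely serve the same purpose without touching the point data at all.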