Is your feature request related to a problem? Please describe.
We have an RGBD-to-pointcloud-to-RGBD rendering pipeline that uses Open3D to build point clouds from RGBD images and then projects those point clouds into different RGBD frames. We can't efficiently build a mesh from new RGBD images, because the mesh-building process is too slow.
Since little changes between consecutive images, we can update the point cloud efficiently. But we can't use a mesh for rendering, because building a mesh from a point cloud requires a full conversion from scratch on every update.
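For context, the core of our pipeline is back-projecting depth pixels into 3D and re-projecting them into another camera frame. A minimal numpy-only sketch (illustrative, not our actual code; the pinhole-intrinsics parameter names `fx`, `fy`, `cx`, `cy` are assumptions, and the z-buffer handling for overlapping points is omitted):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]  # drop invalid (zero-depth) pixels

def project_points(pts, T, fx, fy, cx, cy, w, h):
    """Project points through a 4x4 camera pose T into a new depth image."""
    pts_h = np.c_[pts, np.ones(len(pts))] @ T.T  # transform into the new frame
    x, y, z = pts_h[:, 0], pts_h[:, 1], pts_h[:, 2]
    keep = z > 0  # only points in front of the camera
    u = np.round(x[keep] * fx / z[keep] + cx).astype(int)
    v = np.round(y[keep] * fy / z[keep] + cy).astype(int)
    depth = np.zeros((h, w))
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # a real implementation needs a nearest-point-wins z-buffer here
    depth[v[inside], u[inside]] = z[keep][inside]
    return depth
```

Doing this per frame with a point cloud is cheap; the expensive step is turning the cloud into a mesh (e.g. Poisson reconstruction), which currently has to run over the entire cloud every time.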
Describe the solution you'd like
I would like to be able to update a mesh from the points in a point cloud, changing only the regions of space that differ based on the updated point clouds.
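To make the request concrete, here is a hypothetical sketch of the kind of incremental API we have in mind (none of these names exist in Open3D today; a real system would likely compare TSDF values per voxel rather than point counts):

```python
import numpy as np

class IncrementalMesher:
    """Hypothetical: track which voxels of space changed between updates,
    so only those regions of the mesh need re-meshing."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.occupied = {}  # voxel index -> point count from the last update

    def update(self, points):
        """Return the set of voxel indices whose occupancy changed."""
        idx = np.floor(points / self.voxel_size).astype(int)
        keys, counts = np.unique(idx, axis=0, return_counts=True)
        new = {tuple(k): c for k, c in zip(keys, counts)}
        dirty = {v for v in set(new) | set(self.occupied)
                 if new.get(v) != self.occupied.get(v)}
        self.occupied = new
        return dirty  # only these regions would be re-meshed
```

On each frame we would call `update()` with the fused cloud and re-run local surface reconstruction only over the dirty voxels, instead of rebuilding the whole mesh from scratch.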
The current new reconstruction system provides live fusion of RGBD images, but unfortunately mesh output is not supported yet due to the complexity of integrating with Filament.
Describe alternatives you've considered
Kimera (https://github.com/MIT-SPARK/Kimera) does this, as does fastfusion (https://github.com/tum-vision/fastfusion).
Additional context
In general, improvements to real-time rendering of simulated RGBD output from real RGBD input would be very useful.