Preliminary Checks
[X] This issue is not a duplicate. Before opening a new issue, please search existing issues.
[X] This issue is not a question, bug report, or anything other than a feature request directly related to this project.
Proposal
I have used the ZED examples for the spatial mapping case, where I am trying to fuse the point cloud and/or mesh over multiple time instants. Currently, the fused point cloud appears to be continuously updated to reflect the dynamic scene (roughly the last 500 milliseconds of data). I have the following two feature requests:
Can functionality be added to fuse point clouds obtained from the same camera at different instants of time (or, more specifically, of the detected object of interest), even in a dynamic scene, using iterative closest point (ICP) algorithms such as point-to-point, point-to-plane, or colored ICP registration? (See the sketch after this list.)
Can functionality be added to register point clouds obtained from two different cameras with an overlapping field of view?
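For illustration, here is a minimal sketch of what I mean, written against Open3D (an assumption on my side; this is not part of the ZED SDK). It downsamples two clouds, estimates normals, and runs point-to-plane ICP; colored ICP would follow the same pattern via registration_colored_icp. The voxel size and distance threshold are placeholder values.

```python
import numpy as np
import open3d as o3d

def icp_align(source_xyz, target_xyz, voxel=0.02, max_dist=0.05):
    """Return a 4x4 transform mapping source into the target frame
    using point-to-plane ICP (Open3D). Inputs are Nx3 numpy arrays."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(source_xyz)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(target_xyz)
    # Downsample for speed; estimate normals (needed by point-to-plane ICP)
    src = src.voxel_down_sample(voxel)
    tgt = tgt.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    # registration_colored_icp() could be substituted here when color is kept.
    return result.transformation
```

Each new cloud (from the second camera, or from a later time stamp) would be transformed with the returned matrix and concatenated with the running fused cloud.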
Use-Case
We have two fixed ZED cameras mounted on a moving platform that goes around the object of interest. The idea is to detect the object of interest in either camera across multiple time stamps, use the detected 3D region for point cloud registration, and obtain an accurate point cloud fused from both cameras as well as from multiple time instants.
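As a rough sketch of how the per-camera, per-timestamp inputs could be produced with the ZED Python API: the helper below is hypothetical, assumes object detection has already been enabled on the opened camera, and uses a simple axis-aligned crop of the detected object's 3D bounding box.

```python
import numpy as np
import pyzed.sl as sl

def grab_object_cloud(cam):
    """Grab one frame from an opened ZED camera (with object detection
    already enabled) and return the XYZ points falling inside the first
    detected object's 3D bounding box."""
    cloud = sl.Mat()
    objects = sl.Objects()
    if cam.grab() != sl.ERROR_CODE.SUCCESS:
        return None
    cam.retrieve_measure(cloud, sl.MEASURE.XYZRGBA)
    cam.retrieve_objects(objects, sl.ObjectDetectionRuntimeParameters())
    if not objects.object_list:
        return None
    xyz = cloud.get_data()[:, :, :3].reshape(-1, 3)
    xyz = xyz[np.isfinite(xyz).all(axis=1)]                 # drop invalid depth
    box = np.asarray(objects.object_list[0].bounding_box)   # 8 corners (3D)
    lo, hi = box.min(axis=0), box.max(axis=0)                # axis-aligned crop
    return xyz[((xyz >= lo) & (xyz <= hi)).all(axis=1)]
```

The cropped clouds from both cameras, and from successive grabs as the platform moves, could then be aligned with the ICP sketch above instead of (or in addition to) the built-in spatial mapping.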
Anything else?
I think point cloud registration algorithms would be more suitable than volumetric fusion algorithms for accurate point cloud reconstruction, which is the reason for this feature request. Thanks!