Hello, I am reconstructing a scene using an existing pretrained model, and I'm seeing point cloud stacking (duplicated, misaligned geometry) during the point cloud fusion process. I also exported the predicted depth for each image frame and generated single-frame point clouds; these are stacked as well. How can I fix this, and which step of the processing pipeline is most likely at fault? My poses come from a VIO SLAM system.
Even a single-frame point cloud shows overlap. This is the result of displaying three frames of point clouds together:
The stacking in the fused point cloud is even more severe:
![image](https://github.com/nianticlabs/simplerecon/assets/28702691/101c89bd-5c79-447f-8152-bc948cb5bec4)
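For reference, here is the back-projection I am using to turn each predicted depth map into a world-space point cloud. This is a minimal sketch, not SimpleRecon's actual code: `backproject_depth`, `K`, and `world_T_cam` are my own names, and it assumes the intrinsics match the depth map's resolution and that the pose is camera-to-world. I suspect the stacking could come from a convention mismatch here (e.g. my VIO system exporting world-to-camera, or a body-to-camera extrinsic I am not applying):

```python
import numpy as np

def backproject_depth(depth, K, world_T_cam):
    """Lift a depth map (H, W) into a world-space point cloud (N, 3).

    depth       : (H, W) metric depth in meters.
    K           : 3x3 intrinsics at the *depth map's* resolution
                  (rescale K if the depth is predicted at a lower
                  resolution than the input image).
    world_T_cam : 4x4 camera-to-world pose. If the VIO/SLAM system
                  exports world-to-camera, invert it first; if it
                  exports body/IMU poses, apply the camera-to-body
                  extrinsic before calling this.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0  # drop invalid / zero-depth pixels

    # Unproject pixels to camera-space rays, then scale by depth.
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=0)
    cam = np.linalg.inv(K) @ pix * z          # (3, N) camera-frame points
    cam_h = np.vstack([cam, np.ones((1, h * w))])

    # Transform camera-frame points into the world frame.
    world = (world_T_cam @ cam_h)[:3].T       # (N, 3)
    return world[valid]
```

If this matches what the pipeline expects, then the duplicated geometry would point to the poses themselves (scale drift or timestamp misalignment between the VIO poses and the depth frames) rather than the back-projection.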