Closed chapa17 closed 2 years ago
If you are using a purely self-supervised model, there is no metric scale in the depth map. Is that what you mean when you say the scale is different and the point cloud is really bad?
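To make the scale ambiguity concrete: self-supervised depth is only defined up to a global scale factor, and a common evaluation trick is median scaling against ground truth. This is a minimal sketch of that idea, not code from packnet-sfm; the function name `median_scale` and its signature are my own for illustration.

```python
import numpy as np

def median_scale(pred_depth, gt_depth, mask=None):
    """Rescale a scale-ambiguous predicted depth map so that its
    median matches the median of the ground-truth depth.

    pred_depth, gt_depth: arrays of the same shape.
    mask: optional boolean array of valid pixels; by default,
    pixels where ground truth is positive are used.
    """
    if mask is None:
        mask = gt_depth > 0  # ignore pixels with no ground truth
    scale = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale
```

After this rescaling the depths (and hence the back-projected point cloud) are roughly metric wherever the prediction is correct up to scale, but any per-pixel distortions in the prediction remain.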
I think I have figured out the cause: the points in the 3D cloud are stretched out and not in their proper positions, which seems to be a drawback of self-supervised methods anyway.
Thank you for your help!
Sorry to reopen the issue. The point cloud I generated using the reconstruct() method does not lie on the ground plane: when I visualize it in Blender, it does not sit above the grid, although my understanding is that it should. I have attached a snapshot of my point cloud and an example point cloud. The generated values are not aligned. Generally the X-axis faces forward, the Y-axis faces left, and the Z-axis faces up, but in my case all three point in different directions.
Point cloud generated from packnet-sfm
Example point cloud
Can you please let me know where I went wrong?
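One likely explanation is an axis-convention mismatch: points coming out of a camera model are usually in the camera frame (+x right, +y down, +z forward), while a ground-aligned grid like the one described above uses +x forward, +y left, +z up. A sketch of the remapping under that assumption (the matrix and function name here are mine, not packnet-sfm API):

```python
import numpy as np

# Rotation from the usual camera frame (+x right, +y down, +z forward)
# to a ground-aligned frame (+x forward, +y left, +z up).
CAM_TO_GROUND = np.array([
    [0.0,  0.0, 1.0],   # new x = old z  (forward)
    [-1.0, 0.0, 0.0],   # new y = -old x (left)
    [0.0, -1.0, 0.0],   # new z = -old y (up)
])

def camera_to_ground(points):
    """Remap an (N, 3) array of camera-frame points to the
    ground-aligned convention. Row vectors, so multiply by the
    transpose of the rotation matrix."""
    return points @ CAM_TO_GROUND.T
```

If the cloud still floats above or below the grid after remapping, a constant translation along the up axis (the camera height) also has to be applied, since the camera, not the ground, is the origin of the reconstructed points.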
Hello everyone,
I want to construct a 3D point cloud, for which I need the 3D points. I am using the reconstruct() method from the camera.py file, but the point cloud I get has a different scale. The depth map looks quite good, but the point cloud is really bad. I cannot figure out whether the problem is in my implementation or in the method itself. If anyone has tried reconstruction, I would be happy to learn from them.
Thank you!
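For anyone debugging a similar issue, it can help to compare against a minimal back-projection by hand: lifting a depth map to 3D points with pinhole intrinsics. This is a generic sketch of what a reconstruct() method typically does, not packnet-sfm's exact implementation; `backproject` and its signature are assumptions for illustration.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map of shape (H, W) to an (H*W, 3) point cloud
    in the camera frame using pinhole intrinsics K (3x3)."""
    H, W = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Standard pinhole back-projection: X = (u - cx) / fx * Z, etc.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

If this manual version produces a sensible cloud from the same depth map and intrinsics while the library call does not, the problem is in how the library is being called (e.g. normalized vs. pixel intrinsics, or resized depth); if both look distorted, the depth prediction itself is the issue.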