nianticlabs / manydepth

[CVPR 2021] Self-supervised depth estimation from short sequences

how to get the RGBD map like the video shows #45

Open heroacool opened 2 years ago

heroacool commented 2 years ago

The demo video shows the RGBD map.

I'm curious about how to get this RGBD map.
A possible pipeline is: depth image + intrinsics -> pointcloud -> aggregate all pointclouds with poses -> voxelization -> RGBD map. Does anybody know how to generate this RGBD map?
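For the first step of that pipeline (depth image + intrinsics -> pointcloud), here is a minimal NumPy sketch. The function name `depth_to_pointcloud` and the dense-depth/pinhole-camera assumptions are mine, not from the repo:

```python
import numpy as np

def depth_to_pointcloud(depth, rgb, K):
    """Back-project a depth map into a colored point cloud in camera coordinates.

    depth: (H, W) depth in metres; rgb: (H, W, 3) colors; K: (3, 3) intrinsics.
    Returns points (N, 3) and colors (N, 3) for pixels with valid (> 0) depth.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # per-pixel column/row indices
    valid = depth > 0
    z = depth[valid]
    # Inverse pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    points = np.stack([x, y, z], axis=-1)
    colors = rgb[valid]
    return points, colors
```

After aggregating the per-frame clouds with the camera poses, a library such as Open3D can handle the voxelization step.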

ChauChorHim commented 2 years ago

> The demo video shows the RGBD map.
>
> I'm curious about how to get this RGBD map. A possible pipeline is: depth image + intrinsics -> pointcloud -> aggregate all pointclouds with poses -> voxelization -> RGBD map. Does anybody know how to generate this RGBD map?

Hi, I don't know if you have solved it yet, but the pipeline you mentioned is basically correct. The difficult part is concatenating the pointclouds. So the first things you need to prepare are the pointcloud of each depth map in its camera's coordinate system, plus the absolute pose of each camera. I don't use the pose generated by the pose encoder; on my dataset I use poses from a GPS antenna. Still, if you don't have a more accurate pose source, the pose from the pose encoder is an option. Then you'd better set the first camera's pose as the initial pose and transform all the other poses into that coordinate system. Here is the mathematical computation:

R_ac = R_ab · R_bc  ⇒  R_bc = inverse(R_ab) · R_ac

T_ac = T_ab + R_ab · T_bc  ⇒  T_bc = inverse(R_ab) · (T_ac - T_ab)

where a is the absolute coordinate system, b is the first frame's coordinate system, and c is any other frame's coordinate system.

Using R_bc and T_bc, you can transform all the other pointclouds into the first camera's coordinate system.
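The relative-pose computation above can be sketched in NumPy as follows (the function name `to_first_frame` is hypothetical; note that for a rotation matrix, `inverse(R)` is just `R.T`):

```python
import numpy as np

def to_first_frame(R_ab, T_ab, R_ac, T_ac, points_c):
    """Express points given in camera c's frame in camera b's (first) frame.

    R_ab (3, 3), T_ab (3,): absolute pose of the first camera b.
    R_ac (3, 3), T_ac (3,): absolute pose of camera c.
    points_c: (N, 3) point cloud in c's coordinates.
    """
    # R_bc = inverse(R_ab) · R_ac, using transpose as the rotation inverse
    R_bc = R_ab.T @ R_ac
    # T_bc = inverse(R_ab) · (T_ac - T_ab)
    T_bc = R_ab.T @ (T_ac - T_ab)
    # Apply the relative transform to every point: p_b = R_bc p_c + T_bc
    return points_c @ R_bc.T + T_bc
```

Running this for every frame and concatenating the results gives one aggregated cloud in the first camera's frame, ready for voxelization.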

iariav commented 2 years ago

Has anyone managed to produce the RGBD point cloud as in the demo and can share a script on how to achieve this?