castacks / DytanVO

[ICRA'23] DytanVO: Visual Odometry in Dynamic Environments
BSD 3-Clause "New" or "Revised" License

no image depth or 3d point cloud output #8

Closed: stihuangyuan closed this issue 1 year ago

stihuangyuan commented 1 year ago

Thanks for your good work. The vo_trajectory_from_folder script only outputs per-frame poses; no point cloud or depth is available. How could I get them?

SecureSheII commented 1 year ago

Hi, thanks for your interest. Since this is VO, camera motion is the only expected output, per the definition of visual odometry. You can grab the depth as shown below; it is an intermediate output of the segmentation network.

https://github.com/castacks/DytanVO/blob/76ea83d6fea780bfc59f1a58416e5017d0762752/Network/rigidmask/VCNplus.py#L615-L617

You can then post-process the image inputs and the corresponding depth maps by unprojection to generate a point cloud.
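In case it helps, here is a minimal sketch of that unprojection step. It assumes a standard pinhole camera model with intrinsics `fx, fy, cx, cy` given at the same resolution as the depth map; the function name `unproject_depth` is just for illustration and is not part of DytanVO.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy, rgb=None):
    """Unproject a depth map into a 3D point cloud with a pinhole model.

    depth : (H, W) array of per-pixel depth values.
    fx, fy, cx, cy : intrinsics matching the resolution of `depth`.
    rgb : optional (H, W, 3) image used to color the points.
    Returns an (N, 3) point array (and an (N, 3) color array if rgb is given).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid

    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy

    valid = z > 0                                   # drop invalid / zero-depth pixels
    points = np.stack([x, y, z], axis=1)[valid]

    if rgb is not None:
        colors = rgb.reshape(-1, 3)[valid]
        return points, colors
    return points
```

The resulting (N, 3) array is in the camera frame of that image; you can transform it with the estimated poses and save or visualize it with any point-cloud tool (e.g. Open3D).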