Brummi / MonoRec

Official implementation of the paper: MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera (CVPR 2021)
MIT License

Can I infer the model without image_depth_annotated? #9

Closed: kxhit closed this issue 3 years ago

kxhit commented 3 years ago

Hi! Thanks for the great work and open-sourced code!

I successfully ran the "test_monorec.py" and "create_pointcloud.py" scripts. However, when I tried to run "create_pointcloud.py" on KITTI sequences 11-21, I found that "image_depth_annotated" is required, which is generated from the LiDAR data. I checked the code and found that it is needed in the function "preprocess_depth_dso()".

I would like to know what "image_depth_annotated" is used for here, and whether I can run inference without it. I assume this is possible, since the inputs required for inference should only be sequential monocular images, estimated poses, and sparse depths from an existing VO system. Could you give me some hints on whether this can be done with a small code modification, and let me know if I have misunderstood anything? Thanks!

Brummi commented 3 years ago

Hi! Thanks for trying out our code, and thanks for pointing out this issue! Inference never uses (ground-truth) depth from the dataset. Since we always had some form of depth available for the sequences in our experiments, we didn't implement an option to load KITTI data without depth.

Here is a small hack to fix it temporarily: set lidar_depth=False and dso_depth=True in the config file, and comment out lines 242 - 246 in kitti_odometry_dataset. (You will need to undo this if you want to evaluate the method.) See the sketch below for what this amounts to.
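For reference, a minimal sketch of the corresponding dataset setup when constructing it directly in Python. The module path, class name, and the arguments other than lidar_depth and dso_depth are assumptions for illustration; only those two flags come from the config options mentioned above.

```python
# Minimal sketch, assuming the module path, class name, and constructor
# arguments below; only lidar_depth and dso_depth are taken from the
# config options discussed in this issue.
from data_loader.kitti_odometry_dataset import KittiOdometryDataset  # assumed module path

dataset = KittiOdometryDataset(
    dataset_dir="data/kitti_odometry",  # hypothetical path to the KITTI odometry data
    sequences=["11"],                   # a sequence without image_depth_annotated
    lidar_depth=False,                  # do not load annotated LiDAR depth maps
    dso_depth=True,                     # use the sparse DSO depths instead
)

sample = dataset[0]  # inference inputs only; no ground-truth depth files required
```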

kxhit commented 3 years ago

Thanks for your quick reply! It works now!