TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

The input and gt depth visualization. #208

Open haoweiz23 opened 2 years ago

haoweiz23 commented 2 years ago

Hi, thanks for your work. I generate the input images and ground-truth depth with the following code:

```python
self.dataset = SynchronizedSceneDataset(
    path,
    split=split,
    datum_names=cameras,
    backward_context=back_context,
    forward_context=forward_context,
    generate_depth_from_datum='lidar',
)
```

and I get the RGB images and depth maps below for the 6 cameras.


This depth image is different from the ground-truth depth you show in your paper; it looks like a sparse point cloud or raw lidar projection. How can I get a correct depth map like Figure 1 of the 3D Packing paper?


Thanks.

VitorGuizilini-TRI commented 2 years ago

How are you coloring the depth maps for visualization?

haoweiz23 commented 2 years ago

> How are you coloring the depth maps for visualization?

Thanks for your reply. My visualization code is similar to https://github.com/TRI-ML/DDAD/blob/master/notebooks/DDAD.ipynb
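In case it helps, this is a minimal sketch of the kind of coloring I am applying: a matplotlib colormap over the valid (non-zero) pixels only, with everything else left black. The function name and the `max_depth` value are my own choices, not from the repo or the notebook:

```python
import numpy as np
import matplotlib.pyplot as plt

def colorize_sparse_depth(depth, max_depth=80.0, cmap="plasma"):
    """Map a sparse HxW depth array to an HxWx3 uint8 image.

    Pixels with no lidar return (depth == 0) stay black; valid pixels
    are colored by depth normalized to [0, 1] with max_depth.
    """
    colored = np.zeros((*depth.shape, 3), dtype=np.uint8)
    valid = depth > 0
    normalized = np.clip(depth[valid] / max_depth, 0.0, 1.0)
    colors = plt.get_cmap(cmap)(normalized)[:, :3]  # drop alpha channel
    colored[valid] = (colors * 255).astype(np.uint8)
    return colored
```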

As for the resize function, I also tried the resize_depth_preserve function provided in your transform code and got similar results. By the way, I noticed that neither the PackNet-SfM nor the FSM paper shows the ground-truth depth map. So I guess you may be using only the sparse depth map, rather than a dense depth map, for both visualization and metric computation?
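For context, my understanding of what a sparsity-preserving resize does is sketched below: each valid point is moved to its scaled coordinate instead of being interpolated, so the output stays sparse. This is a simplified numpy sketch under that assumption, not the repository's actual resize_depth_preserve implementation:

```python
import numpy as np

def resize_depth_preserve_sketch(depth, shape):
    """Resize a sparse HxW depth map without interpolating between points.

    Each valid (non-zero) pixel is re-projected to its scaled coordinate
    in the target resolution; all other pixels stay zero, so the result
    remains a sparse point map rather than becoming a dense depth map.
    """
    h, w = depth.shape
    new_h, new_w = shape
    resized = np.zeros((new_h, new_w), dtype=depth.dtype)
    ys, xs = np.nonzero(depth)            # coordinates of valid points
    new_ys = (ys * new_h / h).astype(int)  # scale row indices
    new_xs = (xs * new_w / w).astype(int)  # scale column indices
    resized[new_ys, new_xs] = depth[ys, xs]
    return resized
```

This would explain why the depth stays point-like after resizing, unlike a bilinear image resize.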