Closed maliksyria closed 2 years ago
Any answer?
Our infer.py script still does not support depth completion. For now you can use eval.py with a checkpoint and configuration file; you should get the correct numbers.
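A sketch of the invocation (the `--checkpoint`/`--config` flags follow the repo's README; the config path below is a placeholder, so substitute one of the shipped eval configs):

```shell
# Hypothetical paths; replace with your own checkpoint and config file.
python3 scripts/eval.py \
    --checkpoint /path/to/PackNetSAN01_HR_sup_D.ckpt \
    --config /path/to/eval_config.yaml
```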
Hello! May I know how to use eval.py with PackNetSAN01_HR_sup_D.ckpt to get the reported depth completion results?
Hi, I found out how to get the reported results. Besides setting input_depth_type: ['lidar'] and name: 'SemiSupCompletionModel', it is important to change resize_depth to resize_depth_preserve here:
https://github.com/TRI-ML/packnet-sfm/blob/6e3161f60e7161115813574557761edaffb1b6d1/packnet_sfm/datasets/transforms.py#L91
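For reference, the two config overrides mentioned above would look roughly like this. The nesting is a sketch and may not match the repo's YAML layout exactly, so double-check against the shipped configs; note that the resize_depth → resize_depth_preserve change is made in the dataset transforms code linked above, not in the config:

```yaml
# Sketch only -- verify key nesting against the repo's own config files.
model:
    name: 'SemiSupCompletionModel'
datasets:
    validation:
        input_depth_type: ['lidar']
```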
Otherwise, the results differ a lot. Can you @VitorGuizilini-TRI update this in the repo?
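To see why this matters, here is a toy numpy sketch (not the repo's actual code) of the difference: an interpolating resize averages the few valid LiDAR depths with the surrounding zero (invalid) pixels, diluting them into bogus intermediate values, while a preserve-style resize remaps only the valid points to their new coordinates and keeps their depth values intact.

```python
import numpy as np

# Toy sparse depth map: mostly zeros (invalid), a few LiDAR points.
sparse = np.zeros((4, 4), dtype=np.float32)
sparse[1, 1] = 10.0
sparse[2, 3] = 20.0

def resize_depth(depth, shape):
    """Naive interpolating resize (2x area-average downscale for
    illustration): mixes valid depths with surrounding zeros."""
    h, w = depth.shape
    H, W = shape
    return depth.reshape(H, h // H, W, w // W).mean(axis=(1, 3))

def resize_depth_preserve(depth, shape):
    """Preserve-style resize: remap only the valid (nonzero) points
    to the new resolution, leaving everything else zero."""
    h, w = depth.shape
    H, W = shape
    out = np.zeros(shape, dtype=depth.dtype)
    ys, xs = np.nonzero(depth)
    out[ys * H // h, xs * W // w] = depth[ys, xs]
    return out

naive = resize_depth(sparse, (2, 2))          # 10.0 becomes 2.5, 20.0 becomes 5.0
preserved = resize_depth_preserve(sparse, (2, 2))  # 10.0 and 20.0 survive unchanged
```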
Hello! I've read your latest work, "Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion", which seems very interesting and impressive. I've also noticed that over the last few days you have committed several changes to the codebase so it can handle the newly proposed architecture. The paper says the network predicts depth where no sparse auxiliary data is available, and completes the depth where such data exists. However, I still can't find out how to run inference using an RGB image together with sparse depth information, i.e. to perform the full depth completion task. Thanks in advance!