python ./tools/test_kitti_metric.py \
    --dataroot ./datasets/KITTI \
    --dataset kitti \
    --cfg_file lib/configs/resnext101_32x4d_kitti_class.yaml \
    --load_ckpt ./kitti.pth
You can try the script above, but you should change --dataroot and --dataset. In ./tools/test_any_images.py, I have scaled the output depth by 60000 for visualization. If you scale the output depth by 80 on top of that, the maximum value will exceed 65535, the uint16 limit.
Besides, you can save the depth image with a coded color map for visualization:
import matplotlib.pyplot as plt
plt.imsave('path to save', depth, cmap='rainbow')
That makes it easier to check whether the depth output is right.
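For reference, here is a minimal sketch of the scaling logic described above. The variable name pred_depth and the assumption that the raw network output lies roughly in [0, 1] are illustrative, not taken verbatim from the repository:

import numpy as np
from PIL import Image

# pred_depth: raw network output, assumed here to lie roughly in [0, 1]
pred_depth = np.random.rand(375, 1242).astype(np.float32)  # placeholder

# Metric depth in metres: multiply by 80 (the KITTI depth cap).
depth_metric = pred_depth * 80.0
np.save('depth_metric.npy', depth_metric)  # keep full float precision

# 16-bit visualization: scale the raw output by 60000 only.
# Applying both factors (80 * 60000) would overflow the uint16 limit of 65535.
depth_vis = (pred_depth * 60000).astype(np.uint16)
Image.fromarray(depth_vis).save('depth_vis.png')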
Hi Wei Yin,
I am quite surprised to have received your reply in such a short time. Thank you very much.
I have already adjusted the code in "test_any_images.py" and left out the scaling by 60000. Instead, I just took pred_depth from the pretrained model's output and multiplied it by 80. The result is what I attached above. But I will try your suggestion with "test_kitti_metric.py" tomorrow and keep you informed.
Greetings from Germany.
Kun
Hi Wei Yin,
it turns out I made a very stupid mistake: I mistook the output of your model for disparity instead of depth. After adjusting my visualization script, I get the right result, which shows a more promising point cloud than PSMNet. Terrific work, and merry Christmas!
(Point cloud generated from the disparity map of PSMNet)
(Point cloud generated from the depth map of VNL)
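For anyone who runs into the same confusion: disparity and depth are related through the stereo geometry depth = f * B / disparity. Below is a minimal conversion sketch; the baseline (about 0.54 m) and focal length (about 721 px) are typical values for the KITTI stereo rig, so check the calibration files for the exact numbers:

import numpy as np

def disparity_to_depth(disparity, focal_px=721.0, baseline_m=0.54):
    # depth = f * B / disparity; zero disparity means no valid measurement
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth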
Merry Christmas!
Hi YvanYin,
it is really terrific what you and your team have achieved with the new model. My respect to you.
I am currently working on a project in which I could use precise depth results, generated e.g. by your work, but I am facing a problem with the output. As you addressed above, when using the KITTI-pretrained model to estimate depth, we need to multiply the results by 80. But even when I do so, the results are still not right. I used your KITTI-pretrained model to predict the depth of images from KITTI Object Detection. When I transform this depth into xyz coordinates, the pixel point cloud doesn't look right at all. As a comparison, when I use PSMNet to estimate the depth for those images, the results look acceptable. I was hoping you might know the reason and could help me out. I used the "test_any_images.py" script and adjusted the paths for '--dataroot', '--cfg_file' and '--load_ckpt' in "parse_arg_base". For your understanding, I attached the results generated by PSMNet (1st image) and your model (2nd image).
(PSMNet)
(VNL)
Thanks in advance.
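For completeness, here is a minimal sketch of the depth-to-xyz back-projection discussed in this thread, assuming a pinhole camera model; the intrinsics fx, fy, cx, cy are placeholders and should be read from the KITTI calibration files:

import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # Back-project every pixel (u, v) with depth z into camera coordinates:
    #   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth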