TiagoCortinhal / SalsaNext

Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving
MIT License
417 stars 102 forks

How to use 'visualization.py' #46

Open AkinoriKotani opened 3 years ago

AkinoriKotani commented 3 years ago

Hello! I'm sorry to ask so many questions while you are busy. I have some questions about how to project LiDAR segmentation results onto a camera image.

  1. Is visualization.py the correct script for projecting the LiDAR segmentation results onto RGB images?

  2. If so, how exactly do I project them onto RGB images? It seems that it should work after setting basedir, sequence, uncerts, and so on, and then running $ python visualization.py, but I could not get it to work.

basedir = '~/SalsaNext/pred-valid'
sequence = '08'
uncerts = ' '
preds = 'predictions'
gt = ' '
img = '~/Dataset/KITTI/dataset/sequences/08/image_2/'
lidar = '~/Dataset/SemanticKITTI/dataset/sequences/08/'
projected_uncert = ' '
projected_preds = ' '

I have tentatively defined each value as above, but for the blank entries I could not figure out which paths to specify. Running the script as-is produced the following:

$ python3 visualization.py
Ground truth poses are not avaialble for sequence 08.
Traceback (most recent call last):
  File "visualization.py", line 78, in <module>
    color_map_dict = yaml.safe_load(open("color_map.yml"))['color_map']
FileNotFoundError: [Errno 2] No such file or directory: 'color_map.yml'
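For what it's worth, the FileNotFoundError above usually just means Python looked for color_map.yml in the current working directory rather than next to visualization.py. A minimal sketch of a workaround (this is my own suggestion, not code from the repository; the helper name resolve_next_to_script is hypothetical):

```python
import os

def resolve_next_to_script(filename, script_path):
    """Return the path of `filename` located in the same directory as
    `script_path`, so the script works no matter where it is launched from.
    Hypothetical helper, not part of SalsaNext."""
    script_dir = os.path.dirname(os.path.abspath(script_path))
    return os.path.join(script_dir, filename)
```

Inside visualization.py one could then open the file with resolve_next_to_script("color_map.yml", __file__) instead of the bare "color_map.yml"; alternatively, simply cd into the directory that contains color_map.yml before running the script.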

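As background for question 2, the projection itself is the standard KITTI pinhole projection, independent of this script. A minimal sketch (not SalsaNext's actual code), assuming a 3x4 camera projection matrix P (e.g. KITTI's P2 for image_2) and a 3x4 LiDAR-to-camera transform Tr from calib.txt, with any rectification already folded into Tr:

```python
import numpy as np

def project_lidar_to_image(points_xyz, P, Tr):
    """Project Nx3 LiDAR points into pixel coordinates.

    P  : 3x4 camera projection matrix (e.g. KITTI's P2).
    Tr : 3x4 rigid transform from the LiDAR frame to the camera frame.
    Returns (uv, depth): Nx2 pixel coordinates and per-point depth.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])  # Nx4 homogeneous
    cam = Tr @ pts_h.T                                # 3xN, camera frame
    cam_h = np.vstack([cam, np.ones((1, n))])         # 4xN homogeneous
    img = P @ cam_h                                   # 3xN on the image plane
    depth = img[2]
    uv = (img[:2] / depth).T                          # perspective divide
    return uv, depth
```

Points with depth <= 0 lie behind the camera and should be dropped before drawing; the remaining uv coordinates can then be colored by the predicted class and overlaid on the RGB image.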
Thank you for your cooperation.