PRBonn / lidar-bonnetal

Semantic and Instance Segmentation of LiDAR point clouds for autonomous driving
http://semantic-kitti.org
MIT License

How to use a pre-trained model to test my own data? #69

Closed he-guo closed 3 years ago

he-guo commented 4 years ago

Hi, everyone! I want to perform semantic segmentation on the PointCloud2 data I obtained from a VLP-16, and then label the original data. What should I do? Any suggestions are greatly appreciated.

jbehley commented 4 years ago

Hi @he-guo,

To use a different sensor, you have to modify the projection from the 3D point cloud to the range image in the architecture configuration; see https://github.com/PRBonn/lidar-bonnetal/blob/master/train/tasks/semantic/config/arch/darknet53.yaml

https://github.com/PRBonn/lidar-bonnetal/blob/423311109bb6a075c8d44f545b624b42a61f8d42/train/tasks/semantic/config/arch/darknet53.yaml#L81-L88

There, the values for fov_up and fov_down must be modified (the name field is not used, as far as I know). The size of the resulting range image should also be modified, i.e., width and height (at least the height, to get a dense range image).

In the case of a Velodyne VLP-16, these values should work (but I have not tested them):

   fov_up: 15 
   fov_down: -15  
   img_prop: 
     width: 2048 
     height: 16 

The width might also be 1024 or 512.

The resulting range image should look "dense", i.e., there should ideally be no gaps between the pixels. The projection method currently assumes a regular vertical angle spacing.
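To see what these parameters do, here is a simplified, untested sketch of such a spherical projection (written for illustration, not copied from the repository's code), using the hypothetical VLP-16 values above as defaults. Each point's vertical angle is mapped linearly between fov_down and fov_up to a row, and its yaw to a column:

    import numpy as np

    def project_to_range_image(points, fov_up_deg=15.0, fov_down_deg=-15.0,
                               width=2048, height=16):
        # points: (N, 3) array of x, y, z coordinates.
        fov_up = np.radians(fov_up_deg)
        fov_down = np.radians(fov_down_deg)
        fov = abs(fov_up) + abs(fov_down)              # total vertical field of view

        depth = np.maximum(np.linalg.norm(points, axis=1), 1e-6)
        yaw = -np.arctan2(points[:, 1], points[:, 0])  # horizontal angle in [-pi, pi]
        pitch = np.arcsin(points[:, 2] / depth)        # vertical angle

        # yaw in [-pi, pi] -> column in [0, width)
        col = 0.5 * (yaw / np.pi + 1.0) * width
        # pitch in [fov_down, fov_up] -> row in [0, height), row 0 at fov_up
        row = (1.0 - (pitch - fov_down) / fov) * height

        col = np.clip(np.floor(col), 0, width - 1).astype(np.int32)
        row = np.clip(np.floor(row), 0, height - 1).astype(np.int32)

        range_image = np.full((height, width), -1.0, dtype=np.float32)
        range_image[row, col] = depth                  # unoccupied pixels stay -1
        return range_image

If many rows of the resulting image stay empty (value -1), the chosen fov and height values most likely do not match the sensor.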

he-guo commented 4 years ago

Thank you very much for your advice. I'm trying it.

jbehley commented 3 years ago

I think this should be solved. If you still have doubts, then please re-open the issue.

RichExplor commented 3 years ago

Thank you very much for your advice. I'm trying it.

I'm sorry to bother you, but have you tried a VLP-32?

amkurup commented 3 years ago

There, the values for fov_up and fov_down must be modified (the name field is not used, as far as I know). The size of the resulting range image should also be modified, i.e., width and height (at least the height, to get a dense range image).

In the case of a Velodyne VLP-16, these values should work (but I have not tested them):

   fov_up: 15 
   fov_down: -15  
   img_prop: 
     width: 2048 
     height: 16 

The width might also be 1024 or 512.

The resulting range image should look "dense", i.e., there should ideally be no gaps between the pixels. The projection method currently assumes a regular vertical angle spacing.

@jbehley Thanks for your answer above. I would like to know how to calculate fov_up and fov_down. How did you come up with 15 and -15 respectively? Similarly, why 3 and -25 in your original work? I understand from the paper that f = fov_up + fov_down. But should we calculate this based on our own sensor?

Thanks

jbehley commented 3 years ago

These are the field-of-view values from the sensor's specification, i.e., they cover the "opening angles" of the sensor. The Velodyne HDL-64E has an asymmetric vertical field of view, which is where the 3 and -25 come from. These values have to be taken from your own sensor. Note that if you want to use the pre-trained model with a different sensor, it will probably not work well.
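Since the VLP-32 came up above: going by the Velodyne VLP-32C data sheet, its vertical field of view is roughly +15 to -25 degrees over 32 channels, so a configuration along these lines might be a starting point (untested, and note that the VLP-32C's beams are not uniformly spaced, so the regular-angle projection may still leave gaps):

   fov_up: 15
   fov_down: -25
   img_prop:
     width: 2048
     height: 32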

BenjaminYoung29 commented 3 years ago

@jbehley Hi, I've been trying to use the network on data from my lidar (not a Velodyne). This lidar's horizontal view angle is only 120°, and the range image projected from my point cloud looks weird. I have modified fov_up and fov_down as well as the width, but it didn't work. Any ideas? Thanks very much. (screenshot of the projected range image attached)

jbehley commented 3 years ago

In our projection, we assume that the lidar gives a full 360° view. Therefore, you have to account for this if your LiDAR only provides 120 degrees.
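One possible way to account for this, sketched on top of the projection above, is to map only the sensor's horizontal field of view (here assumed to be 120 degrees, symmetric around the forward axis) to the full image width instead of the usual [-180°, 180°]; the values and names below are illustrative, not part of the repository:

    import numpy as np

    def horizontal_projection_120deg(points, fov_left_deg=-60.0, fov_right_deg=60.0,
                                     width=512):
        # Column indices for a sensor that covers only a limited horizontal FOV.
        fov_left = np.radians(fov_left_deg)
        fov_right = np.radians(fov_right_deg)

        yaw = -np.arctan2(points[:, 1], points[:, 0])
        valid = (yaw >= fov_left) & (yaw <= fov_right)   # drop points outside the FOV

        # yaw in [fov_left, fov_right] -> column in [0, width)
        col = (yaw - fov_left) / (fov_right - fov_left) * width
        col = np.clip(np.floor(col), 0, width - 1).astype(np.int32)
        return col, valid

Keep in mind that the pre-trained model was trained on full 360-degree range images from the HDL-64E, so even with an adjusted projection, results on a 120-degree sensor will likely degrade without retraining.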