Closed by connorlee77 two years ago
Hi, the laser scanner provides x, y, z, and intensity. We assume that providing the height z measured by the laser scanner yields better direct cues for ground estimation and for upright-standing objects. We did not employ more elaborate encodings such as the one in "Learning Rich Features from RGB-D Images for Object Detection and Segmentation". Best, Mario
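If it helps, here is a minimal sketch of the encoding Mario describes: using the raw height z directly as an input channel alongside depth and intensity, instead of a full HHA-style encoding. The function and array names are illustrative, not from the repo:

```python
import numpy as np

def make_lidar_channels(depth_img, intensity_img, z_img):
    """Stack per-pixel depth, pulse intensity, and raw height z
    into an HxWx3 LiDAR input tensor.

    A guess at the simple encoding described above (raw z instead of
    HHA); the inputs are assumed to be already-rasterized HxW maps.
    """
    return np.stack([depth_img, intensity_img, z_img], axis=-1)
```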
Hi, how did you guys project the LiDAR data into the RGB camera frame? It's a bit opaque in the paper as well as in the supplement.
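For anyone else stuck on this: the standard recipe is to transform the points into the camera frame with the LiDAR-to-camera extrinsic and then apply the pinhole intrinsics. A minimal sketch, where `K` and `T_cam_lidar` are hypothetical placeholders, not calibration values from this dataset:

```python
import numpy as np

# Placeholder calibration -- substitute the dataset's actual matrices.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera intrinsics
T_cam_lidar = np.eye(4)                 # LiDAR -> camera extrinsic (4x4)

def project_lidar_to_image(points_xyz, K, T_cam_lidar, img_w=640, img_h=480):
    """Project Nx3 LiDAR points into pixel coordinates.

    Returns (uv, depth, mask): Nx2 pixel coordinates, camera-frame depth,
    and a boolean mask of points in front of the camera and inside the image.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # into camera frame
    depth = pts_cam[:, 2]
    in_front = depth > 1e-6
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)    # perspective divide
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv, depth, in_front & inside
```

With an identity extrinsic, a point straight ahead at 10 m lands at the principal point (320, 240). Splatting `depth` (and intensity, z) at the masked `uv` locations gives the sparse LiDAR images used as network input.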
I also did not figure that out and would like to know, too. I spent a day trying to tweak the dataset viewer so that it projects the LiDAR point cloud onto the image frames "on the fly" instead of loading the precomputed visualizations from disk, but I could not get it working and gave up.
In the supplemental material, you guys mentioned height as part of the LiDAR input. Do you guys have a script to calculate this from the LiDAR data? Currently, it seems like only depth and pulse intensity are provided. Thanks!