xingyul / flownet3d

FlowNet3D: Learning Scene Flow in 3D Point Clouds (CVPR 2019)
MIT License

How do you get the point clouds of second frame for KITTI? #28

Open sulashi opened 4 years ago

sulashi commented 4 years ago

Thank you for your great work and code release! I found that the disparity in the KITTI scene flow dataset is given with respect to frame one, so the disparity of pixels in the second frame cannot be obtained directly. How do you get them: from the raw lidar data, or by some other method?

If you use the raw lidar data, how do you obtain such dense depth values? If you did not use the raw lidar data, why do you use 150 frames rather than 200?
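For context, lifting one of the KITTI scene flow disparity maps to a 3D point cloud is usually done with the stereo relation Z = f·B/d followed by back-projection. Below is a minimal sketch of that conversion, not necessarily the preprocessing used in this repo; the intrinsics and baseline shown are illustrative placeholders and would in practice be read from the dataset's calibration files.

```python
# Minimal sketch: lifting a dense KITTI disparity map to a 3D point cloud.
# The calibration values below (focal length, principal point, baseline) are
# placeholders; in practice they come from the per-sequence calib files.
import numpy as np

def disparity_to_point_cloud(disparity, fx=721.5, fy=721.5,
                             cx=609.6, cy=172.9, baseline=0.54):
    """Convert a dense disparity map (H x W, in pixels) to an (N, 3) point cloud.

    fx, fy, cx, cy: camera intrinsics in pixels; baseline in metres.
    Pixels with non-positive disparity are treated as invalid and dropped.
    """
    h, w = disparity.shape
    valid = disparity > 0
    # Depth from the standard stereo relation Z = fx * B / d.
    z = (fx * baseline) / disparity[valid]
    # Back-project pixel coordinates into the camera frame.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    u, v = us[valid], vs[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) points in camera coordinates
```

Note that both disparity maps in the KITTI scene flow ground truth are indexed by the first frame's pixel grid, which is exactly why the second frame's point cloud cannot be recovered from them directly.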