Closed KevinDu1 closed 4 years ago
If you want to test a custom dataset using a checkpoint that is trained on KITTI dataset, you need to do a conversion from your custom coordinates to KITTI camera coordinates, where the model is trained. In other words, you might need to do transformation (rotation, translation, etc.) to make the axis of your point cloud align with the KITTI camera frame.
Our model should be robust to translation offsets. So a quick test I would suggest is: 1. convert your point cloud coordinates so that the x-axis points from the vehicle's left to its right, the y-axis from sky to ground, and the z-axis from the vehicle front into the distance. 2. replace "dataset.get_cam_points_in_image_with_rgb" in run.py with your own dataset function. Reading run.py and kitti_dataset.py should be helpful.
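For step 1, the axis remapping can be a simple permutation with sign flips, assuming your lidar uses a common Velodyne-style frame (x forward, y left, z up); if your sensor frame differs, adjust the mapping accordingly. A minimal sketch:

```python
import numpy as np

def velo_to_kitti_cam(points):
    """Remap points from a Velodyne-style lidar frame
    (x forward, y left, z up) to the KITTI camera frame
    (x: left-to-right, y: sky-to-ground, z: forward).

    points: (N, 3) array of xyz coordinates.
    Returns an (N, 3) array in the camera frame.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # cam_x = -lidar_y (left becomes right), cam_y = -lidar_z (up becomes
    # down), cam_z = lidar_x (forward stays forward).
    return np.stack([-y, -z, x], axis=1)
```

If your lidar is also offset from where KITTI's camera sits, you can additionally subtract a translation vector, but per the comment above the model should tolerate that offset.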
Thank you for your reply! I also have my own camera data. If I use the pretrained model, can I use my camera information directly, without doing the transformation between my lidar and the KITTI camera?
The model actually does not use image data for prediction. However, the current code uses the image for visualization, i.e. getting RGB colors to paint the points, drawing bounding boxes on the image, checking whether a bounding box is inside the image, etc. You might need to go through run.py and remove those dependencies.
I have a dataset including lidar and camera data, and I want to add labels to it using your great work; in other words, I want to run this model on my dataset. How do I do that?