open-mmlab / OpenPCDet

OpenPCDet Toolbox for LiDAR-based 3D Object Detection.
Apache License 2.0

Using OpenPCDet with your own data #1601

Open adri1cc opened 2 months ago

adri1cc commented 2 months ago

Hi, I'm new to 3D object detection and I discovered OpenPCDet a few days ago; it really caught my attention.

I tried to use it with other LiDAR point clouds, but I can't see how I should proceed. I want to use an already trained model, such as PV-RCNN trained on the KITTI dataset, and test it on my own point clouds (.npy files) to detect vehicles and pedestrians.

At first I thought using demo.py on my files would work. Although I can visualize my files with it, it does not detect anything in my point clouds, even though some cars, for example, should be easy to detect compared to some of the objects demo.py finds in the KITTI test set. But maybe demo.py only works with the KITTI dataset I trained the model with?

I tried to work with the custom_dataset document, but it seems to be addressed to people who want to train a model on custom datasets, not only test on them.

Could anyone give me some advice on how to proceed? How do I make demo.py detect objects in my point clouds?
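Before anything else, it is worth checking that the .npy file matches what demo.py expects. A minimal sketch, assuming the KITTI-style (N, 4) layout with intensity in [0, 1] (the random array below is only a stand-in for `np.load("my_data.npy")`):

```python
import numpy as np

# Stand-in for: points = np.load("my_data.npy")
points = np.random.rand(1000, 4).astype(np.float32)

# demo.py loads .npy files as-is, so the array should already be
# (N, 4): x, y, z, intensity (assumption: same layout as KITTI .bin files).
assert points.ndim == 2 and points.shape[1] == 4

# KITTI intensity is in [0, 1]; rescale if your sensor reports raw counts.
if points[:, 3].max() > 1.0:
    points[:, 3] /= points[:, 3].max()
```

If the shape or intensity range is off, a KITTI-pretrained model will usually produce nothing rather than crash.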

adri1cc commented 2 months ago

Update

I managed to use demo.py to detect cars in my point cloud. I run the command: `python demo.py --cfg_file cfgs/custom_models/second.yaml --ckpt ../checkpoints/pv_rcnn_8369.pth --data_path ../data/custom/points/my_data.npy`. The custom_models/second.yaml config file is the only one that works with my data; if I use custom_models/pv_rcnn.yaml or kitti_models/pv_rcnn.yaml, I get the visualization but cars won't be detected.

The detection isn't perfect though. Half of my cars aren't detected, and the cars that are detected have multiple detection boxes on them. I don't know if I can correct this by modifying config parameters or if it is inherent to the model used.
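Multiple overlapping boxes on one car are often low-confidence duplicates, so filtering predictions by score can already help. A hedged sketch (in demo.py the arrays come from `pred_dicts[0]['pred_boxes']`, `'pred_scores']` and `'pred_labels']` as torch tensors, so call `.cpu().numpy()` first; the values and the 0.5 threshold below are made up):

```python
import numpy as np

def filter_predictions(boxes, scores, labels, score_thresh=0.5):
    """Keep only detections whose confidence is above a threshold."""
    keep = scores >= score_thresh
    return boxes[keep], scores[keep], labels[keep]

# Made-up detections: 4 boxes of shape (4, 7), their scores and class labels.
boxes = np.zeros((4, 7))
scores = np.array([0.9, 0.3, 0.6, 0.1])
labels = np.array([1, 1, 2, 1])

b, s, l = filter_predictions(boxes, scores, labels)
# Only the 0.9 and 0.6 detections survive the 0.5 threshold.
```

Raising the threshold trades missed detections for fewer duplicates; the remaining overlaps are normally handled by the NMS settings in the model config.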

L-Reichardt commented 2 months ago

Some points which I faced which might help:

  1. The KITTI dataset configuration sets the minimum x-range to 0, meaning inference only covers the front half of the point cloud (visualized in this issue). You might need to adjust the point cloud range in the dataset .yaml.
  2. Adjusting the point cloud range can itself cause issues, degrading the inference results of a pretrained network, as it now effectively sees double the information. I found much better results by inferring twice, rotating the point cloud by 180° for the second pass.
  3. Use a model pretrained on a dataset with 360° coverage, such as nuScenes.
  4. Train the model yourself with the correct point cloud range setting.

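Point 2 above (inferring twice with a 180° rotation) can be sketched roughly as follows, assuming OpenPCDet's (N, 7) box layout of x, y, z, dx, dy, dz, heading; merging the two resulting box sets (e.g. with NMS) is left out:

```python
import numpy as np

def rotate_points_180(points):
    """Rotate a point cloud 180 degrees around the z-axis (negate x and y).
    points: (N, 4) array of x, y, z, intensity."""
    rotated = points.copy()
    rotated[:, 0] *= -1.0
    rotated[:, 1] *= -1.0
    return rotated

def rotate_boxes_back(boxes):
    """Map boxes predicted on the rotated cloud back into the original frame.
    boxes: (N, 7) array of x, y, z, dx, dy, dz, heading."""
    out = boxes.copy()
    out[:, 0] *= -1.0
    out[:, 1] *= -1.0
    out[:, 6] += np.pi  # the heading also flips by 180 degrees
    return out

# Usage idea: run inference on `points` and on `rotate_points_180(points)`,
# apply rotate_boxes_back() to the second set of boxes, then concatenate.
```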
Murdism commented 1 month ago

I am facing the same issue. As @L-Reichardt mentioned, normalization and changing the range help, but detection accuracy remains generally very low. I am working with the Ouster OS1-64, whose intensity values are in 16-bit format. Normalizing these values to the range 0-1 causes a significant degradation in detection quality. Some people suggest using a pretrained model with zero intensity values instead. Another option is to retrain the model on your dataset.
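Both options mentioned for the intensity channel can be sketched like this (using 65535 as the 16-bit maximum is an assumption; the actual signal range may differ per Ouster sensor mode):

```python
import numpy as np

# Stand-in point cloud with raw 16-bit intensity values in column 3.
points = np.random.rand(100, 4).astype(np.float32)
points[:, 3] = np.random.randint(0, 65536, size=100)

# Option 1: rescale raw 16-bit intensities into [0, 1].
points_norm = points.copy()
points_norm[:, 3] /= 65535.0

# Option 2: zero the intensity channel entirely, matching the suggestion
# to feed a pretrained model zero intensity values.
points_zero = points.copy()
points_zero[:, 3] = 0.0
```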

github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 30 days with no activity.