Open FYYLHH opened 1 year ago
Sorry for the late reply. This repository already includes a PointPillars model pretrained on the KITTI dataset (pillars.pth). If you want to retrain a new model, it would be better to visit the official GitHub repository of PointPillars. Thanks.
Please refer to the training instructions in the following repository.
Okay, I roughly understand what you mean, but please allow me to ask one more question: what evaluation metrics does this project use for the fusion of camera and LiDAR detection boxes?
You mean an evaluation criterion such as MSE? If so, I evaluated the sensor fusion algorithm using MSE for the x and y coordinates separately. Values such as the IoU threshold assigned in the launch file are the ones that performed best in my simulation environment; you can tune them to values appropriate for your test environment. Also, the ground-truth positions of objects were extracted from the CarMaker simulator.
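For reference, the per-axis MSE described above can be computed in a few lines. This is a minimal sketch with made-up sample numbers, not the repository's actual evaluation script; in practice `gt` would come from CarMaker and `pred` from the fusion output:

```python
import numpy as np

# Made-up sample data: rows are (x, y) positions of one object over frames.
gt = np.array([[10.0, 2.0], [20.0, -1.0], [30.0, 0.5]])    # ground truth (CarMaker)
pred = np.array([[10.2, 2.1], [19.7, -0.8], [30.1, 0.4]])  # fused detections

# MSE computed separately for the x and y coordinates.
mse_x = np.mean((pred[:, 0] - gt[:, 0]) ** 2)
mse_y = np.mean((pred[:, 1] - gt[:, 1]) ** 2)
print(mse_x, mse_y)  # -> 0.04666... 0.02
```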
Thanks.
Yes, I am referring to evaluation criteria such as MSE. I would like to ask whether it is possible to generate visual representations such as graphs; it would be great if the evaluation metrics could be presented in an intuitive graphical form. In addition, the point cloud detection mentioned earlier is not very effective when using a 16-line LiDAR: only about one third of the cars in a given point cloud frame get corresponding detection boxes, and those 3D detection boxes do not completely enclose the cars' point clouds. I also want to ask how to handle this issue. Thanks.
**Representation in graphical form**
You can deal with this using the matplotlib library. Since the evaluation criterion can be replaced with something other than MSE, I didn't include graph-visualization code. You can write a Python script using matplotlib if you have your own ground-truth dataset.
**3D detection performance issue**
I didn't fully understand your issue. Do you mean that the PointPillars model does not perform well when using a 16-channel LiDAR such as the VLP-16? Additional figures about the issue might help me understand.
Thanks.
As shown in the picture, the LiDAR point cloud is not detected well. When playing the ROS bag, I do get individual detection boxes, but they do not completely envelop the point cloud belonging to the car. So overall, those are the issues.
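One way to quantify "the box does not envelop the car's points" is to measure the fraction of the car's points that fall inside each predicted box. This is a simplified sketch (assumed axis-aligned box, yaw ignored; a real check would rotate points into the box frame first), not part of this repository:

```python
import numpy as np

def fraction_inside(points, center, size):
    """Fraction of (N, 3) points inside an axis-aligned 3D box.

    points: x, y, z coordinates of the object's cluster
    center: box center (x, y, z); size: full extents (l, w, h)
    Yaw is ignored for simplicity.
    """
    half = np.asarray(size) / 2.0
    inside = np.all(np.abs(points - np.asarray(center)) <= half, axis=1)
    return inside.mean()

# Made-up cluster: 8 points, 6 of which lie inside a 4 x 2 x 1.5 m box.
pts = np.array([[0, 0, 0], [1, 0.5, 0.3], [-1, -0.5, -0.2], [1.9, 0.9, 0.7],
                [0.5, 0, 0], [-1.5, 0.3, 0.1], [3.0, 0, 0], [0, 2.0, 0]],
               dtype=float)
print(fraction_inside(pts, (0.0, 0.0, 0.0), (4.0, 2.0, 1.5)))  # -> 0.75
```

A low fraction across many frames would confirm that the boxes are systematically too small or misplaced for your sensor, which points toward retraining or adjusting preprocessing.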
Thanks for the detailed explanation!
A different environment can affect the performance of a DL model, so retraining seems like the most effective remedy. Unfortunately, I only trained my model on the KITTI dataset. For dataset-generation or detection-model issues, it would be better to open an issue at the PointPillars repository or search for PointPillars instructions, rather than this repository.
P.S. pillars_detect.py in this repository includes preprocessing, model input, and postprocessing. I have seen that some kinds of LiDAR data need to be normalized differently from the way this repository does it. Regarding this, please check other instructions for the PointPillars model.
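One common normalization mismatch worth checking (this is a general assumption about Velodyne data, not a description of pillars_detect.py): KITTI stores reflectance in the range [0, 1], while many Velodyne ROS drivers publish intensity in [0, 255]. Feeding raw VLP-16 intensities into a KITTI-trained model skews the input distribution. A minimal sketch of a guard against this:

```python
import numpy as np

def normalize_intensity(cloud):
    """cloud: (N, 4) array of x, y, z, intensity.

    Heuristically rescales 0-255 intensities to the KITTI-like [0, 1]
    range expected by a model trained on KITTI reflectance values.
    """
    out = cloud.copy()
    if out[:, 3].max() > 1.0:          # looks like raw 0-255 driver output
        out[:, 3] = out[:, 3] / 255.0  # rescale to [0, 1]
    return out

raw = np.array([[1.0, 2.0, 0.0, 200.0], [3.0, -1.0, 0.2, 50.0]])
print(normalize_intensity(raw)[:, 3])
```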
Thanks.
Hello, good afternoon. I would like to ask: if I want to retrain PointPillars, where should the training be done? It looks like it might be possible in src/src, but what is the purpose of the pred_fusion folder? I can't find a place to train PointPillars right now. Could you help me clarify the path? Thank you very much.