s-duuu / pred_fusion

Object Trajectory Prediction using ROS, YOLOv5, PointPillars, CRAT-Pred

I can't find a place to train PointPillars now. #7

Open FYYLHH opened 1 year ago

FYYLHH commented 1 year ago

Hello, good afternoon. I would like to ask: if I want to retrain PointPillars, where should the training happen? It seems possible in src/src, but what is the purpose of the pred_fusion folder? I can't find a place to train PointPillars now. Could you help me clarify the path? Thank you very much.

s-duuu commented 1 year ago

Sorry for the late reply. This repository already includes a PointPillars model pretrained on the KITTI dataset (pillars.pth). If you want to retrain a new model, it would be better to visit the official GitHub repository of PointPillars. Thanks.

s-duuu commented 1 year ago

Please refer to the training instructions in the following repository.

https://github.com/zhulf0804/PointPillars

FYYLHH commented 1 year ago

Okay, I roughly understand what you mean, but please allow me to ask one more question. What are the evaluation metrics for the fusion of camera and LiDAR detection boxes in this project?

s-duuu commented 1 year ago

You mean an evaluation criterion such as MSE? If so, I evaluated the sensor fusion algorithm using MSE for the x and y coordinates respectively. Values such as the IoU threshold assigned in the launch file were the ones that performed best in my simulation environment; you can adjust them to appropriate values for your own test environment. Also, the ground-truth position values of objects were extracted from the CarMaker simulator.

Thanks.
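The per-coordinate MSE described above can be sketched roughly as follows. This is a minimal illustration, not code from this repository; the function name `per_axis_mse` and the example positions are made up:

```python
import numpy as np

def per_axis_mse(pred, gt):
    """Mean squared error computed separately for x and y.

    pred, gt: arrays of shape (N, 2) holding fused and ground-truth
    object positions (x, y) for N matched detections.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    sq_err = (pred - gt) ** 2        # element-wise squared error
    return sq_err[:, 0].mean(), sq_err[:, 1].mean()

# Example with made-up fused positions vs. simulator ground truth:
mse_x, mse_y = per_axis_mse([[1.0, 2.0], [3.0, 4.0]],
                            [[1.1, 2.2], [2.9, 3.8]])
```

In this project the ground-truth side would come from CarMaker, and the predicted side from the fused detection output.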

FYYLHH commented 1 year ago

Yes, I am referring to evaluation criteria such as MSE. I would like to ask whether it is possible to generate visual representations such as graphs; it would be great if the evaluation metrics could be presented in an intuitive graphical form. In addition, the PointPillars detection on point clouds mentioned earlier is not very effective when using a 16-channel LiDAR: only about one third of the cars in a frame of point cloud get corresponding detection boxes, and those 3D detection boxes do not completely enclose the cars' point clouds. How should I handle this issue? Thanks.

s-duuu commented 1 year ago
  1. Representation in graphical form: you can handle this with the matplotlib library. Since the evaluation criterion can be replaced with another one instead of MSE, I didn't include graph-visualization code. You can write a Python script using matplotlib if you have your own ground-truth dataset.

  2. 3D detection performance issue: I didn't fully understand your issue. Do you mean that the PointPillars model does not perform well when using a 16-channel LiDAR such as the VLP-16? Additional figures about the issue might help me understand.

Thanks.
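A matplotlib script along the lines suggested in point 1 could look like this. The per-frame MSE values here are randomly generated stand-ins; in practice they would come from comparing fused detections against your ground-truth dataset:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical per-frame MSE values; replace with real measurements.
rng = np.random.default_rng(0)
frames = np.arange(50)
mse_x = np.abs(rng.normal(0.2, 0.05, 50))
mse_y = np.abs(rng.normal(0.3, 0.05, 50))

plt.plot(frames, mse_x, label="MSE x")
plt.plot(frames, mse_y, label="MSE y")
plt.xlabel("frame")
plt.ylabel("MSE [m^2]")
plt.title("Sensor fusion position error per frame")
plt.legend()
plt.savefig("fusion_mse.png")
```

Swapping MSE for another criterion only changes how the two arrays are computed; the plotting part stays the same.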

FYYLHH commented 1 year ago

As shown in the picture, the LiDAR point cloud is not detected well. When playing a rosbag, I do get individual detection boxes, but they cannot completely envelop the point cloud belonging to the car. Overall, the issues are:

  1. If there are 10 car point clouds in a frame, there will be only 3-5 detection boxes.
  2. Some detection boxes cannot wrap the point clouds well; most of them seem to place an envelope box around the car's center point at random.

So I think the trained PointPillars weights are not good enough, and I hope to retrain PointPillars. But if I train a weight myself and substitute it for the weight in your model, I get an error. I hope you can provide a better solution or any suggestions. In addition, I would like to build a point cloud dataset to retrain PointPillars, but I have not found any relevant tutorial references. If you have a suitable one, could you please share it? Thanks.

[image: detection boxes failing to enclose the car point clouds]

s-duuu commented 1 year ago

Thanks for the detailed explanation!

A different environment can affect the performance of a DL model, so retraining seems like the most effective way. Unfortunately, I only trained my model on the KITTI dataset. For dataset generation or detection model issues, it would be better to open an issue in the PointPillars repository or search for PointPillars instructions, rather than in this repository.

P.S. pillars_detect.py in this repository includes preprocessing, model input, and postprocessing. I have seen that some kinds of LiDAR data need to be normalized differently from the way this repository does it. Regarding this, please check other instructions about the PointPillars model.

Thanks.
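One common normalization mismatch of the kind mentioned in the P.S. is the reflectance channel: KITTI stores it in [0, 1], while many Velodyne ROS drivers publish 0-255, so a KITTI-trained model sees out-of-range inputs. A hedged sketch of a guard for this (the function `normalize_intensity` is illustrative, not part of pillars_detect.py):

```python
import numpy as np

def normalize_intensity(points):
    """Scale the reflectance channel of an (N, 4) x/y/z/intensity
    array into [0, 1].

    Assumption: if any intensity exceeds 1.0, the cloud is taken to
    use the 0-255 convention and is divided by 255; otherwise it is
    assumed to already match the KITTI-style [0, 1] range.
    """
    pts = np.asarray(points, dtype=np.float32).copy()
    if pts[:, 3].max() > 1.0:
        pts[:, 3] /= 255.0
    return pts
```

Whether this exact scaling applies depends on the LiDAR driver in use; checking the intensity range of a few frames from the rosbag is the safest first step.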