s-duuu / pred_fusion

Object Trajectory Prediction using ROS, YOLOv5, PointPillars, CRAT-Pred

Some personal questions about the model #2

Open FYYLHH opened 1 year ago

FYYLHH commented 1 year ago

First of all, thank you very much for replying to me amidst your busy schedule. Secondly, I have the following questions:

  1. I have reproduced the fusion-recognition code using the KITTI dataset and changed the topic names. The display may not have performed as well as the test bag you provided because the intrinsic and extrinsic parameters were not modified. Besides the parameters in fusion.py, are there any other files that need to be modified for the intrinsic and extrinsic parameters?
  2. If I connect my own camera and LiDAR, do I also need to modify the calibration file?
  3. What are the execution commands for the tracking and fusion-prediction parts of the code? After executing the launch file, I can only see fusion recognition, without the tracking process. Could you explain this?

Thank you very much for the help your repository has provided me, and I am eagerly looking forward to your reply. Best wishes.

s-duuu commented 1 year ago

Thanks for the questions.

  1. I found that the rosparam in fusion.py is not connected to the actual code; sorry about that. In line 250 of fusion.py, the IoU threshold is hard-coded as the constant 0.2. You can modify this.
  2. Yes, you should modify the intrinsic matrix (3x3) of the camera and the extrinsic matrix (3x4) between the LiDAR and the camera. You can change the calibration result in lines 202 and 203 of fusion.py.
  3. Do you mean that rviz is not executed? Then you can manually run rviz in a terminal and set the topics in the GUI. The integrated.launch file includes the YOLO node, PointPillars node, fusion node, tracker node, and prediction node (plus rviz). If you are still stuck on the problem after manually running rviz, please send me an additional screenshot of it.
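For context on point 1, the IoU-based filtering can be sketched as below, assuming axis-aligned image-plane boxes in (x1, y1, x2, y2) form; `iou_2d` and `is_confirmed` are illustrative helper names, not functions defined in fusion.py:

```python
IOU_THRESHOLD = 0.2  # the constant hard-coded at line 250 of fusion.py

def iou_2d(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_confirmed(pp_box, yolo_boxes):
    """Keep a PointPillars box only if some YOLO box overlaps it enough."""
    return any(iou_2d(pp_box, yb) >= IOU_THRESHOLD for yb in yolo_boxes)
```

Raising the threshold makes the filter stricter (more detections rejected); lowering it lets more PointPillars boxes through.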

Actually, this repository is an initial version of my project, so other users might find it uncomfortable to use. I am really sorry for that. I'll make the code more user-friendly as soon as possible. Thanks.

s-duuu commented 1 year ago
  1. You can change the IoU threshold; 0.2 is just for the simulation environment I used. You should find the optimal IoU threshold considering your test environment and your detection models.
  2. In line 203 of fusion.py, there is a variable named extrinsic_matrix. This is the rotation-and-translation matrix between the LiDAR and the camera sensor. With this matrix, you can project LiDAR points into the image plane. You can use other packages to calculate the matrix, and simply assign the result to this variable using numpy.
  3. You should change the weights and ckpt paths in the integrated.launch file. Almost all parameters, such as paths and thresholds, are in integrated.launch, so please check the launch file carefully.
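The projection in point 2 can be sketched as follows. The calibration numbers here are placeholder values in a typical KITTI-like range, not the actual matrices from lines 202-203 of fusion.py; substitute your own calibration results:

```python
import numpy as np

# Placeholder calibration -- replace with your own values.
intrinsic = np.array([[721.5,   0.0, 609.6],
                      [  0.0, 721.5, 172.9],
                      [  0.0,   0.0,   1.0]])   # 3x3 camera matrix

# 3x4 [R|t] between LiDAR and camera (identity rotation, zero translation
# here purely for illustration).
extrinsic = np.hstack([np.eye(3), np.zeros((3, 1))])

def project_lidar_point(pt_lidar):
    """Project a 3D LiDAR point (x, y, z) into pixel coordinates (u, v)."""
    pt_h = np.append(pt_lidar, 1.0)   # homogeneous coordinates (x, y, z, 1)
    cam = extrinsic @ pt_h            # LiDAR frame -> camera frame
    uvw = intrinsic @ cam             # camera frame -> image plane
    return uvw[:2] / uvw[2]           # perspective divide
```

With the identity extrinsic above, a point 10 m straight ahead projects onto the principal point (609.6, 172.9); with a real calibration, the extrinsic rotates and translates the point into the camera frame first.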

Thanks.

s-duuu commented 1 year ago
  1. Firstly, our system uses just two bounding boxes. The three boxes you mentioned might be the YOLO, PointPillars, and fusion boxes. Some sensor fusion algorithms produce an averaged bounding box, in which case there can be three boxes. In this system, however, the sensor fusion result is the same as the PointPillars detection result. I found the accuracy of PointPillars 3D detection to be significant, so I only wanted to filter out false positive detections using sensor fusion (by calculating IoU).
  2. You can build a ground-truth dataset and evaluate the sensor fusion using the MSE between the fusion result and the ground truth. However, as I mentioned, this sensor fusion algorithm is only for filtering out false positive detections, so you can also evaluate it by calculating the false positive decrease rate.
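The false positive decrease rate mentioned in point 2 can be computed from two counts; a minimal sketch (the function name `fp_decrease_rate` is hypothetical, not part of the repository):

```python
def fp_decrease_rate(fp_raw, fp_fused):
    """Relative drop in false positives after IoU-based fusion filtering.

    fp_raw:   false positives among raw PointPillars detections
    fp_fused: false positives remaining after the fusion filter
    """
    if fp_raw == 0:
        return 0.0  # nothing to filter; define the rate as zero
    return (fp_raw - fp_fused) / fp_raw
```

For example, if fusion reduces 40 raw false positives to 10 against your ground truth, the decrease rate is 0.75.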

Thanks.

FYYLHH commented 1 year ago

Thank you for your response. I have been following your repository, and if there are any updates, I hope you can write some explanatory documents, which would be very helpful to us. In addition, regarding my own question: if I want to add a fusion detection box myself, do I only need to change some of the code in fusion.py, or do I also need to change the launch file? Thank you.

s-duuu commented 1 year ago

You can just change the fusion.py source code.

SHIELDgo commented 1 year ago

> First of all, thank you very much for replying to me amidst your busy schedule. Secondly, I have the following questions. 1. I reproduced the fusion-recognition code using the KITTI dataset and changed the topics. The display may not have performed as well as the test bag you provided because the intrinsic and extrinsic parameters were not modified. So I would like to ask whether, besides the parameters in fusion.py, there are other files that need to be modified for the intrinsic and extrinsic parameters. 2. If I connect my own camera and LiDAR, do I also need to modify the calibration file? 3. What are the execution commands for the tracking and fusion-prediction parts of the code? After executing the launch file, I can only see fusion recognition, without the tracking process. Could you explain this? Thank you very much for the help your repository has provided me, and I am eagerly looking forward to your reply. Best wishes.

Hello, have you had any success with the KITTI dataset? I am currently trying to use KITTI as well, but I am having some issues. Could you provide your intrinsic and extrinsic parameters for the KITTI dataset? Thanks.

s-duuu commented 1 year ago

Could you please describe the specific situation you are running into? Then I can suggest an appropriate configuration as a solution for the issue.

Thanks.