yjl326 opened 5 months ago
Hello.
We suspect the problem may be caused by the box expansion at the end of our run_mtmc.py.
We added that code because the GT boxes in AIC19 are annotated much larger than the objects, while the detector regresses tight bounding boxes.
We cannot be sure since we do not have the AIC22 dataset, but it is possible that the GT boxes in AIC22 have been refined to be tight.
In that case the expanded predicted boxes would no longer match the GTs, which would explain the performance you are seeing.
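For reference, the expansion in question works roughly like the sketch below (a minimal illustration only; the scale factor, box format, and clipping are assumptions, so check them against the actual code in run_mtmc.py):

```python
import numpy as np

def expand_boxes(boxes, img_w, img_h, scale=1.2):
    """Scale (x, y, w, h) boxes around their centers so tight detector
    boxes better match the loosely annotated AIC19 GT. The scale value
    here is a placeholder, not the repository's actual setting."""
    x, y, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    cx, cy = x + w / 2, y + h / 2
    w, h = w * scale, h * scale
    x1 = np.clip(cx - w / 2, 0, img_w - 1)
    y1 = np.clip(cy - h / 2, 0, img_h - 1)
    x2 = np.clip(cx + w / 2, 0, img_w - 1)
    y2 = np.clip(cy + h / 2, 0, img_h - 1)
    return np.stack([x1, y1, x2 - x1, y2 - y1], axis=1)
```

Commenting out the call, or setting `scale=1.0`, restores the detector's tight boxes.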
We hope this helps you solve the problem.
Thank you for your attention.
Thank you very much for your reply. Based on it, I commented out the code you mentioned and re-ran run_mtmc.py, but unfortunately I got a lower IDF1 score.
Since I was using the AIC22 Track 1 dataset, the annotations may indeed have changed, so I went to the AICity Challenge website to download the AIC19 dataset. Unfortunately, the dataset request form there no longer offers datasets from before 2021.
I would be grateful if you could provide me with the ground_truth_validation.txt file of AIC19, or a download link for AIC19, to help me validate the model. My email is 2112215133@mail2.gdut.edu.cn. I would also like to ask whether your naming convention for video frame images is the same as the one in my image, and whether the frame images are numbered starting from 0001.
Hi, if you only made predictions for the S02 scene, the entries for the other scenes in ground_truth_validation are not needed for evaluating the prediction accuracy.
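For example, assuming the ground-truth lines start with a camera ID in the space-separated AIC MTMC format (`camera_id obj_id frame_id x y w h ...`), and assuming S02 corresponds to cameras c006-c009 (verify both against your copy of the dataset), the validation GT can be restricted to S02 with a short filter:

```python
# Hypothetical camera IDs for scene S02; confirm against the dataset's
# camera list before relying on this.
S02_CAMS = {6, 7, 8, 9}

with open("ground_truth_validation.txt") as fin, \
        open("ground_truth_S02.txt", "w") as fout:
    for line in fin:
        # Use line.split(",") instead if your file is comma-separated.
        cam_id = int(line.split()[0])
        if cam_id in S02_CAMS:
            fout.write(line)
```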
Hello authors, I have tested your code on the S02 scene of AIC22 with a yolov7-e6 detector and successfully obtained the mtmc_resnet50_ibn_a_gap.txt file. However, when I evaluated the tracking results using the eval.py file in the AIC22 eval folder, I got a very low IDF1 value: the IDP value is reasonable, but the IDR value is very unreasonable. I am confused about this and would like to ask how you evaluated the final tracking results. Looking forward to your reply!
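As a side note, a very low IDR next to a reasonable IDP usually means the ID false negatives dominate, i.e. many GT boxes are never matched to any prediction, which is exactly what happens when the GT file covers more frames or cameras than the predictions do. The identity metrics satisfy IDF1 = 2·IDTP / (2·IDTP + IDFP + IDFN), the harmonic mean of IDP and IDR, so a low IDR drags IDF1 down with it. Below is a minimal sanity-check sketch using the py-motmetrics package on toy data (chosen here for illustration; eval.py may compute the metrics differently):

```python
import motmetrics as mm
import numpy as np

# Toy single-camera check: two GT tracks vs. two predicted tracks.
acc = mm.MOTAccumulator(auto_id=True)
for _ in range(3):
    gt_ids = [1, 2]
    gt_boxes = np.array([[10, 10, 50, 100], [200, 10, 50, 100]])   # x, y, w, h
    hyp_ids = [7, 8]
    hyp_boxes = np.array([[12, 12, 48, 98], [198, 12, 52, 102]])
    # IoU-based distances; pairs with IoU below 0.5 stay unmatched.
    dists = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
    acc.update(gt_ids, hyp_ids, dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["idf1", "idp", "idr"], name="sanity")
print(summary)  # IDF1 equals 2*IDP*IDR / (IDP + IDR)
```

If IDR is the outlier on real data, a first thing to check is that the GT being evaluated contains only the scene you actually produced predictions for, as noted in the reply above.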