mikel-brostrom / boxmot

BoxMOT: pluggable SOTA tracking modules for segmentation, object detection and pose estimation models
GNU Affero General Public License v3.0

Using different Reid models to evaluate DeepOcsort, but getting the same evaluation results #1573

Closed shixinwang00 closed 3 months ago

shixinwang00 commented 3 months ago

Search before asking

Question

Hi, I'm using different ReID models to evaluate DeepOCSORT, but I'm getting the same evaluation results. The commands are as follows:

OSNet:
python3 tracking/val.py --benchmark MOT17-mini --yolo-model yolov8s.pt --reid-model tracking/weights/osnet_x0_25_msmt17.pt --tracking-method deepocsort --verbose --source ./assets/MOT17-mini/train
"HOTA": 39.228, "MOTA": 20.755, "IDF1": 36.122

CLIP:
python3 tracking/val.py --benchmark MOT17-mini --yolo-model yolov8s.pt --reid-model tracking/weights/clip_market1501.pt --tracking-method deepocsort --verbose --source ./assets/MOT17-mini/train
"HOTA": 39.228, "MOTA": 20.755, "IDF1": 36.122

mikel-brostrom commented 3 months ago

MOT17-mini comprises 2 sequences with very few frames each, so the chances of the ReID model having any effect are minimal. Try the full MOT17; there you will almost certainly see differences.
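Once both runs on the full benchmark finish, the reported metric dictionaries can be compared programmatically to confirm whether the ReID swap actually changed anything. A minimal sketch, assuming the metrics are available as plain dicts (the `metrics_differ` helper is illustrative, not part of boxmot; the values are the ones reported in this issue):

```python
def metrics_differ(run_a: dict, run_b: dict, tol: float = 1e-6) -> bool:
    """Return True if any metric shared by both runs differs by more than tol."""
    shared = run_a.keys() & run_b.keys()
    return any(abs(run_a[k] - run_b[k]) > tol for k in shared)

# Metrics reported above for the two ReID models on MOT17-mini.
osnet = {"HOTA": 39.228, "MOTA": 20.755, "IDF1": 36.122}
clip = {"HOTA": 39.228, "MOTA": 20.755, "IDF1": 36.122}

print(metrics_differ(osnet, clip))  # identical results on MOT17-mini -> False
```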

shixinwang00 commented 3 months ago

I re-evaluated on the MOT17, and the results obtained using the two ReID models are indeed different. Thank you very much!

yaoshanliang commented 3 months ago

Same question. I tried each ReID model on my custom dataset of 20 clips. However, the results are the same across different ReID models. Are there any potential issues with my dataset?

--reid-model clip_market1501.pt
--reid-model osnet_x0_25_msmt17.pt
mikel-brostrom commented 3 months ago

Could be that your dataset is too "easy" for the ReID model to make any difference.