agirbau opened this issue 2 years ago
Hello!
First, thanks for your great work, it's amazing! I was wandering around the repo and found this piece of code in mot_evaluator.py, which seems to change the thresholds and buffers depending on the sequence being evaluated. Is this correct? I was wondering whether these numbers follow some pattern across the different sequences (e.g. MOT-05 and 06 being sequences with a lot of camera motion). Thanks!
https://github.com/ifzhang/ByteTrack/blob/4829efd6ffc2882fd4e0fe7ea5457e90e36e898e/yolox/evaluators/mot_evaluator.py#L138-L160
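For context, the linked code follows a pattern like the minimal sketch below: before building the tracker for a given sequence, the evaluator overrides args.track_buffer / args.track_thresh based on the sequence name. The sequence names and numeric values here are illustrative placeholders, not the exact numbers from the repository (see the permalink above for those); BYTETracker and its (args, frame_rate) constructor are from the ByteTrack codebase.

```python
# Hedged sketch of the per-sequence hyperparameter overrides in
# yolox/evaluators/mot_evaluator.py. The sequence names and numeric
# values below are illustrative placeholders, NOT the repository's
# exact numbers -- see the permalink above for the real ones.
from yolox.tracker.byte_tracker import BYTETracker


def build_tracker_for_sequence(args, video_name):
    """Apply per-sequence overrides, then construct a fresh tracker."""
    # A smaller buffer drops lost tracks sooner; a larger one keeps
    # them alive longer through occlusions (placeholder values).
    if video_name in ("MOT17-05-FRCNN", "MOT17-06-FRCNN"):
        args.track_buffer = 14
    elif video_name in ("MOT17-13-FRCNN", "MOT17-14-FRCNN"):
        args.track_buffer = 25
    else:
        args.track_buffer = 30

    # A higher detection threshold means only high-confidence boxes
    # start new tracks (placeholder sequence name and value).
    if video_name == "MOT17-01-FRCNN":
        args.track_thresh = 0.65

    return BYTETracker(args, frame_rate=30)
```

If these values were indeed tuned per test sequence, that would explain why they don't follow an obvious pattern.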
Hello @agirbau, have you figured out why different track_buffer and thresh values are used for different sequences? I'm curious about it too. Please let me know if you find out the answer. Thank you!
My guess is that they tested several thresholds/hyperparameters to get the best results on the test sequences.
This CVPR paper seems to suggest the same in Table 3: "... since ByteTrack uses different thresholds for different sequences of the test set and interpolation we recomputed their results...".
Thanks for the reminder.