Closed: HanGuangXin closed this issue 2 years ago
Thank you very much for finding the bug! I tried to add Re-ID features in ByteTrack and the results dropped a lot. I'd be very pleased if you could make a PR!
Hi, I'm thinking about separating the `track_id` annotations from the variable `targets`: set `targets` to `torch.float16` as in the current code, but keep `track_id` as `torch.float32`.
If you think that's OK, I should be able to make a PR tonight; more details will be provided there.
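A minimal sketch of what I mean, using illustrative variable names and the tensor layout from the original report (not the actual trainer code):

```python
import torch

# Illustrative only: targets has shape [batchsize, 1000, 6]
# with columns [class_id, tlwh (4 values), track_id].
targets = torch.zeros(2, 1000, 6)
targets[0, 0, 5] = 2049.0  # an id large enough to break in FP16

# Split track_id off BEFORE the half-precision cast.
track_ids = targets[..., 5].clone()                # stays torch.float32
det_targets = targets[..., :5].to(torch.float16)   # class_id + tlwh only

print(track_ids[0, 0].item())                      # 2049.0 -- id preserved
print(targets.to(torch.float16)[0, 0, 5].item())   # 2048.0 -- the buggy cast
```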
Sure, thank you very much!
@ifzhang The PR is made. Could you check and merge it?
I have merged the PR, thanks very much!
@ifzhang Hi! Maybe you can re-run the experiments with Re-ID now, if the old results were affected by this bug. I'm looking forward to seeing another awesome work from you!
Hi, I found an underlying bug when the model is trained with FP16. In `yolox/core/trainer.py`, when we get `targets` with shape `[batchsize, 1000, class_id + tlwh + track_id]`, the `track_id` is correct. But when `targets` is converted to FP16, the `track_id` loses precision, resulting in wrong labels for Re-ID. Honestly, this bug is not easy to find.

Although it does not affect ByteTrack's performance, which uses only the detection annotations, it severely harms Re-ID performance when combining ByteTrack with a Re-ID module in the JDE paradigm.
I can make a PR if you think it is needed :)
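For anyone who wants to see the precision loss directly, here is a minimal standalone reproduction (plain PyTorch, not the trainer code itself):

```python
import torch

# FP16 has a 10-bit mantissa, so integers above 2048 cannot all be
# represented exactly and get rounded to the nearest representable value.
track_id = torch.tensor([2049.0])            # torch.float32
print(track_id.to(torch.float16).item())     # 2048.0 -> wrong Re-ID label
print(track_id.item())                       # 2049.0 -> original id intact
```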