Closed AhmedKhaled945 closed 2 years ago
@AhmedKhaled945 Hi, thanks for your appreciation of our work :) Your request is a bit unclear. I think a trained YOLOX model alone is not enough for the four tracking tasks; for example, you would need to add a target prior for SOT and a mask branch for MOTS. Besides, Unicorn also adopts BYTE for association on MOT17. To evaluate Unicorn on MOT17, please use the following command: python3 tools/track.py -f exps/default/<exp_name> -c <ckpt path> -b 1 -d 1
If this does not solve your problem, feel free to give a more detailed description of what you need. Thanks.
Yes, I was talking about association. So this means it will give me the same output as ByteTrack for association, or did you perform any enhancements on it? Regards,
@AhmedKhaled945 Hi, yes, the output formats of Unicorn and ByteTrack are the same. On MOT17, we use the same association strategy as ByteTrack.
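For readers unfamiliar with BYTE, the core idea is a two-stage matching: associate high-score detections with existing tracks first, then give low-score detections a second chance. A minimal sketch of that idea (greedy IoU matching with hypothetical helper names; the real implementation lives in the ByteTrack code that tools/track.py calls):

```python
# Toy sketch of BYTE-style two-stage association (not the repo's actual code).
def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-9)

def byte_associate(tracks, dets, scores, high_thr=0.6, iou_thr=0.3):
    """Stage 1: match high-score dets to tracks; stage 2: retry with low-score dets."""
    high = [i for i, s in enumerate(scores) if s >= high_thr]
    low = [i for i, s in enumerate(scores) if s < high_thr]
    matches, unmatched_tracks = [], list(range(len(tracks)))
    for det_pool in (high, low):          # stage 1, then stage 2
        for d in det_pool:
            best, best_iou = -1, iou_thr
            for t in unmatched_tracks:
                ov = iou(tracks[t], dets[d])
                if ov > best_iou:
                    best, best_iou = t, ov
            if best >= 0:
                matches.append((best, d))
                unmatched_tracks.remove(best)
    return matches, unmatched_tracks
```

The second stage is what lets BYTE keep tracks alive through partial occlusions, when detection scores drop but the boxes still overlap the predicted track positions.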
Okay, so does this repo solve the problem of occlusion of large objects? If an object is occluded completely for more than 2 seconds, can it re-detect the object and associate it back to its original id from before the occlusion? Thanks in advance.
Hi, in my opinion, it may need appearance information to recover ids from long-term occlusion, because the lack of observations can make the Kalman filter unreliable. Unicorn also supports associating with a learned appearance embedding. The following command shows how to do this on MOT17.
python3 tools/track_omni.py -f exps/default/${exp_name} -c <ckpt path> -b 1 -d 1  # using the association strategy in QDTrack
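The essence of appearance-based association is matching detections to tracks by embedding similarity instead of (or in addition to) motion. QDTrack's actual bi-softmax matching is more involved; this hypothetical sketch only shows the core idea of greedy cosine-similarity matching:

```python
# Hedged sketch of appearance-embedding association (illustrative only;
# see tools/track_omni.py for the repo's QDTrack-style implementation).
import numpy as np

def cosine_similarity(track_embs, det_embs):
    """Pairwise cosine similarity between track and detection embeddings."""
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    return t @ d.T

def associate_by_appearance(track_embs, det_embs, sim_thr=0.5):
    """Greedily assign each detection to its most similar unused track."""
    sim = cosine_similarity(np.asarray(track_embs, dtype=float),
                            np.asarray(det_embs, dtype=float))
    matches, used = [], set()
    for d in np.argsort(-sim.max(axis=0)):       # most confident dets first
        t = int(sim[:, d].argmax())
        if sim[t, d] >= sim_thr and t not in used:
            matches.append((t, int(d)))
            used.add(t)
    return matches
```

Because embeddings survive while the target is invisible, a track's stored embedding can re-claim its id when the object reappears, which is exactly what motion-only association struggles with after long occlusions.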
Very promising work! Does this prior need to be trained on a dataset, or do we provide it simply using the bounding box from YOLOX? Sorry, I couldn't get enough information about the prior from the paper.
@trathpai Hi, thanks for your appreciation of our work. The target prior is generated by propagating the target map from the reference frame. The target map of the reference frame is obtained from the reference box (value 1 inside the box, 0 elsewhere). The propagation is implemented by multiplying the pixel-wise correspondence (HW×HW) with the target map (HW×1). The pixel-wise correspondence is learned from data.
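The propagation described above is a single matrix-vector product; a toy NumPy sketch (with a tiny feature map and a fake random correspondence standing in for the learned one):

```python
# Toy sketch of target-prior propagation: correspondence (HW x HW) times
# the reference-frame target map (HW x 1). The correspondence here is a
# random row-softmax matrix; in Unicorn it is learned from data.
import numpy as np

H = W = 4                                   # tiny feature map for illustration
target_map = np.zeros((H, W))
target_map[1:3, 1:3] = 1.0                  # reference box: 1 inside, 0 outside
target_map = target_map.reshape(H * W, 1)   # (HW, 1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(H * W, H * W))
corr = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # (HW, HW)

target_prior = corr @ target_map            # (HW, 1): prior for the current frame
print(target_prior.shape)                   # (16, 1)
```

Because each row of the correspondence sums to 1, every entry of the propagated prior stays in [0, 1]: each current-frame pixel's prior is a weighted vote over reference-frame pixels for "was this location inside the target box?".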
@trathpai Thanks, I looked into it. It seems that it needs to process all the detections first and then feed them to the QDTrack-style tracker. Is that a requirement, or can it work in real time (frame by frame)? Regards,
Thank you for the explanation. Would you say the pixel-wise correspondence is object-agnostic, or at least somewhat generalizable to other object types?
@MasterBin-IIAU
@AhmedKhaled945 Hi, in fact, Unicorn does work in an online fashion: it can detect and track frame by frame rather than detecting on all frames first.
@trathpai Hi, the pixel-wise correspondence is class-agnostic and object-agnostic, so it generalizes well to different types of tracked objects. This property is well-suited to SOT & VOS, which need to track any possible object given in the reference frame.
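The online behavior described above can be sketched as a per-frame loop, where each frame is detected and associated immediately instead of batching detection over the whole video (`detect` and `associate` below are hypothetical stand-ins for the model forward and the association step):

```python
# Minimal sketch of an online, frame-by-frame tracking loop. Nothing here is
# Unicorn's actual code; it only illustrates that output is produced per frame.
def run_online(frames, detect, associate):
    tracks = []
    results = []
    for frame in frames:
        dets = detect(frame)              # detections for THIS frame only
        tracks = associate(tracks, dets)  # track states updated immediately
        results.append(list(tracks))      # per-frame output, no lookahead
    return results
```

The key property is that `results[i]` depends only on frames 0..i, which is what makes real-time deployment possible.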
Hello @MasterBin-IIAU, I am trying this now. I have a YOLOX model trained with the main YOLOX repo; when I used my exp file with my ckpt, it gave me an error at this line,
outputs = model(img, mode="whole")
saying that mode is not a valid parameter. So is this a change that I can incorporate into my exp file or model, or should I retrain with one of your base exp files (unicorn_det.py, for example) and just change the model size settings if I want?
And if I need to retrain, which pretrained weights should I use from the model zoo, and which exp file is the base for those weights? Thanks in advance,
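The error above is just standard Python behavior when a model's forward does not declare the keyword being passed; a toy illustration (class names here are hypothetical, only the `mode="whole"` call comes from this thread):

```python
# Toy illustration of the signature mismatch: a plain YOLOX-style forward has
# no `mode` keyword, so calling it the Unicorn way raises TypeError.
class PlainDetector:
    def forward(self, img):                # no `mode` parameter
        return img

class UnicornStyleModel:
    def forward(self, img, mode="whole"):  # forward that accepts a mode kwarg
        return (img, mode)

try:
    PlainDetector().forward("frame", mode="whole")
except TypeError as e:
    print("plain model:", e)               # unexpected keyword argument 'mode'
print(UnicornStyleModel().forward("frame", mode="whole"))
```

In other words, the exp file alone cannot fix this: the checkpoint must belong to a model class whose forward actually accepts the `mode` argument.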
@MasterBin-IIAU My pretrained model is the yolox_x architecture.
Hello, thanks for this interesting project. I wanted to ask how I can apply the tracker to a custom-trained YOLOX model of my own. I have the model and have already integrated it with ByteTrack; is there any script or README that can help me with this?