wangxiao5791509 / TNL2K_evaluation_toolkit

Towards More Flexible and Accurate Object Tracking with Natural Language: Algorithms and Benchmark (CVPR 2021)
https://sites.google.com/view/langtrackbenchmark/

About the absent annotations of the training set #10

Closed: botaoye closed this issue 2 years ago

botaoye commented 2 years ago

Hi, thanks for your work. I downloaded your dataset from OneDrive and found that there are no absent or attribute annotations for the training set, which differs from the description in your paper. Is this expected?

wangxiao5791509 commented 2 years ago

@suyeH Hi, the language annotations are placed in the dataset folders, and the absent labels can be found in this GitHub repository. Everything has been released. To the best of my knowledge, some researchers have already successfully developed their own language-based trackers on this dataset. Good luck!

botaoye commented 2 years ago

@wangxiao5791509 Thanks for your reply.

The absent labels can be found in this github.

Do you mean the annos.tar.gz file, or am I missing something? This file only contains the absent labels for the test set.

wangxiao5791509 commented 2 years ago

@suyeH Hi, sorry for misunderstanding your question. Currently, we only have the absent labels for the testing subset. If you think the training set also needs such labels, we can provide them in the journal extension (it will take some time).

botaoye commented 2 years ago

OK, I get it. Actually, the absent labels can be generated from the bounding boxes. Thanks for your reply, and I'm looking forward to the extended journal version of your paper.
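
For reference, a minimal sketch of that idea, assuming the common tracking-benchmark convention that frames where the target is absent are annotated with an all-zero (or zero-area) box in a per-sequence ground-truth file; the file name `groundtruth.txt` and the comma-separated `x,y,w,h` format are assumptions for illustration, not part of the official toolkit:

```python
import numpy as np

def absent_flags_from_boxes(gt_path, delimiter=","):
    """Derive per-frame absent flags from a ground-truth bounding-box file.

    Assumes each line holds one box as "x,y,w,h" and that absent frames are
    marked with an all-zero or zero-area box (an assumed convention).
    Returns a 0/1 array where 1 marks a frame in which the target is absent.
    """
    boxes = np.loadtxt(gt_path, delimiter=delimiter).reshape(-1, 4)
    zero_area = (boxes[:, 2] <= 0) | (boxes[:, 3] <= 0)  # degenerate width/height
    all_zero = np.all(boxes == 0, axis=1)                # fully zeroed annotation
    return (zero_area | all_zero).astype(np.int64)

# Example (hypothetical path):
# flags = absent_flags_from_boxes("train/SomeSequence/groundtruth.txt")
```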