wdrink closed this issue 2 years ago.
@wangjk666 Hi, yep, the whole training can be divided into three stages: object detection pre-training, box-level tracking training (SOT-MOT), and mask-level tracking training (VOS-MOTS). In fact, we have already provided scripts in train.md to reproduce our results. By default, Unicorn is trained on 16 GPUs (2 nodes with 8 GPUs each), so please use the Multiple-node Training commands. The batch sizes given in train.md are exactly the ones we adopted.
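For reference, a generic 2-node x 8-GPU PyTorch launch follows the pattern sketched below. This is pseudocode: `<ENTRY_SCRIPT>`, `<TRAINING_ARGS>`, and `<MASTER_IP>` are placeholders, and the authoritative commands (including the exact experiment files and batch sizes) are the Multiple-node Training ones in train.md.

```shell
# Hypothetical sketch of a 2-node x 8-GPU distributed launch.
# <ENTRY_SCRIPT>, <TRAINING_ARGS>, and <MASTER_IP> are placeholders;
# use the actual Multiple-node Training commands from train.md.

# On node 0 (the master node):
python -m torch.distributed.launch \
    --nnodes=2 --node_rank=0 --nproc_per_node=8 \
    --master_addr=<MASTER_IP> --master_port=29500 \
    <ENTRY_SCRIPT> <TRAINING_ARGS>

# On node 1:
python -m torch.distributed.launch \
    --nnodes=2 --node_rank=1 --nproc_per_node=8 \
    --master_addr=<MASTER_IP> --master_port=29500 \
    <ENTRY_SCRIPT> <TRAINING_ARGS>
```

Both nodes must see the same `--master_addr`/`--master_port` so that the 16 processes can rendezvous into a single process group.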
Ok, thanks a lot for your reply.
Have a good day :)
BTW, several of the links you shared for downloading the BDD100K dataset are inaccessible, e.g., "https://bdd-data-storage-release.s3.us-west-2.amazonaws.com/bdd100k/2021/bdd100k_ins_seg_labels_trainval.zip"
@wangjk666 Hi, the data can also be downloaded via a web browser. The site is https://bdd-data.berkeley.edu/portal.html#download
Hi, I am trying to reproduce the results of the box-level tracking training (as mentioned above), but the program seems to get stuck while building the dataset: after "creating index... index created!" there are no further updates. Have you encountered this problem, or do you have any suggestions?
@wangjk666 Hi, it does take some time to load the training data. In my experience it may take 1-5 minutes, depending on the computing power of your CPU, so please try waiting for a while. If that does not help, please try to determine which dataset causes the hang.
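One way to narrow down which dataset is the culprit is to time the loading of each annotation file separately before launching full training. The sketch below is a generic stand-in (the file names are hypothetical, and it only mimics a small piece of the COCO-style index creation with stdlib `json`), but the same timing pattern can be wrapped around the repo's actual dataset constructors.

```python
import json
import tempfile
import time
from pathlib import Path

def time_annotation_loads(ann_files):
    """Load each COCO-style annotation JSON and report how long it takes.

    Returns a dict mapping file name -> load time in seconds, so a slow
    (or hanging) dataset stands out immediately.
    """
    timings = {}
    for path in ann_files:
        start = time.perf_counter()
        with open(path) as f:
            data = json.load(f)  # the COCO-style index build starts from this JSON
        # Mimic a small piece of index creation: group annotations by image id.
        index = {}
        for ann in data.get("annotations", []):
            index.setdefault(ann["image_id"], []).append(ann)
        timings[Path(path).name] = time.perf_counter() - start
    return timings

if __name__ == "__main__":
    # Build two tiny stand-in annotation files purely for demonstration.
    tmp = Path(tempfile.mkdtemp())
    for name, n_anns in [("coco_style_a.json", 10), ("coco_style_b.json", 10000)]:
        anns = [{"id": i, "image_id": i % 100} for i in range(n_anns)]
        (tmp / name).write_text(json.dumps({"annotations": anns}))
    for name, secs in sorted(time_annotation_loads(sorted(tmp.glob("*.json"))).items()):
        print(f"{name}: {secs:.3f}s")
```

If one file's timing keeps growing while the others finish quickly, that dataset is the one to investigate (corrupted download, wrong path, or an unexpectedly huge annotation file).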
Hi, thanks for your awesome work.
According to the code in this repo, you perform three-stage training: detection, tracking, then tracking plus mask, right? Could you please provide the scripts (with the specific hyper-parameters, e.g., batch size) to reproduce the results reported in your paper?