noahcao opened 3 years ago
Hello. I want to ask how you found the dataset annotation files. I couldn't find the corresponding file 'instance_train_sub.json' or 'instance__val_sub.json' on the CodaLab page (https://competitions.codalab.org/competitions/20128#participate-get_data). I only downloaded the image data. As for annotation files, I only got test.json, val.json, and train.json, and I couldn't find the annotation information in those JSON files. Is the website where I downloaded the dataset wrong?
Hi @StiphyJay, just use the train.json and val.json files from the 2019 version.
Hi @noahcao, I have not tried that.
Hello, can you please help? There are no annotations in the valid.json from the website (https://competitions.codalab.org/competitions/20128#participate-get_data). How did you evaluate on the valid split?
@noureldien Please refer to the Models section in the README.
Hi @noahcao, were you able to do box tracking?
Kind Regards
Hi,
I tried to train a model without --masks on the default YouTube-VIS dataset. I used all parameters as default, except for decreasing num_frames to 18 and num_queries to 180 correspondingly to fit my GPUs. But the loss curves show no optimization gains, as below:
Thus, I wonder whether you have tested training the model without the --masks flag (namely, box tracking instead of mask tracking) on any dataset. Is the current VisTR class usable (as opposed to VisTRsegm)?
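For reference, the "correspondingly" above can be sketched as a fixed per-frame query budget. This is an illustrative sketch, assuming the default VisTR setting of 36 frames with 360 queries, i.e. 10 candidate instances per frame, so that shrinking num_frames shrinks num_queries by the same factor:

```python
# Illustrative sketch (assumption: VisTR defaults of 36 frames / 360
# queries imply 10 queries, i.e. candidate instances, per frame).
def scaled_num_queries(num_frames, queries_per_frame=10):
    """Keep the per-frame query budget fixed when changing num_frames."""
    return num_frames * queries_per_frame

assert scaled_num_queries(36) == 360  # assumed default setting
assert scaled_num_queries(18) == 180  # the reduced setting described above
```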
Thanks!