Epiphqny / VisTR

[CVPR2021 Oral] End-to-End Video Instance Segmentation with Transformers
https://arxiv.org/abs/2011.14503
Apache License 2.0

Train without --masks flag #55

Open · noahcao opened this issue 3 years ago

noahcao commented 3 years ago

Hi,

I tried to train a model without --masks on the default YouTube-VIS dataset. I kept all parameters at their defaults, except for decreasing num_frames to 18 and num_queries to 180 accordingly, to fit my GPUs. But the loss curves show no optimization gains, as below:

[screenshot: loss curves showing no decrease]

So I wonder: have you tested training a model without the --masks flag (i.e., box tracking instead of mask tracking) on any dataset? Is the current VisTR class usable (as opposed to VisTRsegm)?
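For reference, here is a minimal, self-contained sketch of the DETR-style build logic that --masks typically gates. The VisTR / VisTRsegm class names come from this repo, but the wiring below is an assumption based on the usual DETR code layout, not a verified excerpt of the source:

```python
import argparse

class VisTR:                        # box-only model: class + box sequences
    def __init__(self, num_frames, num_queries):
        self.num_frames = num_frames
        self.num_queries = num_queries

class VisTRsegm:                    # wraps the base model with a mask head
    def __init__(self, vistr):
        self.vistr = vistr

def build(args):
    model = VisTR(args.num_frames, args.num_queries)
    if args.masks:                  # without --masks, no mask head is built,
        model = VisTRsegm(model)    # so training supervises boxes only
    return model

parser = argparse.ArgumentParser()
parser.add_argument("--masks", action="store_true")
parser.add_argument("--num_frames", type=int, default=36)   # repo default
parser.add_argument("--num_queries", type=int, default=360) # 10 per frame
args = parser.parse_args(["--num_frames", "18", "--num_queries", "180"])
print(type(build(args)).__name__)   # -> VisTR (the box-only path)
```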

thanks!

StiphyJay commented 3 years ago

Hello. I want to ask how you found the dataset annotation files. I couldn't find the corresponding 'instance_train_sub.json' or 'instance_val_sub.json' files on the CodaLab page (https://competitions.codalab.org/competitions/20128#participate-get_data); I only downloaded the image data. For the annotations, I only got the test.json, val.json, and train.json files, and I couldn't find the annotation information in those json files. Did I download the dataset from the wrong website?

Epiphqny commented 3 years ago

Hi @StiphyJay, just use the train.json and val.json files from the 2019 version.
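To verify that a downloaded 2019 annotation file actually contains the annotations, a quick sketch; the COCO-style top-level keys used below ('videos', 'annotations', 'categories') are the usual YouTube-VIS 2019 layout and may need adjusting if your download differs:

```python
import json

# Sanity-check a downloaded YouTube-VIS 2019 annotation file.
with open("train.json") as f:
    ann = json.load(f)

print(list(ann.keys()))
print(len(ann["videos"]), "videos,", len(ann["annotations"]), "annotations")

# Each annotation is expected to carry per-frame lists such as
# 'segmentations' and 'bboxes' for one instance track:
print(sorted(ann["annotations"][0].keys()))
```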

Epiphqny commented 3 years ago

Hi @noahcao I have not tried that.

noureldien commented 3 years ago

Hello, could you please help? There are no annotations in the valid.json from the website (https://competitions.codalab.org/competitions/20128#participate-get_data). How did you evaluate on the validation split?

Epiphqny commented 3 years ago

@noureldien Please refer to the Models section in the README.
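For context: valid.json ships without ground truth, since scoring for the validation split happens on the CodaLab evaluation server; you generate a results file and submit it. Below is a minimal sketch of the commonly used YouTube-VIS results format; the field names should be double-checked against the challenge page before submitting:

```python
import json

# Sketch of a YouTube-VIS-style results file for server-side evaluation.
results = [
    {
        "video_id": 1,
        "category_id": 1,
        "score": 0.98,
        # one RLE-encoded mask (or None for absent frames) per video frame
        "segmentations": [None, {"size": [720, 1280], "counts": "..."}],
    }
]
with open("results.json", "w") as f:
    json.dump(results, f)
```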

Adeelyousaf commented 2 years ago

@noahcao Hi, were you able to get box tracking to work?

Kind Regards