Hello, I am reproducing the results of your paper, but I have run into some problems. Since my deep-learning experience is limited, I would like to ask for your help with an error; I have put a screenshot below. Another question: where should the mask images be placed? Thank you for your help.
This is my data format:
VisTR
├── data
│   ├── train
│   │   └── JPEGImages
│   │       └── *.jpg
│   ├── val
│   │   └── JPEGImages
│   │       └── *.jpg
│   └── annotations
│       ├── instances_train_sub.json
│       └── instances_val_sub.json
└── models
This is the command line I entered:
python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py --backbone resnet101 --ytvos_path /media/dmia/code1/why/VisTR/data --masks --pretrained_weights /media/dmia/code1/why/VisTR/models/resnet101.pth