-
When running inference with the model, I get the following error:
The model and loaded state dict do not match exactly
size mismatch for roi_head.0.bbox_head.fc_cls.weight: copying a param with sh…
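To narrow it down, I listed every mismatched parameter with a small helper (the nesting of weights under "model"/"state_dict" is an assumption based on checkpoints I have seen, not this repo specifically):

```python
import torch

def find_shape_mismatches(model, ckpt_path):
    """Print each parameter whose shape differs between the model and the checkpoint."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Weights are often nested under "model" or "state_dict"; fall back to the raw dict.
    state = ckpt.get("model", ckpt.get("state_dict", ckpt))
    model_state = model.state_dict()
    for name, param in state.items():
        if name in model_state and model_state[name].shape != param.shape:
            print(name, tuple(param.shape), "->", tuple(model_state[name].shape))
```

If `fc_cls` is the only layer affected, I suspect the class count in my config differs from the one the checkpoint was trained with; could that be the cause?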
-
Hi,
I launched the evaluation on LVIS with the following command:
```
python tools/lazyconfig_train_net.py \
--num-gpus 8 --num-machines 1 --machine-rank 0 \
--config-file projects/ViTDet/conf…
```
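For reference, my understanding is that with `--eval-only` this script reduces to roughly the following in Python (the config and checkpoint paths below are placeholders for the ones in my command):

```python
from detectron2.config import LazyConfig, instantiate
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.evaluation import inference_on_dataset

# Placeholder paths; substitute the LVIS ViTDet config and checkpoint used above.
cfg = LazyConfig.load("projects/ViTDet/configs/LVIS/your_config.py")
model = instantiate(cfg.model).to("cuda").eval()
DetectionCheckpointer(model).load("/path/to/model_final.pth")

# Run the evaluator defined in the lazy config over the test loader.
results = inference_on_dataset(
    model,
    instantiate(cfg.dataloader.test),
    instantiate(cfg.dataloader.evaluator),
)
print(results)
```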
-
Hi,
Thank you for the good work.
Could you confirm if the released model checkpoints are pre-trained on ShapeNet or Objaverse?
Thanks!
-
Your work is of great academic value and significance, and I am very grateful for your contribution. I would like to ask about the specific steps for implementing the Gra…
-
Thank you for your work. How is instances_train2017_seen_2_oriorder_cat_info.json generated, and where can I find it? I am looking forward to your repl…
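For context, in similar codebases (e.g. Detic's tools/get_lvis_cat_info.py) the `*_cat_info.json` files are built by counting, for each category, how many images and instances contain it, for use by the federated loss. This is my guess at the convention and may not match this repo exactly; the input path is also an assumption:

```python
import json
from collections import Counter

def build_cat_info(ann_path, out_path):
    """Attach image/instance counts to each COCO category entry."""
    with open(ann_path) as f:
        data = json.load(f)
    # Count distinct images and total instances per category.
    images_per_cat = {}
    for ann in data["annotations"]:
        images_per_cat.setdefault(ann["category_id"], set()).add(ann["image_id"])
    instances_per_cat = Counter(ann["category_id"] for ann in data["annotations"])
    for cat in data["categories"]:
        cat["image_count"] = len(images_per_cat.get(cat["id"], ()))
        cat["instance_count"] = instances_per_cat.get(cat["id"], 0)
    with open(out_path, "w") as f:
        json.dump(data["categories"], f)

# Assumed input: the plain seen-split annotation file next to the expected output.
build_cat_info(
    "datasets/coco/annotations/instances_train2017_seen_2_oriorder.json",
    "datasets/coco/annotations/instances_train2017_seen_2_oriorder_cat_info.json",
)
```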
-
### Question
I tried to train MoE-LLaVA (phi2, clip-vit-L-336) following the official tutorial.
I finished the first-stage pretraining successfully (scripts/v1/phi2/pretrain.sh).
But it raises a CUDA O…
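In case it helps, my assumption is that this stage simply needs more memory than pretraining, and that the usual remedy is trading per-device batch size for gradient accumulation (plus gradient checkpointing) so the effective batch size is unchanged. In TrainingArguments terms, all values below are illustrative, not the tutorial's:

```python
from transformers import TrainingArguments

# Illustrative values only: halve the per-device batch size and double the
# accumulation steps so the effective batch size (per_device * accum * world_size)
# stays the same, and recompute activations instead of storing them.
args = TrainingArguments(
    output_dir="./checkpoints/moellava-phi2",  # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    bf16=True,
)
```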
-
After configuring a Linux server, installing all the libraries, and downloading the model from the model zoo, I run the command under "Step 3: For testing our model, download the best pretrained model wei…
-
Hi CO-DETR team, thank you for your great work!
I am a little confused about which checkpoint I should use to reproduce the SOTA results on LVIS. In the link you provided, it seems that there are diffe…
-
Hi, thank you for your valuable contribution!
I appreciate your work on the ovsam model. In your paper, you mentioned that the model can currently segment and recognize around 22,000 classes. Howev…