zwq456 / CLIP-VIS

Official PyTorch implementation of CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation.
Apache License 2.0

RuntimeError #2

Closed by SuleBai 4 months ago

SuleBai commented 4 months ago

Hi, thanks for your great work.

When I use the config configs/clipvis_ConvNeXt-B.yaml, I get the error below.

RuntimeError: Pretrained weights (laion2b_s29b_b131k_ft_soup) not found for model convnext_base_w_320.
Available pretrained tags (['laion_aesthetic_s13b_b82k', 'laion_aesthetic_s13b_b82k_augreg'].

I also checked the open_clip results page (https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv), and it seems laion2b_s29b_b131k_ft_soup is not a valid pretrained tag for convnext_base_w_320.
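
For anyone hitting the same error, a minimal sketch (assuming a recent open_clip release) that lists which pretrained tags are actually registered for this model:

```python
import open_clip

# Ask open_clip which pretrained tags exist for the ConvNeXt-B/320 model.
# The traceback above suggests only the laion_aesthetic variants are registered.
tags = open_clip.list_pretrained_tags_by_model('convnext_base_w_320')
print(tags)
# Expected (per the error message):
# ['laion_aesthetic_s13b_b82k', 'laion_aesthetic_s13b_b82k_augreg']
```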

Also, is the checkpoint for the setting trained on COCO and YTVIS2019 available?

SCYF123 commented 4 months ago

I'm sorry for this mistake. The correct pretrained tag is laion_aesthetic_s13b_b82k.
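
As a quick sanity check (a minimal sketch; the exact key in the CLIP-VIS yaml config may differ), the backbone should now load with this tag without raising the RuntimeError:

```python
import open_clip

# Create the ConvNeXt-B/320 CLIP backbone with the corrected pretrained tag;
# this downloads the weights and returns the model without raising.
model, _, preprocess = open_clip.create_model_and_transforms(
    'convnext_base_w_320',
    pretrained='laion_aesthetic_s13b_b82k',
)
```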

SuleBai commented 4 months ago

Thanks for your reply. Also, is the checkpoint for the setting trained on COCO and YTVIS2019 available?

SCYF123 commented 4 months ago

Hi @SuleBai, I have updated the README file; please check it for the checkpoint.

SuleBai commented 4 months ago

Thanks for your reply.