autodistill / autodistill-dinov2

DINOv2 module for use with Autodistill.
https://docs.autodistill.com

size mismatch for pos_embed #4

Open ccmCCMfk opened 10 months ago

ccmCCMfk commented 10 months ago

Hello, I have a question about how to use DINOv2. Could you please help me? I instantiated a vit_small ViT model and tried to load the pretrained weights using the `load_pretrained_weights` function from utils. Here's the code I wrote:

```python
self.vit_model = vits.__dict__['vit_small'](...)
load_pretrained_weights(self.vit_model, 'https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth', None)
```

However, I encountered the following error:

```
Traceback (most recent call last):
  File "/data/PycharmProjects/train.py", line 124, in <module>
    model = model(aff_classes=args.num_classes)
  File "/data/PycharmProjects/models/locate.py", line 89, in __init__
    load_pretrained_weights(self.vit_model, pretrained_url, None)
  File "/data/PycharmProjects/models/dinov2/dinov2/utils/utils.py", line 32, in load_pretrained_weights
    msg = model.load_state_dict(state_dict, strict=False)
  File "/home/ustc/anaconda3/envs/locate/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1605, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DinoVisionTransformer:
	size mismatch for pos_embed: copying a param with shape torch.Size([1, 1370, 384]) from checkpoint, the shape in current model is torch.Size([1, 257, 384]).
```

Could you please help me understand what might be causing this issue? Thank you for your assistance.
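For context, the two `pos_embed` shapes in the error correspond to two different input resolutions. Under the usual ViT convention of one positional embedding per patch plus one CLS token, and assuming the DINOv2 defaults of patch size 14 and a pretraining resolution of 518 (both are assumptions here, not taken from the code above), the arithmetic lines up with both numbers in the error — a minimal sanity check, not the repository's code:

```python
# Positional-embedding length for a square ViT input:
# (img_size // patch_size) ** 2 patches, plus 1 CLS token.
def pos_embed_len(img_size: int, patch_size: int = 14) -> int:
    return (img_size // patch_size) ** 2 + 1

# Assumed DINOv2 pretraining resolution of 518 with patch size 14:
print(pos_embed_len(518))  # 1370 -- matches the checkpoint's pos_embed
# A model built at the common ViT default resolution of 224:
print(pos_embed_len(224))  # 257 -- matches the current model in the error
```

If that is the cause, instantiating the model with an image size that matches the checkpoint (e.g. passing `img_size=518` when building `vit_small`), or interpolating the checkpoint's `pos_embed` to the model's patch grid before loading, should make the shapes agree.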