Open yingdundun opened 1 week ago
openai/clip-vit-large-patch14
Thank you for your reply, but there is a new issue:
```
    raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
	size mismatch for text_model.embeddings.token_embedding.weight: copying a param with shape torch.Size([49408, 768]) from checkpoint, the shape in current model is torch.Size([49408, 512]).
	size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([77, 512]).
	size mismatch for text_model.encoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
```
The dimensions don't match. I tried the three weight files you provided, but I still get the same error. Looking forward to your reply.
We will look into this issue; we are a bit busy with other matters at the moment. Thank you for your patience.
You can set the `version` parameter to "openai/clip-vit-large-patch14" to download the matching weights from Hugging Face.
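For what it's worth, the shapes in the traceback (768 vs. 512) suggest the checkpoint was saved from the ViT-L/14 text encoder, while the model being built uses the ViT-B text-encoder defaults. A minimal sketch that shows the mismatch locally, without downloading any weights (the specific config values here are the `transformers` defaults and the known ViT-L/14 text-encoder sizes, not something taken from this repo):

```python
from transformers import CLIPTextConfig, CLIPTextModel

# Default CLIPTextConfig matches the ViT-B text encoder: hidden size 512.
base_cfg = CLIPTextConfig()

# The ViT-L/14 text encoder uses hidden size 768 (with matching
# intermediate size and attention heads) — the shape in the checkpoint.
large_cfg = CLIPTextConfig(hidden_size=768,
                           intermediate_size=3072,
                           num_attention_heads=12)

print(base_cfg.hidden_size, large_cfg.hidden_size)  # 512 768

# A randomly initialized model from the large config has the embedding
# shape [49408, 768] reported for the checkpoint in the error above.
model = CLIPTextModel(large_cfg)
print(model.text_model.embeddings.token_embedding.weight.shape)
```

So loading "openai/clip-vit-large-patch14" via `CLIPTextModel.from_pretrained(...)` (rather than a 512-dim base config) should make the state-dict shapes line up.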