Closed · gaoyifanginpg closed this 3 days ago
Hi @gaoyifanginpg , thanks for your attention to our work!
I could not reproduce the error. The output of inference.py
for the model TinyCLIP-auto-ViT-63M-32-Text-31M-LAIONYFCC400M
is
Label probs: tensor([[9.9997e-01, 1.6608e-05, 1.0735e-05]]).
Could you please check whether the checkpoint file is complete? Which version of PyTorch did you use?
cd ~/.cache/clip/
md5sum TinyCLIP-auto-ViT-63M-32-Text-31M-LAIONYFCC400M.pt
# ef3d4c57c67e5c08234e5cf8111bdeb2 TinyCLIP-auto-ViT-63M-32-Text-31M-LAIONYFCC400M.pt
# print the version of pytorch
python -c "import torch; print(torch.__version__)"
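If `md5sum` is not available (for example on Windows), the same checksum can be computed with a short Python sketch; the expected digest and file name below are just the ones quoted in the commands above, so adjust the path to wherever the checkpoint was downloaded.

```python
import hashlib


def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks
    so large checkpoints do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Example (path and digest taken from the md5sum command above):
# expected = "ef3d4c57c67e5c08234e5cf8111bdeb2"
# actual = file_md5("~/.cache/clip/TinyCLIP-auto-ViT-63M-32-Text-31M-LAIONYFCC400M.pt")
# print(actual == expected)
```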
It's now solved!
The reason is that I hadn't installed the dependencies before running the script. After installing them, no error occurred.
Thanks for your timely reply. Best regards.
When executing inference.py with
arch = 'TinyCLIP-auto-ViT-63M-32-Text-31M'
model, _, preprocess = open_clip.create_model_and_transforms(arch, pretrained='LAIONYFCC400M')
an error occurs:
Missing key(s) in state_dict: "_text_encoder.positional_embedding", "_text_encoder.text_projection", "_text_encoder.transformer.resblocks.0.ln_1.weight", "_text_encoder.transformer.resblocks.0.ln_1.bias", "_text_encoder.transformer.resblocks.0.attn.in_proj_weight", "_text_encoder.transformer.resblocks.0.attn.in_proj_bias", "_text_encoder.transformer.resblocks.0.attn.out_proj.weight", "_text_encoder.transformer.resblocks.0.attn.out_proj.bias", "_text_encoder.transformer.resblocks.0.ln_2.weight", "_text_encoder.transformer.resblocks.0.ln_2.bias", "_text_encoder.transformer.resblocks.0.mlp.c_fc.weight", "_text_encoder.transformer.resblocks.0.mlp.c_fc.bias", "_text_encoder.transformer.resblocks.0.mlp.c_proj.weight", "_text_encoder.transformer.resblocks.0.mlp.c_proj.bias",.....
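For context, a "Missing key(s) in state_dict" error means the model defines parameters (here the `_text_encoder.*` tensors) that the checkpoint file does not contain, which typically points to an incomplete download or a mismatched architecture/pretrained-tag combination. A minimal plain-Python sketch of the strict key comparison that PyTorch performs (not the actual implementation):

```python
def check_state_dict(model_keys, ckpt_keys):
    """Mimic strict state_dict loading: report keys the model
    expects but the checkpoint lacks (missing), and keys the
    checkpoint carries but the model does not define (unexpected)."""
    missing = sorted(set(model_keys) - set(ckpt_keys))
    unexpected = sorted(set(ckpt_keys) - set(model_keys))
    return missing, unexpected


# Hypothetical key sets for illustration: a truncated checkpoint
# that lost its text-encoder weights.
model_keys = [
    "_text_encoder.positional_embedding",
    "_text_encoder.text_projection",
    "visual.conv1.weight",
]
ckpt_keys = ["visual.conv1.weight"]

missing, unexpected = check_state_dict(model_keys, ckpt_keys)
print("Missing key(s) in state_dict:", missing)
```

With strict loading, any non-empty `missing` list raises an error like the one pasted above, which is why verifying the checkpoint's checksum is the first thing to check.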
I haven't changed anything in the code; could you please take a look at why this happens? Thank you!