Open dongzhiwu opened 1 year ago
Please check whether you have changed the parameters here.
Sorry, I had modified the CLIP file earlier, which caused this problem. It works now.
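For anyone hitting the same size mismatch: the pretrained clip_proj layer expects the feature width of the CLIP backbone the checkpoint was trained with, so swapping the vision model in the CLIP loading code changes that width. A minimal sketch (assuming the standard OpenAI CLIP package, github.com/openai/CLIP) that shows the different output widths:

```python
# Minimal sketch, assuming the OpenAI CLIP package (github.com/openai/CLIP):
# different backbones produce different feature widths, so a pretrained
# clip_proj expecting 768-dim features only matches the 768-dim variant.
import torch
import clip

for name in ["ViT-L/14", "ViT-B/16"]:
    model, _ = clip.load(name, device="cpu")
    with torch.no_grad():
        feats = model.encode_image(torch.zeros(1, 3, 224, 224))
    print(name, feats.shape)
# Expected: ViT-L/14 -> torch.Size([1, 768]), ViT-B/16 -> torch.Size([1, 512])
```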
Have you reproduced the successful demo results? When I reproduced the demo, I encountered a strange output result.
@dongzhiwu
@xuai05 I also encountered this.
Hello, I downloaded the models and ran demo.py, and got the following error:
File "demo.py", line 12, in
model,preprocess = llama.load("/mnt/home/foundation_model/LLaMA-Adapter/weights/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth", llama_dir, device)
File "/mnt/home/foundation_model/LLaMA-Adapter/llama_adapter_v2_multimodal/llama/llama_adapter.py", line 244, in load
model.load_state_dict(ckpt['model'], strict=False)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for LLaMA_adapter:
size mismatch for clip_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([768, 512]).
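A quick way to see which component was changed before calling load_state_dict. This is a hedged sketch: report_shape_mismatches is a hypothetical helper (not part of this repo), and it assumes the checkpoint nests its weights under the 'model' key, as ckpt['model'] in the traceback suggests.

```python
import torch
from torch import nn

def report_shape_mismatches(model: nn.Module, ckpt_path: str) -> None:
    """Print parameters whose shapes differ between checkpoint and model.

    Note: strict=False only skips missing/unexpected keys; shape mismatches
    like the clip_proj one above still raise, so listing them first helps.
    """
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("model", state)  # weights are nested under 'model' here
    for name, param in model.state_dict().items():
        if name in state and tuple(state[name].shape) != tuple(param.shape):
            print(f"{name}: checkpoint {tuple(state[name].shape)} "
                  f"vs model {tuple(param.shape)}")
```

Calling this on the LLaMA_adapter instance right before the load_state_dict line (e.g. temporarily inside llama_adapter.py) should print nothing with the unmodified CLIP setup, and should flag clip_proj.weight exactly as in the error above if a 512-dim CLIP backbone was substituted.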