OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
GNU General Public License v3.0

LLaMA-Adapter V2 multi-modal demo error #72

Open · dongzhiwu opened this issue 1 year ago

dongzhiwu commented 1 year ago

Hello, I downloaded the models and ran demo.py, and got the following error:

File "demo.py", line 12, in model,preprocess = llama.load("/mnt/home/foundation_model/LLaMA-Adapter/weights/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth", llama_dir, device) File "/mnt/home/foundation_model/LLaMA-Adapter/llama_adapter_v2_multimodal/llama/llama_adapter.py", line 244, in load model.load_state_dict(ckpt['model'], strict=False) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for LLaMA_adapter: size mismatch for clip_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([768, 512]).

shilinyan99 commented 1 year ago

Please check whether you have changed the parameters here.

dongzhiwu commented 1 year ago

> Please check whether you have changed the parameters here.

Sorry, I had modified the CLIP file earlier, which caused this problem. It works now.
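That resolution matches the shapes in the error message: `nn.Linear` stores its weight as `[out_features, in_features]`, so a current-model shape of `[768, 512]` means the projection was constructed from a 512-wide CLIP encoder. A small illustration (the variable names are hypothetical):

```python
import torch.nn as nn

# nn.Linear stores weight as [out_features, in_features].
proj_modified = nn.Linear(512, 768)  # built against a 512-wide CLIP (ViT-B)
proj_expected = nn.Linear(768, 768)  # what the BIAS-7B checkpoint contains

print(proj_modified.weight.shape)  # torch.Size([768, 512])  <- the error
print(proj_expected.weight.shape)  # torch.Size([768, 768])
```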

xuai05 commented 8 months ago

Have you reproduced the demo results successfully? When I ran the demo, I got strange output (screenshot attached).

xuai05 commented 8 months ago

@dongzhiwu

OldStone0124 commented 8 months ago

@xuai05 I also encountered this (screenshot attached).
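One first check for garbled output like the reports above: `load_state_dict(..., strict=False)` (as used in `llama_adapter.py`) silently skips missing and unexpected keys, so a partially loaded checkpoint won't raise an error. Inspecting the return value rules that out. A sketch, where `model` stands in for the `LLaMA_adapter` instance built inside `llama.load()` (not reconstructed here) and the checkpoint path is hypothetical:

```python
import torch

# ckpt as in the traceback above; `model` is the already-built LLaMA_adapter.
ckpt = torch.load("weights/BIAS-7B.pth", map_location="cpu")  # hypothetical path

# With strict=False, only shape mismatches raise; missing/unexpected keys are
# returned, not reported. Many missing keys are normal here (the adapter
# checkpoint stores only the trainable parameters, not the frozen LLaMA
# backbone), but unexpected keys would suggest a checkpoint/code mismatch.
result = model.load_state_dict(ckpt["model"], strict=False)
print("missing keys:", len(result.missing_keys))
print("unexpected keys:", result.unexpected_keys)
```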