tencent-ailab / IP-Adapter

The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt.
Apache License 2.0

RuntimeError: Error(s) in loading state_dict for MLPProjModel: #303


masaisai111 commented 4 months ago

RuntimeError: Error(s) in loading state_dict for MLPProjModel:
    Missing key(s) in state_dict: "proj.3.weight", "proj.3.bias".
    Unexpected key(s) in state_dict: "norm.weight", "norm.bias", "perceiver_resampler.proj_in.weight", "perceiver_resampler.proj_in.bias", "perceiver_resampler.proj_out.weight", "perceiver_resampler.proj_out.bias", "perceiver_resampler.norm_out.weight", "perceiver_resampler.norm_out.bias", "perceiver_resampler.layers.0.0.norm1.weight", "perceiver_resampler.layers.0.0.norm1.bias", "perceiver_resampler.layers.0.0.norm2.weight", "perceiver_resampler.layers.0.0.norm2.bias", "perceiver_resampler.layers.0.0.to_q.weight", "perceiver_resampler.layers.0.0.to_kv.weight", "perceiver_resampler.layers.0.0.to_out.weight", "perceiver_resampler.layers.0.1.0.weight", "perceiver_resampler.layers.0.1.0.bias", "perceiver_resampler.layers.0.1.1.weight", "perceiver_resampler.layers.0.1.3.weight", "perceiver_resampler.layers.1.0.norm1.weight", "perceiver_resampler.layers.1.0.norm1.bias", "perceiver_resampler.layers.1.0.norm2.weight", "perceiver_resampler.layers.1.0.norm2.bias", "perceiver_resampler.layers.1.0.to_q.weight", "perceiver_resampler.layers.1.0.to_kv.weight", "perceiver_resampler.layers.1.0.to_out.weight", "perceiver_resampler.layers.1.1.0.weight", "perceiver_resampler.layers.1.1.0.bias", "perceiver_resampler.layers.1.1.1.weight", "perceiver_resampler.layers.1.1.3.weight", "perceiver_resampler.layers.2.0.norm1.weight", "perceiver_resampler.layers.2.0.norm1.bias", "perceiver_resampler.layers.2.0.norm2.weight", "perceiver_resampler.layers.2.0.norm2.bias", "perceiver_resampler.layers.2.0.to_q.weight", "perceiver_resampler.layers.2.0.to_kv.weight", "perceiver_resampler.layers.2.0.to_out.weight", "perceiver_resampler.layers.2.1.0.weight", "perceiver_resampler.layers.2.1.0.bias", "perceiver_resampler.layers.2.1.1.weight", "perceiver_resampler.layers.2.1.3.weight", "perceiver_resampler.layers.3.0.norm1.weight", "perceiver_resampler.layers.3.0.norm1.bias", "perceiver_resampler.layers.3.0.norm2.weight", "perceiver_resampler.layers.3.0.norm2.bias", "perceiver_resampler.layers.3.0.to_q.weight", "perceiver_resampler.layers.3.0.to_kv.weight", "perceiver_resampler.layers.3.0.to_out.weight", "perceiver_resampler.layers.3.1.0.weight", "perceiver_resampler.layers.3.1.0.bias", "perceiver_resampler.layers.3.1.1.weight", "perceiver_resampler.layers.3.1.3.weight".
    size mismatch for proj.0.weight: copying a param with shape torch.Size([1024, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
    size mismatch for proj.0.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for proj.2.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([768, 512]).
    size mismatch for proj.2.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([768]).

xiaohu2015 commented 4 months ago

It seems you loaded the wrong model. Can you give more information?
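For reference, the unexpected "perceiver_resampler.*" and "norm.*" keys, together with the proj.0/proj.2 shape mismatches, suggest the checkpoint contains a Plus-style projection (a perceiver resampler) rather than the plain MLPProjModel, so the checkpoint and the projection class probably don't match. A minimal sketch for inspecting which projection a checkpoint expects before loading it (the file name below is a placeholder, not taken from this issue, and it assumes the usual .bin layout with an "image_proj" entry):

import torch

# Load the IP-Adapter checkpoint on CPU and look at its image_proj keys.
state_dict = torch.load("ip-adapter-faceid-plus_sd15.bin", map_location="cpu")
image_proj_keys = sorted(state_dict["image_proj"].keys())
print(image_proj_keys)

# Keys such as "perceiver_resampler.*" and "norm.*" indicate a Plus-style
# projection model; a plain MLP checkpoint only has "proj.*" keys. Use the
# loader class that matches the checkpoint variant you downloaded.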