pkuliyi2015 / sd-webui-stablesr

StableSR for Stable Diffusion WebUI - Ultra High-quality Image Upscaler
https://iceclear.github.io/projects/stablesr/

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT #36

Open · Experienment-Gu opened this issue 1 year ago

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT: Missing key(s) in state_dict: "time_embed.0.weight", "time_embed.0.bias", "time_embed.2.weight", "time_embed.2.bias", "input_blocks.0.0.weight", "input_blocks.0.0.bias", "input_blocks.1.0.in_layers.0.weight", "input_blocks.1.0.in_layers.0.bias", "input_blocks.1.0.in_layers.2.weight", "input_blocks.1.0.in_layers.2.bias", "input_blocks.1.0.emb_layers.1.weight", "input_blocks.1.0.emb_layers.1.bias", "input_blocks.1.0.out_layers.0.weight", "input_blocks.1.0.out_layers.0.bias", "input_blocks.1.0.out_layers.3.weight", "input_blocks.1.0.out_layers.3.bias", "input_blocks.1.1.norm.weight", "input_blocks.1.1.norm.bias", "input_blocks.1.1.qkv.weight", "input_blocks.1.1.qkv.bias", "input_blocks.1.1.proj_out.weight", "input_blocks.1.1.proj_out.bias", "input_blocks.2.0.in_layers.0.weight", "input_blocks.2.0.in_layers.0.bias", "input_blocks.2.0.in_layers.2.weight", "input_blocks.2.0.in_layers.2.bias", "input_blocks.2.0.emb_layers.1.weight", "input_blocks.2.0.emb_layers.1.bias", "input_blocks.2.0.out_layers.0.weight", "input_blocks.2.0.out_layers.0.bias", "input_blocks.2.0.out_layers.3.weight", "input_blocks.2.0.out_layers.3.bias", "input_blocks.2.1.norm.weight", "input_blocks.2.1.norm.bias", "input_blocks.2.1.qkv.weight", "input_blocks.2.1.qkv.bias", "input_blocks.2.1.proj_out.weight", "input_blocks.2.1.proj_out.bias", "input_blocks.3.0.op.weight", "input_blocks.3.0.op.bias", "input_blocks.4.0.in_layers.0.weight", "input_blocks.4.0.in_layers.0.bias", "input_blocks.4.0.in_layers.2.weight", "input_blocks.4.0.in_layers.2.bias", "input_blocks.4.0.emb_layers.1.weight", "input_blocks.4.0.emb_layers.1.bias", "input_blocks.4.0.out_layers.0.weight", "input_blocks.4.0.out_layers.0.bias", "input_blocks.4.0.out_layers.3.weight", "input_blocks.4.0.out_layers.3.bias", "input_blocks.4.1.norm.weight", "input_blocks.4.1.norm.bias", "input_blocks.4.1.qkv.weight", "input_blocks.4.1.qkv.bias", "input_blocks.4.1.proj_out.weight", "input_blocks.4.1.proj_out.bias", "input_blocks.5.0.in_layers.0.weight", "input_blocks.5.0.in_layers.0.bias", "input_blocks.5.0.in_layers.2.weight", "input_blocks.5.0.in_layers.2.bias", "input_blocks.5.0.emb_layers.1.weight", "input_blocks.5.0.emb_layers.1.bias", "input_blocks.5.0.out_layers.0.weight", "input_blocks.5.0.out_layers.0.bias", "input_blocks.5.0.out_layers.3.weight", "input_blocks.5.0.out_layers.3.bias", "input_blocks.5.1.norm.weight", "input_blocks.5.1.norm.bias", "input_blocks.5.1.qkv.weight", "input_blocks.5.1.qkv.bias", "input_blocks.5.1.proj_out.weight", "input_blocks.5.1.proj_out.bias", "input_blocks.6.0.op.weight", "input_blocks.6.0.op.bias", "input_blocks.7.0.in_layers.0.weight", "input_blocks.7.0.in_layers.0.bias", "input_blocks.7.0.in_layers.2.weight", "input_blocks.7.0.in_layers.2.bias", "input_blocks.7.0.emb_layers.1.weight", "input_blocks.7.0.emb_layers.1.bias", "input_blocks.7.0.out_layers.0.weight", "input_blocks.7.0.out_layers.0.bias", "input_blocks.7.0.out_layers.3.weight", "input_blocks.7.0.out_layers.3.bias", "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.qkv.weight", "input_blocks.7.1.qkv.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.0.in_layers.0.weight", "input_blocks.8.0.in_layers.0.bias", "input_blocks.8.0.in_layers.2.weight", "input_blocks.8.0.in_layers.2.bias", "input_blocks.8.0.emb_layers.1.weight", 
"input_blocks.8.0.emb_layers.1.bias", "input_blocks.8.0.out_layers.0.weight", "input_blocks.8.0.out_layers.0.bias", "input_blocks.8.0.out_layers.3.weight", "input_blocks.8.0.out_layers.3.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.qkv.weight", "input_blocks.8.1.qkv.bias", "input_blocks.8.1.proj_out.weight", "input_blocks.8.1.proj_out.bias", "input_blocks.9.0.op.weight", "input_blocks.9.0.op.bias", "input_blocks.10.0.in_layers.0.weight", "input_blocks.10.0.in_layers.0.bias", "input_blocks.10.0.in_layers.2.weight", "input_blocks.10.0.in_layers.2.bias", "input_blocks.10.0.emb_layers.1.weight", "input_blocks.10.0.emb_layers.1.bias", "input_blocks.10.0.out_layers.0.weight", "input_blocks.10.0.out_layers.0.bias", "input_blocks.10.0.out_layers.3.weight", "input_blocks.10.0.out_layers.3.bias", "input_blocks.11.0.in_layers.0.weight", "input_blocks.11.0.in_layers.0.bias", "input_blocks.11.0.in_layers.2.weight", "input_blocks.11.0.in_layers.2.bias", "input_blocks.11.0.emb_layers.1.weight", "input_blocks.11.0.emb_layers.1.bias", "input_blocks.11.0.out_layers.0.weight", "input_blocks.11.0.out_layers.0.bias", "input_blocks.11.0.out_layers.3.weight", "input_blocks.11.0.out_layers.3.bias", "middle_block.0.in_layers.0.weight", "middle_block.0.in_layers.0.bias", "middle_block.0.in_layers.2.weight", "middle_block.0.in_layers.2.bias", "middle_block.0.emb_layers.1.weight", "middle_block.0.emb_layers.1.bias", "middle_block.0.out_layers.0.weight", "middle_block.0.out_layers.0.bias", "middle_block.0.out_layers.3.weight", "middle_block.0.out_layers.3.bias", "middle_block.1.norm.weight", "middle_block.1.norm.bias", "middle_block.1.qkv.weight", "middle_block.1.qkv.bias", "middle_block.1.proj_out.weight", "middle_block.1.proj_out.bias", "middle_block.2.in_layers.0.weight", "middle_block.2.in_layers.0.bias", "middle_block.2.in_layers.2.weight", "middle_block.2.in_layers.2.bias", "middle_block.2.emb_layers.1.weight", "middle_block.2.emb_layers.1.bias", "middle_block.2.out_layers.0.weight", "middle_block.2.out_layers.0.bias", "middle_block.2.out_layers.3.weight", "middle_block.2.out_layers.3.bias", "fea_tran.0.in_layers.0.weight", "fea_tran.0.in_layers.0.bias", "fea_tran.0.in_layers.2.weight", "fea_tran.0.in_layers.2.bias", "fea_tran.0.emb_layers.1.weight", "fea_tran.0.emb_layers.1.bias", "fea_tran.0.out_layers.0.weight", "fea_tran.0.out_layers.0.bias", "fea_tran.0.out_layers.3.weight", "fea_tran.0.out_layers.3.bias", "fea_tran.1.in_layers.0.weight", "fea_tran.1.in_layers.0.bias", "fea_tran.1.in_layers.2.weight", "fea_tran.1.in_layers.2.bias", "fea_tran.1.emb_layers.1.weight", "fea_tran.1.emb_layers.1.bias", "fea_tran.1.out_layers.0.weight", "fea_tran.1.out_layers.0.bias", "fea_tran.1.out_layers.3.weight", "fea_tran.1.out_layers.3.bias", "fea_tran.2.in_layers.0.weight", "fea_tran.2.in_layers.0.bias", "fea_tran.2.in_layers.2.weight", "fea_tran.2.in_layers.2.bias", "fea_tran.2.emb_layers.1.weight", "fea_tran.2.emb_layers.1.bias", "fea_tran.2.out_layers.0.weight", "fea_tran.2.out_layers.0.bias", "fea_tran.2.out_layers.3.weight", "fea_tran.2.out_layers.3.bias", "fea_tran.2.skip_connection.weight", "fea_tran.2.skip_connection.bias", "fea_tran.3.in_layers.0.weight", "fea_tran.3.in_layers.0.bias", "fea_tran.3.in_layers.2.weight", "fea_tran.3.in_layers.2.bias", "fea_tran.3.emb_layers.1.weight", "fea_tran.3.emb_layers.1.bias", "fea_tran.3.out_layers.0.weight", "fea_tran.3.out_layers.0.bias", "fea_tran.3.out_layers.3.weight", "fea_tran.3.out_layers.3.bias", 
"fea_tran.3.skip_connection.weight", "fea_tran.3.skip_connection.bias". Time taken: 17.57sTorch active/reserved: 2515/2600 MiB, Sys VRAM: 4125/12288 MiB (33.57%)

pkuliyi2015 commented 1 year ago

Please select the correct SR module ckpt file in the script. Its size is around 400 MB.
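A quick way to sanity-check that size, as a sketch with a placeholder path:

```python
import os

ckpt_path = "models/StableSR/webui_768v_139.ckpt"  # placeholder path
size_mib = os.path.getsize(ckpt_path) / (1024 ** 2)
print(f"{ckpt_path}: {size_mib:.0f} MiB")
# Roughly 400 MiB is the expected SR module size; a file of 2 GiB or more
# is likely a full Stable Diffusion checkpoint selected by mistake.
```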

WSJUSA commented 1 year ago

Same issue; I think I have the correct SD and SR ckpts. (Screenshot attached: Screenshot_2023-07-08_12-13-48)

JackeyDeng commented 1 year ago

Hello, I ran into the same problem when loading my self-trained StableSR model. Have you solved it?

202030481266 commented 1 year ago

It's OK. You should use the webui_*.ckpt.

WSJUSA commented 1 year ago

> It's OK. You should use the webui_*.ckpt.

Do you mean that for the SR Model I should use webui_768v_139.ckpt instead of stablesr_768v_000139.ckpt?

202030481266 commented 1 year ago

@WSJUSA yeap
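To confirm which of the two files carries the SR module weights, a small comparison sketch (the paths are placeholders; assumes both files have been downloaded and PyTorch is installed):

```python
import torch

# A few keys the extension's EncoderUNetModelWT expects, taken from the
# error message at the top of this issue.
expected = {
    "time_embed.0.weight",
    "input_blocks.0.0.weight",
    "fea_tran.3.skip_connection.weight",
}

# Placeholder paths for the two files discussed in this thread.
for path in ("models/StableSR/webui_768v_139.ckpt",
             "models/StableSR/stablesr_768v_000139.ckpt"):
    sd = torch.load(path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # unwrap nested checkpoints if present
    missing = expected - set(sd)
    print(path, "-> missing:", missing or "none")
```

Under this sketch the webui_*.ckpt should report no missing keys, while the full training checkpoint presumably stores the same weights under different prefixes, which would be why only the extracted webui file loads into the extension.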