d8ahazard / sd_dreambooth_extension

[Bug]: Cannot load unet because keys are missing during preview generation #1268

Closed: Tsubashi closed this issue 1 year ago

Tsubashi commented 1 year ago

What happened?

After upgrading to 1.0.14, training runs fail when attempting to create preview images. The exception thrown is `Cannot load from <temp_dir>\unet because the following keys are missing:`, followed by a long list of keys (see the full error text below for details).

Full Error Text Exception training model: 'Cannot load from C:\Users\cscott\AppData\Local\Temp\tmpczgbs_9y\unet because the following keys are missing: up_blocks.0.resnets.0.time_emb_proj.weight, down_blocks.2.resnets.0.conv2.weight, down_blocks.2.resnets.1.conv2.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_v.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_q.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight, up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.3.resnets.1.time_emb_proj.weight, down_blocks.1.resnets.1.conv2.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_k.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.weight, up_blocks.3.resnets.0.conv1.weight, up_blocks.2.resnets.1.conv1.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.weight, up_blocks.3.resnets.1.conv1.weight, up_blocks.1.resnets.0.conv2.bias, down_blocks.0.resnets.0.conv1.bias, up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, mid_block.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.2.resnets.2.time_emb_proj.weight, mid_block.resnets.1.conv2.weight, mid_block.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn1.to_v.weight, up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_k.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.weight, up_blocks.0.resnets.2.conv_shortcut.bias, up_blocks.2.resnets.2.conv_shortcut.weight, up_blocks.3.resnets.0.conv_shortcut.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.weight, 
down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.1.resnets.1.conv1.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_q.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.weight, up_blocks.2.resnets.0.conv2.bias, up_blocks.3.resnets.0.conv1.bias, up_blocks.3.attentions.2.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.2.resnets.0.time_emb_proj.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.3.resnets.2.conv1.bias, up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_out.0.bias, up_blocks.3.attentions.2.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.0.resnets.1.conv1.weight, mid_block.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.2.resnets.0.time_emb_proj.bias, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.0.resnets.1.time_emb_proj.bias, mid_block.attentions.0.transformer_blocks.0.attn2.to_v.weight, up_blocks.0.resnets.1.conv_shortcut.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight, up_blocks.0.resnets.1.conv2.weight, down_blocks.0.resnets.1.conv1.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, mid_block.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.0.resnets.1.time_emb_proj.bias, mid_block.resnets.0.time_emb_proj.bias, up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_k.weight, up_blocks.0.resnets.2.conv2.bias, 
up_blocks.3.attentions.2.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.1.resnets.2.conv2.bias, down_blocks.1.resnets.0.time_emb_proj.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_v.weight, up_blocks.3.resnets.2.conv2.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_v.weight, up_blocks.1.resnets.1.conv1.weight, mid_block.resnets.1.conv2.bias, up_blocks.3.resnets.1.conv2.bias, mid_block.resnets.0.conv2.bias, up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.0.resnets.0.conv_shortcut.bias, up_blocks.3.resnets.2.time_emb_proj.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn1.to_v.weight, down_blocks.3.resnets.0.time_emb_proj.weight, up_blocks.0.resnets.2.conv_shortcut.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.3.resnets.1.conv2.weight, down_blocks.2.resnets.0.conv_shortcut.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_k.weight, down_blocks.0.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.resnets.1.conv2.weight, up_blocks.3.resnets.1.conv1.bias, mid_block.resnets.1.time_emb_proj.bias, mid_block.attentions.0.transformer_blocks.0.attn1.to_q.weight, down_blocks.3.resnets.0.time_emb_proj.bias, up_blocks.3.resnets.2.conv_shortcut.bias, up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.2.resnets.2.conv1.bias, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_v.weight, up_blocks.3.resnets.2.conv1.weight, 
down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_q.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_v.weight, down_blocks.3.resnets.1.time_emb_proj.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight, down_blocks.1.resnets.1.time_emb_proj.weight, up_blocks.2.resnets.0.conv2.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn1.to_k.weight, up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.3.resnets.2.conv_shortcut.weight, down_blocks.2.resnets.1.conv1.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight, down_blocks.0.resnets.0.conv2.bias, up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, down_blocks.2.resnets.0.conv_shortcut.bias, up_blocks.2.resnets.2.conv1.weight, up_blocks.3.resnets.0.conv2.bias, up_blocks.1.resnets.1.conv1.bias, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, down_blocks.0.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_v.weight, down_blocks.3.resnets.1.conv1.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_k.weight, up_blocks.3.resnets.0.time_emb_proj.bias, up_blocks.0.resnets.2.time_emb_proj.bias, up_blocks.1.resnets.1.time_emb_proj.weight, down_blocks.2.resnets.0.conv1.bias, down_blocks.1.resnets.1.conv1.bias, up_blocks.0.resnets.0.time_emb_proj.bias, up_blocks.0.resnets.2.time_emb_proj.weight, up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q.weight, 
up_blocks.3.attentions.2.transformer_blocks.0.attn1.to_q.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.2.resnets.0.conv_shortcut.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_k.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.2.resnets.2.conv_shortcut.bias, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.2.resnets.1.conv2.bias, up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.2.resnets.0.conv1.weight, up_blocks.0.resnets.1.conv1.bias, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight, down_blocks.2.resnets.0.time_emb_proj.weight, mid_block.resnets.0.time_emb_proj.weight, mid_block.attentions.0.transformer_blocks.0.attn1.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.2.resnets.1.time_emb_proj.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.1.resnets.0.conv1.bias, down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q.weight, mid_block.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_k.weight, up_blocks.2.attentions.2.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn2.to_v.weight, down_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.weight, up_blocks.1.resnets.1.conv2.weight, up_blocks.3.resnets.1.time_emb_proj.bias, up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_k.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight, 
down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.weight, up_blocks.0.resnets.0.conv2.weight, up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.weight, mid_block.attentions.0.transformer_blocks.0.attn2.to_k.weight, up_blocks.1.resnets.1.conv_shortcut.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_k.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.1.resnets.2.conv_shortcut.bias, up_blocks.1.resnets.0.conv_shortcut.bias, down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, down_blocks.0.resnets.0.conv2.weight, up_blocks.3.resnets.1.conv_shortcut.weight, up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.weight, mid_block.resnets.1.time_emb_proj.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.weight, up_blocks.0.resnets.0.conv2.bias, up_blocks.3.attentions.1.transformer_blocks.0.attn1.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.2.resnets.0.conv_shortcut.bias, up_blocks.2.attentions.2.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.2.resnets.1.conv_shortcut.weight, down_blocks.3.resnets.1.conv2.bias, up_blocks.2.resnets.1.time_emb_proj.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_v.weight, down_blocks.0.resnets.1.time_emb_proj.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_q.weight, up_blocks.2.resnets.0.conv1.bias, up_blocks.0.resnets.1.conv_shortcut.weight, down_blocks.0.resnets.0.time_emb_proj.weight, up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, down_blocks.0.resnets.1.conv2.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight, down_blocks.2.resnets.0.conv2.bias, up_blocks.2.resnets.1.conv1.bias, down_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, 
up_blocks.1.resnets.0.time_emb_proj.weight, up_blocks.2.resnets.2.time_emb_proj.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.0.resnets.1.conv2.bias, up_blocks.1.resnets.0.time_emb_proj.bias, up_blocks.3.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.1.resnets.0.conv_shortcut.weight, down_blocks.3.resnets.0.conv2.weight, down_blocks.0.resnets.0.time_emb_proj.bias, up_blocks.3.resnets.1.conv_shortcut.bias, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight, up_blocks.1.resnets.2.conv2.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, up_blocks.1.resnets.1.conv2.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight, down_blocks.0.resnets.1.conv1.bias, up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_k.weight, down_blocks.0.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.3.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight, down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, down_blocks.1.resnets.1.conv2.bias, up_blocks.1.resnets.2.conv1.weight, up_blocks.1.resnets.2.conv_shortcut.weight, up_blocks.0.resnets.2.conv1.weight, up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.weight, mid_block.resnets.1.conv1.bias, up_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, down_blocks.1.resnets.0.conv1.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.3.attentions.0.transformer_blocks.0.attn1.to_v.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight, down_blocks.2.resnets.1.conv2.weight, 
up_blocks.3.resnets.0.time_emb_proj.weight, up_blocks.0.resnets.0.conv_shortcut.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, up_blocks.3.attentions.1.transformer_blocks.0.attn1.to_q.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.weight, up_blocks.1.resnets.1.time_emb_proj.bias, down_blocks.0.resnets.0.conv1.weight, down_blocks.1.resnets.0.conv_shortcut.weight, down_blocks.2.resnets.1.time_emb_proj.bias, mid_block.resnets.0.conv1.bias, up_blocks.3.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_k.weight, down_blocks.3.resnets.0.conv1.bias, up_blocks.0.resnets.0.conv1.weight, up_blocks.3.resnets.0.conv2.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, mid_block.resnets.0.conv2.weight, up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_q.weight, up_blocks.3.resnets.2.conv2.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.weight, down_blocks.3.resnets.1.conv2.weight, up_blocks.1.resnets.0.conv2.weight, down_blocks.1.resnets.0.conv_shortcut.bias, up_blocks.1.resnets.2.time_emb_proj.weight, mid_block.attentions.0.transformer_blocks.0.attn1.to_v.weight, up_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, down_blocks.3.resnets.0.conv2.bias, down_blocks.1.resnets.0.time_emb_proj.bias, up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, mid_block.resnets.1.conv1.weight, mid_block.resnets.0.conv1.weight, up_blocks.1.resnets.0.conv1.weight, down_blocks.1.resnets.1.time_emb_proj.bias, up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.weight, 
down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_v.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.bias, down_blocks.2.resnets.1.conv1.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.0.resnets.1.conv2.bias, up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_k.weight, down_blocks.3.resnets.1.conv1.weight, up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight, up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight, up_blocks.3.attentions.2.transformer_blocks.0.attn2.to_q.weight, down_blocks.0.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.3.resnets.0.conv_shortcut.weight, down_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.1.resnets.2.time_emb_proj.bias, up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_v.weight, up_blocks.3.resnets.2.time_emb_proj.weight, up_blocks.3.attentions.1.transformer_blocks.0.attn2.to_k.weight, up_blocks.2.resnets.1.time_emb_proj.bias, up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.weight, up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.weight, down_blocks.3.resnets.1.time_emb_proj.bias, up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.weight, down_blocks.2.resnets.0.conv1.weight, up_blocks.2.resnets.2.conv2.bias, down_blocks.3.resnets.0.conv1.weight, up_blocks.0.resnets.2.conv2.weight, up_blocks.2.attentions.1.transformer_blocks.0.attn1.to_v.weight, mid_block.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, 
up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.2.attentions.2.transformer_blocks.0.attn1.to_out.0.weight, up_blocks.1.resnets.1.conv_shortcut.bias, up_blocks.3.attentions.2.transformer_blocks.0.ff.net.0.proj.weight, mid_block.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.weight, down_blocks.1.resnets.0.conv2.weight, up_blocks.2.resnets.2.conv2.weight, up_blocks.3.attentions.0.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.2.resnets.0.time_emb_proj.bias, up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_q.weight, up_blocks.0.resnets.0.conv1.bias, up_blocks.1.resnets.2.conv1.bias, up_blocks.1.resnets.0.conv1.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.weight, up_blocks.0.resnets.2.conv1.bias, up_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.bias, up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.weight, up_blocks.2.attentions.0.transformer_blocks.0.attn1.to_v.weight, up_blocks.2.resnets.1.conv_shortcut.bias, up_blocks.0.resnets.1.time_emb_proj.weight, up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.bias, up_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.weight, down_blocks.1.resnets.0.conv2.bias, down_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, up_blocks.3.attentions.1.transformer_blocks.0.ff.net.0.proj.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_v.weight. Please make sure to pass `low_cpu_mem_usage=False` and `device_map=None` if you want to randomly initialize those weights or else make sure your checkpoint file is correct.'.
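In plain terms, the error means the UNet state dict written to the temp directory is missing parameters that the model class expects, so the loader refuses to initialize them randomly unless told to. As a rough illustration (the key names below are taken from the error above, but the checkpoint contents are made up), the diagnosis amounts to a set difference between the keys the architecture defines and the keys the saved checkpoint actually contains:

```python
# Hypothetical illustration of how a "missing keys" error arises:
# the model architecture defines a set of parameter names, and the
# saved checkpoint only contains a subset of them.

# Keys the UNet architecture expects (a tiny sample from the error above):
expected_keys = {
    "mid_block.resnets.0.conv1.weight",
    "mid_block.resnets.0.conv1.bias",
    "mid_block.resnets.0.time_emb_proj.weight",
}

# Keys actually present in a (made-up) partially written checkpoint,
# as with the temp-dir unet in this bug:
checkpoint_keys = {
    "mid_block.resnets.0.conv1.weight",
}

# The loader reports everything expected but not saved:
missing = sorted(expected_keys - checkpoint_keys)
print(missing)
```

This is also why the error message suggests `low_cpu_mem_usage=False` and `device_map=None`: those options let diffusers fall back to randomly initializing the missing weights instead of raising, though in this bug the real problem is that the checkpoint in the temp directory was written incompletely.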

Steps to reproduce the problem

  1. In the Dreambooth tab, create a new model to train. I used SDv1.5 as my source checkpoint.
  2. Set the save preview frequency to 1. (Any value greater than 0 will do; 1 just triggers soonest.)
  3. Save the settings and click Train.
  4. Wait until the first epoch completes and observe the error.

Commit and libraries

Initializing Dreambooth
Dreambooth revision: b396af26b7906aa82a29d8847e756396cb2c28fb
[+] xformers version 0.0.20 installed.
[+] torch version 2.0.1+cu118 installed.
[+] torchvision version 0.15.2+cu118 installed.
[+] accelerate version 0.19.0 installed.
[+] diffusers version 0.16.1 installed.
[+] transformers version 4.29.2 installed.
[+] bitsandbytes version 0.35.4 installed.

Command Line Arguments

COMMANDLINE_ARGS=--listen --server-name 0.0.0.0 --skip-install --enable-insecure-extension-access --xformers
REQS_FILE=.\extensions\sd_dreambooth_extension\requirements.txt

Console logs

https://pastebin.com/BPzeyVY8

zedorion commented 1 year ago

I've been having this same issue since the update but couldn't find a fix. Even reverting to previous commits kept throwing this error or others. The only version I found that works is the last update from March 31st, 1.0.13, so I'll keep using that until this gets a fix.

Tsubashi commented 1 year ago

Same here. Reverting to 1.0.13 worked, so I've stuck with that for now.

zedorion commented 1 year ago

I saw they did more updates, so I went ahead and tried it. The original error stopped popping up, but it got replaced by the same issue as https://github.com/d8ahazard/sd_dreambooth_extension/issues/1266#issue-1741256791, along with the sample images either not being generated at all or coming out as pure noise.

So if you're still having luck with 1.0.13, keep at it for now.

Tsubashi commented 1 year ago

Indeed, this appears to be fixed in #1267.