Kinyugo / consistency_models

A mini-library for training consistency models.
https://arxiv.org/abs/2303.01469
MIT License
208 stars 21 forks

Cannot load checkpoint #3

Closed: aarontan-git closed this issue 11 months ago

aarontan-git commented 1 year ago

I am trying to load the checkpoint "unet_butterflies256_100k.pth", but `load_state_dict` fails with the following error:

RuntimeError: Error(s) in loading state_dict for UNet:
    Missing key(s) in state_dict: "model_fn.down_blocks.4.attentions.0.to_q.weight", "model_fn.down_blocks.4.attentions.0.to_q.bias", "model_fn.down_blocks.4.attentions.0.to_k.weight", "model_fn.down_blocks.4.attentions.0.to_k.bias", "model_fn.down_blocks.4.attentions.0.to_v.weight", "model_fn.down_blocks.4.attentions.0.to_v.bias", "model_fn.down_blocks.4.attentions.0.to_out.0.weight", "model_fn.down_blocks.4.attentions.0.to_out.0.bias", "model_fn.down_blocks.4.attentions.1.to_q.weight", "model_fn.down_blocks.4.attentions.1.to_q.bias", "model_fn.down_blocks.4.attentions.1.to_k.weight", "model_fn.down_blocks.4.attentions.1.to_k.bias", "model_fn.down_blocks.4.attentions.1.to_v.weight", "model_fn.down_blocks.4.attentions.1.to_v.bias", "model_fn.down_blocks.4.attentions.1.to_out.0.weight", "model_fn.down_blocks.4.attentions.1.to_out.0.bias", "model_fn.up_blocks.1.attentions.0.to_q.weight", "model_fn.up_blocks.1.attentions.0.to_q.bias", "model_fn.up_blocks.1.attentions.0.to_k.weight", "model_fn.up_blocks.1.attentions.0.to_k.bias", "model_fn.up_blocks.1.attentions.0.to_v.weight", "model_fn.up_blocks.1.attentions.0.to_v.bias", "model_fn.up_blocks.1.attentions.0.to_out.0.weight", "model_fn.up_blocks.1.attentions.0.to_out.0.bias", "model_fn.up_blocks.1.attentions.1.to_q.weight", "model_fn.up_blocks.1.attentions.1.to_q.bias", "model_fn.up_blocks.1.attentions.1.to_k.weight", "model_fn.up_blocks.1.attentions.1.to_k.bias", "model_fn.up_blocks.1.attentions.1.to_v.weight", "model_fn.up_blocks.1.attentions.1.to_v.bias", "model_fn.up_blocks.1.attentions.1.to_out.0.weight", "model_fn.up_blocks.1.attentions.1.to_out.0.bias", "model_fn.up_blocks.1.attentions.2.to_q.weight", "model_fn.up_blocks.1.attentions.2.to_q.bias", "model_fn.up_blocks.1.attentions.2.to_k.weight", "model_fn.up_blocks.1.attentions.2.to_k.bias", "model_fn.up_blocks.1.attentions.2.to_v.weight", "model_fn.up_blocks.1.attentions.2.to_v.bias", "model_fn.up_blocks.1.attentions.2.to_out.0.weight", 
"model_fn.up_blocks.1.attentions.2.to_out.0.bias", "model_fn.mid_block.attentions.0.to_q.weight", "model_fn.mid_block.attentions.0.to_q.bias", "model_fn.mid_block.attentions.0.to_k.weight", "model_fn.mid_block.attentions.0.to_k.bias", "model_fn.mid_block.attentions.0.to_v.weight", "model_fn.mid_block.attentions.0.to_v.bias", "model_fn.mid_block.attentions.0.to_out.0.weight", "model_fn.mid_block.attentions.0.to_out.0.bias". 
    Unexpected key(s) in state_dict: "model_fn.down_blocks.4.attentions.0.query.weight", "model_fn.down_blocks.4.attentions.0.query.bias", "model_fn.down_blocks.4.attentions.0.key.weight", "model_fn.down_blocks.4.attentions.0.key.bias", "model_fn.down_blocks.4.attentions.0.value.weight", "model_fn.down_blocks.4.attentions.0.value.bias", "model_fn.down_blocks.4.attentions.0.proj_attn.weight", "model_fn.down_blocks.4.attentions.0.proj_attn.bias", "model_fn.down_blocks.4.attentions.1.query.weight", "model_fn.down_blocks.4.attentions.1.query.bias", "model_fn.down_blocks.4.attentions.1.key.weight", "model_fn.down_blocks.4.attentions.1.key.bias", "model_fn.down_blocks.4.attentions.1.value.weight", "model_fn.down_blocks.4.attentions.1.value.bias", "model_fn.down_blocks.4.attentions.1.proj_attn.weight", "model_fn.down_blocks.4.attentions.1.proj_attn.bias", "model_fn.up_blocks.1.attentions.0.query.weight", "model_fn.up_blocks.1.attentions.0.query.bias", "model_fn.up_blocks.1.attentions.0.key.weight", "model_fn.up_blocks.1.attentions.0.key.bias", "model_fn.up_blocks.1.attentions.0.value.weight", "model_fn.up_blocks.1.attentions.0.value.bias", "model_fn.up_blocks.1.attentions.0.proj_attn.weight", "model_fn.up_blocks.1.attentions.0.proj_attn.bias", "model_fn.up_blocks.1.attentions.1.query.weight", "model_fn.up_blocks.1.attentions.1.query.bias", "model_fn.up_blocks.1.attentions.1.key.weight", "model_fn.up_blocks.1.attentions.1.key.bias", "model_fn.up_blocks.1.attentions.1.value.weight", "model_fn.up_blocks.1.attentions.1.value.bias", "model_fn.up_blocks.1.attentions.1.proj_attn.weight", "model_fn.up_blocks.1.attentions.1.proj_attn.bias", "model_fn.up_blocks.1.attentions.2.query.weight", "model_fn.up_blocks.1.attentions.2.query.bias", "model_fn.up_blocks.1.attentions.2.key.weight", "model_fn.up_blocks.1.attentions.2.key.bias", "model_fn.up_blocks.1.attentions.2.value.weight", "model_fn.up_blocks.1.attentions.2.value.bias", "model_fn.up_blocks.1.attentions.2.proj_attn.weight", 
"model_fn.up_blocks.1.attentions.2.proj_attn.bias", "model_fn.mid_block.attentions.0.query.weight", "model_fn.mid_block.attentions.0.query.bias", "model_fn.mid_block.attentions.0.key.weight", "model_fn.mid_block.attentions.0.key.bias", "model_fn.mid_block.attentions.0.value.weight", "model_fn.mid_block.attentions.0.value.bias", "model_fn.mid_block.attentions.0.proj_attn.weight", "model_fn.mid_block.attentions.0.proj_attn.bias".
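The missing/unexpected key pairs line up exactly (`query` ↔ `to_q`, `key` ↔ `to_k`, `value` ↔ `to_v`, `proj_attn` ↔ `to_out.0`), which suggests the checkpoint was saved with an older diffusers version, before the attention block parameters were renamed. One possible workaround (untested against this repo; the exact checkpoint structure and model variable names below are assumptions) is to rename the keys in the loaded state dict before calling `load_state_dict`:

```python
# Hypothetical workaround sketch: rename legacy diffusers attention
# parameter names to the newer ones, based on the key pairs in the
# error above. Not an official fix from the repo authors.

# Legacy name fragment -> new name fragment, as seen in the error message.
RENAMES = {
    ".query.": ".to_q.",
    ".key.": ".to_k.",
    ".value.": ".to_v.",
    ".proj_attn.": ".to_out.0.",
}

def remap_attention_keys(state_dict):
    """Return a copy of state_dict with legacy attention keys renamed."""
    remapped = {}
    for key, value in state_dict.items():
        for old, new in RENAMES.items():
            if old in key:
                key = key.replace(old, new)
        remapped[key] = value
    return remapped
```

With this, loading might look like `unet.load_state_dict(remap_attention_keys(torch.load("unet_butterflies256_100k.pth")))` (assuming the file is a bare state dict; if it is wrapped in a larger checkpoint dict, pull the state dict out first). Alternatively, pinning diffusers to the version the checkpoint was trained with should avoid the rename entirely.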