When I run the training code, it prompts:

ddpm.py:163: UserWarning: Error(s) in loading state_dict for LatentImages2ImageDiffusion:
	size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 5, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 11, 3, 3]).
	size mismatch for model.diffusion_model.out.2.weight: copying a param with shape torch.Size([4, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 320, 3, 3]).
	size mismatch for model.diffusion_model.out.2.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([8]).
warnings.warn('Error(s) in loading state_dict for {}:\n\t{}'.format(
Restored from models/stable_diffusion/512-depth-ema.ckpt with 392 missing and 658 unexpected keys
I don't think I downloaded the checkpoint or placed it incorrectly, and I don't know how to solve this problem. Has anyone successfully run the training part? I hope someone can answer this question.
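The size mismatches suggest the current model expects more input channels (11 vs. 5 in the checkpoint) and more output channels (8 vs. 4) than the checkpoint provides, so the checkpoint's conv weights cannot be copied as-is. A common workaround for this kind of mismatch is to widen the checkpoint tensor by zero-initializing the extra channels before loading with `strict=False`. The sketch below is a hypothetical helper, not part of this repository; whether zero-padding is the right initialization for these specific layers depends on the model:

```python
import torch

def widen_conv_in_channels(weight: torch.Tensor, new_in_channels: int) -> torch.Tensor:
    """Zero-pad a conv weight of shape [out, in, kH, kW] along the
    input-channel dimension. The extra channels start at zero, so the
    widened layer initially behaves like the original on the old channels.
    Hypothetical helper for illustration only.
    """
    out_ch, in_ch, kh, kw = weight.shape
    if new_in_channels < in_ch:
        raise ValueError("can only widen, not shrink, the input channels")
    padded = torch.zeros(out_ch, new_in_channels, kh, kw, dtype=weight.dtype)
    padded[:, :in_ch] = weight
    return padded

# Example: adapt the checkpoint's [320, 5, 3, 3] first-conv weight to the
# model's expected [320, 11, 3, 3] before load_state_dict(..., strict=False).
ckpt_w = torch.randn(320, 5, 3, 3)
widened = widen_conv_in_channels(ckpt_w, 11)
print(widened.shape)  # torch.Size([320, 11, 3, 3])
```

One would patch the offending keys (e.g. `model.diffusion_model.input_blocks.0.0.weight`) in the loaded state dict this way, then call `load_state_dict(sd, strict=False)`; the output-layer mismatch (`out.2.weight`/`out.2.bias`) would need an analogous adjustment along the output-channel dimension.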