How can I train with fewer parameters? I tried changing:
unet_config:
num_res_blocks: 2
to:
unet_config:
num_res_blocks: 1
and got errors. So I changed this as well in
first_stage_config:
num_res_blocks: 1
but I still get an error:
missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
File "/mnt/data_1/nik/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.5.0.in_layers.2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([1280, 640, 3, 3]).
size mismatch for model.diffusion_model.input_blocks.5.0.in_layers.2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([1280]).
So, how can I change these parameters? The only thing I can change is the resolution, from 256 down to 64; if I set it to 32 I get errors as well...
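The size-mismatch errors happen because `load_state_dict` tries to copy every checkpoint tensor into the resized model. One common workaround (not something the repo provides; all names below are illustrative) is to filter the checkpoint's state_dict first, keeping only tensors whose name and shape match the current model, and load the rest randomly initialized:

```python
import torch
import torch.nn as nn

def load_matching_weights(model, checkpoint_sd):
    """Copy only the checkpoint tensors whose name AND shape match the model;
    everything else keeps its random initialization."""
    model_sd = model.state_dict()
    filtered = {k: v for k, v in checkpoint_sd.items()
                if k in model_sd and model_sd[k].shape == v.shape}
    model.load_state_dict(filtered, strict=False)
    return sorted(filtered)

# Toy demo: a "pretrained" net and a shrunken net that share only the
# first layer's shape, standing in for a resized diffusion UNet.
pretrained = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 8))
shrunken = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
kept = load_matching_weights(shrunken, pretrained.state_dict())
print(kept)  # only the first layer's weight and bias survive the shape filter
```

Note that any layer skipped this way loses its pretrained weights entirely, so with a change as large as `num_res_blocks` or `channel_mult` you keep only a fraction of the checkpoint and effectively fine-tune a partially random model.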
As I understand it, the initial checkpoint was Stable Diffusion v1.5, which was trained with a fixed set of architecture parameters. So can I skip the initial checkpoint entirely? How do I disable it and train from scratch? Or is there somewhere I can get initial checkpoints trained with different architecture parameters, like num_res_blocks or channel_mult?
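If I understand the latent-diffusion config format correctly, the pretrained weights are only loaded when the config provides a checkpoint path, so commenting it out should give a randomly initialized model whose architecture fields you can change freely. A sketch of what I mean (the exact key names and paths are assumptions about your config, not verified):

```yaml
model:
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    # ckpt_path: models/ldm/sd-v1-5.ckpt   # comment out to train from scratch
    unet_config:
      params:
        num_res_blocks: 1   # free to change once no checkpoint is loaded
```

Training from scratch at this scale needs far more data and compute than fine-tuning, which is why ready-made checkpoints with nonstandard num_res_blocks or channel_mult are hard to find.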