nerdyrodent / VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

Wikiart checkpoint issue #23

Closed scottmudge closed 3 years ago

scottmudge commented 3 years ago

If I specify the wikiart_16384 checkpoint, the following error occurs:

Traceback (most recent call last):
  File "C:\Development\ml\VQGAN-CLIP\generate.py", line 364, in <module>
    model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
  File "C:\Development\ml\VQGAN-CLIP\generate.py", line 338, in load_vqgan_model
    model.init_from_ckpt(checkpoint_path)
  File "C:\Development\ml\VQGAN-CLIP\taming-transformers\taming\models\vqgan.py", line 52, in init_from_ckpt
    self.load_state_dict(sd, strict=False)
  File "C:\ProgramData\Miniconda3\envs\vqgan\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VQModel:
        size mismatch for loss.discriminator.main.8.weight: copying a param with shape torch.Size([512, 256, 4, 4]) from checkpoint, the shape in current model is torch.Size([1, 256, 4, 4]).

Is there a way to specify the initial model shape?

scottmudge commented 3 years ago

Never mind, I needed to specify the YAML config file associated with the checkpoint.

jxnasdev commented 2 years ago

How did you solve the problem? Could you share the command you used?

centuryglass commented 2 years ago

@jxnasdev: Use these arguments: `-conf checkpoints/wikiart_16384.yaml -ckpt checkpoints/wikiart_16384.ckpt`
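
For context, a full invocation might look like the following. This is a sketch: the `-p` prompt flag matches the repository's README, but the prompt text and the assumption that both wikiart files were downloaded into `checkpoints/` are illustrative.

```shell
# Sketch: assumes wikiart_16384.yaml and wikiart_16384.ckpt were
# downloaded into the checkpoints/ directory beforehand.
python generate.py \
  -p "An impressionist landscape" \
  -conf checkpoints/wikiart_16384.yaml \
  -ckpt checkpoints/wikiart_16384.ckpt
```

The key point from the thread is that the `.ckpt` and `.yaml` files must come from the same model release; mixing a wikiart checkpoint with another model's config produces the `size mismatch` error above.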