nerdyrodent / VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

Error build_model #97

Closed cgob closed 2 years ago

cgob commented 2 years ago

Hello,

I have an issue loading the CLIP model. I have tried different arguments but I always get the same error.

(vqgan) python generate.py -p "Apple on a table"

Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips\vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Restored from checkpoints/vqgan_imagenet_f16_16384.ckpt
Traceback (most recent call last):
  File "VQGAN-CLIP\generate.py", line 549, in <module>
    perceptor = clip.load(args.clip_model, jit=jit)[0].eval().requires_grad_(False).to(device)
  File "VQGAN-CLIP\CLIP\clip\clip.py", line 135, in load
    model = build_model(state_dict or model.state_dict()).to(device)
  File "VQGAN-CLIP\CLIP\clip\model.py", line 396, in build_model
    vit = "visual.proj" in state_dict
  File "D:\cond\envs\vqgan\lib\site-packages\torch\jit\_script.py", line 733, in __contains__
    return self.forward_magic_method("__contains__", key)
  File "D:\cond\envs\vqgan\lib\site-packages\torch\jit\_script.py", line 720, in forward_magic_method
    raise NotImplementedError()
NotImplementedError

Thank you

Cédric

nerdyrodent commented 2 years ago

Which PyTorch did you install?

cgob commented 2 years ago

I used: pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

nerdyrodent commented 2 years ago

jit should be false then, so it shouldn't be that. My guess would be that a requirement isn't installed.
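For context, recent revisions of generate.py gate `jit` on the PyTorch version, since CLIP's TorchScript archive only loads cleanly on PyTorch < 1.8. A minimal sketch of that check (the helper name is illustrative, not the actual code in the repo):

```python
def use_jit(torch_version: str) -> bool:
    # Sketch of the version gate: on PyTorch >= 1.8, jit=False should be
    # passed to clip.load() so build_model() receives a plain state dict
    # rather than a TorchScript module (whose __contains__ raises
    # NotImplementedError, as in the traceback above).
    return float(torch_version[:3]) < 1.8

print(use_jit("1.9.0+cu111"))  # False -> clip.load(..., jit=False)
print(use_jit("1.7.1"))        # True
```

With torch 1.9.0 installed, this yields `jit=False`, so the TorchScript code path should not be reached at all.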