nerdyrodent / VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

Model Not Loading #31

Closed PurplePanther closed 3 years ago

PurplePanther commented 3 years ago

What do these lines mean and why aren't they working?

```
FileNotFoundError                         Traceback (most recent call last)
<ipython-input> in <module>()
      3 #@markdown Once this has been run successfully you only need to run parameters and then the program to execute with new parameters
      4 device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
----> 5 model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
      6 perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
      7

/usr/local/lib/python3.7/dist-packages/omegaconf/omegaconf.py in load(file_)
    181
    182     if isinstance(file_, (str, pathlib.Path)):
--> 183         with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
    184             obj = yaml.load(f, Loader=get_yaml_loader())
    185     elif getattr(file_, "read", None):

FileNotFoundError: [Errno 2] No such file or directory: '/content/vqgan_imagenet_f16_16384.yaml'
```
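The traceback itself points at the cause: `omegaconf` tries to open the VQGAN config file at `/content/vqgan_imagenet_f16_16384.yaml`, and it isn't there. A minimal pre-flight check sketch (file paths assumed from the traceback and the repo's `checkpoints` layout, not part of the project itself):

```python
import os

def missing_model_files(config_path, checkpoint_path):
    """Return the required model files that are absent on disk."""
    return [p for p in (config_path, checkpoint_path) if not os.path.isfile(p)]

# Hypothetical paths matching the README's checkpoints/ layout.
missing = missing_model_files(
    "checkpoints/vqgan_imagenet_f16_16384.yaml",
    "checkpoints/vqgan_imagenet_f16_16384.ckpt",
)
if missing:
    # These are exactly the files omegaconf fails to open in the traceback.
    print("Download these before running:", missing)
```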
zhanghongyong123456 commented 3 years ago

> What do these lines mean and why aren't they working?
>
> FileNotFoundError: [Errno 2] No such file or directory: '/content/vqgan_imagenet_f16_16384.yaml'

You can download the models and the configuration files ahead of time and place them in a specific location, using `download_models.sh`.
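The manual alternative to `download_models.sh` looks roughly like this; the URLs are placeholders, so substitute the actual links from the script or the repo README:

```shell
# Create the folder the scripts expect the model files in.
mkdir -p checkpoints

# Placeholder URLs - copy the real ones from download_models.sh.
# curl -L -o checkpoints/vqgan_imagenet_f16_16384.yaml "<config-url>"
# curl -L -o checkpoints/vqgan_imagenet_f16_16384.ckpt "<checkpoint-url>"

ls checkpoints
```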

microraptor commented 3 years ago

As @zhanghongyong123456 said, you have to download the .yaml and .ckpt files for the imagenet_f16_16384 pretrained model and place them in a folder called `checkpoints`, as described in the repo's README. For me the normal download links didn't work, which I described in #33.

nerdyrodent commented 3 years ago

Mirrors have now been removed.