CompVis / stable-diffusion

A latent text-to-image diffusion model
https://ommer-lab.com/research/latent-diffusion-models/

Can't load the model for 'openai/clip-vit-large-patch14'. #436

Open lichao252244354 opened 2 years ago

lichao252244354 commented 2 years ago

Running `python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms` fails with this error:

Can't load the model for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack

Could you help me fix this bug?
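The error message names two distinct causes: a local directory shadowing the Hub repo id, or a failed download. A minimal diagnostic sketch (run from the directory you launch `txt2img.py` in; the repo id and weight-file names come straight from the error text):

```python
import os

# the weight-file names transformers looks for, per the error message
WEIGHT_FILES = ["pytorch_model.bin", "tf_model.h5", "model.ckpt", "flax_model.msgpack"]

def diagnose(repo_id: str) -> str:
    # transformers raises this error when it finds neither a usable local
    # directory named like the repo id nor a downloadable copy on the Hub
    if os.path.isdir(repo_id):
        found = [f for f in WEIGHT_FILES
                 if os.path.isfile(os.path.join(repo_id, f))]
        if found:
            return f"local directory exists and has weights: {found}"
        return "a local directory shadows the Hub id but has no weight file"
    return "no shadowing local directory; likely a network or cache problem"

print(diagnose("openai/clip-vit-large-patch14"))
```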

lxlde commented 1 year ago

Download it from https://huggingface.co/openai/clip-vit-large-patch14.
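If the browser download works but the script cannot fetch the files, one way to script the download is to build the raw-file URLs directly. A sketch (stdlib only; the file list matches the five tokenizer files shown later in this thread, and the `/resolve/` URL pattern is the one the model page's download links use):

```python
REPO = "openai/clip-vit-large-patch14"
# tokenizer files from the model page; grab pytorch_model.bin as well if
# the weights themselves cannot be fetched automatically
FILES = ["config.json", "merges.txt", "special_tokens_map.json",
         "tokenizer_config.json", "vocab.json"]

def resolve_url(repo: str, filename: str, revision: str = "main") -> str:
    # the Hub serves raw files at /<repo>/resolve/<revision>/<filename>
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

for name in FILES:
    # fetch each URL with wget/curl or urllib.request.urlretrieve
    print(resolve_url(REPO, name))
```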

shonsubong commented 1 year ago

> Download it from https://huggingface.co/openai/clip-vit-large-patch14.

Hi, I have the same problem. Which files should I download, and where should I put them? Thank you.

youngjack86 commented 1 year ago

Make sure you can reach https://huggingface.co/ normally: site-packages/transformers/tokenization_utils_base.py calls site-packages/transformers/utils/hub.py, which invokes site-packages/huggingface_hub/file_download.py to perform the model download, so any timeout can cause this failure. Also check the .cache directories under both the working directory and your home directory. I fixed this on Linux by creating the directories and files manually (because I could not reach huggingface.co from my Linux machine 😢). My directories look like this:

~/.cache/huggingface/hub/
├── [     48]  models--openai--clip-vit-large-patch14
│   ├── [      6]  blobs
│   ├── [     18]  refs
│   │   └── [     40]  main
│   └── [     54]  snapshots
│       └── [    121]  8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
│           ├── [   4519]  config.json
│           ├── [ 524619]  merges.txt
│           ├── [    389]  special_tokens_map.json
│           ├── [    905]  tokenizer_config.json
│           └── [ 961143]  vocab.json
└── [      1]  version.txt

5 directories, 7 files
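The directory names in that tree follow a fixed convention: each repo lives under `models--<org>--<name>`, with a given commit's files under `snapshots/<commit-hash>/`. A small stdlib-only sketch (the helper name is hypothetical) that reproduces the path for any repo id:

```python
from pathlib import Path

def hub_cache_path(repo_id: str, revision: str,
                   cache_root: str = "~/.cache/huggingface/hub") -> Path:
    # the Hub cache names each repo directory models--<org>--<name> and
    # keeps the files of one commit under snapshots/<commit-hash>/
    repo_dir = "models--" + repo_id.replace("/", "--")
    return Path(cache_root).expanduser() / repo_dir / "snapshots" / revision

print(hub_cache_path("openai/clip-vit-large-patch14",
                     "8d052a0f05efbaefbc9e8786ba291cfdf93e5bff"))
```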

g711ab commented 10 months ago


Could you please explain how the file structure on Hugging Face corresponds to your file structure? How should I write version.txt, refs, and snapshots? Thanks.

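Following the layout youngjack86 posted, here is a hedged sketch of building that cache skeleton by hand. Assumptions, inferred from the byte sizes in the tree: `refs/main` holds the 40-character commit hash (the [40]-byte file), and `version.txt` holds a bare "1" (the [1]-byte file). The helper name is hypothetical:

```python
import tempfile
from pathlib import Path

def make_cache_skeleton(hub_root: Path, repo_id: str, commit: str) -> Path:
    # recreate the huggingface_hub cache layout shown in the tree above
    repo = hub_root / ("models--" + repo_id.replace("/", "--"))
    snapshot = repo / "snapshots" / commit
    for d in (repo / "blobs", repo / "refs", snapshot):
        d.mkdir(parents=True, exist_ok=True)
    # refs/main maps the "main" revision to the commit whose files live
    # under snapshots/<commit>/ (assumed: a bare 40-char hash)
    (repo / "refs" / "main").write_text(commit)
    # version.txt records the cache-format version (assumed: a bare "1")
    (hub_root / "version.txt").write_text("1")
    return snapshot

# demo against a throwaway directory; point hub_root at
# ~/.cache/huggingface/hub to build the real cache, then copy the five
# files (config.json, merges.txt, special_tokens_map.json,
# tokenizer_config.json, vocab.json) into the returned snapshot dir
demo_root = Path(tempfile.mkdtemp())
print(make_cache_skeleton(demo_root, "openai/clip-vit-large-patch14",
                          "8d052a0f05efbaefbc9e8786ba291cfdf93e5bff"))
```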