
Model download broken after removal of use_auth_token #782

Closed · allo- closed this issue 1 year ago

allo- commented 1 year ago

Describe the bug

After updating my git checkout to a version that removed the use_auth_token parameters, diffusers can no longer download CompVis/stable-diffusion-v1-4; the download fails with a 401 error.

I am logged in with the same auth token as before, and downloading another model that is not protected by an auth token works with the same script.

Another possibly related issue: when using a different model from the Hub, the script re-downloaded it even though no new version had been published. As far as I understand the cache structure, even if the snapshot id changed for some reason, the cache should still be able to avoid re-downloading the ~4 GB model.
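
For context, my understanding of the hub cache layout (roughly; directory names taken from my own cache, recent huggingface_hub versions):

~/.cache/huggingface/hub/models--CompVis--stable-diffusion-v1-4/
├── refs/main                 (text file holding the current commit id)
├── blobs/<hash>              (actual file contents, stored once per hash)
└── snapshots/<commit-id>/    (per-revision symlinks into blobs/)

Since snapshots only symlink into blobs/, a new snapshot should re-download only the files whose hashes actually changed, not the whole ~4 GB.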

Reproduction

Run

python diffusers/examples/textual_inversion/textual_inversion.py   --pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4 (...)

or any inference script from which use_auth_token has been removed.
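
A minimal inference sketch that exercises the same download path (assuming the model license has been accepted and huggingface-cli login has been run):

from diffusers import StableDiffusionPipeline

# Resolving the pipeline files from the gated CompVis repo triggers
# the same 401 as the training script above.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")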

Logs

Traceback (most recent call last):
  File "stable-diffusion/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1744, in from_pretrained
    resolved_vocab_files[file_id] = cached_path(
  File "stable-diffusion/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 284, in cached_path
    output_path = get_from_cache(
  File "stable-diffusion/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 502, in get_from_cache
    _raise_for_status(r)
  File "stable-diffusion/venv/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in _raise_for_status
    raise RepositoryNotFoundError(
transformers.utils.hub.RepositoryNotFoundError: 401 Client Error: Repository not found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/vocab.json. If the repo is private, make sure you are authenticated.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "stable-diffusion/diffusers/examples/textual_inversion/textual_inversion.py", line 591, in <module>
    main()
  File "stable-diffusion/diffusers/examples/textual_inversion/textual_inversion.py", line 372, in main
    tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
  File "stable-diffusion/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1762, in from_pretrained
    raise EnvironmentError(
OSError: CompVis/stable-diffusion-v1-4 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.

System Info

patrickvonplaten commented 1 year ago

Hey @allo-,

Just to confirm - did you log in before with huggingface-cli login?
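
A quick way to sanity-check the stored token from Python (a sketch using huggingface_hub):

from huggingface_hub import HfApi, HfFolder

token = HfFolder.get_token()   # reads the token saved by `huggingface-cli login`
print(HfApi().whoami(token))   # fails if the Hub does not accept the token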

allo- commented 1 year ago

Yes, I was logged in and I can still reproduce.

The textual_inversion script in the diffusers repo raises the exception. It seems that initializing the tokenizer with CLIPTokenizer.from_pretrained doesn't work, while StableDiffusionPipeline.from_pretrained works. When I download the full pipeline with StableDiffusionPipeline first and then load the CLIPTokenizer with local_files_only, I can run textual inversion (sketched below).
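
A sketch of that workaround:

from diffusers import StableDiffusionPipeline
from transformers import CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"

# Downloading the full pipeline first populates the local cache...
StableDiffusionPipeline.from_pretrained(model_id)

# ...after which the tokenizer loads from the cache alone.
tokenizer = CLIPTokenizer.from_pretrained(
    model_id, subfolder="tokenizer", local_files_only=True
)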

The other issue is unrelated: the model that was re-downloaded is waifu (an anime-retrained model), which indeed received an update recently.

pcuenca commented 1 year ago

Hi @allo- !

I think you may need to upgrade transformers too. Could you please give it a go?

allo- commented 1 year ago

After upgrading transformers:

1. Textual inversion has the same issue.
2. Textual inversion with local_files_only now throws: No such file or directory: '~/.cache/huggingface/hub/models--CompVis--stable-diffusion-v1-4/refs/main'.
3. Inference with local_files_only still works.
4. Inference without local_files_only now has the same issue as well.

The token is stored in .huggingface/token as before.
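
A quick sketch to check what actually exists in that cache directory (assuming the default cache location):

import os

model_cache = os.path.expanduser(
    "~/.cache/huggingface/hub/models--CompVis--stable-diffusion-v1-4"
)
for sub in ("refs", "snapshots", "blobs"):
    path = os.path.join(model_cache, sub)
    print(sub, "->", os.listdir(path) if os.path.isdir(path) else "missing")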

allo- commented 1 year ago

huggingface-cli login with the same token works but doesn't change anything: the same token is stored in the file afterwards, and I still get the same error. When logged into the website, I can open the URL from the 401 error and see the JSON.
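
To rule out the token itself, a sketch that requests the failing URL directly with the stored token:

import requests
from huggingface_hub import HfFolder

url = (
    "https://huggingface.co/CompVis/stable-diffusion-v1-4"
    "/resolve/main/tokenizer/vocab.json"
)
token = HfFolder.get_token()  # the token under ~/.huggingface/token
r = requests.get(url, headers={"Authorization": f"Bearer {token}"})
print(r.status_code)  # 200 means the stored token itself is fine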

allo- commented 1 year ago

Installing the stable diffusers release and then reinstalling the git version pulled in a newer huggingface-hub version, and now the downloads work. I guess I need to reinstall the requirements more often between git updates (roughly the sequence below).
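
Something like this after each update (a sketch, assuming an editable install from a cloned repo):

cd diffusers
git pull
pip install --upgrade -r requirements.txt
pip install -e .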

maddyonline commented 1 year ago

pip install --upgrade diffusers solved it for me.

allo- commented 1 year ago

The general command is pip install --upgrade -r diffusers/requirements.txt, which brings every package up to (at least) the version diffusers needs. Installing from PyPI does this automatically, but a git pull doesn't.

patil-suraj commented 1 year ago

For the examples, it's recommended to install from main or to clone the repo and install from source. We will also add version checks to the example scripts that raise an error if the correct version is not installed.
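
A minimal sketch of such a check (the minimum version here is a placeholder, not the final diffusers API):

from packaging import version

import diffusers

MIN_DIFFUSERS_VERSION = "0.5.0"  # placeholder minimum, for illustration

if version.parse(diffusers.__version__) < version.parse(MIN_DIFFUSERS_VERSION):
    raise ImportError(
        f"This example requires diffusers>={MIN_DIFFUSERS_VERSION}, "
        f"but found {diffusers.__version__}. "
        "Run `pip install --upgrade -r requirements.txt` and retry."
    )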

virtualramblas commented 1 year ago

"pip install --upgrade diffusers solved it for me."

This worked for me too, on macOS (Apple Silicon M1).