Open cvar66 opened 1 year ago
same problem here
same here
Me too. I've looked everywhere for a solution, I was beginning to think it was just me or no-one uses this anymore.
Same here.
"OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token
or log in with huggingface-cli login
and pass use_auth_token=True
."
same issue
Can you try changing line 97 in aesthetic_clip.py from
aesthetic_clip_model = CLIPModel.from_pretrained(shared.sd_model.cond_stage_model.wrapped.transformer.name_or_path)
to
aesthetic_clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
?
I tried this as a workaround, but I don't have enough VRAM to test it properly. I loaded older versions from git and got the same out-of-memory error, so I can't verify myself that it works.
The `name_or_path` attribute of the variable above became empty in some recent commit. It isn't used anywhere in the main sd-webui, so nothing broke there. I'm not sure it's supposed to be "openai/clip-vit-large-patch14", but I can't find anything else it could be.
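If hardcoding feels too blunt, a minimal sketch of a defensive variant of that line, assuming (as above, unverified) that the wrapped transformer is always the stock SD text encoder so "openai/clip-vit-large-patch14" is a safe fallback when the attribute comes back empty:

```python
# Sketch only: fall back to the stock CLIP repo id when name_or_path
# is None or "" (the symptom seen in the tracebacks in this thread).
FALLBACK_CLIP = "openai/clip-vit-large-patch14"

def resolve_clip_name(name_or_path):
    """Return the reported model identifier, or the fallback repo id
    when the attribute came back as None or an empty string."""
    return name_or_path or FALLBACK_CLIP

# Line 97 of aesthetic_clip.py would then become (untested against the repo):
# aesthetic_clip_model = CLIPModel.from_pretrained(
#     resolve_clip_name(shared.sd_model.cond_stage_model.wrapped.transformer.name_or_path))
```

This keeps the original behavior whenever name_or_path is populated, so it shouldn't regress older webui commits.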
That seems to have fixed it!! Thanks!
fix works, ty!
Yeah. Thank you for your kind help, l8doku.
I just got an error with last repo:
Traceback (most recent call last):
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 239, in hf_raise_for_status
response.raise_for_status()
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\requests\models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1067, in hf_hub_download
metadata = get_hf_file_metadata(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 124, in _inner_fn
return fn(*args, **kwargs)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in get_hf_file_metadata
hf_raise_for_status(r)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_errors.py", line 268, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63c1d67d-411d74ec25fbffed46a060a7)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
Invalid username or password.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 856, in call_function
prediction = await anyio.to_thread.run_sync(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-gradients\scripts\aesthetic.py", line 73, in generate_embs
res = aesthetic_clip.generate_imgs_embd(*args)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-gradients\aesthetic_clip.py", line 104, in generate_imgs_embd
model = aesthetic_clip().to(device)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\extensions\stable-diffusion-webui-aesthetic-gradients\aesthetic_clip.py", line 97, in aesthetic_clip
aesthetic_clip_model = CLIPModel.from_pretrained(shared.sd_model.cond_stage_model.wrapped.transformer.name_or_path)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 2012, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\configuration_utils.py", line 532, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\configuration_utils.py", line 559, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\configuration_utils.py", line 614, in _get_config_dict
resolved_config_file = cached_file(
File "H:\Stable-Diffusion-Automatic\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 424, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
@l8doku patch works
Exactly the same here, even though I compiled transformers from Hugging Face source instead; nothing changed.
@AUTOMATIC1111 need a fix for this, and thanks for great extension!
OSError: Can't load config for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing a config.json file
It seems that this method doesn't work for me?(┬_┬)
@kou201 after modifying the file, restart the whole SD, not just the UI! It worked for me after that.
I modified the code after closing the webui, then reopened it to test, and got the error above. But after I restarted the computer, the plugin miraculously worked normally.
How strange. (*´Д`)
Yes!! Had the same issue; like you did, I also restarted my computer after editing aesthetic_clip.py, and it now works like a charm!
It worked for me after a reboot on my Windows system! Thanks.
@kou201 it works!
IT WORKS!
Which commit are you on in web ui? `git log --oneline | head -1`
Useful, thank you. It turns out this plugin actually needs to be connected to the Internet.
I was able to get it working on sd web ui commit e0e8005 with additional fixes mentioned in my other issue.
Haven't used this plugin in a few weeks, but now when I try to use it I get this:
OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like None is not the path to a directory containing a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
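Since the connection error above happens once the cache is missing, a hedged sketch of an offline setup, assuming the standard Hugging Face cache environment variables (`HF_HUB_OFFLINE`, `TRANSFORMERS_OFFLINE`) and that `huggingface_hub` is installed (it ships as a transformers dependency): prefetch the CLIP checkpoint once while online, then force cache-only loads.

```python
import os

def enable_hf_offline():
    """Tell transformers/huggingface_hub to use only the local cache,
    so later from_pretrained() calls never hit the network."""
    os.environ["HF_HUB_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"

# One-time, while online, prefetch the checkpoint into the local cache:
#   from huggingface_hub import snapshot_download
#   snapshot_download("openai/clip-vit-large-patch14")
```

After the snapshot is cached, calling enable_hf_offline() before the webui starts should let the extension load the model without a connection.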