AbdBarho / stable-diffusion-webui-docker

Easy Docker setup for Stable Diffusion with user-friendly UI
6.75k stars · 1.13k forks

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. #653

Closed Pnut-GGG closed 8 months ago

Pnut-GGG commented 9 months ago

Has this issue been opened before?

Yes, but I tried the solutions from those issues and none of them worked.

Describe the bug

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Which UI

auto

Hardware / Software

Steps to Reproduce

docker compose --profile auto up --build

When I run this command, I get the following output:

```
auto-1 | creating model quickly: OSError
auto-1 | Traceback (most recent call last):
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
auto-1 |     self._bootstrap_inner()
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
auto-1 |     self.run()
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
auto-1 |     self._target(*self._args, **self._kwargs)
auto-1 |   File "/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
auto-1 |     shared.sd_model  # noqa: B018
auto-1 |   File "/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
auto-1 |     return modules.sd_models.model_data.get_sd_model()
auto-1 |   File "/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
auto-1 |     load_model()
auto-1 |   File "/stable-diffusion-webui/modules/sd_models.py", line 634, in load_model
auto-1 |     sd_model = instantiate_from_config(sd_config.model)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1650, in __init__
auto-1 |     super().__init__(concat_keys, *args, **kwargs)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1515, in __init__
auto-1 |     super().__init__(*args, **kwargs)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
auto-1 |     self.instantiate_cond_stage(cond_stage_config)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
auto-1 |     model = instantiate_from_config(config)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
auto-1 |     self.tokenizer = CLIPTokenizer.from_pretrained(version)
auto-1 |   File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
auto-1 |     raise EnvironmentError(
auto-1 | OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
auto-1 |
auto-1 | Failed to create model quickly; will retry using slow method.
auto-1 | loading stable diffusion model: OSError
auto-1 | Traceback (most recent call last):
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
auto-1 |     self._bootstrap_inner()
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
auto-1 |     self.run()
auto-1 |   File "/opt/conda/lib/python3.10/threading.py", line 953, in run
auto-1 |     self._target(*self._args, **self._kwargs)
auto-1 |   File "/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
auto-1 |     shared.sd_model  # noqa: B018
auto-1 |   File "/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
auto-1 |     return modules.sd_models.model_data.get_sd_model()
auto-1 |   File "/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
auto-1 |     load_model()
auto-1 |   File "/stable-diffusion-webui/modules/sd_models.py", line 643, in load_model
auto-1 |     sd_model = instantiate_from_config(sd_config.model)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1650, in __init__
auto-1 |     super().__init__(concat_keys, *args, **kwargs)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1515, in __init__
auto-1 |     super().__init__(*args, **kwargs)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
auto-1 |     self.instantiate_cond_stage(cond_stage_config)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
auto-1 |     model = instantiate_from_config(config)
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 |     return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 |   File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
auto-1 |     self.tokenizer = CLIPTokenizer.from_pretrained(version)
auto-1 |   File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
auto-1 |     raise EnvironmentError(
auto-1 | OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
```


AbdBarho commented 8 months ago

I think this might be related to a corrupted download cache. It could be the result of an unstable internet connection, and sometimes the Hugging Face servers hang as well.

Can you try deleting the folder /data/.cache/ and running it again?
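For reference, the full cleanup might look like this (a sketch assuming the repo's default ./data volume and that you are in the repository root):

```shell
# Stop the running stack, remove the download cache, then rebuild.
# Assumes the default ./data bind mount used by this repo's compose file.
docker compose --profile auto down
rm -rf data/.cache
docker compose --profile auto up --build
```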

Pnut-GGG commented 8 months ago

I have deleted it many times, but it still doesn't work. I would like to know the exact path structure so that I can try placing the files there manually.
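For anyone wanting to place the files manually: huggingface_hub caches each repo under a directory name derived from the repo id. The sketch below shows that naming convention; the cache root path is an assumption for this Docker setup (typically /data/.cache/huggingface/hub here).

```python
# Sketch: compute the cache folder name huggingface_hub uses for a repo id.
# The "models--{org}--{name}" layout is the hub cache convention; the cache
# root below is an assumed path for this container.
def hf_cache_folder(repo_id: str) -> str:
    """Return the hub cache directory name for a Hugging Face repo id."""
    return "models--" + repo_id.replace("/", "--")

repo = "openai/clip-vit-large-patch14"
folder = hf_cache_folder(repo)
print(folder)  # models--openai--clip-vit-large-patch14

# Tokenizer files (tokenizer_config.json, vocab.json, merges.txt, ...)
# would then go under <cache_root>/<folder>/snapshots/<revision>/
cache_root = "/data/.cache/huggingface/hub"  # assumed mount point
print(f"{cache_root}/{folder}/snapshots/<revision>/")
```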

Pnut-GGG commented 8 months ago

Successfully solved it by using a proxy.

lalalala256 commented 8 months ago

May I ask how you set up the proxy? I have the same problem.

xunxuntu commented 7 months ago

> Successfully used proxy

How did you set up the proxy? I also set a proxy for git, but it didn't work. Thanks.

lalalala256 commented 7 months ago

> How did you set up the proxy? I also set a proxy for git, but it didn't work.

I got the proxy working too; proxies are quite fiddly. For this project, pay attention to three things:
0. Set the command-line proxy via environment variables.
1. Commands are also executed inside the Docker image, so the proxy-related environment variables must be set there as well, with ENV.
2. Search the code for aria2c; it also needs a proxy configured for its downloads. Check aria2c's help for the details.
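To make those three points concrete, here is a hedged sketch (the proxy address is a placeholder; adapt it to your own environment):

```shell
# 0. Proxy for the shell that runs docker compose (placeholder address):
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890

# 1. Inside the image, downloads happen during the build, so the
#    Dockerfile would also need something like (placeholder address):
#    ENV http_proxy=http://127.0.0.1:7890 https_proxy=http://127.0.0.1:7890

# 2. aria2c can be given its own proxy explicitly via its --all-proxy
#    option, e.g. (placeholder address and URL):
#    aria2c --all-proxy='http://127.0.0.1:7890' <url>
```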