lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0
1.79k stars · 185 forks

Error while loading. #91

Open maikelsz opened 1 year ago

maikelsz commented 1 year ago

Is there an existing issue for this?

What happened?

The web UI fails to start: loading the Stable Diffusion model raises an OSError about the 'openai/clip-vit-large-patch14' CLIP tokenizer (full traceback in the console logs below).

Steps to reproduce the problem

Just execute webui-user.bat.

What should have happened?

The web UI should launch normally.

Commit where the problem happens

22bcc7be

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

--medvram --no-half --precision=full  --skip-torch-cuda-test

List of extensions

None

Console logs

f:\SD-webui-dml\stable-diffusion-webui-directml>webui-user.bat
venv "f:\SD-webui-dml\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Commit hash: <none>
Installing requirements for Web UI
Launching Web UI with arguments: --medvram --no-half --precision=full --skip-torch-cuda-test
Warning: experimental graphic memory optimization is disabled due to gpu vendor. Currently this optimization is only available for AMDGPUs.
Disabled experimental graphic memory optimizations.
Interrogations are fallen back to cpu. This doesn't affect on image generation. But if you want to use interrogate (CLIP or DeepBooru), check out this issue: https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues/10
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
Loading weights [92970aa785] from F:\SD-webui-dml\stable-diffusion-webui-directml\models\Stable-diffusion\Dreamlike-Photoreal-2.0.safetensors
Creating model from config: F:\SD-webui-dml\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading stable diffusion model: OSError
Traceback (most recent call last):
  File "f:\SD-webui-dml\stable-diffusion-webui-directml\webui.py", line 139, in initialize
    modules.sd_models.load_model()
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\modules\sd_models.py", line 438, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "F:\SD-webui-dml\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "f:\SD-webui-dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

Stable diffusion model failed to load, exiting
Press any key to continue . . .

Additional information

No response
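A note on the error above: `CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")` first treats the string as a local path and only falls back to fetching from the Hugging Face Hub if no such directory exists, which is why the error message warns about "a local directory with the same name". A minimal stdlib-only sketch of that check (the `shadows_hub_id` helper name is made up for illustration, not part of the webui or transformers):

```python
import os

REPO_ID = "openai/clip-vit-large-patch14"

def shadows_hub_id(repo_id: str, base_dir: str = ".") -> bool:
    """Return True if a local directory named like the Hub repo id
    exists under base_dir; from_pretrained would try to load that
    directory instead of downloading from the Hub."""
    return os.path.isdir(os.path.join(base_dir, repo_id))

# Run from the webui root; True means a stray local folder is
# shadowing the Hub id and should be renamed or removed.
print(shadows_hub_id(REPO_ID))
```

If this prints False, the failure is more likely a download problem (no network access to huggingface.co, proxy, or firewall) than a shadowing directory.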

maikelsz commented 1 year ago

This is the first run after installation.
This is the first run after installation.

doctorzgh commented 7 months ago

same problem

doctorzgh commented 7 months ago

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
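If the machine cannot reach huggingface.co at all, one common workaround is to download the tokenizer files manually on another machine and point the loader at the local folder. This is a sketch under the assumption that the `openai/clip-vit-large-patch14` repo uses the standard CLIP tokenizer file set; `missing_tokenizer_files` is a hypothetical helper, not a webui function:

```python
import os

# Files a CLIPTokenizer directory is expected to contain
# (assumption based on the usual CLIP tokenizer repo layout).
EXPECTED_FILES = [
    "vocab.json",
    "merges.txt",
    "tokenizer_config.json",
    "special_tokens_map.json",
]

def missing_tokenizer_files(tokenizer_dir: str) -> list[str]:
    """Return the expected tokenizer files not present in tokenizer_dir."""
    return [name for name in EXPECTED_FILES
            if not os.path.isfile(os.path.join(tokenizer_dir, name))]
```

If the returned list is empty, passing that directory path instead of the Hub id to `CLIPTokenizer.from_pretrained` should load the tokenizer without any network access.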