AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: launching with medvram causes RuntimeError #13137

Open · opy188 opened this issue 1 year ago

opy188 commented 1 year ago

Is there an existing issue for this?

What happened?

I updated to the most recent version, 1.6.0. Afterwards I was unable to start the UI: the program gives a runtime error when it has to load the model.

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

Steps to reproduce the problem

Put --medvram in COMMANDLINE_ARGS in any .bat you use to launch the webui, then launch it.
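
For reference, a sketch of what that looks like in a stock webui-user.bat (only --medvram is needed to trigger the error; the other flags simply mirror the launch line shown in the console log below):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM --medvram triggers the error; the remaining flags match the launch arguments in the log
set COMMANDLINE_ARGS=--opt-split-attention --no-half-vae --skip-install --xformers --medvram

call webui.bat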

What should have happened?

It should have loaded the model without errors.

Sysinfo

sysinfo-2023-09-07-23-27.txt

What browsers do you use to access the UI?

Mozilla Firefox

Console logs

venv "C:\stable diffusion voldemort\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-14-gb6f242b0
Commit hash: b6f242b025aa83413d8889b2c07171ddb959afe3
Launching Web UI with arguments: --opt-split-attention --no-half-vae --skip-install --xformers --medvram
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 96/96 [00:00<00:00, 15986.55it/s]
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 96/96 [00:00<00:00, 15985.91it/s]
2023-09-07 19:20:34,497 - ControlNet - INFO - ControlNet v1.1.224
ControlNet preprocessor location: C:\stable diffusion voldemort\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-07 19:20:34,572 - ControlNet - INFO - ControlNet v1.1.224
Loading weights [fe4efff1e1] from C:\stable diffusion voldemort\stable-diffusion-webui\model.ckpt
C:\stable diffusion voldemort\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py:165: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row(elem_classes=["cnet-image-row"]).style(equal_height=True):
C:\stable diffusion voldemort\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_ui\controlnet_ui_group.py:179: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  self.generated_image = gr.Image(
C:\stable diffusion voldemort\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
C:\stable diffusion voldemort\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cover_image = gr.Image(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
CLIP Changer.on_app_started done.
Startup time: 10.5s (prepare environment: 3.3s, import torch: 2.6s, import gradio: 0.7s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.5s, setup codeformer: 0.1s, load scripts: 1.6s, create ui: 0.4s, gradio launch: 0.4s).
Creating model from config: C:\stable diffusion voldemort\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: xformers... done.
Traceback (most recent call last):
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'FrozenCLIPEmbedder' object has no attribute 'encode_embedding_init_text'
Traceback (most recent call last):
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks.py", line 392, in pages_html
    return refresh()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks.py", line 398, in refresh
    pg.refresh()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\ui_extra_networks_textual_inversion.py", line 13, in refresh
    sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 255, in load_textual_inversion_embeddings
    self.expected_shape = self.get_expected_shape()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 154, in get_expected_shape
    vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'FrozenCLIPEmbedder' object has no attribute 'encode_embedding_init_text'
  CLIPTextModel applied: openai/clip-vit-large-patch14-336
  CLIPTokenizer not changed
  VRAM-mode: MEDVRAM
Applying attention optimization: xformers... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\Luke\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\Luke\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Luke\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\initialize.py", line 147, in load_model
    shared.sd_model  # noqa: B018
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\shared_items.py", line 110, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_models.py", line 499, in get_sd_model
    load_model()
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_models.py", line 649, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_models.py", line 537, in get_empty_cond
    return sd_model.cond_stage_model([""])
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
    z = self.process_tokens(tokens, multipliers)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
    return self.text_model(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 227, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\modules\sd_hijack.py", line 321, in forward
    inputs_embeds = self.wrapped(input_ids)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\stable diffusion voldemort\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)

Stable diffusion model failed to load

Additional information

No response

eek168 commented 10 months ago

I was getting this issue too after using --medvram-sdxl. I was able to fix it by running once with --reinstall-torch (in addition to all my other arguments). Then I closed the UI, removed --reinstall-torch, and could rerun without getting the error.

ZeroCool22 commented 9 months ago

> I was getting this issue too after using --medvram-sdxl. I was able to fix it by running once with --reinstall-torch (in addition to all my other arguments). Then I closed the UI, removed --reinstall-torch, and could rerun without getting the error.

In what directory/folder must you run the command? /venv?

eek168 commented 9 months ago

> In what directory/folder must you run the command? /venv?

I just added the argument in the webui-user.bat file (which is where I have all my other args), saved the file, ran it once, then removed the --reinstall-torch argument.
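
As a rough sketch of that edit cycle in webui-user.bat (the --medvram-sdxl and --xformers flags are just examples from this thread; keep whatever arguments you normally use):

REM First run: temporarily add --reinstall-torch to your existing arguments, save, and launch once
set COMMANDLINE_ARGS=--medvram-sdxl --xformers --reinstall-torch

REM After torch has reinstalled, close the web UI, remove --reinstall-torch, save, and relaunch
set COMMANDLINE_ARGS=--medvram-sdxl --xformers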