openvinotoolkit / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' #104

Open yinan7 opened 8 months ago

yinan7 commented 8 months ago

Is there an existing issue for this?

What happened?

The laptop installation is ready and I can already get into the web UI, but I get RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'.

Steps to reproduce the problem

Laptop installation is complete (no GPU; CPU-only torch on Windows with an Intel CPU). I can already get into the web UI; the error appears in the console.

What should have happened?

A basic test image generation should have worked.

Sysinfo

Windows 10, Chrome

What browsers do you use to access the UI ?

Google Chrome

Console logs

C:\Users\z84238884>cd C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master

C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master>webui-user.bat
venv "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Scripts\Python.exe"
==============================================================================================================
INCOMPATIBLE PYTHON VERSION

This program is tested with 3.10.6 Python, but you have 3.11.3.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.

You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/

Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases

Use --skip-python-version-check to suppress this warning.
==============================================================================================================
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr  4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: <none>
Launching Web UI with arguments: --skip-torch-cuda-test
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [fe4efff1e1] from C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\models\Stable-diffusion\sd-v1-4.ckpt
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 29.0s (prepare environment: 0.3s, import torch: 21.2s, import gradio: 1.8s, setup paths: 2.0s, initialize shared: 0.5s, other imports: 1.2s, load scripts: 1.1s, create ui: 0.4s, gradio launch: 0.4s).
Creating model from config: C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\configs\v1-inference.yaml
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\z84238884\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 995, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\z84238884\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Users\z84238884\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\initialize.py", line 148, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\shared_items.py", line 133, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_models.py", line 621, in get_sd_model
    load_model()
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\sd_models.py", line 771, in load_model
    with devices.autocast(), torch.no_grad():
         ^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 218, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
                                 ^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
                ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Lib\site-packages\torch\cuda\__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

Stable diffusion model failed to load
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
  File "C:\Users\z84238884\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Users\z84238884\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\initialize.py", line 153, in load_model
    devices.first_time_calculation()
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\modules\devices.py", line 267, in first_time_calculation
    linear(x)
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\extensions-builtin\Lora\networks.py", line 500, in network_Linear_forward
    return originals.Linear_forward(self, input)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\z84238884\Downloads\stable-diffusion-webui-master\stable-diffusion-webui-master\venv\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
Interrupted with signal 2 in <frame at 0x0000022A683B75A0, file 'C:\\Users\\z84238884\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\threading.py', line 324, code wait>
Terminate batch job (Y/N)?

Additional information

No response

chinmoy-gavini commented 8 months ago

@yinan7 Could you do the following: in webui-user.bat, on the line where you see set COMMANDLINE_ARGS, add --precision full --no-half, so it looks something like this:

set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half

Because you are not using a GPU, you also need the --skip-torch-cuda-test flag.
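
For context, those flags work around a PyTorch limitation: half-precision matrix multiplication is not implemented for the CPU backend in the torch build shown in the log, which is exactly what the "addmm_impl_cpu_" not implemented for 'Half' error means. A minimal standalone sketch of the failure (not the web UI's own code; whether the fp16 call raises depends on the installed torch version):

import torch

# Mirrors what modules/devices.py first_time_calculation does: a tiny fp16 nn.Linear on CPU.
linear = torch.nn.Linear(4, 4).half()
x = torch.randn(1, 4, dtype=torch.float16)

try:
    linear(x)  # on this build: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
except RuntimeError as e:
    print("half precision on CPU failed:", e)

# float32 works, which is what --precision full --no-half forces the web UI to use
print(linear.float()(x.float()).shape)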

yinan7 commented 8 months ago

--precision full --no-half

Thanks! This is another PC with the same issue; it can be solved this way, but that PC has an AMD 5700 XT GPU. I haven't tried the laptop yet. By the way, I want to know whether the GPU is actually helping the program. Thanks again! (screenshot attached)

yinan7 commented 8 months ago


Tested on the laptop with the suggested flags: Chrome shows an error (screenshot attached) and the cmd window has been stuck for 20+ minutes.

yinan7 commented 8 months ago

(screenshot attached)

yinan7 commented 8 months ago


On the PC, it looks like the GPU is not being used for the AI workload: GPU usage stays below 10% while CPU usage is around 70% when generating images.
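
For reference, the stock PyTorch wheels on Windows only target CUDA (NVIDIA) devices, so an AMD 5700 XT will not be picked up by torch and the work falls back to the CPU. A quick, hypothetical check of which device this torch build can actually see:

import torch

print(torch.__version__)
# False on a CPU-only build, or on Windows with an AMD card (no CUDA device available)
print(torch.cuda.is_available())
# The device the web UI would end up using in that case
print("cuda" if torch.cuda.is_available() else "cpu")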

chinmoy-gavini commented 8 months ago


Can you add --no-gradio-queue like so:

set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half --no-gradio-queue
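
For background, --no-gradio-queue tells the web UI not to enable Gradio's request queue; with the queue on, requests are routed over a websocket, which can stall in some browser or proxy setups, while without it plain HTTP requests are used. A rough standalone illustration with a hypothetical minimal Gradio app (not the web UI's actual launch code):

import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# With the queue enabled (the web UI's default), requests go through a websocket:
# demo.queue().launch()

# --no-gradio-queue corresponds to launching without the queue, so plain HTTP is used:
demo.launch()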

lncubus commented 7 months ago


Thank you for your help. I'm trying to run this on a laptop with no NVIDIA card on board.

yinan7 commented 7 months ago

@chinmoy-gavini Thank you for your help, the problem is fixed.