AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

CUDA Setup failed despite GPU being available. Please run the following command to get more information #13998

Open WagnerFighter opened 12 months ago

WagnerFighter commented 12 months ago

Is there an existing issue for this?

What happened?

CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

OK, so I open Python and enter this command, but Python doesn't understand it. I have CUDA installed from "cuda_12.2.2_537.13windows.exe" (not the latest release, since 12.3.0 refuses to install).
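(For reference: that command is meant to be run from a regular command prompt with the webui's virtual environment activated, not typed inside the Python interpreter. A minimal sketch, assuming the default venv location shown in the logs below:)

    cd /d E:\stable-diffusion-webui
    venv\Scripts\activate.bat
    python -m bitsandbytes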

Before that, xformers 0.0.20 was flagged with an alarming exclamation mark (!). The console also kept saying a newer version of pip is available; I installed the newer pip, but every time I run webui-user.bat it still tells me a newer pip is available, even though I have already installed it.
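(The pip notice refers to the pip inside the webui's own venv, so upgrading a system-wide pip will not make it go away. A minimal sketch of upgrading the venv copy instead, assuming the install path from the logs below:)

    E:\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip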

I am not a programmer, and I shouldn't have to deal with all these installation difficulties; installation should be simple, or the explanation should be detailed. I don't even know how to navigate to a directory from Python, and there are a lot of people like me.

(Screenshots attached: Screenshot_122, Screenshot_01, Screenshot_1)

Steps to reproduce the problem

difficult

What should have happened?

Everything should install without problems.

Sysinfo

import json
import os
import sys
import traceback

import platform
import hashlib
import pkg_resources
import psutil
import re

import launch
from modules import paths_internal, timer, shared, extensions, errors

checksum_token = "DontStealMyGamePlz__WINNERS_DONT_USE_DRUGS__DONT_COPY_THAT_FLOPPY"
environment_whitelist = {
    "GIT",
    "INDEX_URL",
    "WEBUI_LAUNCH_LIVE_OUTPUT",
    "GRADIO_ANALYTICS_ENABLED",
    "PYTHONPATH",
    "TORCH_INDEX_URL",
    "TORCH_COMMAND",
    "REQS_FILE",
    "XFORMERS_PACKAGE",
    "CLIP_PACKAGE",
    "OPENCLIP_PACKAGE",
    "STABLE_DIFFUSION_REPO",
    "K_DIFFUSION_REPO",
    "CODEFORMER_REPO",
    "BLIP_REPO",
    "STABLE_DIFFUSION_COMMIT_HASH",
    "K_DIFFUSION_COMMIT_HASH",
    "CODEFORMER_COMMIT_HASH",
    "BLIP_COMMIT_HASH",
    "COMMANDLINE_ARGS",
    "IGNORE_CMD_ARGS_ERRORS",
}

def pretty_bytes(num, suffix="B"):
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z", "Y"]:
        if abs(num) < 1024 or unit == 'Y':
            return f"{num:.0f}{unit}{suffix}"
        num /= 1024

def get():
    res = get_dict()

    text = json.dumps(res, ensure_ascii=False, indent=4)

    h = hashlib.sha256(text.encode("utf8"))
    text = text.replace(checksum_token, h.hexdigest())

    return text

re_checksum = re.compile(r'"Checksum": "([0-9a-fA-F]{64})"')

def check(x):
    m = re.search(re_checksum, x)
    if not m:
        return False

    replaced = re.sub(re_checksum, f'"Checksum": "{checksum_token}"', x)

    h = hashlib.sha256(replaced.encode("utf8"))
    return h.hexdigest() == m.group(1)

def get_dict():
    ram = psutil.virtual_memory()

    res = {
        "Platform": platform.platform(),
        "Python": platform.python_version(),
        "Version": launch.git_tag(),
        "Commit": launch.commit_hash(),
        "Script path": paths_internal.script_path,
        "Data path": paths_internal.data_path,
        "Extensions dir": paths_internal.extensions_dir,
        "Checksum": checksum_token,
        "Commandline": get_argv(),
        "Torch env info": get_torch_sysinfo(),
        "Exceptions": get_exceptions(),
        "CPU": {
            "model": platform.processor(),
            "count logical": psutil.cpu_count(logical=True),
            "count physical": psutil.cpu_count(logical=False),
        },
        "RAM": {
            x: pretty_bytes(getattr(ram, x, 0)) for x in ["total", "used", "free", "active", "inactive", "buffers", "cached", "shared"] if getattr(ram, x, 0) != 0
        },
        "Extensions": get_extensions(enabled=True),
        "Inactive extensions": get_extensions(enabled=False),
        "Environment": get_environment(),
        "Config": get_config(),
        "Startup": timer.startup_record,
        "Packages": sorted([f"{pkg.key}=={pkg.version}" for pkg in pkg_resources.working_set]),
    }

    return res

def format_traceback(tb):
    return [[f"{x.filename}, line {x.lineno}, {x.name}", x.line] for x in traceback.extract_tb(tb)]

def format_exception(e, tb):
    return {"exception": str(e), "traceback": format_traceback(tb)}

def get_exceptions():
    try:
        return list(reversed(errors.exception_records))
    except Exception as e:
        return str(e)

def get_environment():
    return {k: os.environ[k] for k in sorted(os.environ) if k in environment_whitelist}

def get_argv():
    res = []

    for v in sys.argv:
        if shared.cmd_opts.gradio_auth and shared.cmd_opts.gradio_auth == v:
            res.append("<hidden>")
            continue

        if shared.cmd_opts.api_auth and shared.cmd_opts.api_auth == v:
            res.append("<hidden>")
            continue

        res.append(v)

    return res

re_newline = re.compile(r"\r*\n")

def get_torch_sysinfo():
    try:
        import torch.utils.collect_env
        info = torch.utils.collect_env.get_env_info()._asdict()

        return {k: re.split(re_newline, str(v)) if "\n" in str(v) else v for k, v in info.items()}
    except Exception as e:
        return str(e)

def get_extensions(*, enabled):
    try:
        def to_json(x: extensions.Extension):
            return {
                "name": x.name,
                "path": x.path,
                "version": x.version,
                "branch": x.branch,
                "remote": x.remote,
            }

        return [to_json(x) for x in extensions.extensions if not x.is_builtin and x.enabled == enabled]
    except Exception as e:
        return str(e)

def get_config():
    try:
        return shared.opts.data
    except Exception as e:
        return str(e)
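(The block above appears to be the source of the webui's modules/sysinfo.py rather than an actual sysinfo dump. Assuming this version supports the --dump-sysinfo launch flag, the dump the template asks for can be produced roughly like below; newer builds should also offer a sysinfo download in the webui settings:)

    cd /d E:\stable-diffusion-webui
    venv\Scripts\activate.bat
    python launch.py --dump-sysinfo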

What browsers do you use to access the UI ?

Google Chrome

Console logs

venv "E:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-2-g4afaaf8a
Commit hash: 4afaaf8a020c1df457bcf7250cb1c7f609699fa7
Installing requirements
Launching Web UI with arguments: --xformers --theme dark
False

===================================BUG REPORT===================================
E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

  warn(msg)
================================================================================
The following directories listed in your path were found to be non-existent: {WindowsPath('tmp/restart')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.9.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\import_utils.py", line 1086, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 85, in <module>
    from accelerate import __version__ as accelerate_version
  File "E:\stable-diffusion-webui\venv\lib\site-packages\accelerate\__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "E:\stable-diffusion-webui\venv\lib\site-packages\accelerate\accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "E:\stable-diffusion-webui\venv\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
    from .utils import (
  File "E:\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "E:\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "E:\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\launch.py", line 48, in <module>
    main()
  File "E:\stable-diffusion-webui\launch.py", line 44, in main
    start()
  File "E:\stable-diffusion-webui\modules\launch_utils.py", line 432, in start
    import webui
  File "E:\stable-diffusion-webui\webui.py", line 13, in <module>
    initialize.imports()
  File "E:\stable-diffusion-webui\modules\initialize.py", line 16, in imports
    import pytorch_lightning  # noqa: F401
  File "E:\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\__init__.py", line 35, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "E:\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 14, in <module>
    from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder
  File "E:\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\callbacks\batch_size_finder.py", line 24, in <module>
    from pytorch_lightning.callbacks.callback import Callback
  File "E:\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\callbacks\callback.py", line 25, in <module>
    from pytorch_lightning.utilities.types import STEP_OUTPUT
  File "E:\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\utilities\types.py", line 27, in <module>
    from torchmetrics import Metric
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torchmetrics\__init__.py", line 14, in <module>
    from torchmetrics import functional  # noqa: E402
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torchmetrics\functional\__init__.py", line 120, in <module>
    from torchmetrics.functional.text._deprecated import _bleu_score as bleu_score
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torchmetrics\functional\text\__init__.py", line 50, in <module>    from torchmetrics.functional.text.bert import bert_score  # noqa: F401
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torchmetrics\functional\text\bert.py", line 23, in <module>
    from torchmetrics.functional.text.helper_embedding_metric import (
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torchmetrics\functional\text\helper_embedding_metric.py", line 27, in <module>
    from transformers import AutoModelForMaskedLM, AutoTokenizer, PreTrainedModel, PreTrainedTokenizerBase
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "E:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\import_utils.py", line 1076, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "E:\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\import_utils.py", line 1088, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):

        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
Press any key to continue . . .

Additional information

No response

WagnerFighter commented 12 months ago

I deleted the "venv" folder and reinstalled, and it worked. Then I installed Dreambooth, restarted, and the problem is back: the exclamation mark at xformers 0.0.20 and some problems with CUDA. (Screenshots attached: Screenshot_777, Screenshot_888)

WagnerFighter commented 12 months ago

(Screenshot attached: Screenshot_999)

AlexCppns commented 12 months ago

I had a very similar issue when I launched webui about 30 minutes ago:

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\import_utils.py", line 1086, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py", line 85, in <module>
    from accelerate import __version__ as accelerate_version
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\accelerate\__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\accelerate\accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
    from .utils import (
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\user\projects\auto-clean\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

wcole3 commented 12 months ago

Encountered this today as well on Windows and main. Removing the extension that required bitsandbytes (for me that was Dreambooth), deleting venv, and rerunning webui.bat fixed the issues. If you don't know which extension is using bitsandbytes, I would try removing your extensions, deleting venv, and rerunning to confirm webui launches. Then iteratively add back in extensions until you identify the issue.
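A rough sketch of that bisection on Windows, assuming the default folder layout and that the Dreambooth extension folder is named sd_dreambooth_extension (adjust paths to your install):

    cd /d E:\stable-diffusion-webui
    rem Move user-installed extensions out of the webui so it starts clean
    mkdir ..\extensions-backup
    move extensions\sd_dreambooth_extension ..\extensions-backup\
    rem Delete the venv so it gets rebuilt on the next launch
    rmdir /s /q venv
    webui.bat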

Possible discussion https://github.com/d8ahazard/sd_dreambooth_extension/issues/1389

WagnerFighter commented 12 months ago

Installing "bitsandbytes" doesn't change anything; I still can't use Dreambooth.

IIIWHKIII commented 12 months ago
  1. Open cmd and run: python.exe -m pip install bitsandbytes-windows

  2. Go to C:\Users\Me\AppData\Local\Programs\Python\Python310\Lib\site-packages (or wherever you have Python 3.10 installed) and copy the bitsandbytes folder.

  3. Paste and replace it in your "\venv\Lib\site-packages"; for me that is C:\stable-diffusion-webui\venv\Lib\site-packages.

Now it works. I have Dreambooth too.
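(An untested alternative that skips the manual copy would be to install the package straight into the webui's venv, assuming bitsandbytes-windows installs cleanly there:)

    C:\stable-diffusion-webui\venv\Scripts\python.exe -m pip install bitsandbytes-windows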

WagnerFighter commented 12 months ago
  1. Open cmd and run: python.exe -m pip install bitsandbytes-windows

  2. Go to C:\Users\Me\AppData\Local\Programs\Python\Python310\Lib\site-packages (or wherever you installed Python 3.10) and copy the bitsandbytes folder.

  3. Paste and replace it in "\venv\Lib\site-packages"; for me that is C:\stable-diffusion-webui\venv\Lib\site-packages.

Now it works. I have Dreambooth too.

This is a victory! You just need to move not one bitsandbytes folder but three: bitsandbytes, bitsandbytes_windows-0.37.5.dist-info, and bitsandbytes-0.41.2.post2.dist-info.
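A sketch of that copy with xcopy, assuming a per-user Python 3.10 install and the webui at E:\stable-diffusion-webui (the exact .dist-info folder names may differ with other package versions):

    set SRC=C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python310\Lib\site-packages
    set DST=E:\stable-diffusion-webui\venv\Lib\site-packages
    rem /e copies subfolders (including empty ones), /i treats the destination as a folder, /y overwrites without prompting
    xcopy /e /i /y "%SRC%\bitsandbytes" "%DST%\bitsandbytes"
    xcopy /e /i /y "%SRC%\bitsandbytes_windows-0.37.5.dist-info" "%DST%\bitsandbytes_windows-0.37.5.dist-info"
    xcopy /e /i /y "%SRC%\bitsandbytes-0.41.2.post2.dist-info" "%DST%\bitsandbytes-0.41.2.post2.dist-info"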