bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

python -m bitsandbytes - CUDA Setup failed despite GPU being available. Please run the following command to get more information: #606

Closed joelvargasapo closed 9 months ago

joelvargasapo commented 1 year ago

I was installing LLM Studio on Ubuntu 22.04 using the same steps I used for 20.04, except that I followed the steps below to install the NVIDIA driver, and I am now seeing the error below.

NVIDIA driver installation steps: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=22.04&target_type=deb_local

CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=122 python setup.py install
CUDA SETUP: Setup Failed!
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/ubuntu/h2o-llmstudio/./app.py", line 4, in <module>
    from app_utils.sections.chat import show_chat_is_running_dialog
  File "/home/ubuntu/h2o-llmstudio/./app_utils/sections/chat.py", line 17, in <module>
    from app_utils.utils import get_experiments, get_ui_elements, parse_ui_elements
  File "/home/ubuntu/h2o-llmstudio/./app_utils/utils.py", line 33, in <module>
    from llm_studio.python_configs.text_causal_language_modeling_config import (
  File "/home/ubuntu/h2o-llmstudio/./llm_studio/python_configs/text_causal_language_modeling_config.py", line 15, in <module>
    from llm_studio.src.models import text_causal_language_modeling_model, text_reward_model
  File "/home/ubuntu/h2o-llmstudio/./llm_studio/src/models/text_causal_language_modeling_model.py", line 5, in <module>
    from peft import LoraConfig, get_peft_model
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/peft/mapping.py", line 20, in <module>
    from .peft_model import (
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/peft/peft_model.py", line 39, in <module>
    from .tuners import (
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/peft/tuners/__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/peft/tuners/lora.py", line 41, in <module>
    import bitsandbytes as bnb
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "/home/ubuntu/.local/share/virtualenvs/h2o-llmstudio-B1eqiLCk/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:

jiqing-feng commented 1 year ago

Same issue with torch==2.0.1 bitsandbytes==0.40.2 cuda==11.7
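
Mismatches like this (a system-level cuda==11.7 versus the toolkit the wheels were built for) show up in the setup log as compact tags such as CUDA_VERSION=117. As a small illustration of that convention (the helper name is mine, not part of bitsandbytes):

```python
def cuda_version_tag(release: str) -> str:
    """Convert a CUDA release string like '11.7' into the compact
    tag ('117') that appears in log lines such as CUDA_VERSION=118."""
    major, minor = release.split(".")[:2]
    return f"{major}{minor}"
```

So cuda==11.7 corresponds to the tag 117, which should match the suffix of the binary bitsandbytes tries to load.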

FurkanGozukara commented 1 year ago

Same here.

D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

  warn(msg)
================================================================================
The following directories listed in your path were found to be non-existent: {WindowsPath('AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAotnJucoClk6q27i00zbDTgQAAAACAAAAAAAQZgAAAAEAACAAAABR/EJ65EFlk4BCexrRmIwhTLr6nM/sU/5Jp75aLJ/W4AAAAAAOgAAAAAIAACAAAAAykqjYoLisVQVeNwLbXjo8yiN/zPyh5RqcHFN5HQtVrGAAAABxC6FKnjEMIeCycGwCq9bO8VB2WgGrM+0mygHrt8dAszP5/ahQ2MNSaTFGM6FC33ZgcBeETWdPzv9eQEq6keSWORzuXtuMfcbmo2m/dkRBpk80CbXwW1yrq1iIq7atnKRAAAAAo+A8ocHHFIs6kP9FlvjMO6Wj6vx5LR0oKh5CPKco9t3yaQvUMauiydkLlpHSNfXx5mrtZxg5JjmCLiXKRfDjgg==')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "C:\Python3108\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Python3108\lib\runpy.py", line 146, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\Python3108\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "D:\97 kohya\kohya_ss\venv\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
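
Solutions 1a-1c in the log above boil down to: find where libcudart.so actually lives, then put that directory on LD_LIBRARY_PATH. A minimal stdlib sketch of that lookup (this is not bitsandbytes' actual detection code; the search roots and function name are assumptions for illustration):

```python
import os
from pathlib import Path

def find_libcudart(extra_dirs=()):
    """Search LD_LIBRARY_PATH entries plus a common backup location
    for libcudart.so*; directories that don't exist (like the
    'non-existent' ones the warning lists) are silently skipped."""
    candidates = [Path(p) for p in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep) if p]
    candidates += [Path(d) for d in extra_dirs]
    candidates += [Path("/usr/local/cuda/lib64")]  # the backup path the log mentions
    hits = []
    for d in candidates:
        if d.is_dir():
            hits.extend(sorted(d.glob("libcudart.so*")))
    return hits
```

If this kind of search returns nothing, that matches Solution 2 (CUDA is not installed); if it finds a hit, exporting the parent directory into LD_LIBRARY_PATH is Solution 1b. Note also that on Windows the `argument of type 'WindowsPath' is not iterable` line suggests a Path object being used where a string is expected; converting with `str(path)` avoids that class of error.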
fredrik-hansen commented 1 year ago

I was able to get this to work. The solution in my case was found in how_to_use_nonpytorch_cuda.md. The most likely cause was a system update of CUDA to cuda122 while torch was on cuda121.

I did the following:

bash cuda_install.sh 122 /home/user/.local/cuda122 1
export BNB_CUDA_VERSION=122
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/.local/cuda122

When trying my code again it worked.

How to use a CUDA version that is different from PyTorch

Some features of bitsandbytes may need a newer CUDA version than is regularly supported by the PyTorch binaries from conda/pip. In that case you can use the following instructions to load a precompiled bitsandbytes binary that works for you.
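
Judging from the loader line in the log above (libbitsandbytes_cuda118.so), the BNB_CUDA_VERSION override effectively changes which precompiled binary gets loaded. A hedged sketch of that naming scheme, inferred from the log rather than taken from the library's source (the helper name is mine):

```python
import os

def bnb_binary_name(torch_cuda_tag="118"):
    """Pick the bitsandbytes binary filename: an explicit
    BNB_CUDA_VERSION wins over the CUDA tag detected from
    PyTorch (e.g. CUDA 11.8 -> '118')."""
    version = os.environ.get("BNB_CUDA_VERSION") or torch_cuda_tag
    return f"libbitsandbytes_cuda{version}.so"
```

With `export BNB_CUDA_VERSION=122` set as in the comment above, the loaded binary would be libbitsandbytes_cuda122.so, which is why the override plus an LD_LIBRARY_PATH pointing at the matching toolkit fixed the failure.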

mawenju203 commented 1 year ago

win10 : https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels

swumagic commented 11 months ago

Bitsandbytes did not support Windows before, but my method can support it. (yuhuang)

1 Open the folder J:\StableDiffusion\sdwebui: click the folder's address bar and type CMD (or press WIN+R, type CMD, and press Enter), then run cd /d J:\StableDiffusion\sdwebui

2 J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes

3 J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows

4 J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl

Replace J:\StableDiffusion\sdwebui\py310 with your own SD venv directory (the folder containing python.exe).

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.