bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

libcudart.so Not Found #1313

Closed: arunsandy1309 closed this issue 2 months ago

arunsandy1309 commented 3 months ago

System Info

I'm trying to use the LLaVA-Med model on my Windows 11 machine in a virtual env. Below are my configurations:

OS - Windows 11
Python - 3.10.14
PyTorch - 4.36 (Required)
bitsandbytes - 0.41.0 (Required)
CUDA - 11.8

The exact torch and bitsandbytes versions are important for running the LLaVA-Med.

Reproduction

!python -m bitsandbytes

False

===================================BUG REPORT===================================

The following directories listed in your path were found to be non-existent: {WindowsPath('/Users/aruns/miniconda3/envs/llava_med/lib'), WindowsPath('C')}
The following directories listed in your path were found to be non-existent: {WindowsPath('module'), WindowsPath('/matplotlib_inline.backend_inline')}
The following directories listed in your path were found to be non-existent: {WindowsPath('vs/workbench/api/node/extensionHostProcess')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 6.1.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118_nocublaslt.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

warn(msg)
c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\aruns\miniconda3\envs\llava_med did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU! If you run into issues with 8-bit matmul, you can try 4-bit quantization: https://huggingface.co/blog/4bit-transformers-bitsandbytes
warn(msg)
Traceback (most recent call last):
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\runpy.py", line 146, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "c:\Users\aruns\miniconda3\envs\llava_med\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

[screenshot of the error output]

Expected behavior

Can you let me know if I missed anything? I have already added the CUDA bin folder to the PATH environment variable. I have no idea how to proceed on this.
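One detail worth noting from the log above: on Windows the CUDA runtime ships as cudart64_*.dll, while this bitsandbytes release only searches for the Linux library names (libcudart.so, libcudart.so.11.0, ...), so the lookup can never succeed on a native Windows PATH. A hedged, illustrative sketch of scanning PATH for the Windows-named runtime instead:

```python
# Illustrative sketch only: look for the Windows CUDA runtime on PATH.
# On Windows the runtime is cudart64_*.dll, not libcudart.so, which is
# the name bitsandbytes 0.41.x searches for (per the log above).
import os
from pathlib import Path

candidates = []
for entry in os.environ.get("PATH", "").split(os.pathsep):
    p = Path(entry)
    if p.is_dir():
        # e.g. cudart64_110.dll under the CUDA 11.x bin directory
        candidates.extend(p.glob("cudart64_*.dll"))

print(candidates or "no cudart64_*.dll found on PATH")
```

If this finds the DLL, the runtime is installed and on PATH; the failure is then in the library's Linux-only lookup, not in your CUDA setup.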

matthewdouglas commented 3 months ago

Windows support requires a newer version, bitsandbytes>=0.43.0. You may be able to get things to work with 0.41.0 under WSL, or alternatively see if the upstream llava-med package can relax its requirement.
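A quick way to check whether the installed release clears that bar, as a sketch (the 0.43.0 threshold comes from the comment above; the helper names here are made up for illustration):

```python
# Sketch: compare the installed bitsandbytes version against 0.43.0,
# the first release with native Windows support per the maintainer.
from importlib.metadata import version, PackageNotFoundError


def parse_version(v: str) -> tuple:
    """Turn '0.43.3' into (0, 43, 3), keeping only numeric components."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)


def supports_windows(installed: str) -> bool:
    """True if the installed version is at least 0.43.0."""
    return parse_version(installed) >= (0, 43, 0)


try:
    print(supports_windows(version("bitsandbytes")))
except PackageNotFoundError:
    print("bitsandbytes is not installed")
```

If this prints False (or the package is missing), `pip install "bitsandbytes>=0.43.0"` would be the upgrade path, assuming the pinned llava-med requirement can be loosened.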

Did you try on Windows with bitsandbytes == 0.43.3 despite the warning?

arunsandy1309 commented 2 months ago

Yes, this is now working...