PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device; it is 100% private.
Apache License 2.0

CUDA Setup failed despite GPU being available #241

Open CalendulaED opened 1 year ago

CalendulaED commented 1 year ago

The problem happens during ingest:

```
python ingest.py --device_type cuda
```

```
2023-07-22 11:13:57,529 - INFO - ingest.py:120 - Loading documents from D:\OnlineLearning\GPT\localGPT/SOURCE_DOCUMENTS
2023-07-22 11:14:02,200 - INFO - ingest.py:129 - Loaded 1 documents from D:\OnlineLearning\GPT\localGPT/SOURCE_DOCUMENTS
2023-07-22 11:14:02,200 - INFO - ingest.py:130 - Split into 72 chunks of text
2023-07-22 11:14:05,229 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
False

===================================BUG REPORT===================================
C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

  warn(msg)
The following directories listed in your path were found to be non-existent: {WindowsPath('C'), WindowsPath('/Users/wuyux/anaconda3/envs/localgpt/lib')}
C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\wuyux\anaconda3\envs\localgpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local

Traceback (most recent call last):
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\utils\import_utils.py", line 1099, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1050, in _gcd_import
  File "", line 1027, in _find_and_load
  File "", line 1006, in _find_and_load_unlocked
  File "", line 688, in _load_unlocked
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\models\t5\modeling_t5.py", line 37, in <module>
    from ...modeling_utils import PreTrainedModel
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\modeling_utils.py", line 86, in <module>
    from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\accelerate\__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\accelerate\accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
    from .utils import (
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\accelerate\utils\__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\accelerate\utils\bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\OnlineLearning\GPT\localGPT\ingest.py", line 158, in <module>
    main()
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "D:\OnlineLearning\GPT\localGPT\ingest.py", line 133, in main
    embeddings = HuggingFaceInstructEmbeddings(
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\langchain\embeddings\huggingface.py", line 137, in __init__
    self.client = INSTRUCTOR(
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\sentence_transformers\SentenceTransformer.py", line 95, in __init__
    modules = self._load_sbert_model(model_path)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\InstructorEmbedding\instructor.py", line 474, in _load_sbert_model
    module = module_class.load(os.path.join(model_path, module_config['path']))
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\InstructorEmbedding\instructor.py", line 306, in load
    return INSTRUCTOR_Transformer(model_name_or_path=input_path, **config)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\InstructorEmbedding\instructor.py", line 240, in __init__
    self._load_model(self.model_name_or_path, config, cache_dir, **model_args)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\sentence_transformers\models\Transformer.py", line 47, in _load_model
    self._load_t5_model(model_name_or_path, config, cache_dir)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\sentence_transformers\models\Transformer.py", line 53, in _load_t5_model
    from transformers import T5EncoderModel
  File "", line 1075, in _handle_fromlist
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\utils\import_utils.py", line 1090, in __getattr__
    value = getattr(module, name)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\utils\import_utils.py", line 1089, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\transformers\utils\import_utils.py", line 1101, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.t5.modeling_t5 because of the following error (look up to see its traceback):

    CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
```

```
python -m bitsandbytes
```

```
False

===================================BUG REPORT===================================
C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

  warn(msg)
The following directories listed in your path were found to be non-existent: {WindowsPath('/Users/wuyux/anaconda3/envs/localgpt/lib'), WindowsPath('C')}
C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\wuyux\anaconda3\envs\localgpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local

Traceback (most recent call last):
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\runpy.py", line 146, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
```
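For readers hitting the same thing: both dumps above show that this bitsandbytes build only searches for the Linux runtime names (`libcudart.so`, `libcudart.so.11.0`, `libcudart.so.12.0`) and then trips over a `WindowsPath` object ("argument of type 'WindowsPath' is not iterable"), so the failure is a Windows packaging problem rather than a missing GPU. As a rough diagnostic sketch (not part of localGPT; the `cudart64*.dll` glob is an assumption about how the Windows CUDA runtime DLLs are named), you can check whether the conda environment actually contains a CUDA runtime:

```python
# Diagnostic sketch: look for Windows CUDA runtime DLLs inside the conda env.
# The pattern "cudart64*.dll" is an assumption about the DLL naming;
# adjust it if your CUDA runtime files are named differently.
import sys
from pathlib import Path

env_root = Path(sys.prefix)  # e.g. C:\Users\<user>\anaconda3\envs\localgpt
hits = sorted(env_root.rglob("cudart64*.dll"))
if hits:
    print("Found CUDA runtime DLLs:")
    for dll in hits:
        print("  ", dll)
else:
    print("No cudart64*.dll found under", env_root)
```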

But I do have CUDA; here is the nvidia-smi output:

```
Sat Jul 22 11:18:28 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.40                 Driver Version: 536.40       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 Ti   WDDM  | 00000000:08:00.0  On |                  N/A |
| 30%   44C    P0             44W / 200W  |   1854MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|========================================================================================|
|    0   N/A  N/A      5440    C+G   ...oogle\Chrome\Application\chrome.exe       N/A    |
|    0   N/A  N/A      8004    C+G   ...tionsPlus\logioptionsplus_agent.exe       N/A    |
|    0   N/A  N/A      8816    C+G   C:\Windows\explorer.exe                      N/A    |
|    0   N/A  N/A     10488    C+G   ...7\extracted\runtime\WeChatAppEx.exe       N/A    |
|    0   N/A  N/A     12192    C+G   ...nt.CBS_cw5n1h2txyewy\SearchHost.exe       N/A    |
|    0   N/A  N/A     12348    C+G   ...2txyewy\StartMenuExperienceHost.exe       N/A    |
|    0   N/A  N/A     15172    C+G   ...ogram Files\pCloud Drive\pCloud.exe       N/A    |
|    0   N/A  N/A     15312    C+G   ...t.LockApp_cw5n1h2txyewy\LockApp.exe       N/A    |
|    0   N/A  N/A     17544    C+G   ...on\114.0.1823.82\msedgewebview2.exe       N/A    |
|    0   N/A  N/A     18528    C+G   ...CBS_cw5n1h2txyewy\TextInputHost.exe       N/A    |
|    0   N/A  N/A     19608    C+G   ...al\Discord\app-1.0.9015\Discord.exe       N/A    |
|    0   N/A  N/A     20168    C+G   ...\cef\cef.win7x64\steamwebhelper.exe       N/A    |
|    0   N/A  N/A     20740    C+G   ...on\wallpaper_engine\wallpaper32.exe       N/A    |
|    0   N/A  N/A     23500    C+G   C:\Program Files\LGHUB\lghub.exe             N/A    |
|    0   N/A  N/A     23544    C+G   ...B\system_tray\lghub_system_tray.exe       N/A    |
|    0   N/A  N/A     24236    C+G   C:\Program Files\NordVPN\NordVPN.exe         N/A    |
|    0   N/A  N/A     25052    C+G   ...(x86)\Canon\Quick Menu\CNQMMAIN.EXE       N/A    |
|    0   N/A  N/A     48332    C+G   ...m Files\Mozilla Firefox\firefox.exe       N/A    |
|    0   N/A  N/A     61796    C+G   ...Programs\Microsoft VS Code\Code.exe       N/A    |
|    0   N/A  N/A     79776    C+G   ...__8wekyb3d8bbwe\WindowsTerminal.exe       N/A    |
|    0   N/A  N/A     89936    C+G   ...m Files\Mozilla Firefox\firefox.exe       N/A    |
|    0   N/A  N/A     97424    C+G   ...crosoft\Edge\Application\msedge.exe       N/A    |
|    0   N/A  N/A    128708    C+G   ...ta\Local\Programs\Notion\Notion.exe       N/A    |
|    0   N/A  N/A    145032    C+G   ...e Stream\78.0.1.0\GoogleDriveFS.exe       N/A    |
|    0   N/A  N/A    168032    C+G   ...GeForce Experience\NVIDIA Share.exe       N/A    |
|    0   N/A  N/A    197960    C+G   ...paper_engine\bin\webwallpaper32.exe       N/A    |
|    0   N/A  N/A    198652    C+G   ...siveControlPanel\SystemSettings.exe       N/A    |
|    0   N/A  N/A    208000    C+G   ...5n1h2txyewy\ShellExperienceHost.exe       N/A    |
|    0   N/A  N/A    209056    C+G   ...__8wekyb3d8bbwe\WindowsTerminal.exe       N/A    |
+---------------------------------------------------------------------------------------+
```
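Note that the `CUDA Version: 12.2` in the nvidia-smi header is the newest runtime the driver supports, while the bitsandbytes log above shows this environment's PyTorch was built against CUDA 11.8 (`CUDA_VERSION=118`); that difference is expected. A quick way to confirm PyTorch itself can reach the GPU, independently of bitsandbytes, is a check along these lines (a minimal sketch using standard PyTorch calls):

```python
# Sanity check: can PyTorch (independent of bitsandbytes) see the GPU?
import torch

print("CUDA available:", torch.cuda.is_available())
print("PyTorch built against CUDA:", torch.version.cuda)  # e.g. "11.8"
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))
```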

Ananderz commented 1 year ago

Getting this error as well

Ananderz commented 1 year ago

@CalendulaED I fixed it by using the following command:

```
pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.git
```
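This installs a community fork of bitsandbytes that ships Windows `.dll` binaries instead of the Linux `.so` files the stock package looks for. A minimal way to confirm the fix took (a sketch, assuming the fork keeps the same package name) is that the import now succeeds, since the original RuntimeError was raised at import time; running `python -m bitsandbytes` again should likewise report a loaded binary instead of the setup error.

```python
# Post-install check: the original RuntimeError came from importing
# bitsandbytes.cextension, so a clean import means the CUDA binary
# (a .dll on Windows with this fork) was found and loaded.
import bitsandbytes as bnb
print("bitsandbytes imported without a CUDA setup error")
```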

CalendulaED commented 1 year ago

> @CalendulaED I fixed it by using the following command:
>
> `pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.git`

Thank you so much! I used your command (`pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.git`) and then ran `python ingest.py --device_type cuda`.

This is what I got; it looks like the DB was created successfully:

```
2023-07-23 16:10:21,057 - INFO - ingest.py:120 - Loading documents from D:\OnlineLearning\GPT\localGPT/SOURCE_DOCUMENTS
2023-07-23 16:10:25,439 - INFO - ingest.py:129 - Loaded 1 documents from D:\OnlineLearning\GPT\localGPT/SOURCE_DOCUMENTS
2023-07-23 16:10:25,439 - INFO - ingest.py:130 - Split into 72 chunks of text
2023-07-23 16:10:40,658 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

binary_path: C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary C:\Users\wuyux\anaconda3\envs\localgpt\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
max_seq_length  512
2023-07-23 16:10:44,416 - INFO - __init__.py:88 - Running Chroma using direct local API.
2023-07-23 16:10:47,585 - WARNING - __init__.py:43 - Using embedded DuckDB with persistence: data will be stored in: D:\OnlineLearning\GPT\localGPT/DB
2023-07-23 16:10:47,822 - INFO - ctypes.py:22 - Successfully imported ClickHouse Connect C data optimizations
2023-07-23 16:10:47,932 - INFO - json_impl.py:45 - Using python library for writing JSON byte strings
2023-07-23 16:10:48,096 - INFO - duckdb.py:454 - No existing DB found in D:\OnlineLearning\GPT\localGPT/DB, skipping load
2023-07-23 16:10:48,096 - INFO - duckdb.py:466 - No existing DB found in D:\OnlineLearning\GPT\localGPT/DB, skipping load
2023-07-23 16:10:53,416 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: D:\OnlineLearning\GPT\localGPT/DB
2023-07-23 16:10:53,430 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: D:\OnlineLearning\GPT\localGPT/DB
```
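The last two lines confirm the embeddings were persisted to `D:\OnlineLearning\GPT\localGPT/DB`. For completeness, here is a minimal sketch of reloading that persisted store and running a similarity search, using the same langchain-era APIs that appear in the log; the query string and `k` value are placeholders:

```python
# Sketch: reload the Chroma DB that ingest.py persisted and query it.
# Assumes the langchain / InstructorEmbedding versions used in this thread.
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    model_kwargs={"device": "cuda"},
)
db = Chroma(
    persist_directory=r"D:\OnlineLearning\GPT\localGPT/DB",
    embedding_function=embeddings,
)
docs = db.similarity_search("what is this document about?", k=4)  # placeholder query
for doc in docs:
    print(doc.metadata.get("source"), "-", doc.page_content[:80])
```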

Ananderz commented 1 year ago

It works! It's just telling you to contact bitsandbytes IF you have an error :)