I'm using Windows 10 with CUDA 12.3 installed. The error occurs when I try to import Llama:
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\11.2\bin'
(mllm) C:\Users\ooo\tor\LLM_JJMobility>pip install llama-cpp-python
Collecting llama-cpp-python
Downloading llama_cpp_python-0.2.28.tar.gz (9.4 MB)
---------------------------------------- 9.4/9.4 MB 3.6 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\programdata\miniconda3\envs\mllm\lib\site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in c:\programdata\miniconda3\envs\mllm\lib\site-packages (from llama-cpp-python) (1.24.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
---------------------------------------- 45.5/45.5 kB ? eta 0:00:00
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... done
Created wheel for llama-cpp-python: filename=llama_cpp_python-0.2.28-cp310-cp310-win_amd64.whl size=1900570 sha256=f4c975f9518a55e76ddce2952d8745b7664b462cb4f7e292128c354ee7ca4733
Stored in directory: c:\users\ooo\appdata\local\pip\cache\wheels\93\6e\a9\478cce089dc2a082bdcffe468a1c65465c91b25d911b30da82
Successfully built llama-cpp-python
Installing collected packages: diskcache, llama-cpp-python
Successfully installed diskcache-5.6.3 llama-cpp-python-0.2.28
(mllm) C:\Users\ooo\tor\LLM_JJMobility>python
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Miniconda3\envs\mllm\lib\site-packages\llama_cpp\__init__.py", line 1, in <module>
from .llama_cpp import *
File "C:\ProgramData\Miniconda3\envs\mllm\lib\site-packages\llama_cpp\llama_cpp.py", line 87, in <module>
_lib = _load_shared_library(_lib_base_name)
File "C:\ProgramData\Miniconda3\envs\mllm\lib\site-packages\llama_cpp\llama_cpp.py", line 63, in _load_shared_library
os.add_dll_directory(os.path.join(os.environ["CUDA_PATH"], "bin"))
File "C:\ProgramData\Miniconda3\envs\mllm\lib\os.py", line 1118, in add_dll_directory
cookie = nt._add_dll_directory(path)
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\11.2\\bin'
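For anyone debugging the same failure: the traceback shows llama_cpp joining CUDA_PATH with "bin" and handing the result to os.add_dll_directory, which raises if the directory does not exist. A small sketch that reproduces that check without importing the package (the helper names here are mine, not from llama_cpp):

```python
import os

def cuda_bin_dir(env=os.environ):
    """Return the directory llama_cpp will pass to os.add_dll_directory,
    or None when CUDA_PATH is not set at all."""
    cuda_path = env.get("CUDA_PATH")
    if cuda_path is None:
        return None
    return os.path.join(cuda_path, "bin")

def cuda_path_is_valid(env=os.environ):
    """True only when CUDA_PATH points at a toolkit that really exists
    on disk -- the check that raised WinError 3 in the traceback."""
    bin_dir = cuda_bin_dir(env)
    return bin_dir is not None and os.path.isdir(bin_dir)

if __name__ == "__main__":
    print("CUDA_PATH =", os.environ.get("CUDA_PATH"))
    print("points at a real install:", cuda_path_is_valid())
```

Running this before importing llama_cpp tells you immediately whether the environment variable and the installed toolkit disagree.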
I thought CUDA 11.2 was a specific requirement of this package, but it turned out to be a conflict on my system: the CUDA_PATH environment variable still pointed to 11.2, while 12.3 was the version actually installed.
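A quick in-session workaround is to override CUDA_PATH for the current process before the import. Note the "v12.3" folder name below is an assumption — list C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA to see which version folders actually exist on your machine:

```python
import os

# Point CUDA_PATH at the toolkit that is actually installed.
# The "v12.3" folder name is an assumption; check what exists under
# C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA and adjust.
os.environ["CUDA_PATH"] = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3"

# Only import the package after the override, so its DLL-directory
# lookup sees the corrected path:
# from llama_cpp import Llama
```

This only affects the current Python process; the permanent fix is to update the CUDA_PATH system environment variable in Windows settings so new shells inherit the correct value.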