abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

No matter how many times I build it, it won't start #1719

Open Enchante503 opened 2 months ago

Enchante503 commented 2 months ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

The library imports and runs normally.

Current Behavior

llama-cpp-python/llama_cpp/llama_cpp.py", line 79, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
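This error means the bindings could not find a compiled libllama shared library next to the Python package. As a quick diagnostic (a sketch, not official tooling; the exact location of the bundled library can vary between versions), you can locate the package Python would import and list any shared libraries it ships:

import importlib.util
from pathlib import Path

# find_spec locates the package without executing its __init__.py,
# so it works even when importing llama_cpp itself fails
spec = importlib.util.find_spec("llama_cpp")
print("llama_cpp resolves to:", spec.origin)

# search the package directory for the compiled backend
for so in Path(spec.origin).parent.rglob("libllama*"):
    print("found shared library:", so)

If nothing is found, the directory Python is actually importing from has no compiled library in it.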

Build log sample:

user@PC:~/LLM/llama-cpp-python$ pip install --force-reinstall --no-cache-dir llama-cpp-python
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.2.90.tar.gz (63.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.8/63.8 MB 116.7 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Downloading typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Downloading numpy-2.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting jinja2>=2.11.3 (from llama-cpp-python)
  Downloading jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama-cpp-python)
  Downloading MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
Downloading numpy-2.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.3/16.3 MB 119.5 MB/s eta 0:00:00
Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Downloading MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.2.90-cp310-cp310-linux_x86_64.whl size=305428668 sha256=9eadeabed54bf270fa0173161b2144866ae9f7823afe786bcc5103775ecabaff
  Stored in directory: /tmp/pip-ephem-wheel-cache-5f0wultl/wheels/3d/67/02/f950031435db4a5a02e6269f6adb6703bf1631c3616380f3c6
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, MarkupSafe, diskcache, jinja2, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.12.2
    Uninstalling typing_extensions-4.12.2:
      Successfully uninstalled typing_extensions-4.12.2
  Attempting uninstall: numpy
    Found existing installation: numpy 2.1.0
    Uninstalling numpy-2.1.0:
      Successfully uninstalled numpy-2.1.0
  Attempting uninstall: MarkupSafe
    Found existing installation: MarkupSafe 2.1.5
    Uninstalling MarkupSafe-2.1.5:
      Successfully uninstalled MarkupSafe-2.1.5
  Attempting uninstall: diskcache
    Found existing installation: diskcache 5.6.3
    Uninstalling diskcache-5.6.3:
      Successfully uninstalled diskcache-5.6.3
  Attempting uninstall: jinja2
    Found existing installation: Jinja2 3.1.4
    Uninstalling Jinja2-3.1.4:
      Successfully uninstalled Jinja2-3.1.4
  Attempting uninstall: llama-cpp-python
    Found existing installation: llama_cpp_python 0.2.90
    Uninstalling llama_cpp_python-0.2.90:
      Successfully uninstalled llama_cpp_python-0.2.90
Successfully installed MarkupSafe-2.1.5 diskcache-5.6.3 jinja2-3.1.4 llama-cpp-python-0.2.90 numpy-2.1.0 typing-extensions-4.12.2

user@PC:~/LLM/llama-cpp-python$ python test.py
Traceback (most recent call last):
  File "/home/user/LLM/llama-cpp-python/test.py", line 1, in <module>
    from llama_cpp import Llama
  File "/home/user/LLM/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/user/LLM/llama-cpp-python/llama_cpp/llama_cpp.py", line 88, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/home/user/LLM/llama-cpp-python/llama_cpp/llama_cpp.py", line 79, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
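Note that every path in this traceback points into /home/user/LLM/llama-cpp-python, the git checkout, not into site-packages. Because test.py is run from inside the clone, Python puts the script's directory first on sys.path, so the source tree (which contains no compiled libllama.so) likely shadows the wheel that pip just installed. A quick way to confirm (a minimal sketch using only the standard library):

import sys
import importlib.util

print("first sys.path entry:", sys.path[0])  # the script's own directory
spec = importlib.util.find_spec("llama_cpp")
print("llama_cpp would be imported from:", spec.origin)

Run from inside the clone, this should print the checkout path; run from any other directory, it should print the site-packages copy.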

Environment and Context

Windows 11, WSL2, Ubuntu 22.04.4 LTS, CUDA 12.1

Python 3.10.11
GNU Make 4.3 (x86_64-pc-linux-gnu)
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

Steps to Reproduce

I tried several approaches:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

and

pip install --force-reinstall --no-cache-dir llama-cpp-python

and

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121

and so on. I have researched various methods, including ChatGPT and Google, and also deleted caches and temp files.

P.S. When building with the CUDA option, for some reason CPU usage reached 100% and the build took a long time to complete.
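Full CPU usage during a CUDA build is expected: nvcc compiles the GGML kernels for several GPU architectures, and the build runs parallel compile jobs by default. If the machine needs to stay responsive, CMake's standard CMAKE_BUILD_PARALLEL_LEVEL environment variable should cap the job count, for example:

CMAKE_BUILD_PARALLEL_LEVEL=4 CMAKE_ARGS="-DGGML_CUDA=on" pip install --force-reinstall --no-cache-dir llama-cpp-python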

Failure Logs

Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.11/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/home/user/.pyenv/versions/3.10.11/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/home/user/LLM/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/user/LLM/llama-cpp-python/llama_cpp/llama_cpp.py", line 88, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/home/user/LLM/llama-cpp-python/llama_cpp/llama_cpp.py", line 79, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
Enchante503 commented 2 months ago

Although I'm not sure whether this was the cause, after deleting the folder created by the git clone at /home/user/LLM/llama-cpp-python, I created a new, empty build folder and ran the sequence of commands listed below. After that, everything worked correctly.

I didn't think pip install would be affected by the data cloned via Git, but could it have had an impact?

No temporary build files were created when I ran pip install from the build folder, so the folder itself doesn't seem related; however, it's possible that the files left over from the clone had an adverse effect during the CMake step.

I'm not an expert, so I don't know why it failed before or why it works now.

The command I executed was:

CMAKE_ARGS="-DGGML_CUDA=on -DCUDAToolkit_INCLUDE_DIR='/usr/local/cuda-12.1/targets/x86_64-linux/include'" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir 'llama-cpp-python[server]'
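A quick way to verify the fix, run from outside any source checkout so the installed wheel is the copy that gets imported:

python -c "import llama_cpp; print(llama_cpp.__version__)"

If this prints the version (0.2.90 here) instead of raising FileNotFoundError, the compiled library is being found.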