abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

AttributeError: llama-cpp-python/llama_cpp/libllama.so: undefined symbol: llama_backend_init when building from src #479

gjmulder opened this issue 1 year ago (status: Open)

gjmulder commented 1 year ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

Not throw the following error when creating a Llama object:

AttributeError: /home/vmajor/llama-cpp-python/llama_cpp/libllama.so: undefined symbol: llama_backend_init

Also reported independently

Please provide a detailed written description of what llama-cpp-python did, instead.

$ python ./smoke_test.py -f ./prompt.txt
Traceback (most recent call last):
  File "/home/mulderg/Work/./smoke_test.py", line 4, in <module>
    from llama_cpp import Llama
  File "/home/mulderg/Work/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/mulderg/Work/llama-cpp-python/llama_cpp/llama_cpp.py", line 334, in <module>
    _lib.llama_backend_init.argtypes = [c_bool]
  File "/home/mulderg/anaconda3/envs/lcp/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__
    func = self.__getitem__(name)
  File "/home/mulderg/anaconda3/envs/lcp/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
AttributeError: /home/mulderg/Work/llama-cpp-python/llama_cpp/libllama.so: undefined symbol: llama_backend_init
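
The failure can be reproduced outside the package with a few lines of ctypes (a minimal diagnostic sketch; the .so path is taken from the traceback above):

import ctypes

# CDLL resolves symbols lazily via __getattr__, so hasattr() returns False
# when the library was built from a llama.cpp that predates the symbol rename.
lib = ctypes.CDLL("/home/mulderg/Work/llama-cpp-python/llama_cpp/libllama.so")
print(hasattr(lib, "llama_backend_init"))  # False on a stale build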

Environment and Context

llama-cpp-python$ python3 --version
Python 3.10.10

llama-cpp-python$ git log | head -3
commit 6705f9b6c6b3369481c4e2e0e15d0f1af7a96eff
Author: Andrei Betlen <abetlen@gmail.com>
Date:   Thu Jul 13 23:32:06 2023 -0400

llama-cpp-python$ cd vendor/llama.cpp/
llama-cpp-python/vendor/llama.cpp$ git log | head -3
commit 1d1630996920f889cdc08de26cebf2415958540e
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Sun Jul 9 05:59:53 2023 -0300

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

$ grep Llama smoke_test.py 
from llama_cpp import Llama
llm = Llama(model_path=args.model, n_ctx=args.n_ctx, n_threads=args.n_threads, n_gpu_layers=args.n_gpu_layers)
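
For context, a plausible reconstruction of smoke_test.py from the two grepped lines (the flag names and defaults below are assumptions, not the actual script):

import argparse
from llama_cpp import Llama

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", dest="prompt_file")  # prompt file, per -f above
parser.add_argument("--model", default="/data/llama/7B/ggml-model-f16.bin")
parser.add_argument("--n_ctx", type=int, default=8192)
parser.add_argument("--n_threads", type=int, default=8)
parser.add_argument("--n_gpu_layers", type=int, default=0)
args = parser.parse_args()

# With a stale build, the import alone raises the AttributeError above,
# so this line is never reached.
llm = Llama(model_path=args.model, n_ctx=args.n_ctx,
            n_threads=args.n_threads, n_gpu_layers=args.n_gpu_layers)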
abetlen commented 1 year ago

@gjmulder you might need to force a rebuild? That name was changed in the llama.cpp API, but I made the corresponding change in llama_cpp.py.

gjmulder commented 1 year ago

@abetlen, what am I missing?

$ python --version
Python 3.10.10

$  rm -rf llama-cpp-python

$  git clone --recurse-submodules git@github.com:abetlen/llama-cpp-python.git

llama-cpp-python$ pip uninstall llama-cpp-python
WARNING: Skipping llama-cpp-python as it is not installed.

llama-cpp-python$ pip install -e .
Obtaining file:///home/mulderg/Work/llama-cpp-python
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (1.25.1)
Requirement already satisfied: diskcache>=5.6.1 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (5.6.1)
Building wheels for collected packages: llama-cpp-python
  Building editable for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.71-0.editable-py3-none-any.whl size=6649 sha256=2813623097821633e2256d3f07673f5787631ca4fb9313c64b891016b71c8bac
  Stored in directory: /data/tmp/pip-ephem-wheel-cache-mjdakw6h/wheels/e3/47/be/a6fc739172435b96a91e69b937ae614e0dd7d99ab1f31bed07
Successfully built llama-cpp-python
Installing collected packages: llama-cpp-python
Successfully installed llama-cpp-python-0.1.71

$ python ./smoke_test.py -f ./prompt.txt
Traceback (most recent call last):
  File "/home/mulderg/Work/./smoke_test.py", line 4, in <module>
    from llama_cpp import Llama
  File "/home/mulderg/Work/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/mulderg/Work/llama-cpp-python/llama_cpp/llama_cpp.py", line 80, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/home/mulderg/Work/llama-cpp-python/llama_cpp/llama_cpp.py", line 71, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
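
For reference, the loader that raises here amounts to roughly the following (a simplified sketch inferred from the traceback, not the exact source). It only searches next to llama_cpp.py, so an editable install that never built or copied libllama.so into llama_cpp/ fails exactly this way:

import ctypes
import pathlib

def _load_shared_library(lib_base_name: str):
    # Look for the compiled library next to this module; the real code
    # also handles platform-specific suffixes (.so, .dylib, .dll).
    base = pathlib.Path(__file__).parent
    for candidate in (base / f"lib{lib_base_name}.so",
                      base / f"lib{lib_base_name}.dylib",
                      base / f"{lib_base_name}.dll"):
        if candidate.exists():
            return ctypes.CDLL(str(candidate))
    raise FileNotFoundError(
        f"Shared library with base name '{lib_base_name}' not found")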

$ pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
Collecting llama-cpp-python
  Downloading llama_cpp_python-0.1.71.tar.gz (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 73.7 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting diskcache>=5.6.1
  Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 223.0 MB/s eta 0:00:00
Collecting numpy>=1.20.0
  Downloading numpy-1.25.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.6/17.6 MB 64.3 MB/s eta 0:00:00
Collecting typing-extensions>=4.5.0
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.71-cp310-cp310-linux_x86_64.whl size=252026 sha256=37c33706acf1521ebeaf462f404c87abdb79dd7cb25bd4345b6d22cfd686da2f
  Stored in directory: /data/tmp/pip-ephem-wheel-cache-b3lk8u8y/wheels/85/39/ee/cb44d8af903d1c0b58741f092ea2981424ac28685ab4bc7c28
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.7.1
    Uninstalling typing_extensions-4.7.1:
      Successfully uninstalled typing_extensions-4.7.1
  Attempting uninstall: numpy
    Found existing installation: numpy 1.25.1
    Uninstalling numpy-1.25.1:
      Successfully uninstalled numpy-1.25.1
  Attempting uninstall: diskcache
    Found existing installation: diskcache 5.6.1
    Uninstalling diskcache-5.6.1:
      Successfully uninstalled diskcache-5.6.1
Successfully installed diskcache-5.6.1 llama-cpp-python-0.1.71 numpy-1.25.1 typing-extensions-4.7.1

$ python ./smoke_test.py -f ./prompt.txt
llama.cpp: loading model from /data/llama/7B/ggml-model-f16.bin
llama_model_load_internal: format     = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 8192
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 1 (mostly F16)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.08 MB
llama_model_load_internal: mem required  = 14645.09 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size  = 4096.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 
^C
iactix commented 1 year ago

Same here. Pretty much the same experience as it was over here:

https://github.com/abetlen/llama-cpp-python/issues/439

But this time the error is "undefined symbol: llama_backend_init". Here is how I rebuild, as I did many times before until issue 439 happened:

pip uninstall llama-cpp-python
git pull
set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
python setup.py clean
python setup.py install

Note that the repo is properly checked out recursively. If there is a way to clean it further and rebuild even more thoroughly, I would love to find out how. This is on Windows 10.

AnonymousAmalgrams commented 1 year ago

Yeah, I'm not sure what's going on here either, but rebuilding seems to reliably fix the issue. It's not clear how: I'm fairly sure I pulled an entirely fresh copy of the repo today for a Docker build, and the llama.cpp version currently linked as a submodule clearly works. Yet I initially ended up with what I assume was an older version of the llama.cpp source, one that had llama_init_backend instead of llama_backend_init and no llama_backend_free at all. I've saved a text comparison in case anybody sees a reason to dig deeper and pin down exactly which old version of the file turned up. I don't think there's anything wrong with the repo itself, though.
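
One way to check which vintage of the library a build actually produced (a hedged sketch; the .so path is an assumption, adjust it to your checkout):

import ctypes

lib = ctypes.CDLL("llama_cpp/libllama.so")
for name in ("llama_backend_init", "llama_init_backend", "llama_backend_free"):
    print(name, "->", hasattr(lib, name))
# Only llama_init_backend present means the build picked up a pre-rename llama.cpp.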

YerongLi commented 1 year ago

I am getting a similar issue with llama-cpp-python-0.1.85 https://github.com/abetlen/llama-cpp-python/issues/659

takosalad commented 9 months ago

I now get the same error in a completely fresh environment with freshly installed pip packages:

/lib/python3.11/site-packages/llama_cpp/libllama.so: undefined symbol: llama_init_from_file

takosalad commented 9 months ago

@iactix The llama-cpp-python repo does not contain a file "setup.py", where did you get it from?

takosalad commented 9 months ago

Fixed the error for me: it was the #!/usr/bin/python3 shebang line at the beginning of a .py file, which referred to the system's Python installation rather than the one in my virtual environment. I just edited it to point to the venv's Python binary instead.
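
For reference, a shebang that resolves through PATH (and therefore through the active venv) instead of a hard-coded system interpreter, plus a quick way to confirm which interpreter actually ran the script:

#!/usr/bin/env python3
# 'env' picks the first python3 on PATH, which is the venv's interpreter
# once the venv is activated.
import sys
print(sys.executable)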