gjmulder opened this issue 1 year ago
@gjmulder might need to force a rebuild? That name was changed in the llama.cpp API, but I made the corresponding change in llama_cpp.py.
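(One quick sanity check, once the package imports at all, is to print which shared library the bindings actually loaded. _lib is a private attribute of the module shown in the traceback below, so treat this as a debugging sketch rather than a stable API:)
$ # the CDLL repr includes the full path of the library that got loaded
$ python -c "import llama_cpp; print(llama_cpp.llama_cpp._lib)"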
@abetlen, what am I missing?
$ python --version
Python 3.10.10
$ rm -rf llama-cpp-python
$ git clone --recurse-submodules git@github.com:abetlen/llama-cpp-python.git
llama-cpp-python$ pip uninstall llama-cpp-python
WARNING: Skipping llama-cpp-python as it is not installed.
llama-cpp-python$ pip install -e .
Obtaining file:///home/mulderg/Work/llama-cpp-python
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (1.25.1)
Requirement already satisfied: diskcache>=5.6.1 in /home/mulderg/anaconda3/envs/lcp/lib/python3.10/site-packages (from llama-cpp-python==0.1.71) (5.6.1)
Building wheels for collected packages: llama-cpp-python
Building editable for llama-cpp-python (pyproject.toml) ... done
Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.71-0.editable-py3-none-any.whl size=6649 sha256=2813623097821633e2256d3f07673f5787631ca4fb9313c64b891016b71c8bac
Stored in directory: /data/tmp/pip-ephem-wheel-cache-mjdakw6h/wheels/e3/47/be/a6fc739172435b96a91e69b937ae614e0dd7d99ab1f31bed07
Successfully built llama-cpp-python
Installing collected packages: llama-cpp-python
Successfully installed llama-cpp-python-0.1.71
$ python ./smoke_test.py -f ./prompt.txt
Traceback (most recent call last):
File "/home/mulderg/Work/./smoke_test.py", line 4, in <module>
from llama_cpp import Llama
File "/home/mulderg/Work/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
from .llama_cpp import *
File "/home/mulderg/Work/llama-cpp-python/llama_cpp/llama_cpp.py", line 80, in <module>
_lib = _load_shared_library(_lib_base_name)
File "/home/mulderg/Work/llama-cpp-python/llama_cpp/llama_cpp.py", line 71, in _load_shared_library
raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
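(One way to confirm that the editable install never produced the native library, using the checkout path from the traceback above:)
$ # a successful build leaves a libllama.so next to the package sources
$ ls -l /home/mulderg/Work/llama-cpp-python/llama_cpp/*.so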
$ pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
Collecting llama-cpp-python
Downloading llama_cpp_python-0.1.71.tar.gz (1.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 73.7 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting diskcache>=5.6.1
Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 223.0 MB/s eta 0:00:00
Collecting numpy>=1.20.0
Downloading numpy-1.25.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.6/17.6 MB 64.3 MB/s eta 0:00:00
Collecting typing-extensions>=4.5.0
Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... done
Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.71-cp310-cp310-linux_x86_64.whl size=252026 sha256=37c33706acf1521ebeaf462f404c87abdb79dd7cb25bd4345b6d22cfd686da2f
Stored in directory: /data/tmp/pip-ephem-wheel-cache-b3lk8u8y/wheels/85/39/ee/cb44d8af903d1c0b58741f092ea2981424ac28685ab4bc7c28
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.7.1
Uninstalling typing_extensions-4.7.1:
Successfully uninstalled typing_extensions-4.7.1
Attempting uninstall: numpy
Found existing installation: numpy 1.25.1
Uninstalling numpy-1.25.1:
Successfully uninstalled numpy-1.25.1
Attempting uninstall: diskcache
Found existing installation: diskcache 5.6.1
Uninstalling diskcache-5.6.1:
Successfully uninstalled diskcache-5.6.1
Successfully installed diskcache-5.6.1 llama-cpp-python-0.1.71 numpy-1.25.1 typing-extensions-4.7.1
$ python ./smoke_test.py -f ./prompt.txt
llama.cpp: loading model from /data/llama/7B/ggml-model-f16.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 8192
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 1 (mostly F16)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 0.08 MB
llama_model_load_internal: mem required = 14645.09 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size = 4096.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
^C
Same here, pretty much the same experience as over here:
https://github.com/abetlen/llama-cpp-python/issues/439
But this time the error is "undefined symbol: llama_backend_init". Here is how I rebuild, as I did many times before that issue 439 appeared:
pip uninstall llama-cpp-python
git pull
set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
python setup.py clean
python setup.py install
Note that the repo is properly checked out recursively. If there is a way to clean and rebuild even more thoroughly, I would love to hear how. This is on Win10.
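(An equivalent pip-driven rebuild sketch, in case the checkout no longer ships a setup.py; Windows cmd syntax, assuming a working CMake and compiler toolchain:)
:: remove the old install, refresh sources and submodules, rebuild through pip
pip uninstall -y llama-cpp-python
git pull --recurse-submodules
set CMAKE_ARGS=-DLLAMA_CUBLAS=on
set FORCE_CMAKE=1
pip install . --force-reinstall --no-cache-dir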
Yeah, I'm not sure what's going on here either, but rebuilding seems to reliably fix the issue. I'm not exactly sure how, since I'm fairly certain I pulled an entirely fresh copy of the repo today for a Docker build, and the version of the llama.cpp repo currently linked as a submodule clearly works. Even so, I initially ended up with what I assume was an older version of the actual llama.cpp source: it had llama_init_backend instead of llama_backend_init, and no llama_backend_free at all. I've saved a text comparison in case anybody sees a reason to dig deeper and pin down exactly which version of the old file popped up. I don't think there's anything wrong with the repo itself, though.
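(On Linux you can check directly which of the two symbols a built library exports, for example with nm; the library path below is illustrative:)
$ # a stale build exports llama_init_backend, a current one llama_backend_init
$ nm -D /path/to/llama_cpp/libllama.so | grep -E "llama_(init_backend|backend_init)"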
I am getting a similar issue with llama-cpp-python-0.1.85
https://github.com/abetlen/llama-cpp-python/issues/659
I get the same error now on a completely freshly installed environment + pip packages:
/lib/python3.11/site-packages/llama_cpp/libllama.so: undefined symbol: llama_init_from_file
@iactix The llama-cpp-python repo does not contain a setup.py file; where did you get it from?
Fixed the error for me: it was the #!/usr/bin/python3 shebang line at the beginning of a .py file, which was referring to the SYSTEM's Python installation rather than the one within my virtual environment! I just edited it to point to the venv's Python binary instead.
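(For example, the first line of the script would change roughly like this; both paths are illustrative:)
Before: #!/usr/bin/python3 (the system interpreter, which lacks the package)
After: #!/home/you/my-venv/bin/python3 (the venv's interpreter; path is hypothetical)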
Prerequisites

Please answer the following questions for yourself before submitting an issue.

Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

Not throw this error when creating a Llama object:

AttributeError: /home/vmajor/llama-cpp-python/llama_cpp/libllama.so: undefined symbol: llama_backend_init

Also reported independently.

Please provide a detailed written description of what llama-cpp-python did, instead.

Environment and Context

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce