abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

FileNotFoundError: Shared library with base name 'llama' not found #568

Open mghaoui-interpulse opened 1 year ago

mghaoui-interpulse commented 1 year ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

I'm following the instructions in the README. llama.cpp builds on my machine with cuBLAS support (the libraries and paths are correct).

> python3 -m venv .venv
> source .venv/bin/activate
(.venv) > CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

The installation seems to go well:

Collecting llama-cpp-python
  Downloading llama_cpp_python-0.1.77.tar.gz (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 12.2 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Downloading numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 15.5 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 306.0 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.77-cp311-cp311-linux_x86_64.whl size=1386177 sha256=67bb0d8316976217d7638216027ad89c76bc58241d7d64f49a1b6b76a40f0c74
  Stored in directory: /tmp/pip-ephem-wheel-cache-q0i3qayl/wheels/e2/67/cb/481cfaabbb5fd5edab627c5b475de63e1b6f7d4d7b678d4d25
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.7.1
    Uninstalling typing_extensions-4.7.1:
      Successfully uninstalled typing_extensions-4.7.1
  Attempting uninstall: numpy
    Found existing installation: numpy 1.25.2
    Uninstalling numpy-1.25.2:
      Successfully uninstalled numpy-1.25.2
  Attempting uninstall: diskcache
    Found existing installation: diskcache 5.6.1
    Uninstalling diskcache-5.6.1:
      Successfully uninstalled diskcache-5.6.1
  Attempting uninstall: llama-cpp-python
    Found existing installation: llama-cpp-python 0.1.77
    Uninstalling llama-cpp-python-0.1.77:
      Successfully uninstalled llama-cpp-python-0.1.77
Successfully installed diskcache-5.6.1 llama-cpp-python-0.1.77 numpy-1.25.2 typing-extensions-4.7.1

I expected to be able to import the library, but that doesn't work.
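
For reference, the minimal usage I expected to work (the model path below is just an example, not a path from my setup):

from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model-q4_0.bin")  # example model path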

Current Behavior

> python3
Python 3.11.4 (main, Jun 28 2023, 19:51:46) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/moni/samples/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/moni/samples/llama-cpp-python/llama_cpp/llama_cpp.py", line 80, in <module>
    _lib = _load_shared_library(_lib_base_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/samples/llama-cpp-python/llama_cpp/llama_cpp.py", line 71, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found
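
For context, the loader that raises this error resolves the library path relative to llama_cpp.py itself, which becomes important later in this thread. A simplified sketch of the lookup logic, not the exact source:

import ctypes
import pathlib
import sys

def _load_shared_library(lib_base_name: str):
    # Pick the platform-specific extension (simplified; the real code
    # handles the macOS and Windows cases in more detail)
    if sys.platform.startswith("linux"):
        lib_ext = ".so"
    elif sys.platform == "darwin":
        lib_ext = ".dylib"
    else:
        lib_ext = ".dll"

    # The search is anchored at the directory containing *this file*,
    # so whichever llama_cpp.py gets imported determines where the
    # shared library is looked for
    base_path = pathlib.Path(__file__).parent.resolve()
    for candidate in (
        base_path / f"lib{lib_base_name}{lib_ext}",
        base_path / f"{lib_base_name}{lib_ext}",
    ):
        if candidate.exists():
            return ctypes.CDLL(str(candidate))

    raise FileNotFoundError(
        f"Shared library with base name '{lib_base_name}' not found"
    )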

Environment and Context

$ lscpu

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 7 5800X 8-Core Processor
    CPU family:          25
    Model:               33
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            0
    Frequency boost:     disabled
    CPU(s) scaling MHz:  52%
    CPU max MHz:         4850.1948
    CPU min MHz:         2200.0000
    BogoMIPS:            7588.01
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization features:
  Virtualization:        AMD-V
Caches (sum of all):
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    32 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected

$ uname -a

Linux moni-opensuse-bp 6.4.6-1-default #1 SMP PREEMPT_DYNAMIC Tue Jul 25 04:42:30 UTC 2023 (55520bc) x86_64 x86_64 x86_64 GNU/Linux
$ python3 --version
Python 3.11.4

$ make --version
GNU Make 4.4.1
Built for x86_64-suse-linux-gnu

$ g++ --version
g++ (SUSE Linux) 13.1.1 20230720 [revision 9aac37ab8a7b919a89c6d64bc7107a8436996e93]

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. step 1
  2. step 2
  3. step 3
  4. etc.

Note: Many issues concern functional or performance differences relative to llama.cpp. In those cases we need to confirm that you're comparing against the version of llama.cpp that was bundled with your Python package, and to know which parameters you're passing to the context.

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python setup.py develop
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp

I tried, and I get this:

/usr/lib/python3.11/site-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` and ``easy_install``.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://github.com/pypa/setuptools/issues/917 for details.
        ********************************************************************************

!!
  easy_install.initialize_options(self)
Traceback (most recent call last):
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/setuptools_wrap.py", line 645, in setup
    cmkr = cmaker.CMaker(cmake_executable)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/cmaker.py", line 148, in __init__
    self.cmake_version = get_cmake_version(self.cmake_executable)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/cmaker.py", line 105, in get_cmake_version
    raise SKBuildError(msg) from err

Problem with the CMake installation, aborting build. CMake executable is cmake
mghaoui-interpulse commented 1 year ago

The repo is cloned recursively, and I am able to go into the vendor directory, compile llama.cpp, and run it:

cd ./vendor/llama.cpp
make clean && make LLAMA_CUBLAS=1 -j
./main -i --interactive-first -m /run/media/moni/T7/samples/llama.cpp/models/13B-chat/ggml-model-q4_0.bin -n 128 -ngl 999

and that works fine.

Going back to llama-cpp-python, trying to load the library still didn't work.

I even tried:

CMAKE_ARGS="-DLLAMA_CUBLAS=on -DBUILD_SHARED_LIBS=ON" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

That has the shared-libs option explicitly turned on, but no dice.

mghaoui-interpulse commented 1 year ago

Hm. Weird. When I run it in a Jupyter notebook, it works perfectly?

[screenshot: the same import succeeding in a Jupyter notebook]

So why doesn't it work in the command line?

mghaoui-interpulse commented 1 year ago

Ok weird. If I create a test.py file with

from llama_cpp import Llama

and launch it:

python test.py

It works perfectly.

So it's only the interactive Python session that has a problem. Ok...

mapa17 commented 1 year ago

Hi, I had similar issues. As it turns out, the problem was that the "wrong" llama_cpp.py was used to perform the import: instead of the llama_cpp.py located in the Python site-packages folder after install, the llama_cpp.py within my current folder/repo was used.
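
A quick way to confirm which copy Python would import, without actually executing the package (a minimal check):

import importlib.util

spec = importlib.util.find_spec("llama_cpp")
print(spec.origin if spec else "llama_cpp not found on sys.path")
# A path inside your checkout, rather than .../site-packages/llama_cpp/...,
# means the local source tree is shadowing the installed package.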

That's a problem because llama_cpp.py::_load_shared_library() uses _base_path = pathlib.Path(__file__).parent.resolve() to locate the shared library, so it looks for it in the folder of whichever llama_cpp.py file is imported first.

I am not sure, but I think that instead of using __file__ one could use site.getsitepackages() to get the path of the current site-packages folder and look for the .so file there.
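
One possible shape of that fix, as a rough sketch of the idea rather than the project's actual code (the helper name is hypothetical, and site.getsitepackages() is not available in every interpreter, e.g. some embedded ones):

import pathlib
import site

def _library_search_dirs():
    # Prefer the installed copy in site-packages over a source checkout...
    for sp in site.getsitepackages():
        pkg_dir = pathlib.Path(sp) / "llama_cpp"
        if pkg_dir.is_dir():
            yield pkg_dir
    # ...but still fall back to the directory containing this file
    yield pathlib.Path(__file__).parent.resolve()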

corv89 commented 1 year ago

You're exactly right: when I move out of the project directory, I can suddenly do from llama_cpp import Llama just fine.

@abetlen Please take a look at this issue

tgmerritt commented 1 year ago

Thank you both - I had the same experience. Within the llama-cpp-python project directory it wouldn't work; as soon as I cd .. and tried again, it worked fine. Big thanks to @mapa17 for figuring this out.

mghaoui-interpulse commented 1 year ago

Sounds like something needs to be modified in the code ...

chaddwick25 commented 1 year ago

Thanks to all that posted. This bug was driving me crazy.

ghevge commented 7 months ago

I am seeing a similar error when trying to start the llama-cpp-python container for llama-gpt: https://github.com/getumbrel/llama-gpt/issues/144 . Any idea if it is caused by the same problem? The stack trace looks similar...

sdoshi commented 1 month ago

Thank you for posting the workaround!