mghaoui-interpulse opened 1 year ago
The repo is cloned recursively and I am able to go into the vendor directory and compile llama_cpp and run it.
cd ./vendor/llama.cpp
make clean && make LLAMA_CUBLAS=1 -j
./main -i --interactive-first -m /run/media/moni/T7/samples/llama.cpp/models/13B-chat/ggml-model-q4_0.bin -n 128 -ngl 999
and that works fine.
Going back to llama-cpp-python and trying to load the library didn't work.
I even tried:
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DBUILD_SHARED_LIBS=ON" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
with the shared-libs option turned on, but no dice.
Hm. Weird. When I run it in a Jupyter Notebook in Python, it works perfectly?
So why doesn't it work in the command line?
Ok, weird. If I create a test.py file with
from llama_cpp import Llama
and launch it:
python test.py
It works perfectly.
So it's only the interactive Python that is having a problem. Ok...
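What likely makes the interactive session different is that Python puts the current working directory at the front of sys.path, ahead of site-packages, so a local llama_cpp.py or llama_cpp/ directory in the repo shadows the installed package. A small self-contained sketch of that shadowing effect (shadow_demo is a made-up module name used only for illustration):

```python
import os
import sys
import tempfile

# Simulate the shadowing: create a dummy module whose name could collide
# with an installed one, put its directory at the front of sys.path (which
# is what the interpreter does with the current working directory in an
# interactive session), and import it.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
        f.write("WHERE = 'local copy'\n")
    sys.path.insert(0, tmp)
    try:
        import shadow_demo
        # The import resolved to the file in tmp, not to site-packages.
        print(shadow_demo.WHERE)     # 'local copy'
        print(shadow_demo.__file__)  # path inside tmp
    finally:
        sys.path.remove(tmp)
```

The same mechanism would make `import llama_cpp` pick up a local copy of the module when run from inside the project directory.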
Hi, I had similar issues. As it turns out, the problem was that the "wrong" llama_cpp.py was used to perform the import. Instead of the llama_cpp.py located in the Python site-packages folder after install, the llama_cpp.py within my current folder/repo was used.
That's a problem because llama_cpp.py::_load_shared_library() uses _base_path = pathlib.Path(__file__).parent.resolve()
to find the shared library file, so it ends up looking for the shared library in the folder of whichever llama_cpp.py it imports first.
I am not sure, but I think instead of using __file__
one could make use of site.getsitepackages()
to get the path of the current site-packages folder and look for the .so file there.
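A minimal sketch of that idea, assuming a POSIX-style install (find_shared_library is a hypothetical helper, not the library's actual function; the real _load_shared_library also handles platform-specific loading details):

```python
import pathlib
import site
import sys

def find_shared_library(base_name="llama"):
    """Hypothetical lookup: search the site-packages directories for the
    shared library, instead of the folder of whichever llama_cpp.py was
    imported (which may be a local checkout, not the installed package)."""
    if sys.platform == "darwin":
        suffix = ".dylib"
    elif sys.platform == "win32":
        suffix = ".dll"
    else:
        suffix = ".so"
    for site_dir in site.getsitepackages():
        # rglob yields nothing (rather than raising) for missing directories.
        for candidate in pathlib.Path(site_dir).rglob(f"*{base_name}*{suffix}"):
            return candidate
    raise FileNotFoundError(
        f"Shared library with base name '{base_name}' not found")
```

Note that site.getsitepackages() is unavailable in some embedded interpreters, so a real fix would probably need a fallback as well.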
You're exactly right: when I move out of the project directory, I can suddenly do from llama_cpp import Llama
just fine.
@abetlen Please take a look at this issue
Thank you both - I had the same experience. Within the llama-cpp-python
project directory it wouldn't work; as soon as I cd ..
and tried again, it worked fine. Big thanks to @mapa17 for figuring this out
Sounds like something needs to be modified in the code ...
Thanks to all that posted. This bug was driving me crazy.
I am seeing a similar error when trying to start the llama-cpp-python container for llama-gpt: https://github.com/getumbrel/llama-gpt/issues/144 . Any idea if it is caused by the same problem? The stacktrace looks similar...
Thank you for posting the workaround!
FileNotFoundError: Shared library with base name 'llama' not found. Please tell me how to deal with this. Thanks!
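One quick way to check whether you're hitting the shadowing problem described above is to ask the import system which file `import llama_cpp` would actually load (a diagnostic sketch, assuming the package is installed in the active environment):

```python
import importlib.util

# If the reported origin points into your project checkout rather than into
# site-packages, a local llama_cpp.py or llama_cpp/ directory is shadowing
# the installed package: cd out of the repo (or rename the local copy) and
# retry the import.
spec = importlib.util.find_spec("llama_cpp")
if spec is None:
    print("llama_cpp is not importable in this environment")
else:
    print(spec.origin)
```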
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
I'm following the instructions on the README. llama_cpp is buildable on my machine with cuBLAS support (libraries and paths are correct).
The installation seems to go well:
I expected to be able to import the library but that doesn't work.
Current Behavior
Environment and Context
$ lscpu
$ uname -a
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context. Try the following:
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/ # delete any old builds
python setup.py develop
cd ./vendor/llama.cpp
cmake llama.cpp
Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp.
I tried, and I get this: