Open: kmlob opened this issue 9 months ago
@kmlob
Can you try

```
make BUILD_SHARED_LIBS=1 LLAMA_CUBLAS=1 -j libllama.so
```

in the working llama.cpp directory, replace the generated `libllama.so` in the `vendor/llama.cpp` dir, and then run test.py. This is to rule out a compile-time vs. runtime issue.
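To see which copy of the library the bindings actually pick up at runtime, a quick check like the following can help. This is a sketch, not the bindings' actual loader code: the search directories below are assumptions you should adjust to your checkout, and the real lookup logic in llama-cpp-python may differ across versions.

```python
# Sketch: report which libllama.so would be found first in a list of
# candidate directories (hypothetical paths; adjust to your setup).
from pathlib import Path


def find_libllama(base_dirs):
    """Return the first libllama.so found under the given directories, or None."""
    for base in base_dirs:
        candidate = Path(base) / "libllama.so"
        if candidate.exists():
            return candidate
    return None


# Example candidate locations (assumptions, not the bindings' real search order):
search = [
    "llama_cpp",        # package dir next to the Python bindings
    "vendor/llama.cpp", # vendored llama.cpp build output
]
print(find_libllama(search))
```

If this prints a stale `libllama.so` built without `LLAMA_CUBLAS=1`, that would explain a compile-time vs. runtime mismatch.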
I tried running this in a clean llama.cpp repo and copied `libllama.so` to `llama-cpp-python/llama_cpp`, and it worked! There may be a problem with the current makefiles.
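For anyone scripting the copy step above, a minimal helper might look like this. The function name and paths are hypothetical; it simply replaces the shared library in the destination directory with the freshly built one.

```python
# Sketch of the "replace the generated libllama.so" step.
import shutil
from pathlib import Path


def replace_libllama(build_dir, dest_dir):
    """Copy libllama.so from a llama.cpp build dir into dest_dir.

    copy2 preserves file metadata; raises FileNotFoundError if the
    freshly built library is missing.
    """
    src = Path(build_dir) / "libllama.so"
    dest = Path(dest_dir) / "libllama.so"
    shutil.copy2(src, dest)
    return dest
```

Usage would be e.g. `replace_libllama("~/src/llama.cpp", "~/src/llama-cpp-python/llama_cpp")` with the tilde expanded, after running the `make` command from the comment above.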
I can also confirm that this works. It serves as a workaround for me for now. Thanks!
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Current Behavior
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
```
$ lscpu
Architecture: x86_64
CPU(s):       16

$ uname -a
6.1.57-gentoo-x86_64
```
Failure Information (for bugs)
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
test.py:
On the same system, the following works fine:
Failure Logs
When running `python test.py`: