transformerlab / transformerlab-app

Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
https://transformerlab.ai/
GNU Affero General Public License v3.0

Error building llama_cpp_python #193

Open · dadmobile opened this issue 2 days ago

dadmobile commented 2 days ago

User Anton on Discord reported:

```
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in /home/anton/.transformerlab/envs/transformerlab/lib/python3.11/site-packages (from llama-cpp-python) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in /home/anton/.transformerlab/envs/transformerlab/lib/python3.11/site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.5)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml): started
  Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [128 lines of output]
      /home/anton/.transformerlab/envs/transformerlab/compiler_compat/ld: warning: libgomp.so.1, needed by vendor/llama.cpp/ggml/src/ggml-cpu/libggml-cpu.so, not found (try using -rpath or -rpath-link)
```
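For context (not part of Anton's report): the warning above indicates that the linker used inside the transformerlab env cannot find libgomp (the GNU OpenMP runtime) when linking libggml-cpu.so. A quick check, assuming a Debian/Ubuntu-style system (package names differ on other distros):

```bash
# Check whether libgomp.so.1 is visible to the dynamic linker
# (assumption: Debian/Ubuntu-style system; not confirmed in this thread).
ldconfig -p | grep libgomp

# If nothing is listed, installing the OpenMP runtime is one common remedy:
sudo apt-get install libgomp1
```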

Notnaton commented 2 days ago

local_server.log: attaching the log file with the build error.

dadmobile commented 1 day ago

Found some references to this being an error since llama_cpp_python 0.2.80 (I'm currently on 0.2.89). I noticed you were setting some variables in your installer? Not sure if that has something to do with it. See this thread:

https://github.com/abetlen/llama-cpp-python/issues/1573

Specifically the part about:

Super useful, this worked for me: CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall (on Linux Mint Cinnamon), thanks
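For readability, the command quoted above on its own line (note that -DGGML_CUDA=on assumes an NVIDIA/CUDA setup, which that commenter had; it is not something verified in this thread):

```bash
# Workaround quoted from the linked llama-cpp-python issue.
# -DGGML_CUDA=on assumes a CUDA-capable machine; -DLLAVA_BUILD=off skips the LLaVA components.
CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off" pip install llama-cpp-python --upgrade --force-reinstall
```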

Notnaton commented 1 day ago

I'll give it a try after work 👍

Notnaton commented 11 hours ago

Attached: local_server.log, transformerlab.log

setup.sh pip install llama-cpp-python --upgrade --force-reinstall
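(Aside, not something stated in this thread: if the workaround were applied inside the installer script rather than run by hand, the relevant lines might look like the hypothetical sketch below. The actual contents of setup.sh are not shown here.)

```bash
# Hypothetical sketch only; the real setup.sh contents are not shown in this thread.
# Export the CMake flags from the linked issue so the pip build picks them up.
export CMAKE_ARGS="-DGGML_CUDA=on -DLLAVA_BUILD=off"
pip install llama-cpp-python --upgrade --force-reinstall
```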