abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

install target fails for llava #1481

Open waheedi opened 3 months ago

waheedi commented 3 months ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

I was building from source, specifically version 0.2.76.

make build should have built all the required components.

Current Behavior

The install step of the build fails at the llava example due to some path variations (not sure what is wrong where, though):

 -- Installing: /home/bargo/projects/rocm-setup/llama-cpp-python/llama_cpp/libllama.so
  CMake Error at /tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/cmake_install.cmake:46 (file):
    file INSTALL cannot find
    "/tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/libllava.so": No
    such file or directory.
  Call Stack (most recent call first):
    /tmp/tmpigzjrup0/build/cmake_install.cmake:128 (include)

  *** CMake install failed
  error: subprocess-exited-with-error

  × Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

Environment and Context

Since the build completed successfully for both llama.cpp and the bindings, and only the install step failed, I don't believe the environment matters much (but I'm running ROCm 6.1.1, also built from source).

$ python3 --version
$ make --version
$ g++ --version

Failure Information (for bugs)

The llava shared library that failed to be found has actually already been built, and it is located in two places:

llama-cpp-python$ find . -name libllava.so
./build/vendor/llama.cpp/examples/llava/libllava.so
./llama_cpp/libllava.so

Steps to Reproduce

  1. git clone
  2. pip3 install . OR CMAKE_ARGS="-D LLAMA_HIPBLAS=ON -D CMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -D CMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -D CMAKE_PREFIX_PATH=/opt/rocm" make build -j 8

To mitigate it right now, I'm just skipping the llava build by passing OFF for this option:

option(LLAVA_BUILD "Build llava shared library and install alongside python package" ON)
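
For example, passing it through CMAKE_ARGS at install time (a sketch reusing the ROCm flags from the reproduce step above; untested beyond my setup):

    CMAKE_ARGS="-DLLAVA_BUILD=OFF -D LLAMA_HIPBLAS=ON -D CMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -D CMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -D CMAKE_PREFIX_PATH=/opt/rocm" pip3 install .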

ganakee commented 3 months ago

I also get this error on Ubuntu 22.04 with AMD.


      -- Configuring done (2.0s)
      CMake Error in vendor/llama.cpp/examples/llava/CMakeLists.txt:
        HIP_ARCHITECTURES is empty for target "llava_shared".

      -- Generating done (0.0s)
      CMake Generate step failed.  Build files cannot be regenerated correctly.

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Dajinu commented 3 months ago

Me too. I get the same error.

phoenom commented 3 months ago

I have a workaround for this issue.

  1. Download the source code of a previous llama-cpp-python release (I used 0.2.71) and unzip it.
  2. Download the source code of a previous llama.cpp version (I used b2800) and unzip it into the vendor folder of the llama-cpp-python folder, making sure to replace the existing llama.cpp folder.
  3. Install with:

    pip3 install . OR CMAKE_ARGS="-D LLAMA_HIPBLAS=ON -D CMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -D CMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -D CMAKE_PREFIX_PATH=/opt/rocm" make build -j 8

I'm pretty sure there are better combinations than my llama-cpp-python (0.2.71) and llama.cpp (b2800), but at least it works right now.

Note: you will probably need to install additional libraries, depending on your system.
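
A rough sketch of those steps as shell commands (the GitHub archive URLs and the exact tag names v0.2.71 and b2800 are my assumptions; adjust them to whichever releases you pick):

    # grab an older llama-cpp-python release and unpack it
    wget https://github.com/abetlen/llama-cpp-python/archive/refs/tags/v0.2.71.tar.gz
    tar xf v0.2.71.tar.gz && cd llama-cpp-python-0.2.71
    # swap the vendored llama.cpp for an older tag
    rm -rf vendor/llama.cpp
    wget https://github.com/ggerganov/llama.cpp/archive/refs/tags/b2800.tar.gz
    tar xf b2800.tar.gz && mv llama.cpp-b2800 vendor/llama.cpp
    # install as in step 3
    pip3 install .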

lufixSch commented 2 months ago

I'm observing the same error. I'm using AMD/ROCm too.

zweiblumen commented 2 months ago

Same error here. Ubuntu 22.04, AMD/ROCm. It fails at installing the llava_shared library (libllava.so):

    Installing: /home/[]rojects/rocm-setup/llama-cpp-python/llama_cpp/libllama.so
    "/tmp/tmpigzjrup0/build/vendor/llama.cpp/examples/llava/libllava.so": No such file or directory.

Concretely, these install tasks will fail:

    install(
        TARGETS llava_shared
        LIBRARY DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
        RUNTIME DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
        ARCHIVE DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
        FRAMEWORK DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
        RESOURCE DESTINATION ${SKBUILD_PLATLIB_DIR}/llama_cpp
    )
    # Temporary fix for https://github.com/scikit-build/scikit-build-core/issues/374
    install(
        TARGETS llava_shared
        LIBRARY DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
        RUNTIME DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
        ARCHIVE DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
        FRAMEWORK DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
        RESOURCE DESTINATION ${CMAKE_CURRENT_SOURCE_DIR}/llama_cpp
    )

It works if I remove LLaVA support with -DLLAVA_BUILD=off. As for the missing HIP_ARCHITECTURES, try -DCMAKE_HIP_ARCHITECTURES=gfx1100.
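
Spelled out as full install commands (a sketch; gfx1100 is just an example target, substitute the architecture reported by rocminfo for your GPU):

    # option 1: skip llava entirely
    CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DLLAVA_BUILD=off" pip3 install .
    # option 2: keep llava and set the HIP architecture explicitly
    CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DCMAKE_HIP_ARCHITECTURES=gfx1100" pip3 install .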

devzzzero commented 2 months ago

Hi, I am having a similar issue:

  cd llama-cpp-python
  git pull --recurse-submodules -v
  git clean -x -n -f
  cmake -B /pkgs/build/llama-cpp-python  -DCMAKE_INSTALL_PREFIX=/pkgs/llama-cpp-python  -DLLAMA_CUDA=on
  cmake --build /pkgs/build/llama-cpp-python  --config Release -v
  cmake --install /pkgs/build/llama-cpp-python --prefix /pkgs/llama-cpp-python

The last step (the install step) fails with:

CMake Error at /pkgs/build/llama-cpp-python/cmake_install.cmake:65 (file):
  file cannot create directory: /llama_cpp.  Maybe need administrative
  privileges.

I traced it down to ${SKBUILD_PLATLIB_DIR}/llama_cpp: SKBUILD_PLATLIB_DIR is not set here, which is why the install destination collapses to /llama_cpp.

What I'm trying to do is just compile the .so bits for llama-cpp-python (see #1533), but it looks like I'm missing something here.
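
One untested idea, assuming the project's CMakeLists expands SKBUILD_PLATLIB_DIR as-is: define it manually on the configure line so the install destinations resolve to a real path. Normally scikit-build-core sets this variable during a pip build, so this is a guess, not a documented option:

    cmake -B /pkgs/build/llama-cpp-python \
      -DCMAKE_INSTALL_PREFIX=/pkgs/llama-cpp-python \
      -DSKBUILD_PLATLIB_DIR=/pkgs/llama-cpp-python \
      -DLLAMA_CUDA=on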

jin-eld commented 2 months ago

I'd like to point out one thing though: the standalone llama.cpp repo at the current master branch (45c0e2e4c1268c2d7c8c45536f15e3c9a731ecdc) builds just fine with this command (copied from the llama.cpp build instructions) and also produces the llava binaries/libraries:

HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" cmake -S . -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx900 -DCMAKE_BUILD_TYPE=Release     && cmake --build build --config Release -- -j 16

I tried updating vendor/llama.cpp in llama-cpp-python to that revision, but it did not help. Explicitly disabling llava (which I did not need anyway) made llama-cpp-python compile and produce a .whl:

HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" CMAKE_ARGS="-DLLAVA_BUILD=OFF -DLLAMA_HIPBLAS=on -DAMDGPU_TARGETS=gfx900" pip wheel .
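
The resulting .whl lands in the current directory and can then be installed directly (the filename glob is an assumption; it varies with version and platform tags):

    pip install ./llama_cpp_python-*.whl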

I'm on Linux / ROCm 6.x