I'm running the install as described in the guide, on RHEL 9.2 with a G3 GPU instance in IBM Cloud, but when I run 'make instruct-nvidia' I get this error:
Building wheels for collected packages: llama_cpp_python
Building wheel for llama_cpp_python (pyproject.toml): started
Building wheel for llama_cpp_python (pyproject.toml): still running...
Building wheel for llama_cpp_python (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [292 lines of output]
scikit-build-core 0.9.4 using CMake 3.29.3 (wheel)
Configuring CMake...
2024-05-27 17:52:09,122 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/usr/lib64, ldlibrary=libpython3.11.so, multiarch=x86_64-linux-gnu, masd=None
loading initial cache file /tmp/tmpqf4uxcth/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.1
-- The CXX compiler identification is GNU 11.4.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.43.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
CMake Warning at vendor/llama.cpp/CMakeLists.txt:390 (message):
LLAMA_CUBLAS is deprecated and will be removed in the future.
Use LLAMA_CUDA instead
-- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "12.3.107")
-- CUDA found
-- The CUDA compiler identification is NVIDIA 12.3.107
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CUDA host compiler is GNU 11.4.1
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
CMake Warning (dev) at CMakeLists.txt:26 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at CMakeLists.txt:35 (install):
Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (24.3s)
-- Generating done (0.0s)
-- Build files have been written to: /tmp/tmpqf4uxcth/build
*** Building project with Ninja...
Change Dir: '/tmp/tmpqf4uxcth/build'
Run Build Command(s): /tmp/pip-build-env-gyccnkm3/normal/lib64/python3.11/site-packages/ninja/data/bin/ninja -v
[1/56] /usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_CUDA_USE_GRAPHS -DGGML_SCHED_MAX_COPIES=4 -DGGML_USE_CUDA -DGGML_USE_LLAMAFILE -DK_Q
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama_cpp_python
Failed to build llama_cpp_python
ERROR: Could not build wheels for llama_cpp_python, which is required to install pyproject.toml-based projects
[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: pip install --upgrade pip
Error: error building at STEP "RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" CFLAGS="-mno-avx" python3.11 -m pip install -r https://raw.githubusercontent.com/instructlab/instructlab/${GIT_TAG}/requirements.txt --force-reinstall --no-cache-dir llama-cpp-python": error while running runtime: exit status 1
make[1]: [Makefile:19: nvidia] Error 1
make[1]: Leaving directory '/opt/rhelai-dev-preview/training/instructlab'
make: [Makefile:48: instruct-nvidia] Error 2
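One thing I noticed in the log: vendor/llama.cpp/CMakeLists.txt warns that LLAMA_CUBLAS is deprecated in favor of LLAMA_CUDA, yet the failing build step still passes -DLLAMA_CUBLAS=on. As a sketch only (I haven't verified this against the Makefile, and the actual output truncates before the real compile error), the pip step could perhaps be retried with the newer flag:

```shell
# Unverified sketch: rerun the failing pip step with the non-deprecated
# CUDA flag that the CMake warning in the log suggests. The CFLAGS value
# is copied from the original RUN step in the error above.
CMAKE_ARGS="-DLLAMA_CUDA=on" CFLAGS="-mno-avx" \
  python3.11 -m pip install --force-reinstall --no-cache-dir llama-cpp-python
```

If that still fails, the full [292 lines of output] from the wheel build would show the actual nvcc/cc error, which the excerpt above cuts off.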