abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Unable to build SYCL #1277

Open DDXDB opened 8 months ago

DDXDB commented 8 months ago

Prerequisites

- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

After following the steps to install llama-cpp-python with SYCL support, the package should build and be able to run on an Intel GPU.

Current Behavior

The build fails during pip install llama-cpp-python: CMake cannot identify the icx compiler (see Failure Logs below).

Environment and Context

CPU: Ryzen 5 5600X
GPU: Intel Arc A770 & A750
RAM: 32 GB 3600 MHz
OS: Windows 11 23H2
Display Driver: Intel® Graphics Driver 31.0.101.5333

Python 3.10.11
GNU Make 4.4 (built for x86_64-w64-mingw32)
Microsoft Visual Studio 2022
Intel oneAPI
w64devkit-fortran-1.21.0.zip

Failure Information (for bugs)


Steps to Reproduce

.\venv\Scripts\Activate.ps1
& "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
$env:FORCE_CMAKE=1
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx"
pip install llama-cpp-python
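The failure below boils down to CMake not finding icx on PATH in the shell that runs pip. A quick pre-flight check can confirm this before attempting the build; here is a minimal sketch in Python (the tool names are the ones used in CMAKE_ARGS above; shutil.which mirrors the PATH lookup CMake performs for a bare compiler name):

```python
import shutil

# CMake resolves a bare CMAKE_C_COMPILER / CMAKE_CXX_COMPILER value against
# PATH, much like shutil.which() does. If either lookup prints NOT FOUND,
# the pip build will fail exactly as in the log below.
for tool in ("icx", "mingw32-make"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND on PATH'}")
```

If icx is not found here, re-running setvars.bat in the same shell (or fixing PATH) should come before retrying pip install.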

Failure Logs

PS D:\Program Files\sakura> .\venv\Scripts\Activate.ps1
(venv) PS D:\Program Files\sakura> & "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
:: initializing oneAPI environment...
   Initializing Visual Studio command-line environment...
   Visual Studio version 17.9.2 environment configured.
   "C:\Program Files\Microsoft Visual Studio\2022\Community\"
   Visual Studio command-line environment initialized for: 'x64'
:  advisor -- latest
:  compiler -- latest
:  dal -- latest
:  debugger -- latest
:  dev-utilities -- latest
:  dnnl -- latest
:  dpcpp-ct -- latest
:  dpl -- latest
:  ipp -- latest
:  ippcp -- latest
:  mkl -- latest
:  tbb -- latest
:  vtune -- latest
:: oneAPI environment initialized ::
(venv) PS D:\Program Files\sakura> $env:FORCE_CMAKE=1
(venv) PS D:\Program Files\sakura> $env:CMAKE_GENERATOR = "MinGW Makefiles"
(venv) PS D:\Program Files\sakura>  $env:CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx"
(venv) PS D:\Program Files\sakura> $env:CMAKE_GENERATOR = "MinGW Makefiles"
(venv) PS D:\Program Files\sakura> pip install llama-cpp-python
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.56.tar.gz (36.9 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in d:\program files\sakura\venv\lib\site-packages (from llama-cpp-python) (4.10.0)
Requirement already satisfied: numpy>=1.20.0 in d:\program files\sakura\venv\lib\site-packages (from llama-cpp-python) (1.26.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 in d:\program files\sakura\venv\lib\site-packages (from llama-cpp-python) (3.1.3)
Requirement already satisfied: MarkupSafe>=2.0 in d:\program files\sakura\venv\lib\site-packages (from jinja2>=2.11.3->llama-cpp-python) (2.1.5)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [36 lines of output]
      *** scikit-build-core 0.8.2 using CMake 3.29.0 (wheel)
      *** Configuring CMake...
      2024-03-15 19:18:00,710 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
      loading initial cache file C:\Users\98440\AppData\Local\Temp\tmpdzy7p6h9\build\CMakeInit.txt
      -- Building for: MinGW Makefiles
      -- The C compiler identification is unknown
      -- The CXX compiler identification is unknown
      CMake Error at CMakeLists.txt:3 (project):
        The CMAKE_C_COMPILER:

          icx

        is not a full path and was not found in the PATH.  Perhaps the extension is
        missing?

        Tell CMake where to find the compiler by setting either the environment
        variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
        the compiler, or to the compiler name if it is in the PATH.

      CMake Error at CMakeLists.txt:3 (project):
        The CMAKE_CXX_COMPILER:

          icx

        is not a full path and was not found in the PATH.  Perhaps the extension is
        missing?

        Tell CMake where to find the compiler by setting either the environment
        variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
        to the compiler, or to the compiler name if it is in the PATH.

      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
hpxiong commented 8 months ago
   -- Building for: MinGW Makefiles
      -- The C compiler identification is unknown
      -- The CXX compiler identification is unknown
      CMake Error at CMakeLists.txt:3 (project):
        The CMAKE_C_COMPILER:

Did you add the MinGW path to your PATH environment variable?

DDXDB commented 8 months ago
   -- Building for: MinGW Makefiles
      -- The C compiler identification is unknown
      -- The CXX compiler identification is unknown
      CMake Error at CMakeLists.txt:3 (project):
        The CMAKE_C_COMPILER:

Did you add the MinGW path to your PATH environment variable?

I'm sure I added it.

hpxiong commented 8 months ago

If that's the case, you might want to reinstall CUDA. I ran into something similar because I didn't follow the installation order (Visual Studio first, then CUDA). Reinstalling CUDA fixed it for me. Give it a try.

DDXDB commented 7 months ago

If that's the case, you might want to reinstall CUDA. I ran into something similar because I didn't follow the installation order (Visual Studio first, then CUDA). Reinstalling CUDA fixed it for me. Give it a try.

I'm building SYCL, not CUDA

abetlen commented 7 months ago

Can you set CMAKE_CXX_COMPILER to the full path to icx? Not sure if MinGW / Windows has an equivalent to running which icx. Sorry, I don't run Windows or an Intel GPU, so I can't help too much.
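On Windows, PowerShell's Get-Command plays the role of which. The suggestion above can also be automated; this is a sketch in Python (the helper name sycl_cmake_args is made up for illustration, and it assumes the oneAPI environment is already active so icx is on PATH):

```python
import shutil

def sycl_cmake_args(compiler="icx"):
    """Build CMAKE_ARGS with the compiler's absolute path, the equivalent of
    `which icx` (PowerShell: Get-Command icx). Assumes setvars.bat has already
    been run in this shell so the compiler is on PATH."""
    path = shutil.which(compiler)
    if path is None:
        raise FileNotFoundError(f"{compiler} not found on PATH; run setvars.bat first")
    # Quote the path: oneAPI installs under "Program Files (x86)", which
    # contains spaces.
    return f'-DLLAMA_SYCL=on -DCMAKE_C_COMPILER="{path}" -DCMAKE_CXX_COMPILER="{path}"'

if __name__ == "__main__":
    try:
        print(sycl_cmake_args())
    except FileNotFoundError as err:
        print(err)
```

The resulting string would be assigned to $env:CMAKE_ARGS before running pip install, so CMake no longer has to search PATH itself.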

DDXDB commented 7 months ago

Can you set CMAKE_CXX_COMPILER to the full path to icx? Not sure if MinGW / Windows has an equivalent to running which icx. Sorry, I don't run Windows or an Intel GPU, so I can't help too much.

Strangely enough, in my environment building llama.cpp directly works fine, but llama-cpp-python does not.

DDXDB commented 7 months ago

These commands compiled successfully in PowerShell:

cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
.\venv\Scripts\Activate.ps1
sycl-ls
$env:FORCE_CMAKE=1
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx"
pip install llama-cpp-python
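The cmd.exe wrapper in the first line is likely what makes the difference: setvars.bat is a batch file, so invoking it directly from PowerShell runs it in a child cmd.exe whose environment changes vanish when it exits, whereas launching PowerShell from the cmd session that ran setvars.bat lets the shell inherit the configured environment. A minimal sketch of that parent/child rule, using a Python subprocess as a stand-in for the batch-file case (FAKE_SETVARS_DEMO is a made-up variable used only for this demonstration):

```python
import os
import subprocess
import sys

# Environment variables flow parent -> child only. A child process can set
# variables, but they are discarded when it exits - which is why setvars.bat
# must wrap the shell that later runs `pip install`, not the other way round.
subprocess.run(
    [sys.executable, "-c", "import os; os.environ['FAKE_SETVARS_DEMO'] = '1'"],
    check=True,
)
print("FAKE_SETVARS_DEMO" in os.environ)  # prints False: the child's change never reaches the parent
```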