DDXDB opened this issue 8 months ago
-- Building for: MinGW Makefiles
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:3 (project):
The CMAKE_C_COMPILER:
Did you add the MinGW path to your PATH environment variable?
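One quick way to check, as a sketch assuming a standard MinGW-w64 or w64devkit install, is to confirm the toolchain actually resolves on PATH in the same PowerShell session you run CMake from:
Get-Command gcc
Get-Command g++
Get-Command mingw32-make   # may be named make, depending on the distribution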
I'm sure I added it
If that's the case, you might want to re-install CUDA. I ran into something similar because I didn't follow the installation order of Visual Studio first, then CUDA. After re-installing CUDA everything worked as expected. Give it a try.
I'm building SYCL, not CUDA
Can you set CMAKE_CXX_COMPILER to the full path to icx? I'm not sure if MinGW / Windows has an equivalent to running which icx. Sorry, I don't run Windows or an Intel GPU, so I can't help too much.
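On Windows, a rough PowerShell equivalent of which icx is Get-Command icx (after setvars.bat has loaded the oneAPI environment). A sketch of passing the resolved full path through CMAKE_ARGS; the quoting is an assumption, added because the default oneAPI path contains spaces:
# PowerShell equivalent of which icx; requires the oneAPI environment to be loaded first
Get-Command icx
# capture the resolved full path and hand it to CMake
$icx = (Get-Command icx).Source
$env:CMAKE_ARGS = "-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=""$icx"" -DCMAKE_CXX_COMPILER=""$icx"""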
Strangely enough, in my environment building llama.cpp itself worked fine, but llama-cpp-python did not.
The following commands compiled it successfully in PowerShell:
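# opens a PowerShell session with the oneAPI environment (setvars.bat) loaded, so icx and sycl-ls are on PATH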
cmd.exe "/K" '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
.\venv\Scripts\Activate.ps1
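# sycl-ls should list the Arc GPU among the available SYCL devices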
sycl-ls
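# build configuration: force a source build with the SYCL backend, MinGW Makefiles, and the oneAPI icx compilers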
$env:FORCE_CMAKE=1
$env:CMAKE_GENERATOR = "MinGW Makefiles"
$env:CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx"
pip install llama-cpp-python
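If a previously built CPU-only wheel is cached, pip may reuse it instead of rebuilding against SYCL. Forcing a clean rebuild with standard pip flags avoids that (an extra suggestion, not part of the original steps):
pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir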
Prerequisites
I am running the latest code. Development is very rapid so there are no tagged versions as of now.
I carefully followed the README.md.
I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
After following the steps to install llama-cpp-python with SYCL, the application should work and run on the Intel GPU.
Current Behavior
Please provide a detailed written description of what llama-cpp-python did, instead.
Environment and Context
CPU: Ryzen 5 5600X
GPU: Intel Arc A770 & A750
RAM: 32 GB 3600 MHz
OS: Windows 11 23H2
Display Driver: Intel® Graphics Driver 31.0.101.5333
Python 3.10.11
GNU Make 4.4 (built for x86_64-w64-mingw32)
Microsoft Visual Studio 2022
Intel oneAPI
w64devkit-fortran-1.21.0.zip
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Failure Logs