Open Enchante503 opened 2 months ago
I'm not sure this was the actual cause, but after deleting the folder created by git clone at /home/user/LLM/llama-cpp-python, I created a new empty build folder, ran the sequence of commands listed below, and everything worked correctly.
I didn't think pip install would be affected by the data cloned via Git, but could it have had an impact?
No temporary build files were created when running pip install in the build folder, so the folder itself doesn't seem related. However, it's possible that leftover files interfered with the CMake step.
I'm not an expert, so I don't know why it failed before or why it succeeds now.
The command I executed is:
CMAKE_ARGS="-DGGML_CUDA=on -DCUDAToolkit_INCLUDE_DIR='/usr/local/cuda-12.1/targets/x86_64-linux/include'" FORCE_CMAKE=1 pip install --force-reinstall --no-cache-dir 'llama-cpp-python[server]'
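For reference, the same environment-variable-driven install can also be scripted. Here is a minimal Python sketch that composes the environment and pip arguments from the command above; the `dry_run` guard and the `build_install_command` helper are my additions so the command can be inspected before anything is installed.

```python
import os
import subprocess

def build_install_command(cuda_include="/usr/local/cuda-12.1/targets/x86_64-linux/include"):
    """Compose the environment and argv for the CUDA-enabled pip install."""
    env = dict(os.environ)
    env["CMAKE_ARGS"] = (
        "-DGGML_CUDA=on "
        f"-DCUDAToolkit_INCLUDE_DIR='{cuda_include}'"
    )
    env["FORCE_CMAKE"] = "1"
    cmd = [
        "pip", "install", "--force-reinstall", "--no-cache-dir",
        "llama-cpp-python[server]",
    ]
    return cmd, env

def install(dry_run=True):
    cmd, env = build_install_command()
    if dry_run:
        # Just show what would run; set dry_run=False to actually install.
        print("CMAKE_ARGS=" + env["CMAKE_ARGS"])
        print(" ".join(cmd))
        return None
    return subprocess.run(cmd, env=env, check=True)

install()  # dry run by default
```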
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Starts normally
Current Behavior
build sample:
Environment and Context
Windows 11, WSL2, Ubuntu 22.04.4 LTS, CUDA 12.1
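Before rebuilding, it may help to confirm that the CUDA 12.1 toolkit the CMake flags point at is actually visible inside WSL2. A small shell check, assuming the default /usr/local/cuda-12.1 install location (adjust the path for other setups):

```shell
# Check the CUDA toolkit paths that the CMake flags reference.
CUDA_INC="/usr/local/cuda-12.1/targets/x86_64-linux/include"

if [ -d "$CUDA_INC" ]; then
  CUDA_STATUS="include dir found: $CUDA_INC"
else
  CUDA_STATUS="include dir missing: $CUDA_INC"
fi
echo "$CUDA_STATUS"

# nvcc may not be on PATH even when the toolkit is installed.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version
else
  echo "nvcc not on PATH (try /usr/local/cuda-12.1/bin/nvcc)"
fi
```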
Steps to Reproduce
I tried several approaches:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
and
pip install --force-reinstall --no-cache-dir llama-cpp-python
and
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
and so on. I researched various methods, including via ChatGPT and Google, and also deleted the cache and temp files.
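The clean-up that preceded the successful reinstall (delete the clone, create a fresh empty build folder, purge pip's cache) can be sketched as a script. The path is the one from my setup, and the DRY_RUN guard is an addition so the destructive rm can be reviewed before running it for real:

```shell
#!/bin/sh
# Sketch of the clean-up steps before the reinstall.
# SRC_DIR is the clone path from this report; adjust for your setup.
SRC_DIR="$HOME/LLM/llama-cpp-python"
DRY_RUN=1   # set to 0 to actually delete and recreate

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run rm -rf "$SRC_DIR"          # remove the old git clone
run mkdir -p "$SRC_DIR/build"  # fresh, empty build folder
run pip cache purge            # drop cached wheels and sdists
```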
P.S. When building with the CUDA option, for some reason the CPU stayed at 100% and the build took a long time to complete.
Failure Logs