YunLiu1 opened 3 months ago
Hi @YunLiu1,

`msg="inference compute" id=0 library=cpu` is a confusing and unhelpful runtime log, and it does not mean that ollama is actually running on the CPU. To make sure it runs on the dGPU, please follow the setup steps for `ipex-llm[cpp]`. For more details, please see our ollama document.
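For reference, the two install paths discussed in this thread can be sketched as below (command names taken from the thread; exact package extras and the `init-ollama.bat` script are assumed to match the ipex-llm ollama document):

```shell
# Path 1: cpp backend, then initialize ollama (Windows)
pip install ipex-llm[cpp]
init-ollama.bat

# Path 2: xpu backend, which the reporter found runs on the A770 dGPU
pip install ipex-llm[xpu]
```

Note that installing both extras at once (`pip install ipex-llm[cpp,xpu]`) pulls in backends pinned to different torch versions, which is what triggers the dependency conflict reported later in this thread.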
When I run `pip install ipex-llm[cpp]` and then `init-ollama.bat`, it runs on the CPU: `... msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.6 GiB" ...`
But with `pip install ipex-llm[xpu]`, it can run on my A770 dGPU.
When I install both with `pip install ipex-llm[cpp,xpu]`, I get this error:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
bigdl-core-cpp 2.5.0b20240616 requires torch==2.2.0, but you have torch 2.1.0a0+cxx11.abi which is incompatible.
```