YunLiu1 opened this issue 5 months ago
Hi @YunLiu1,

msg="inference compute" id=0 library=cpu

is a confusing and useless runtime log, and it does not mean that ollama is running on CPU. To run ipex-llm ollama on your dGPU, you only need to install ipex-llm[cpp]; to make sure it is actually running on the dGPU, you may follow the steps below. For more details, please see our ollama document.
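A minimal sketch of those steps on Windows, loosely following the ipex-llm ollama quickstart (the environment name is arbitrary, and the exact environment variables and versions may differ between releases):

```cmd
:: create a clean environment and install the llama.cpp/ollama backend of ipex-llm
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

:: generate the ollama binary and symlinks in the current directory
init-ollama.bat

:: hint ollama to offload all layers to the Intel GPU, then start the server
set OLLAMA_NUM_GPU=999
set SYCL_CACHE_PERSISTENT=1
ollama serve
```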
Is there documentation that explains these options? What does installing ipex-llm[xpu] do that ipex-llm[cpp] doesn't? Why do the examples in the quickstart folder all seem to use different options and versions, none of which seem to be compatible?
Hi @samamiller, you may see our official document for more details.
@samamiller It depends on your use cases (e.g., llama.cpp vs. pytorch vs. llm); see https://github.com/intel-analytics/ipex-llm#use
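In other words (my paraphrase, not an official statement): the cpp extra targets the llama.cpp/ollama binaries, while the xpu extra pulls in the Intel-GPU build of PyTorch for the transformers-style Python API. Roughly:

```cmd
:: llama.cpp / ollama use case: GGUF models served through the ollama binary
pip install --pre --upgrade ipex-llm[cpp]

:: PyTorch / HuggingFace-style use case on Intel GPUs
:: (index URL copied from the installation guide; it may change between releases)
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```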
When "pip install ipex-llm[cpp]", then "init-ollama.bat", it runs on CPU: " ... msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.6 GiB" ... "
But when "pip install ipex-llm[xpu]", it can run on my A770 dGPU.
When install them both "pip install ipex-llm[cpp,xpu]", i got this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. bigdl-core-cpp 2.5.0b20240616 requires torch==2.2.0, but you have torch 2.1.0a0+cxx11.abi which is incompatible.
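The conflict is expected if both extras land in one environment: the cpp extra pins a stock torch (2.2.0 here), while the xpu extra ships the Intel-built torch 2.1.0a0+cxx11.abi. One workaround (my suggestion, not from the maintainers in this thread) is to keep the two extras in separate environments, e.g.:

```cmd
:: one environment for the ollama / llama.cpp backend
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

:: a second environment for the PyTorch (xpu) stack
conda create -n llm-xpu python=3.11
conda activate llm-xpu
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```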