intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

Ollama Error on MTL-iGPU #12037

Closed weiseng-yeap closed 1 month ago

weiseng-yeap commented 1 month ago

Hi IPEX-LLM Team.

We are testing Ollama following this guide: https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md

The platform we are using is an MTL (Meteor Lake) iGPU with Ubuntu 22.04.

We tested with oneAPI 2024.1 and 2024.2 and hit the same issue on both versions.
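For context, a quick way to confirm which oneAPI environment is actually active before launching Ollama (a minimal sketch, assuming a default /opt/intel/oneapi installation):

```bash
# Assumes a default oneAPI install under /opt/intel/oneapi; adjust the path
# if the toolkit lives elsewhere.
source /opt/intel/oneapi/setvars.sh

# List the SYCL devices visible to the runtime; the MTL iGPU should show up
# as a Level-Zero / OpenCL GPU device if the environment is set up correctly.
sycl-ls
```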

Steps:

  1. In the first terminal, run the command ./ollama serve
  2. In a second terminal, run the command ./ollama run phi3 (a sketch of the full flow follows below)
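For reference, a rough sketch of the launch flow we followed (the environment variables reflect our reading of the quickstart and may not be exhaustive; the linked guide is authoritative):

```bash
# Terminal 1: set up the GPU environment and start the Ollama server.
# (Variable names follow the ipex-llm quickstart as we understand it.)
source /opt/intel/oneapi/setvars.sh
export OLLAMA_NUM_GPU=999        # offload all model layers to the iGPU
export ZES_ENABLE_SYSMAN=1
export SYCL_CACHE_PERSISTENT=1
./ollama serve

# Terminal 2: run the model against the local server.
./ollama run phi3
```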

Error: [screenshot attached in the original issue]

sgwhat commented 1 month ago

Hi @weiseng-yeap, you may try installing oneAPI 2024.0 and running phi3 again; it works fine based on my testing.
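A rough sketch of the suggested downgrade and retest, assuming oneAPI was installed from Intel's apt repository (the package name and version pin below are assumptions; the ipex-llm install docs list the exact packages for oneAPI 2024.0):

```bash
# Replace the newer oneAPI runtime with the 2024.0 release.
# (Package names are assumptions; check the ipex-llm docs for the exact
# versioned package list.)
sudo apt remove -y intel-basekit
sudo apt install -y intel-basekit-2024.0

# Re-source the environment and retry phi3.
source /opt/intel/oneapi/setvars.sh
./ollama serve        # terminal 1
./ollama run phi3     # terminal 2
```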