intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0
6.69k stars · 1.26k forks

update ollama 0.3.x support #11709

Open przybjul opened 3 months ago

przybjul commented 3 months ago

> ipex-llm[cpp]==2.5.0b20240527 is consistent with [v0.1.34] of ollama.
>
> Our current version is consistent with [v0.1.39] of ollama.

Is it possible to update supported ollama version to 0.3.x?
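Since the bundled ollama version tracks the installed ipex-llm build (per the mapping quoted above), a quick way to see which build you have locally is to query package metadata. A minimal sketch, assuming a standard pip install; the helper name is mine, not part of ipex-llm:

```python
# Hedged sketch: report the locally installed ipex-llm build, so it can be
# matched against the ipex-llm -> ollama version mapping quoted above.
from importlib.metadata import version, PackageNotFoundError

def installed_ipex_llm_build():
    """Return the installed ipex-llm version string, or None if absent."""
    try:
        return version("ipex-llm")
    except PackageNotFoundError:
        return None

build = installed_ipex_llm_build()
if build is None:
    print("ipex-llm is not installed in this environment")
else:
    print("ipex-llm build:", build)
```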

sgwhat commented 3 months ago

Hi @przybjul, we can upgrade the version of ollama in ipex-llm, and we will notify you as soon as the upgrade is complete.

AlbertXu233 commented 2 months ago

> Hi @przybjul, we can upgrade the version of ollama in ipex-llm, and we will notify you as soon as the upgrade is complete.

I'd like to know whether the upgrade has been completed.

sgwhat commented 2 months ago

Hi @AlbertXu233, our current version is still consistent with v0.1.39 of ollama. Do you need the newly supported models or other recent features from ollama version 0.3.x?

rebootcheng commented 2 months ago

@sgwhat Some new models are only supported by higher versions of Ollama.

sgwhat commented 2 months ago

Hi @rebootcheng, we now support models available in higher versions of Ollama, including Llama 3.1, Qwen 2, Gemma 2, Phi 3, and others.

rebootcheng commented 2 months ago

Hi @sgwhat, the deepseek-coder-v2 model doesn't seem to be supported.

reinhardsiegel79 commented 2 months ago

Hi, "tools" support seems to be available only in ollama 0.3.x; I'll be waiting for that. :)

AlbertXu233 commented 2 months ago

> Hi @AlbertXu233, our current version is still consistent with v0.1.39 of ollama. Do you need the newly supported models or other recent features from ollama version 0.3.x?

I need gemma2 2b and the batch embeddings API from the newer ollama version.

rnwang04 commented 2 months ago

Hi all, ipex-llm's ollama has been upgraded to 0.3.6 with ipex-llm[cpp]>=2.2.0b20240827, you may give it a try 😊
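For anyone scripting around this: the nightly tags mentioned in this thread carry a YYYYMMDD build date, and builds on or after 20240827 bundle ollama 0.3.6 per the comment above. A toy sketch (the helper names and date cutoff interpretation are mine, not part of ipex-llm):

```python
# Hedged sketch: decide whether an ipex-llm[cpp] nightly tag is new enough
# to bundle ollama 0.3.6, based on the build dates mentioned in this thread.
import re

def build_date(tag: str) -> str:
    """Extract the YYYYMMDD date from a nightly tag like '2.2.0b20240827'."""
    m = re.search(r"b(\d{8})$", tag)
    if not m:
        raise ValueError(f"not a nightly tag: {tag}")
    return m.group(1)

def bundles_ollama_036(tag: str) -> bool:
    # Per the comment above, builds dated 20240827 or later ship ollama 0.3.6.
    # Note: compare by date, not version number -- 2.5.0b20240527 is an
    # OLDER build than 2.2.0b20240827 despite the higher version prefix.
    return build_date(tag) >= "20240827"

print(bundles_ollama_036("2.2.0b20240827"))  # True
print(bundles_ollama_036("2.5.0b20240527"))  # False
```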