intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

Path of models using Ollama with IPEX-LLM (Windows) #12403

Closed: NikosDi closed this issue 1 week ago

NikosDi commented 1 week ago

Hello. I'm trying to figure out the path of the downloaded models when I run:

ollama pull <model name>

using the version of Ollama with IPEX-LLM.

It's not C:\Users\<username>\.ollama\models (obviously).

sgwhat commented 1 week ago

Actually, it should be C:\Users\<username>\.ollama\models by default.

NikosDi commented 1 week ago

Well, no.

I tried to search for the model after downloading it, which should have been easy given its size (> 4 GB), but search couldn't find it. So I monitored the download in real time and discovered that the path is actually:

C:\Windows\System32\config\systemprofile\.ollama\models

How can I set the default path to another location (e.g., C:\Users\<username>\.ollama\models)?

sgwhat commented 1 week ago

You may set the environment variable OLLAMA_MODELS.
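For example, on Windows you could set it before starting Ollama, either for the current Command Prompt session or persistently (a minimal sketch; D:\ollama\models below is just a placeholder path, not a path from this issue):

:: current session only, then start the server in the same window
set OLLAMA_MODELS=D:\ollama\models
ollama serve

:: or persist it for future sessions (takes effect in newly opened terminals)
setx OLLAMA_MODELS "D:\ollama\models"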

NikosDi commented 1 week ago

It works like a charm.

I had already seen the FAQ and suggestions for the community Ollama, but somehow I thought this Intel IPEX-LLM version of Ollama was something different, with its own settings.