janhq / jan

Jan is an open source alternative to ChatGPT that runs 100% offline on your computer. Multiple engine support (llama.cpp, TensorRT-LLM)
https://jan.ai/
GNU Affero General Public License v3.0

feat: Improving Ollama integration #2998

Open eckartal opened 1 month ago

eckartal commented 1 month ago

Problem

Integrating Ollama with Jan through the single OpenAI-compatible endpoint feels challenging. It's also a hassle to have to 'download' a model that Ollama has already pulled.
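For context, "the single OpenAI endpoint" refers to Ollama's OpenAI-compatible API, which it serves at http://localhost:11434/v1 by default. A minimal sketch of talking to it with the `openai` Python client; the model name `llama3` is just a placeholder for whatever has already been pulled with `ollama pull`:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API on port 11434 by default.
# An api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # placeholder: any model already pulled into Ollama
    messages=[{"role": "user", "content": "Hello from Jan!"}],
)
print(response.choices[0].message.content)
```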

Success Criteria

Additional context

Related Reddit comment to be updated: https://www.reddit.com/r/LocalLLaMA/comments/1d8n9wr/comment/l77ifd1/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

ShravanSunder commented 3 weeks ago

Yes please! This is my biggest blocker with Jan. I don't want multiple redundant model file locations; I'd like my Ollama models to be easily usable.
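For what it's worth, Ollama's native API already exposes the list of locally pulled models via GET /api/tags, so a frontend could in principle discover existing models instead of requiring a second download. A rough sketch, assuming a default Ollama install listening on localhost (this is not how Jan works today):

```python
import json
import urllib.request

# GET /api/tags returns the models already pulled into the local Ollama store.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

for m in models:
    # Each entry carries at least a name and size in bytes.
    print(m["name"], m.get("size", "?"))
```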

richardstevenhack commented 1 week ago

I second this. I looked at the docs on "Ollama integration", but all that does is set up the server endpoint. You can't select an Ollama model that's already downloaded to wherever Ollama stores its models, and I don't think you can import the model either. On my openSUSE Tumbleweed system, Ollama stores its models in /var/lib/ollama/.ollama/models/ rather than the default Ollama location, and the Import file-selection dialog can't even see the directories below /var/lib/ollama.
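On the storage layout: Ollama keeps weights as content-addressed blobs plus JSON manifests, and the GGUF weights are the manifest layer with mediaType application/vnd.ollama.image.model, so in principle a file path could be resolved for import. A sketch under those assumptions; the MODELS_DIR below is the openSUSE path from the comment above (default installs use ~/.ollama/models, overridable via OLLAMA_MODELS):

```python
import json
from pathlib import Path

# Assumption: standard Ollama storage layout under the models directory.
MODELS_DIR = Path("/var/lib/ollama/.ollama/models")

def gguf_blob_path(model: str, tag: str = "latest") -> Path:
    """Resolve the GGUF weights blob for a pulled model from its manifest."""
    manifest = MODELS_DIR / "manifests" / "registry.ollama.ai" / "library" / model / tag
    layers = json.loads(manifest.read_text())["layers"]
    # The weights layer carries this mediaType; other layers hold the
    # prompt template, parameters, etc.
    digest = next(l["digest"] for l in layers
                  if l["mediaType"] == "application/vnd.ollama.image.model")
    # Blob filenames replace the "sha256:" prefix colon with a dash.
    return MODELS_DIR / "blobs" / digest.replace(":", "-")

print(gguf_blob_path("llama3"))  # placeholder model name
```

Note that on distro packages the models directory is typically owned by a dedicated ollama user, which is likely why the Import dialog can't descend into /var/lib/ollama without adjusted permissions.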