logancyang / obsidian-copilot

THE Copilot in Obsidian
GNU Affero General Public License v3.0

Feature Request: ollama as a backend option #156

Open clscott opened 1 year ago

clscott commented 1 year ago

This is a really great local LLM backend that works on a lot of platforms (including Intel Macs) and is basically a one-click install.

Main site: https://ollama.ai/
API docs: https://github.com/jmorganca/ollama/blob/main/docs/api.md
Article about indexing an Obsidian vault: https://ollama.ai/blog/llms-in-obsidian
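
For reference, a minimal sketch of calling Ollama's local REST API from Python (assuming the server is running on its default port 11434 and a model such as llama2 has already been pulled, per the API docs above):

import requests

# One-shot, non-streaming completion against a local Ollama server.
# Assumes Ollama is listening on the default port and "llama2" has been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Summarize this note in one sentence.", "stream": False},
)
print(resp.json()["response"])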

dxcore35 commented 1 year ago

Yes please, this would take it to another level!

lzumot commented 1 year ago

Has my vote, especially if it also uses LiteLLM so it can support many more of the models that come out, faster.

ishaan-jaff commented 1 year ago

@lzumot thanks for mentioning litellm - I'm the maintainer of LiteLLM

ishaan-jaff commented 1 year ago

$ litellm --model ollama/codellama --temperature 0.3 --max_tokens 2048

Now just set the OpenAI API base and you can use Ollama with this project:

import openai

# Point the OpenAI client at the local LiteLLM proxy started above.
openai.api_base = "http://0.0.0.0:8000"
openai.api_key = "anything"  # placeholder; the local proxy does not validate it

print(openai.ChatCompletion.create(model="test", messages=[{"role": "user", "content": "Hey!"}]))

matthew-boyens commented 11 months ago

This would be awesome! Any updates on when it could be incorporated into the plugin? Love your work.

peterlionelnewman commented 8 months ago

Ollama has been integrated, and the recent addition of OpenAI-compatible APIs to Ollama should make integration more straightforward for all future work. Once embeddings are also integrated into Ollama, it will be possible to do this locally in a straightforward manner as well.

https://ollama.com/blog/openai-compatibility
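
For anyone who wants to try it now, a minimal sketch of pointing the OpenAI Python client (v1+) at Ollama's OpenAI-compatible endpoint, along the lines of that blog post (assumes a local Ollama server and that llama2 has been pulled):

from openai import OpenAI

# Talk to a local Ollama server through its OpenAI-compatible /v1 endpoint.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # required by the client, ignored by Ollama
)

resp = client.chat.completions.create(
    model="llama2",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(resp.choices[0].message.content)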