end-4 / dots-hyprland

i hate minimalism so...
https://end-4.github.io/dots-hyprland-wiki/en/
GNU General Public License v3.0
3.71k stars · 255 forks

[Feature] Support for ollama as AI assistant #346

Closed · clsty closed this issue 3 months ago

clsty commented 5 months ago

What would you like to be added?

Ollama support.

How will it help

Currently we have Gemini and ChatGPT support, both of which require an online account and internet access.

end-4 commented 5 months ago

I'll wait for the guy who made a PR earlier to make a new one (with doubts). I might see if I can take a look at it and add it myself later, but that's very unlikely, as my machine is too slow for it to be practical for me.

clsty commented 5 months ago

This could also be a good thing for https://github.com/end-4/dots-hyprland/discussions/263 .

A reference on API usage, taking llava as an example: https://ollama.com/library/llava

catmeowjiao commented 5 months ago

Can you add gpt4all or llama.cpp + chatglm.cpp support?

arlo-phoenix commented 4 months ago

If you want broke man ollama / local LLM support: Ollama features an OpenAI-compatible API on port 11434 (https://ollama.com/blog/openai-compatibility). I think you can just change ai/proxyUrl in .config/ags/modules/.configuration/user_options.js. Otherwise it could also be a custom provider, like I did here. It's what I'm using right now and, imo, it works well enough that a custom Ollama service isn't needed (except for changing models, but for that the general OpenAI service could just be improved).
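Pointing ai/proxyUrl at the local endpoint might look like this; a sketch only, since the exact option layout inside user_options.js is an assumption based on the path named above:

```javascript
// .config/ags/modules/.configuration/user_options.js (sketch)
// Point the existing OpenAI-style provider at Ollama's local
// OpenAI-compatible server instead of api.openai.com.
ai: {
    proxyUrl: "http://localhost:11434/v1",  // Ollama serves /v1/chat/completions here
},
```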

This is also compatible with other local LLM options as long as they have an OpenAI-compatible web server (which most do). I personally use the llama.cpp server directly and start it with:

```shell
# just use a very high -ngl like 1000 to run it fully on GPU
./llama.cpp/build/bin/server -ngl 1000 --port 11434 --model /path/to/model
```
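Either server (Ollama or llama.cpp) then answers OpenAI-style chat completion requests on that port. A minimal sketch of such a request (built without sending, so no server is needed; `build_chat_request` and the placeholder model name are my own):

```python
import json
import urllib.request

# OpenAI-compatible chat endpoint exposed by Ollama and llama.cpp's
# server on the port chosen above (11434 here).
BASE_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(user_message: str, model: str = "local-model") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # llama.cpp's server typically ignores this; Ollama uses it to pick a model
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello!")
# With a server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```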

end-4 commented 3 months ago

Done in #532, I guess.