Closed by clsty 3 months ago
I'll wait for the guy who made a PR earlier to make a new one (with doubts). I might see if I can take a look and add it myself later, but that's very unlikely, as my machine is too slow for this to be practical for me.
This could also be a good thing for https://github.com/end-4/dots-hyprland/discussions/263 .
A reference on API usage, taking llava as an example: https://ollama.com/library/llava
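As a rough illustration of what the model page above describes, here is a minimal sketch of calling Ollama's REST `/api/generate` endpoint from Python. The endpoint and default port come from the Ollama docs; the helper names are my own, and it assumes `ollama serve` is running locally with the model pulled.

```python
# Hypothetical sketch of Ollama's REST API usage; helper names are made up.
import json
import urllib.request


def build_generate_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for a one-shot (non-streaming) generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def ollama_generate(prompt: str, model: str = "llava",
                    host: str = "http://localhost:11434") -> str:
    """Send the request to a local Ollama server and return the response text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a server running, `ollama_generate("Why is the sky blue?")` should return the model's reply as a string.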
Can you add GPT4All, or llama.cpp + chatglm.cpp?
If you want broke-man Ollama / local LLaMA support: Ollama exposes an OpenAI-compatible API on port 11434 (https://ollama.com/blog/openai-compatibility). I think you can just change ai/proxyUrl in .config/ags/modules/.configuration/user_options.js. Otherwise it could also be a custom provider like I did here. It's what I'm using right now, and IMO it works well enough that a dedicated Ollama service isn't needed (except for changing models, but for that the general OpenAI service could just be improved).
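For reference, a sketch of what that config change might look like. This is an assumption based on the `ai/proxyUrl` path mentioned above; the exact key names and file structure may differ in the actual dotfiles.

```javascript
// Hypothetical fragment of ~/.config/ags/modules/.configuration/user_options.js
// (key names assumed, not verified against the repo).
'ai': {
    // Point the OpenAI-compatible provider at the local Ollama server,
    // which serves an OpenAI-style API under /v1 on its default port.
    'proxyUrl': "http://localhost:11434/v1",
},
```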
This is also compatible with other local LLM options, as long as they expose an OpenAI-compatible web server (which most do). I personally use the llama.cpp server directly and start it with:
```shell
# just use a very high -ngl like 1000 to run it fully on GPU
./llama.cpp/build/bin/server -ngl 1000 --port 11434 --model /path/to/model
```
Done in #532, I guess.
What would you like to be added?
Ollama support.
How will it help?
Currently we have Gemini and ChatGPT support, both of which require an online account and internet access.