
Integration with Ollama #2544


ceramicv commented 1 month ago

I already have many models downloaded for use with my locally installed Ollama.

Since my Ollama server is always running, is there a way to have GPT4All use the models being served by Ollama? Alternatively, can I point GPT4All at the directory where Ollama stores its already-downloaded LLMs and have GPT4All use those, without having to download new models specifically for GPT4All?

AndriyMulyar commented 1 month ago

@cebtenzzre How hard would it be to customize the model directory? I've definitely seen this asked before.

manyoso commented 1 month ago

The model directory is already a setting. You can change GPT4All's model directory to the same one that Ollama uses.
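
One caveat worth noting: Ollama does not store models as plain `.gguf` files. It keeps content-addressed blobs under `~/.ollama/models/blobs/` plus a JSON manifest per model, so pointing GPT4All's model directory straight at Ollama's store may not surface anything by name. Below is a minimal sketch of a workaround that symlinks the GGUF weights blob under a readable `.gguf` name GPT4All can list. The paths are the Linux/macOS defaults, `GPT4ALL_MODELS` is an assumed default that differs per platform, and `link_ollama_model` is a hypothetical helper, not part of either project:

```python
import json
import os
from pathlib import Path

# Default Ollama store on Linux/macOS; adjust for your setup.
OLLAMA_MODELS = Path.home() / ".ollama" / "models"
# Assumed GPT4All model directory (Linux default); check Settings > Application.
GPT4ALL_MODELS = Path.home() / ".local" / "share" / "nomic.ai" / "GPT4All"

def link_ollama_model(name: str, tag: str = "latest") -> Path:
    """Symlink an Ollama model's GGUF weights blob into the GPT4All
    model directory under a human-readable *.gguf filename."""
    manifest_path = (OLLAMA_MODELS / "manifests" / "registry.ollama.ai"
                     / "library" / name / tag)
    manifest = json.loads(manifest_path.read_text())

    # The layer with this media type holds the raw GGUF weights.
    layer = next(l for l in manifest["layers"]
                 if l["mediaType"] == "application/vnd.ollama.image.model")
    # Blob filenames use "sha256-<hex>" while digests use "sha256:<hex>".
    blob = OLLAMA_MODELS / "blobs" / layer["digest"].replace(":", "-")

    target = GPT4ALL_MODELS / f"{name}-{tag}.gguf"
    GPT4ALL_MODELS.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        os.symlink(blob, target)
    return target

if __name__ == "__main__":
    print(link_ollama_model("llama3"))
```

After running this, restarting GPT4All should show the model under the symlinked filename; the weights themselves are not duplicated on disk.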

auphof commented 1 month ago

I have multiple machines and one local server with dedicated GPUs running Ollama. Would it be possible to have GPT4All use this local server's resources instead of my local GPU?

I note that https://github.com/ollama/ollama/pull/5059 will bring Vulkan support to Ollama.
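
Until GPT4All supports remote backends for this use case, one workaround is to talk to the remote Ollama server directly, since recent Ollama versions expose an OpenAI-compatible API under `/v1`. A minimal sketch using the `openai` Python client; the hostname and model name are placeholders for your own setup, and this bypasses GPT4All entirely:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://gpu-server.local:11434/v1",  # hypothetical Ollama host
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

# Chat with any model already pulled on the remote server.
resp = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Hello from a thin client!"}],
)
print(resp.choices[0].message.content)
```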