langchain-ai / opengpts

MIT License
6.42k stars · 846 forks

requesting to add ollama. #29

Open thesanju opened 10 months ago

hwchase17 commented 10 months ago

We need prompting strategies that work reliably with OSS models first.

thesanju commented 10 months ago

Yes, I'll be waiting for that feature. Imagine GPTs running locally and doing things in the background while you work on other things.

andrewnguonly commented 5 months ago

@thesanju, Ollama is now supported out-of-the-box. I just tested it with the latest code and it works as expected. Please give it a try.

If you're running Ollama on your local machine (http://localhost:11434), you'll need to ensure the backend Docker service can reach the Ollama API. Because localhost inside a container refers to the container itself, use the special DNS name host.docker.internal to reach your host machine. Set the environment variable OLLAMA_BASE_URL=http://host.docker.internal:11434 so that the backend service points to the Ollama API running on the host.
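To illustrate, here is a minimal sketch of how a backend might resolve the Ollama endpoint from that environment variable. The `OLLAMA_BASE_URL` name comes from the comment above; the helper function and its localhost fallback are assumptions for illustration, not the actual opengpts code:

```python
import os

def resolve_ollama_base_url() -> str:
    # OLLAMA_BASE_URL is the variable described above; falling back to
    # localhost (sensible when running outside Docker) is an assumption.
    return os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

# Inside a Docker container, point at the host machine instead of the
# container's own loopback interface.
os.environ["OLLAMA_BASE_URL"] = "http://host.docker.internal:11434"
print(resolve_ollama_base_url())  # http://host.docker.internal:11434
```

The same value could equally be set in the service's `environment:` block in docker-compose rather than in code.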