Closed: jgpruitt closed this 2 weeks ago
@jonatas is Ollama running on your host machine and pgai running in Docker? If so, you need to use http://host.docker.internal:11434
If that fixes it, I guess I should improve the docs with this info
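For anyone scripting against the same setup, the host selection above can be sketched in Python. This is a minimal sketch, not part of pgai itself: the `/.dockerenv` check is a common heuristic for detecting a container, and the `OLLAMA_HOST` environment variable is used here as an assumed override.

```python
import os


def ollama_base_url() -> str:
    """Pick the Ollama base URL depending on where the code runs.

    Inside a container, "localhost" refers to the container itself,
    not the host machine, so Ollama on the host must be reached via
    the special DNS name host.docker.internal.
    """
    # Assumed override: honor OLLAMA_HOST if the caller set it.
    override = os.environ.get("OLLAMA_HOST")
    if override:
        return override
    # Heuristic: Docker creates /.dockerenv inside containers.
    in_docker = os.path.exists("/.dockerenv")
    return (
        "http://host.docker.internal:11434"
        if in_docker
        else "http://localhost:11434"
    )
```

Note that `host.docker.internal` resolves out of the box on Docker Desktop (macOS/Windows); on Linux you may need to add `--add-host=host.docker.internal:host-gateway` to your `docker run` command.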
Yes! It works, @jgpruitt! Thanks for the update 👍
@jonatas Oh wow! Good point. I guess I'm spoiled with my Apple M2 Pro. I didn't realize how slow it could be.
Adds support for interacting with LLMs running in Ollama.