Canner / WrenAI

🚀 Open-source SQL AI Agent for Text-to-SQL. Make Text2SQL Easy! 🙌
https://www.getwren.ai/oss
GNU Affero General Public License v3.0

Ollama fails on macOS #511

Open slum44 opened 1 month ago

slum44 commented 1 month ago

Describe the bug
I am following this tutorial: https://blog.getwren.ai/how-to-use-meta-llama-3-to-query-mysql-database-using-ollama-on-your-machine-2c087b204e41

and I get the following error when trying to ask a question: [screenshot]

To Reproduce
Steps to reproduce the behavior:

  1. Follow the blog steps to install. Note that the blog is missing the part about installing Docker Desktop (not everyone has this by default). My data model loaded fine.
  2. Click on Ask
  3. See error

Expected behavior
I expect Wren AI to answer my question.

Screenshots
Ollama is running successfully: [screenshot]

Container Logs
You can execute the following commands to get the logs of the containers and provide them here:

docker logs wrenai-wren-ui-1 >& wrenai-wren-ui.log && \
docker logs wrenai-wren-ai-service-1 >& wrenai-wren-ai-service.log && \
docker logs wrenai-wren-engine-1 >& wrenai-wren-engine.log && \
docker logs wrenai-ibis-server-1 >& wrenai-ibis-server.log

Additional context

Logs and my .env.ai are attached. I think the issue is similar to https://github.com/Canner/WrenAI/issues/494.

I noticed that the docker-compose file doesn't have any references to Ollama, even though I chose the Ollama / custom provider during setup. I've also included the docker-compose file.

Attachments: wrenai-wren-ui.log, wrenai-wren-engine.log, wrenai-wren-ai-service.log, wrenai-ibis-server.log

Note that I also got a button / prompt to redeploy, but that also failed.

cyyeh commented 1 month ago

@slum44 hi, thanks for reaching out! I've checked your .env.ai, and the likely cause of the error is the value of OLLAMA_URL. It should be the default value http://host.docker.internal:11434, not http://localhost:11434. Since Wren AI runs inside Docker, it can't reach Ollama directly via localhost; host.docker.internal is the hostname Docker provides for accessing localhost on the host machine.
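
In other words, assuming the usual KEY=VALUE format of a .env file, the entry in .env.ai should read:

OLLAMA_URL=http://host.docker.internal:11434

rather than OLLAMA_URL=http://localhost:11434.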

slum44 commented 1 month ago

@cyyeh thanks for responding so quickly. I've changed my .env.ai file as you advised and restarted the Docker container, but the error is still the same.

[screenshot]

[screenshot]

cyyeh commented 1 month ago

@slum44 could you try to restart the launcher again instead of just restarting the failed container?

slum44 commented 1 month ago

@cyyeh absolutely, I am happy to try. I reran wren-launcher-darwin.sh and the terminal output looks fine (screenshot below), but unfortunately I still get the same error in the browser.

[screenshot]

cyyeh commented 1 month ago

@slum44 could you go to the modeling page and try to redeploy the model? (There should be a Deploy button at the top right of the modeling page.)

Could you also provide the ai-service logs again? Thanks.

slum44 commented 1 month ago

@cyyeh done - this time I get a new error (screenshot below).

If it helps, when I first ran it, it could see the database OK, as I can see all the tables on the left-hand side of the modelling page.

[screenshot]

cyyeh commented 1 month ago

Also, it seems something is wrong with port 5555: there appears to already be another process using port 5555.

Or would you mind joining our Discord server (https://discord.gg/5DvshJqG8Z) so we can take a look together?

cyyeh commented 1 month ago

Oh, I think you need to kill the process running on port 5555 first, and then restart the launcher again.
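
On macOS, one quick way to find and free the port is something like the following (port 5555 as mentioned above; the PID is whatever lsof reports):

lsof -i :5555    # show which process is listening on port 5555
kill <PID>       # stop that process, then rerun the launcher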

cyyeh commented 1 month ago

@slum44 I think we can add functionality to automatically pull the Ollama models users chose if they are not pulled yet! I suppose the user experience will be much better!
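
In the meantime, a model can be pulled manually with the Ollama CLI before starting Wren AI (llama3 here is just an example; use whichever model you selected during setup):

ollama pull llama3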

cyyeh commented 1 month ago

@slum44 could I close this issue?