Closed NuclearDuck13 closed 2 months ago
After selecting a model or before the models are displayed?
After selecting the model. I select llama3.1 and try sending a message and that's the error I get.
Have you tried any other model?
Just tried with phi3, same issue
Could you download the latest version (v1.2.0) and set the timeout multiplier (Settings > Interface) to ten? Does that solve the issue?
Here are the things I did that fixed the problem (not sure which did what, but it's working now):
- Updated everything (app, ngrok, ollama)
- Set the timeout multiplier to 10
- Appended `--host-header="localhost:11434"` to my ngrok run command (this is very likely what fixed it)
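For anyone else hitting this: the `--host-header` flag tells ngrok to rewrite the `Host` header on forwarded requests, since Ollama only accepts requests addressed to the host it is bound to. A sketch of the full command, assuming Ollama is on its default port 11434 and you have a reserved static domain (replace `your-static-domain.ngrok-free.app` with your own):

```shell
# Tunnel the local Ollama server and rewrite the Host header so
# Ollama sees requests as coming from localhost:11434.
ngrok http 11434 \
  --host-header="localhost:11434" \
  --domain=your-static-domain.ngrok-free.app
```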
I have Ollama running on my Windows machine and ngrok running with a static URL. I use that URL in the app's settings and it says it's good; when I open the URL on multiple devices it says "Ollama is running!". But when I select a model in the app and send a message, I get "Issue: Request Failed. Server issues". Is this a problem with my PC setup or my phone?