Hi @ashish-atidiv
You first need to install Ollama and pull the Llama 3.2 model. It will run on the default port indicated in the project config (11434).
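If it helps, here is a minimal sketch of those steps on Linux/macOS. The install command assumes the official install script; on macOS you can also use the desktop app or Homebrew:

```sh
# Install Ollama (Linux; on macOS: desktop app or `brew install ollama`)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Llama 3.2 model weights
ollama pull llama3.2

# Start the server (listens on port 11434 by default)
ollama serve
```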
Hi @wramarques, thanks for jumping in. Hi @ashish-atidiv, sorry for the delayed response. Since we are making a request to the Llama 3.2 model running locally on Ollama, please ensure that Ollama is actually running; you can do that with `ollama run llama3.2`. You can also verify it by making a curl request to the port: `curl http://localhost:11434`.
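For reference, a healthy local install typically responds like this (the response string is what Ollama's root endpoint usually returns; it may vary by version):

```sh
$ curl http://localhost:11434
Ollama is running
```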
Thanks. This worked!
Hey @Sumanth077, I am facing issues running the project and getting this error.
Environment: