Open · mrblaze27 opened this issue 3 days ago

Hi, I have an error right after the transcript step. Can someone help me? Log below:

config.py:

OLLAMA_API_BASE_URL = "http://localhost:3000"
COMPLETION_MODEL = "llama3.2"
> OLLAMA_API_BASE_URL = "http://localhost:3000"

I see you're using a different port than the default. What method did you use to install Ollama? What OS are you on?

> COMPLETION_MODEL = "llama3.2"

Make sure Ollama has llama3.2 downloaded before you try to use it in AlwaysReddy. You can do this by running `ollama run llama3.2` in the terminal.

Also try updating your Ollama if you haven't in a while.
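A minimal sketch of that check, assuming Ollama is installed on the machine where these commands run:

```sh
# Download the model if it isn't present yet (no-op when already local)
ollama pull llama3.2

# Confirm it shows up in the list of installed models
ollama list
```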
> I see you're using a different port than the default. What method did you use to install Ollama? What OS are you on?

Because I run Ollama in Docker with URL = http://localhost:3000/, and I'm trying to run it on Windows 11.
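That base URL suggests the container remaps Ollama's default port. A sketch of the kind of docker run that would produce this setup (an assumption; the actual command wasn't posted, and the container name is illustrative):

```sh
# Host port 3000 is mapped to 11434, the port Ollama listens on
# inside the container, which is why config.py needs :3000 here
docker run -d --name ollama -p 3000:11434 ollama/ollama
```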
> Make sure Ollama has llama3.2 downloaded before you try to use it in AlwaysReddy. You can do this by running `ollama run llama3.2` in the terminal.
It works:
PS C:\Users\simo1\Desktop\AlwaysReddy-main> ollama run llama3.2
>>> hi
Hello! How can I assist you today?
>>> Send a message (/? for help)
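One thing worth checking at this point: `ollama run` succeeding in a terminal shows the model is downloaded, but AlwaysReddy talks to Ollama over HTTP, so the OLLAMA_API_BASE_URL in config.py must also be reachable. A quick sanity check against the URL from this thread:

```sh
# /api/tags lists the models the server has locally.
# JSON output means the base URL and port are right;
# "connection refused" means the port in config.py is wrong.
curl http://localhost:3000/api/tags
```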
> Also try updating your Ollama if you haven't in a while.

Just updated.

Sorry, it works now! I stopped the Ollama Docker container and ran it in cmd with `ollama serve`, which of course uses the default port. Thank you for your support!
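For completeness, the fix as described, sketched out (the container name `ollama` is an assumption):

```sh
# Stop the Docker instance that was serving on the remapped port 3000
docker stop ollama

# Run Ollama natively; it listens on the default port 11434
ollama serve
```

With Ollama on its default port, config.py should point at http://localhost:11434 rather than :3000.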