WindsGithub opened this issue 2 months ago
Changed the model to llama3.1, issue persists
same bro
Did you install ollama? You should be able to monitor ollama processing the request. It's possible you're trying to use a model that is too large for your hardware.
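To rule out the bot itself, here's a minimal sketch that queries Ollama's local API directly (assuming a default install at `http://localhost:11434` and that the `llama3` model has been pulled). If this also hangs or times out, the problem is on the Ollama/hardware side, not the bot:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: default install, no custom OLLAMA_HOST).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",               # swap in llama3.1 or whichever model you pulled
    "prompt": "Say hello in one word.",
    "stream": False,                 # return one JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# If this request never returns, Ollama itself is stuck on the model,
# which matches the bot sitting on "Awaiting local response...".
with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.loads(resp.read())
    print(body.get("response"))
```

While it runs, watch your RAM/VRAM usage; a model that barely fits can take minutes per response or stall outright.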
I'm gamers_efsanevisi on Discord, add me or tag me on the Emergent Garden server
All I can see in the terminal after sending a message is `Awaiting local response... (model: llama3)`. It stays like that and the response never arrives.
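One more thing worth checking: whether the model name in the config matches a model Ollama has actually pulled. A sketch, again assuming the default endpoint, that lists locally available models via Ollama's `/api/tags` endpoint:

```python
import json
import urllib.request

# Assumes Ollama's default local endpoint.
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=10) as resp:
    tags = json.loads(resp.read())

# Each entry has a "name" like "llama3:latest"; the model the bot
# requests needs to match one of the models listed here.
for model in tags.get("models", []):
    print(model["name"])
```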