Closed — SaundersB closed 9 months ago
After further testing on an M3 Max, I found that Ollama runs much slower in Docker than natively on my local machine. Still, it may be useful to others to have Ollama running with very little external configuration.
This adds the Ollama service to the Docker Compose file so it connects automatically to the chatbot-ollama service.
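For context, a setup like this could look roughly as follows. This is a minimal sketch, not the exact compose file from the PR: the service names, the `chatbot-ollama` image tag, and the `OLLAMA_HOST` wiring are assumptions, though `11434` is Ollama's default API port.

```yaml
# Hypothetical docker-compose.yml sketch; names and image tags are assumed.
services:
  ollama:
    image: ollama/ollama          # official Ollama image
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ollama:/root/.ollama      # persist downloaded models

  chatbot-ollama:
    image: ghcr.io/ivanfioravanti/chatbot-ollama:main  # assumed image reference
    ports:
      - "3000:3000"
    environment:
      - OLLAMA_HOST=http://ollama:11434  # point the UI at the ollama service
    depends_on:
      - ollama

volumes:
  ollama:
```

With a file along these lines, `docker compose up` would start both containers, and the chatbot reaches Ollama via the compose network's service name rather than `localhost`.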