Closed joecryptotoo closed 6 months ago
After further testing on an M3 Max, I found Ollama runs extremely slowly in Docker compared to running natively on my local machine. However, it may still be useful to others to have Ollama running with very little external configuration.
Running this in Docker on a Mac won't give it access to the GPU, so it was running on CPU only. I'm running this on an HP server with an Nvidia RTX 4090 connected to it.
This is great, but I think it's better to keep Ollama out of Docker, because on a Mac it can't use the GPU, as both of you @joecryptotoo and @SaundersB confirmed.
This will start up everything you need to get going.
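For reference, a minimal sketch of what such a setup might look like on a Linux host with an Nvidia GPU (this is an assumed example, not the actual compose file from this thread; it uses the official `ollama/ollama` image and Compose's Nvidia device reservation syntax):

```yaml
# Hypothetical docker-compose.yml sketch for running Ollama with GPU access.
# Requires the NVIDIA Container Toolkit on the host; on macOS, Docker has no
# GPU passthrough, so Ollama would fall back to CPU (hence the slowness above).
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"          # Ollama's default API port
    volumes:
      - ollama:/root/.ollama   # persist downloaded models
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```

Start it with `docker compose up -d`, then verify the API responds with `curl http://localhost:11434`.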