Closed: michaelgloeckner closed this issue 10 months ago
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
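To run the container in GPU mode instead, a sketch assuming the host has an NVIDIA GPU and the NVIDIA Container Toolkit installed (the `--gpus=all` flag passes the GPUs through to the container; volume, port, and name match the command above):

```shell
# Same as the CPU command, plus GPU passthrough.
# Requires: NVIDIA driver + NVIDIA Container Toolkit on the host.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

If the container starts but still runs on CPU, checking `docker logs ollama` for CUDA initialization messages is a quick way to confirm whether the GPU was detected.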
Ollama is available at http://ollama.one-cx.org:80
Currently it's not running in GPU mode, and the model still needs to be installed manually after the service is up.
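Installing a model after the service is up can be done by exec'ing into the running container (assuming the container is named `ollama` as in the run command above; `llama2` is the example model tag):

```shell
# Pull a model into the running container's /root/.ollama volume.
docker exec -it ollama ollama pull llama2

# Verify it is available.
docker exec -it ollama ollama list
```

Because the models land in the `ollama` named volume, they survive container restarts, but a fresh deployment to a new host still needs this manual pull step — which is what the task below aims to eliminate.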
Task: create a Docker image with Ollama that hosts different Llama 2 models out of the box.
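One way to bake the model pull into the image is a custom entrypoint that starts the server, waits for the API, then pulls the configured model. This is only a sketch: the `entrypoint.sh` script and the `MODEL` environment variable are assumptions, not part of the official image.

```shell
#!/bin/sh
# entrypoint.sh (hypothetical) — start the Ollama server, then pull the
# model once the API answers, so the image is usable without manual steps.
set -e

# MODEL is an assumed env var; default to llama2.
MODEL="${MODEL:-llama2}"

# Start the server in the background.
ollama serve &
SERVER_PID=$!

# Wait until the HTTP API responds on the default port.
until curl -sf http://localhost:11434/ > /dev/null; do
  sleep 1
done

# Pull the requested model (no-op if already present in the volume).
ollama pull "$MODEL"

# Keep the server in the foreground so the container stays up.
wait "$SERVER_PID"
```

A Dockerfile based on `ollama/ollama` could copy this script in and set it as the entrypoint; switching models per deployment then becomes `docker run -e MODEL=llama2:13b ...`. Note that pulling at *build* time is awkward because the server must be running for `ollama pull` to work, which is why this sketch does it at container start.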