Closed leeyoshinari closed 2 months ago
@leeyoshinari can you try docker pull dustynv/ollama:r36.2.0? I am able to run the same docker run --runtime nvidia -it --rm --network=host dustynv/ollama:r36.2.0 command as you without issue.
You can also override the model cache mount point using --volume /host/models:/root/.ollama. Otherwise, with no mounts in your docker run command, you will need to re-download the models the next time the ollama server container shuts down.
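Putting those pieces together, a full invocation with a persistent model cache might look like the sketch below (/host/models is a placeholder; substitute any writable directory on the host):

```shell
# Pull the r36.2.0 ollama image mentioned above.
docker pull dustynv/ollama:r36.2.0

# Mount a host directory over /root/.ollama so downloaded models
# survive container restarts. --rm removes the container on exit,
# but the files in the mounted host directory persist.
docker run --runtime nvidia -it --rm --network=host \
    --volume /host/models:/root/.ollama \
    dustynv/ollama:r36.2.0
```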
That solved it, thank you!
When I run with "jetson-containers run $(autotag ollama)", it works. But when I run with "docker run --runtime nvidia -it --rm --network=host dustynv/ollama:r36.2.0", it reports an error.
I entered the container to check: 'id_ed25519' exists and is in '/root/.ollama', and '/root/.ollama' is a soft link to '/data/models/ollama'.
If I want to run ollama in the background, what should I do?
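One common way to run the server in the background is to replace -it --rm with -d (detached) and add a restart policy. A sketch, assuming the same image tag as above (the container name "ollama" and the host path /host/models are placeholders):

```shell
# Run the ollama server detached. --restart unless-stopped restarts it
# after crashes or reboots until you stop it explicitly.
docker run -d --runtime nvidia --network=host \
    --restart unless-stopped \
    --name ollama \
    --volume /host/models:/root/.ollama \
    dustynv/ollama:r36.2.0

# Follow the server logs, or stop the container when done:
docker logs -f ollama
docker stop ollama
```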