Actually, it works when I build the image without specifying a platform (I am on a Mac), but if I try to build the image with the --platform linux/amd64 option, it tells me:
> [11/13] RUN /bin/bash setup.sh tinyllama:
> 10.15 setup.sh: line 10: 18 Illegal instruction ollama serve
Here is my Dockerfile:
```Dockerfile
FROM ollama/ollama:latest
# Refresh the package index first, otherwise apt-get install may fail on a fresh layer
RUN apt-get update && apt-get install -y curl
ADD . .
ARG MODEL
RUN /bin/bash setup.sh ${MODEL}
ENTRYPOINT ["/bin/bash", "start.sh"]
```
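For reference, a build invocation that reproduces the failure above would look something like this (the image tag is an assumption; the MODEL value matches the tinyllama seen in the error output):

```bash
# Cross-building for amd64 from an Apple Silicon host runs the RUN steps under emulation
docker build --platform linux/amd64 --build-arg MODEL=tinyllama -t ollama-tinyllama .
```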
Any idea?
It looks like you're building and running this on Apple Silicon. With --platform linux/amd64, it's possible it's using Rosetta. The Linux build currently enables AVX, which isn't supported under Rosetta, hence the illegal instruction.
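One way to verify this from the host is to check which CPU flags an emulated amd64 container actually sees (a sketch, assuming Docker Desktop on the Mac):

```bash
# Prints the number of /proc/cpuinfo lines advertising AVX; 0 (and a non-zero
# exit code) means AVX-enabled binaries die with SIGILL ("illegal instruction")
docker run --rm --platform linux/amd64 ubuntu grep -c avx /proc/cpuinfo
```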
I see... so, as far as I understand, I can't build an image that uses Ollama and targets the linux/amd64 platform from my Apple Silicon Mac?
Thank you for your feedback! By any chance, do you know if there is another way to do what I am trying to do (embedding a model into a Docker image)?
At present, that is correct. Ollama won't run under Rosetta.
I'm working on some updates that will enable Rosetta support as a fallback mode.
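One possible workaround for embedding a model without executing the ollama binary at build time (an untested sketch, not an official recommendation; the ollama-models directory in the build context is an assumption): pull the model on a machine where Ollama runs natively, copy its model store into the build context, and COPY it into the image instead of running setup.sh:

```Dockerfile
FROM ollama/ollama:latest
# ollama-models is a copy of ~/.ollama from a host where `ollama pull <model>`
# was run natively; COPY needs no emulated CPU instructions, so the build succeeds
COPY ollama-models /root/.ollama
ENTRYPOINT ["/bin/bash", "start.sh"]
```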
Hello!
I'm trying to set up Ollama to run in a Docker container, so I can run it in a RunPod serverless function. To do so, I'd like to pull a model file into my container image (i.e., embed the model file into the Docker image).
Basically, I'd like to have a script like this run during the build of the image:
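A minimal sketch of such a setup.sh, assuming `ollama serve` is started in the background on its default port 11434 and polled until it answers (the script body and model argument are assumptions, reconstructed from the setup.sh error above):

```bash
#!/bin/bash
# Usage: setup.sh <model-name>
set -e

# Start the Ollama server in the background so a model can be pulled at build time
ollama serve &

# Poll until the server answers HTTP 200 on its default port (11434)
until [ "$(curl -s -o /dev/null -w '%{http_code}' http://localhost:11434)" = "200" ]; do
  sleep 1
done

# Pull the requested model so its weights end up baked into the image layer
ollama pull "$1"
```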
But this doesn't work; the curl never returns an HTTP code 200...
Any idea why? And/or how could I achieve this (maybe there is another/easier way of doing it)?
Thanks in advance!