[Open] noli44 opened this issue 1 year ago
If you don't mind, I would like to understand your use case.
You have an ARM Mac and want to run a container with the linux/amd64 platform, so I suppose you installed Docker Desktop for Mac, Apple silicon version (https://docs.docker.com/desktop/install/mac-install/), and I suppose (I never tried it) that Docker will require Rosetta 2 because you are not using linux/arm64.
Are you expecting to run the application on linux/amd64 with CPU only?
Are you doing this just as a test, or will you deploy the final container on other platforms, expecting it to execute on GPU?
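As an aside, whether emulation is needed depends on how the requested platform compares to the host architecture. A small hedged sketch (the mapping is the usual Docker convention; no Docker command is actually run here) of picking a `--platform` value from the host arch:

```shell
# Map the host architecture to a Docker platform string.
arch="$(uname -m)"
case "$arch" in
  arm64|aarch64) platform="linux/arm64" ;;   # native on Apple silicon
  x86_64|amd64)  platform="linux/amd64" ;;   # native on Intel; emulated on ARM Macs
  *)             platform="linux/amd64" ;;   # fallback assumption
esac
echo "host arch: $arch -> --platform $platform"
# Example (not executed here): docker run --platform "$platform" <image>
```

Requesting linux/amd64 on an Apple silicon host is what triggers emulation (QEMU or Rosetta 2, depending on the Docker Desktop settings).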
Hi,
Sorry, that compose file is a little misleading. I was attempting to run on amd64 to see if I could isolate the issue to arm64, but found the problem regardless.
It's commented out, so it would be using the default arm64 image.
At the moment I want to be able to run locally in a container (arm64), but the future state will likely be linux/amd64 on cloud infrastructure.
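For covering both targets (local arm64 and amd64 in the cloud), one option is a multi-platform build with buildx. A hypothetical sketch — the image tag is made up, and this only prints the command rather than running it, since an actual build needs buildx and QEMU emulation configured on the host:

```shell
# Build one image for both the local (arm64) and cloud (amd64) targets.
platforms="linux/arm64,linux/amd64"
cmd="docker buildx build --platform $platforms -f LLama.Web/Dockerfile -t llama-web:latest ."
echo "$cmd"
# Run the printed command only on a host with buildx and emulation set up.
```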
Thanks
I will try to run it in a container. I will give you feedback once I get some conclusions.
I am running on macOS on an M1 and have been able to run the Web project without any issues. The problem arises when it's run within a container.
I see the following both when using the built runtimes and when I build llama.cpp myself.
Dockerfile in the LLama.Web project.
compose.yml in the root directory. I modified the code/config to look for models in the mounted local volume.
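For reference, a minimal sketch of what such a compose.yml might look like — the service name, ports, and model path are assumptions, with the amd64 platform override left commented out as in the setup described earlier:

```yaml
services:
  llama-web:
    # platform: linux/amd64   # uncomment to force amd64 (emulated on an ARM Mac)
    build:
      context: .
      dockerfile: LLama.Web/Dockerfile
    ports:
      - "5000:5000"
    volumes:
      # hypothetical: mount a local models directory into the container
      - ./models:/app/models
```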
Any help is appreciated.