Open MicahZoltu opened 1 month ago
I get the same error with TinyLlama-1.1B:
```
<3>WSL (9) ERROR: UtilGetPpid:1293: Failed to parse: /proc/1/stat, content: 1 (bash) S 0 1 1 34816 9 4194560 741 235 19 4 1 0 0 0 20 0 1 0 2632973 4698112 962 18446744073709551615 94463164870656 94463165846317 140734858901344 0 0 0 65536 3686404 1266761467 1 0 0 17 1 0 0 0 0 0 94463166069168 94463166117808 94463196512256 140734858903387 140734858903397 140734858903397 140734858903534 0
```
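For context on what the error means: judging by its name, `UtilGetPpid` reads `/proc/1/stat` to recover the parent PID and chokes on the contents. A minimal sketch (my own illustration, not WSL's actual code) of how that file is usually parsed — the ppid is the fourth field, and the parenthesized comm field has to be skipped first because it can contain spaces and parentheses:

```python
# Sketch of the conventional way to pull the ppid out of
# /proc/<pid>/stat. Field layout: pid (comm) state ppid ...
# The comm field is parenthesized and may itself contain ')' and
# spaces, so robust parsers split at the LAST ')' before tokenizing.
def parse_ppid(stat_line: str) -> int:
    # Everything after the final ')' is space-separated fields,
    # starting with the process state.
    rest = stat_line.rsplit(")", 1)[1].split()
    # rest[0] is the state (e.g. "S"); rest[1] is the ppid.
    return int(rest[1])
```

The stat content quoted in the error above looks well-formed by this scheme (state `S`, ppid `0`), which makes it odd that WSL's parser rejects it.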
This suggests the problem isn't specific to any one llamafile, but is a more general problem with running llamafiles in Docker, or with WSL's involvement somehow.
Possibly related, and the only other place I can find useful details about this error: https://github.com/microsoft/WSL/issues/10073
Perhaps someone who better understands how these llamafiles work will get some insight from the discussion in that issue, which seems to talk about how binaries are launched within Docker.
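One thing that might be worth checking, prompted by that WSL thread (this is a guess on my part, not a confirmed diagnosis): WSL registers a `binfmt_misc` handler named `WSLInterop` keyed on the `MZ` magic bytes, and llamafile's APE format also begins with `MZ`, so the kernel could be handing the llamafile to WSL's Windows-exe launcher instead of the shell. A quick way to see what handlers are visible:

```shell
# List binfmt_misc handlers visible in this environment (run both on
# the WSL host and inside the container). Falls back gracefully where
# binfmt_misc isn't mounted.
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null || echo "binfmt_misc not mounted"
cat /proc/sys/fs/binfmt_misc/WSLInterop 2>/dev/null || echo "no WSLInterop handler here"
```

If `WSLInterop` shows up inside the container, that would support the theory that WSL interop is intercepting the binary.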
Environment: Docker 24.0.7 on Ubuntu in WSL 2.1.5.0 on Windows 11
Create `Dockerfile.llama`, then build and run it:

```
docker image build --file Dockerfile.llama --tag llama .
docker container run --rm -it llama
```

Alternatively, start a shell in the container and launch the llamafile directly:

```
docker container run --rm -it --entrypoint /bin/bash llama
./Meta-Llama-3-70B-Instruct.Q4_0.llamafile
```
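The contents of `Dockerfile.llama` aren't shown above; a minimal version along these lines would match the build and run commands (the base image and paths are my assumptions, purely for illustration):

```dockerfile
# Hypothetical reconstruction -- the actual Dockerfile.llama isn't
# included in this issue, so base image and paths are assumed.
FROM debian:bookworm
WORKDIR /app
COPY Meta-Llama-3-70B-Instruct.Q4_0.llamafile .
RUN chmod +x Meta-Llama-3-70B-Instruct.Q4_0.llamafile
ENTRYPOINT ["./Meta-Llama-3-70B-Instruct.Q4_0.llamafile"]
```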
The last command is run from inside the docker container. I get the same error if I use `alpine` as the base image. The error seems to be coming from inside the container, yet it starts with `WSL`, which the container shouldn't have any awareness of. I couldn't find any information online about `UtilGetPpid` either. The problem could be with any of the following, and I am open to someone helping me figure out who is at fault!

- Meta-Llama-3-70B-Instruct-Q4 Llamafile