fjmmontiel opened 1 week ago
Getting the same error also.
Thanks for raising the issue! This is the same issue as https://github.com/meta-llama/llama-stack/issues/100, which was recently fixed.
Could you check if installing from source works for you?
mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git
conda create -n stack python=3.10
conda activate stack
cd llama-stack
$CONDA_PREFIX/bin/pip install -e .
llama stack build
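A hedged sanity check after the steps above: for an editable (`pip install -e .`) install, `pip show` reports a `Location` inside the cloned repo rather than site-packages. The snippet below uses the package `pip` as a stand-in so it runs anywhere; substitute `llama-stack` once the steps above have been run.

```shell
# Stand-in check: for an editable install of llama-stack, Location should
# point into ~/local/llama-stack, not into the conda env's site-packages.
pip show pip | grep -E '^(Name|Version|Location):'
```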
Still getting the same error, because I am using a Docker environment and not conda. I have made a new environment from scratch and installed llama-stack with pip. Shouldn't there be a new version of the library?
Here is the full trace of logs:
Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
RUN apt-get update && apt-get install -y iputils-ping net-tools iproute2 dnsutils telnet curl wget telnet procps psmisc lsof traceroute bubblewrap && rm -rf /var/lib/apt/lists/*
RUN pip install llama-stack
RUN pip install fastapi fire httpx uvicorn accelerate blobfile fairscale fbgemm-gpu==0.8.0 torch torchvision transformers zmq accelerate codeshield torch transformers matplotlib pillow pandas scikit-learn aiosqlite psycopg2-binary redis blobfile chardet pypdf tqdm numpy scikit-learn scipy nltk sentencepiece transformers faiss-cpu
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu
RUN pip install sentence-transformers --no-deps
# This would be good in production, but for debugging flexibility let's not add it right now
# We need a more solid production-ready entrypoint.sh anyway
#
ENTRYPOINT ["python", "-m", "llama_stack.distribution.server.server"]
ADD testllama/lib/python3.11/site-packages/llama_stack/configs/distributions/docker/test-build.yaml ./llamastack-build.yaml
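The comments above note that a sturdier entrypoint.sh is still needed. A hypothetical sketch of one (not from the thread; only the module path is copied from the ENTRYPOINT line) could look like this, using `exec` so that `docker stop`'s SIGTERM reaches the server process directly instead of a wrapping shell:

```shell
# Write a minimal entrypoint.sh sketch (hypothetical, for illustration).
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# exec replaces the shell, so signals from docker go straight to the server
exec python -m llama_stack.distribution.server.server "$@"
EOF
chmod +x entrypoint.sh
sh -n entrypoint.sh   # syntax check only; actually running it needs llama-stack installed
```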
docker build -t llamastack-test -f /var/folders/kq/8rs_ws5s72x3dmpzh_z6_zc80000gn/T/tmp.zQDghZdAmH/Dockerfile /Users/franciscojose.maldonado/_VOIS/hosted-models/testllama/lib/python3.11/site-packages
[+] Building 1.6s (12/12) FINISHED                                docker:desktop-linux
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 1.16kB                                        0.0s
 => [internal] load metadata for docker.io/library/python:3.10-slim           1.5s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => CANCELED [1/8] FROM docker.io/library/python:3.10-slim@sha256:80619a5     0.0s
 => => resolve docker.io/library/python:3.10-slim@sha256:80619a5316afae70     0.0s
 => => sha256:80619a5316afae7045a3c13371b0ee670f39bac46ea 9.13kB / 9.13kB     0.0s
 => => sha256:80cd7261f1d8c75b18c5804f8045ef9601cf87d631e 1.75kB / 1.75kB     0.0s
 => => sha256:4b31b4d67fb996eb2f30873969bfee7a7256e029338 5.24kB / 5.24kB     0.0s
 => [internal] load build context                                             0.0s
 => => transferring context: 2B                                               0.0s
 => CACHED [2/8] WORKDIR /app                                                 0.0s
 => CACHED [3/8] RUN apt-get update && apt-get install -y iputils-            0.0s
 => CACHED [4/8] RUN pip install llama-stack                                  0.0s
 => CACHED [5/8] RUN pip install fastapi fire httpx uvicorn accelerate bl     0.0s
 => CACHED [6/8] RUN pip install torch --index-url https://download.pytor     0.0s
 => CACHED [7/8] RUN pip install sentence-transformers --no-deps              0.0s
 => ERROR [8/8] ADD testllama/lib/python3.11/site-packages/llama_stack/co     0.0s
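One possible reading of the ERROR step (an assumption, not confirmed in the thread): `ADD` resolves its source path relative to the build context, and the context passed to `docker build` here is already `.../testllama/lib/python3.11/site-packages`, so an ADD source that repeats the `testllama/lib/python3.11/site-packages/` prefix points at a path that does not exist inside the context. The throwaway-directory sketch below illustrates the mismatch with stand-in paths:

```shell
# Stand-in for the build context directory (already ".../site-packages"):
context=$(mktemp -d)
mkdir -p "$context/llama_stack/configs/distributions/docker"
touch "$context/llama_stack/configs/distributions/docker/test-build.yaml"

# What the ADD in the log looks for (context prefix repeated -> missing):
bad="$context/testllama/lib/python3.11/site-packages/llama_stack/configs/distributions/docker/test-build.yaml"
# What a context-relative ADD source would find:
good="$context/llama_stack/configs/distributions/docker/test-build.yaml"

ls "$good"          # exists
ls "$bad" || true   # "No such file or directory", like the ERROR step
```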
And the yaml is in this path: