meta-llama / llama-stack

Model components of the Llama Stack APIs

llama stack build with docker not working: config yaml not found #114

Open fjmmontiel opened 1 week ago

fjmmontiel commented 1 week ago

Here is the full trace of logs:

Enter a name for your Llama Stack (e.g. my-local-stack): test
Enter the image type you want your Llama Stack to be built as (docker or conda): docker

Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.

Enter provider for the inference API: (default=meta-reference): meta-reference
Enter provider for the safety API: (default=meta-reference): meta-reference
Enter provider for the agents API: (default=meta-reference): meta-reference
Enter provider for the memory API: (default=meta-reference): meta-reference
Enter provider for the telemetry API: (default=meta-reference): meta-reference

(Optional) Enter a short description for your Llama Stack: test
Dockerfile created successfully in /var/folders/kq/8rs_ws5s72x3dmpzh_z6_zc80000gn/T/tmp.zQDghZdAmH/Dockerfile

FROM python:3.10-slim
WORKDIR /app

RUN apt-get update && apt-get install -y iputils-ping net-tools iproute2 dnsutils telnet curl wget telnet procps psmisc lsof traceroute bubblewrap && rm -rf /var/lib/apt/lists/*

RUN pip install llama-stack
RUN pip install fastapi fire httpx uvicorn accelerate blobfile fairscale fbgemm-gpu==0.8.0 torch torchvision transformers zmq accelerate codeshield torch transformers matplotlib pillow pandas scikit-learn aiosqlite psycopg2-binary redis blobfile chardet pypdf tqdm numpy scikit-learn scipy nltk sentencepiece transformers faiss-cpu
RUN pip install torch --index-url https://download.pytorch.org/whl/cpu
RUN pip install sentence-transformers --no-deps

# This would be good in production but for debugging flexibility lets not add it right now
# We need a more solid production ready entrypoint.sh anyway
#
ENTRYPOINT ["python", "-m", "llama_stack.distribution.server.server"]

ADD testllama/lib/python3.11/site-packages/llama_stack/configs/distributions/docker/test-build.yaml ./llamastack-build.yaml

And the yaml is in this path:

[screenshot of the path where the yaml is located]
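My guess (not verified) is that docker's ADD resolves its source path relative to the build context, i.e. the temporary directory the Dockerfile was written to, so the build fails if test-build.yaml is not inside that directory. A quick way to check, reusing the paths from my log above (the relative yaml path is just the one from the generated ADD instruction):

# build context taken from the "Dockerfile created successfully in ..." line above
BUILD_CONTEXT=/var/folders/kq/8rs_ws5s72x3dmpzh_z6_zc80000gn/T/tmp.zQDghZdAmH
# source path taken from the generated ADD instruction
YAML_REL=testllama/lib/python3.11/site-packages/llama_stack/configs/distributions/docker/test-build.yaml

# docker ADD can only read files that live under the build context directory
ls -l "$BUILD_CONTEXT/$YAML_REL" || echo "test-build.yaml is not inside the build context, so ADD cannot find it"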
hassankhaw commented 1 week ago

I'm getting the same error as well.

yanxi0830 commented 1 week ago

Thanks for raising the issue! This is the same issue as https://github.com/meta-llama/llama-stack/issues/100, which was recently fixed.

Could you check if installing from source works for you?

mkdir -p ~/local
cd ~/local
git clone git@github.com:meta-llama/llama-stack.git

conda create -n stack python=3.10
conda activate stack

cd llama-stack
$CONDA_PREFIX/bin/pip install -e .

llama stack build
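
If you try this, one extra sanity check (just a suggestion on top of the steps above) is to confirm that the llama CLI resolves to the editable install from the source checkout rather than an older pip release:

which llama
# for an editable install, pip should report the ~/local/llama-stack checkout as the project location
$CONDA_PREFIX/bin/pip show llama-stack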
fjmmontiel commented 1 week ago

Still getting the same error, because I am using a Docker environment and not conda. I have made a new environment from scratch and installed llama-stack with pip. Shouldn't there be a new version of the library?
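
If a fixed release has not been published to PyPI yet, one possible workaround (a sketch, untested here) is to install llama-stack straight from the GitHub repo with pip, which works in a plain Docker/pip environment without conda:

# replace the PyPI package with the current main branch from GitHub
pip uninstall -y llama-stack
pip install git+https://github.com/meta-llama/llama-stack.git
# then retry the build
llama stack build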