Closed kun432 closed 3 days ago
This is because you need to mount the checkpoint directory when spinning up Docker. Setting `$LLAMA_CHECKPOINT_DIR` and then running `llama stack run` should work.
```shell
export LLAMA_CHECKPOINT_DIR=~/.llama
```
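A quick sanity check before passing the variable to Docker (a minimal sketch; the path is the one used in this thread): tilde expansion only happens when `~` is unquoted, so assign it without quotes and let the shell resolve it to an absolute path before it reaches the `-v` flag.

```shell
# Tilde is unquoted here, so the shell expands it to $HOME at assignment time.
export LLAMA_CHECKPOINT_DIR=~/.llama

# Should print an absolute path ending in /.llama, not a literal "~/.llama".
echo "$LLAMA_CHECKPOINT_DIR"
```

If this prints a literal `~/.llama`, Docker will create an empty directory of that name instead of bind-mounting your checkpoints.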
This will mount the checkpoint directory when spinning up the Docker container with a command like:

```shell
docker run -it -p 5000:5000 \
  -v $LLAMA_CHECKPOINT_DIR:/root/.llama \
  -v /home/kun432/.llama/builds/docker/my-local-stack-run.yaml:/app/config.yaml \
  llamastack-my-local-stack \
  python -m llama_stack.distribution.server.server --yaml_config /app/config.yaml --port 5000
```
Thanks! It works.

Also, this should be in the documentation.
Using pyenv + venv + Docker, `llama stack run` failed and it seems it cannot find the model directory.

The models have already been downloaded like this:

The error message seems to come from Docker, and I guess it cannot find the checkpoint directory from inside the container. Did I miss something?