subramen opened this issue 2 weeks ago
Are you building docker image locally, or pulling from docker hub?
I'm using `cd distributions/meta-reference-gpu && docker compose up`, as per https://llama-stack.readthedocs.io/en/latest/getting_started/distributions/self_hosted_distro/meta-reference-gpu.html.
Looks like it's pulling from the hub?

```yaml
services:
  llamastack:
    image: llamastack/distribution-meta-reference-gpu
```
@subramen Option 1 is pulling from the hub. Option 2 lets you build locally from the latest source code.
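Concretely, the two options look roughly like this; the `llama stack build` invocation is a sketch, so verify the exact flags against the distribution docs:

```bash
# Option 1: pull the prebuilt image from Docker Hub and run it
cd distributions/meta-reference-gpu && docker compose up

# Option 2: build a fresh image from the latest source checkout
# (flags below are an assumption -- check the docs for your version)
llama stack build --template meta-reference-gpu --image-type docker
```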
Follow-up to get it working:
1. Installed llama-stack from source on the `cherrypick-working` branch. This also changes build.yaml to `safety: meta-reference` instead of `inline::llamaguard` (see the sketch after this list).
2. Set `safety: []` in build.yaml.
3. `cd distributions/meta-reference-gpu && docker compose up`
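For reference, a sketch of the build.yaml edit from steps 1 and 2; the surrounding keys are assumptions about the template format, not copied from the branch:

```yaml
# build.yaml excerpt (sketch -- surrounding keys are assumptions)
distribution_spec:
  providers:
    # old name, rejected by the released image:
    # safety: inline::llama-guard
    safety: meta-reference
    # or disable the safety provider entirely, as in step 2:
    # safety: []
```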
@subramen: oh boy, you have identified another source of backward incompatibility. We updated the names of these providers -- I am going to revert that and add a "deprecation warning" that shows up accordingly.
@raghotham we are going to be slowing down considerably now unless we rapidly stabilize and update our images between releases.
The goal of the deprecation messages was to move forward without breaking backward compatibility. Let's talk more about how we can move fast, responsibly! :)
@subramen: we added support for "deprecating" things so we can change formats sensibly by emitting warnings and errors that are more graceful. But our current docker images don't have any of the deprecation code either. So this week we will accept bad breakage, rush through a bunch of breaking changes, and try to get to a reasonably stable, well-tested state by Friday.
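In the meantime, pinning the compose file to an explicit image tag would avoid silently picking up these breaking changes from the implicit `:latest`; the tag below is hypothetical:

```yaml
services:
  llamastack:
    # pin an explicit tag instead of the implicit :latest
    # (tag is hypothetical -- check Docker Hub for published tags)
    image: llamastack/distribution-meta-reference-gpu:0.0.53
```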
System Info
amd64, 1 GPU
🐛 Describe the bug
```
ValueError: Provider inline::llama-guard is not available for API Api.safety
```
Commented out the failing safety provider in run.yaml and build.yaml in an attempt to fix the above:

- build.yaml
- run.yaml
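For illustration, a minimal sketch of what the commented-out run.yaml section might look like; the field names are assumptions based on the error message, not copied from the files above:

```yaml
# run.yaml excerpt (sketch -- field names are assumptions)
providers:
  safety: []
  # previously:
  # safety:
  #   - provider_id: llama-guard
  #     provider_type: inline::llama-guard
  #     config: {}
```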
See the error logs below for the subsequent error.
Error logs
Expected behavior
The distribution starts successfully with `docker compose up`, without the provider error.