Travis-Barton opened 4 days ago
Change your docker run command to pass --yaml_config /root/my-run.yaml, since the server reads the file from inside the Docker container. E.g.:
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v C:/Users/sivar/PycharmProjects/llama_stack_learner/run.yaml:/root/my-run.yaml --gpus=all llamastack/llamastack-local-gpu --yaml_config /root/my-run.yaml
See the guide here: https://github.com/meta-llama/llama-stack/tree/main/distributions/meta-reference-gpu
I get this error:
sivar@Odysseus MINGW64 ~/PycharmProjects/llama_stack_learner
$ docker run -it -p 5000:5000 -v ~/.llama:/root/.llama -v C:/Users/sivar/PycharmProjects/llama_stack_learner/run.yaml:/root/my-run.yaml --gpus=all llamastack/llamastack-local-gpu --yaml_config /root/my-run.yaml
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 343, in <module>
    fire.Fire(main)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 274, in main
    with open(yaml_config, "r") as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Program Files/Git/root/my-run.yaml'
I installed llama-stack with pip, so maybe it's missing some local file? I just have my .yaml file sitting in my dummy repo.
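For what it's worth, the mangled path in the error looks like Git Bash (MSYS) path conversion rather than anything llama-stack itself does: arguments that look like POSIX paths, such as /root/my-run.yaml, get rewritten into Windows paths under the Git install prefix before docker ever sees them. A minimal sketch of that rewrite (the 'C:/Program Files/Git' prefix is the assumed default Git for Windows install root):

```shell
# Git Bash (MSYS) rewrites POSIX-looking arguments by prepending the
# Git for Windows install root -- simulate that rewrite here.
prefix='C:/Program Files/Git'   # assumed default Git for Windows install root
arg='/root/my-run.yaml'         # the --yaml_config value passed to docker
mangled="${prefix}${arg}"
echo "$mangled"                 # -> C:/Program Files/Git/root/my-run.yaml
```

If that is what's happening, running the same docker command with MSYS_NO_PATHCONV=1 set, or writing the container path with a doubled leading slash (//root/my-run.yaml), should stop Git Bash from rewriting it — though I haven't verified this on this exact setup.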
System Info
Using Windows 11
🐛 Describe the bug
When running:
I'm able to get the container going with a 3.1 model, but I want it to load 3.2 Vision (attempts to call that model fail), e.g.:
I've tried pointing the docker container at my local YAML file with the right model:
but when I try with this:
I get this:
Is there a better way to specify the right model_id?
(P.S. I do have the model downloaded.)
Error logs
See above.
Expected behavior
See above.