meta-llama / llama-stack-client-python

Python SDK for Llama Stack
Apache License 2.0

"llama-stack-client" Command Not Working #8

Closed dawenxi-007 closed 1 month ago

dawenxi-007 commented 1 month ago

I installed llama-stack-client into a venv with pip install -r requirements.txt, using https://github.com/meta-llama/llama-stack-apps/blob/main/requirements.txt.

However, llama-stack-client -h gave me the following error:

(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ llama-stack-client -h
Traceback (most recent call last):
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/bin/llama-stack-client", line 8, in <module>
    sys.exit(main())
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 45, in main
    parser = LlamaStackClientCLIParser()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 31, in __init__
    ModelsParser.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, in create
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/models.py", line 27, in __init__
    ModelsList.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, in create
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 25, in __init__
    self._add_arguments()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 29, in _add_arguments
    self.endpoint = get_config().get("endpoint")
AttributeError: 'NoneType' object has no attribute 'get'
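
For context, the traceback is a plain None-dereference: get_config() evidently returns None when no client config has been written yet, and the CLI calls .get("endpoint") on the result unconditionally. A two-line reproduction of the same error:

config = None            # what get_config() appears to return when no config file exists
config.get("endpoint")   # AttributeError: 'NoneType' object has no attribute 'get'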

pip list | grep llama shows the following versions:

llama_models       0.0.45
llama_stack        0.0.45
llama_stack_client 0.0.41

However, the Python environment reports a different version number:

(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ python
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llama_stack_client
>>> print(llama_stack_client.__version__)
0.0.1-alpha.0
>>>
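
(Side note: these two numbers can disagree because pip reports the version recorded in the installed distribution's metadata, while __version__ is a string shipped inside the package source; a stale source string produces exactly this mismatch. A quick standard-library check of both:)

import importlib.metadata
import llama_stack_client

# Version recorded in the installed distribution's metadata (what pip sees):
print(importlib.metadata.version("llama-stack-client"))  # 0.0.41
# Version string hard-coded in the package source, which can lag behind:
print(llama_stack_client.__version__)                    # 0.0.1-alpha.0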
yanxi0830 commented 1 month ago

You need llama_stack_client 0.0.41. Please re-run pip uninstall llama-stack-client followed by pip install llama-stack-client.

dawenxi-007 commented 1 month ago

Reinstalling the package does not work, as shown below:

(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ pip uninstall llama-stack-client
Found existing installation: llama_stack_client 0.0.41
Uninstalling llama_stack_client-0.0.41:
  Would remove:
    /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/bin/llama-stack-client
    /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client-0.0.41.dist-info/*
    /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/*
Proceed (Y/n)? Y
  Successfully uninstalled llama_stack_client-0.0.41
(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ pip install llama-stack-client
Collecting llama-stack-client
  Using cached llama_stack_client-0.0.41-py3-none-any.whl (241 kB)
Requirement already satisfied: httpx<1,>=0.23.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (0.27.2)
Requirement already satisfied: distro<2,>=1.7.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (1.9.0)
Requirement already satisfied: pydantic<3,>=1.9.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (2.9.2)
Requirement already satisfied: typing-extensions<5,>=4.7 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (4.12.2)
Requirement already satisfied: tabulate>=0.9.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (0.9.0)
Requirement already satisfied: anyio<5,>=3.5.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (4.6.2.post1)
Requirement already satisfied: sniffio in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (1.3.1)
Requirement already satisfied: exceptiongroup>=1.0.2 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from anyio<5,>=3.5.0->llama-stack-client) (1.2.2)
Requirement already satisfied: idna>=2.8 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from anyio<5,>=3.5.0->llama-stack-client) (3.10)
Requirement already satisfied: httpcore==1.* in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from httpx<1,>=0.23.0->llama-stack-client) (1.0.6)
Requirement already satisfied: certifi in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from httpx<1,>=0.23.0->llama-stack-client) (2024.8.30)
Requirement already satisfied: h11<0.15,>=0.13 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->llama-stack-client) (0.14.0)
Requirement already satisfied: annotated-types>=0.6.0 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from pydantic<3,>=1.9.0->llama-stack-client) (0.7.0)
Requirement already satisfied: pydantic-core==2.23.4 in /home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages (from pydantic<3,>=1.9.0->llama-stack-client) (2.23.4)
Installing collected packages: llama-stack-client
Successfully installed llama-stack-client-0.0.41
(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ llama-stack-client -h
Traceback (most recent call last):
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/bin/llama-stack-client", line 8, in <module>
    sys.exit(main())
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 45, in main
    parser = LlamaStackClientCLIParser()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 31, in __init__
    ModelsParser.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, in create
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/models.py", line 27, in __init__
    ModelsList.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, in create
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 25, in __init__
    self._add_arguments()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 29, in _add_arguments
    self.endpoint = get_config().get("endpoint")
AttributeError: 'NoneType' object has no attribute 'get'
(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow/llama-stack-apps$ pip list | grep llama
llama_models       0.0.45
llama_stack        0.0.45
llama_stack_client 0.0.41
yanxi0830 commented 1 month ago

Are you able to run the following?

$ llama-stack-client configure
> Enter the host name of the Llama Stack distribution server: localhost
> Enter the port number of the Llama Stack distribution server: 5000
Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000

See https://github.com/meta-llama/llama-stack-client-python/blob/main/docs/cli_reference.md
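
For reference, the endpoint that configure writes is the same one the Python SDK can target directly. A minimal sketch, assuming a Llama Stack server is actually listening on that host and port:

from llama_stack_client import LlamaStackClient

# Talk to the endpoint recorded by `llama-stack-client configure` above.
client = LlamaStackClient(base_url="http://localhost:5000")

# Rough equivalent of `llama-stack-client models list`:
for model in client.models.list():
    print(model)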

dawenxi-007 commented 1 month ago

Not actually. The llama-stack-client command still fails with the error at self.endpoint = get_config().get("endpoint"), which is defined in the cli/models/list.py file.

(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow$ pip install llama-stack-client
Requirement already satisfied: llama-stack-client in ./llamastk_localgpu_env/lib/python3.10/site-packages (0.0.41)
Requirement already satisfied: sniffio in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (1.3.1)
Requirement already satisfied: typing-extensions<5,>=4.7 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (4.12.2)
Requirement already satisfied: pydantic<3,>=1.9.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (2.9.2)
Requirement already satisfied: httpx<1,>=0.23.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (0.27.2)
Requirement already satisfied: tabulate>=0.9.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (0.9.0)
Requirement already satisfied: distro<2,>=1.7.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (1.9.0)
Requirement already satisfied: anyio<5,>=3.5.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from llama-stack-client) (4.6.2.post1)
Requirement already satisfied: exceptiongroup>=1.0.2 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from anyio<5,>=3.5.0->llama-stack-client) (1.2.2)
Requirement already satisfied: idna>=2.8 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from anyio<5,>=3.5.0->llama-stack-client) (3.10)
Requirement already satisfied: certifi in ./llamastk_localgpu_env/lib/python3.10/site-packages (from httpx<1,>=0.23.0->llama-stack-client) (2024.8.30)
Requirement already satisfied: httpcore==1.* in ./llamastk_localgpu_env/lib/python3.10/site-packages (from httpx<1,>=0.23.0->llama-stack-client) (1.0.6)
Requirement already satisfied: h11<0.15,>=0.13 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->llama-stack-client) (0.14.0)
Requirement already satisfied: annotated-types>=0.6.0 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from pydantic<3,>=1.9.0->llama-stack-client) (0.7.0)
Requirement already satisfied: pydantic-core==2.23.4 in ./llamastk_localgpu_env/lib/python3.10/site-packages (from pydantic<3,>=1.9.0->llama-stack-client) (2.23.4)
(llamastk_localgpu_env) tao@r7625h100:~/demo_1024/llamastk_metaflow$ llama-stack-client configure
Traceback (most recent call last):
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/bin/llama-stack-client", line 8, in <module>
    sys.exit(main())
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 45,
in main
    parser = LlamaStackClientCLIParser()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/llama_stack_client.py", line 31,
in __init__
    ModelsParser.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, [5/1902]
e
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/models.py", line 27, in __
init__
    ModelsList.create(subparsers)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/subcommand.py", line 16, in creat
e
    return cls(*args, **kwargs)
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 25, in __in
it__
    self._add_arguments()
  File "/home/tao/demo_1024/llamastk_metaflow/llamastk_localgpu_env/lib/python3.10/site-packages/llama_stack_client/lib/cli/models/list.py", line 29, in _add
_arguments
    self.endpoint = get_config().get("endpoint")
yanxi0830 commented 1 month ago

@dawenxi-007 Thanks! This has been fixed; could you try again with the latest llama-stack-client package, llama-stack-client 0.0.47?

pip uninstall llama-stack-client
pip install llama-stack-client
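
The fix presumably comes down to guarding the missing-config case instead of assuming a config file exists. A sketch of that pattern, with a stand-in for the get_config helper named in the traceback (the real helper's location and config path are not shown in this thread, so both are assumptions; this is illustrative, not the actual patch):

import json
import os
import sys

def get_config():
    # Stand-in for the CLI helper from the traceback: returns None
    # when no config has been written. The path here is hypothetical.
    path = os.path.expanduser("~/.llama/config.json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

config = get_config()
if config is None:
    # Fail with guidance instead of an AttributeError:
    print("No client config found; run `llama-stack-client configure` first.")
    sys.exit(1)

endpoint = config.get("endpoint")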
dawenxi-007 commented 1 month ago

@yanxi0830, yes, tested. The new version fixed the issue. Thanks!