llm-edge / hal-9100

Edge full-stack LLM platform. Written in Rust
MIT License

"action" tool (function calling on the server) #59

Closed louis030195 closed 9 months ago

louis030195 commented 10 months ago


[image attachment]

i.e. the same behavior as in the ChatGPT UI

https://platform.openai.com/docs/actions but for the Assistants API

CakeCrusher commented 10 months ago

@louis030195 Here is a demo of how I got actions to work with assistant. It consists of the following steps:

  1. Provide an OpenAPI spec
  2. Generate functions from the spec (in parallel)
  3. Generate HTTP request functions from the spec (in parallel)
  4. Whenever the function from the OpenAPI spec is called, send the args to the generated HTTP request function.
  5. Return the results as the tool outputs.
  6. Done.

As an action, this whole process should happen automatically in the assistant backend.

https://gist.github.com/CakeCrusher/aad0cdb695aea7eca55d31bc801a7f83
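The steps above can be sketched roughly as follows. This is a minimal illustration, not the gist's actual code: the spec, the `spec_to_tools` and `build_request` helpers, and the example endpoint are all hypothetical stand-ins for "generate functions from the spec" and "generate HTTP request functions from the spec".

```python
# Hypothetical sketch: derive tool definitions and an HTTP-request
# builder from an OpenAPI spec. Names here are illustrative only.
SPEC = {
    "openapi": "3.0.0",
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "parameters": [
                    {"name": "city", "in": "query",
                     "schema": {"type": "string"}}
                ],
            }
        }
    },
}

def spec_to_tools(spec):
    """Step 2: generate function (tool) schemas from the spec."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            props = {p["name"]: {"type": p["schema"]["type"]}
                     for p in op.get("parameters", [])}
            tools.append({
                "type": "function",
                "function": {
                    "name": op["operationId"],
                    "parameters": {"type": "object", "properties": props},
                },
            })
    return tools

def build_request(spec, name, args):
    """Steps 3-4: map a called function back to an HTTP request."""
    base = spec["servers"][0]["url"]
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            if op["operationId"] == name:
                return {"method": method.upper(),
                        "url": base + path,
                        "params": args}
    raise KeyError(name)

tools = spec_to_tools(SPEC)
req = build_request(SPEC, "getWeather", {"city": "Lisbon"})
```

In real use the request would actually be sent and its response returned as the tool output (step 5); here `build_request` only constructs the request so the mapping is visible.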

CakeCrusher commented 10 months ago

The expectation is to be able to pass a tool like so:

client.beta.assistants.create(
    name="Action assistant",
    tools=[
        # ...    
        {
            "type": "action",
            "openapi_spec": """
                openapi: 3.0.0
                ...
            """
        } # this tool will execute autonomously
    ]
)
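On the execution side, when a run pauses with `requires_action`, each tool call would be dispatched to its generated request function and the results submitted back as tool outputs. A minimal sketch of that dispatch loop, using a hand-built dict that mimics the shape of a run's required action (the `run` payload, `get_weather` handler, and `HANDLERS` table are all assumptions for illustration):

```python
import json

# Fake run payload mimicking the Assistants API's `requires_action`
# shape; in real use this would come from retrieving the run.
run = {
    "required_action": {
        "submit_tool_outputs": {
            "tool_calls": [
                {"id": "call_1",
                 "function": {"name": "getWeather",
                              "arguments": json.dumps({"city": "Lisbon"})}}
            ]
        }
    }
}

def get_weather(city):
    # Stand-in for the HTTP request function generated from the spec.
    return {"city": city, "temp_c": 21}

HANDLERS = {"getWeather": get_weather}

# Execute each tool call and collect outputs to submit back to the run.
tool_outputs = []
for call in run["required_action"]["submit_tool_outputs"]["tool_calls"]:
    fn = HANDLERS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    tool_outputs.append({"tool_call_id": call["id"],
                         "output": json.dumps(fn(**args))})
```

The point of the proposed `"action"` tool type is that this loop would live in the assistant backend rather than in user code.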

For simplicity, here are the features that I think can wait until after the initial release:

louis030195 commented 10 months ago

should be easy to implement after merging https://github.com/stellar-amenities/assistants/pull/62

@CakeCrusher

louis030195 commented 9 months ago

done in v0