run-llama / llama_deploy

Deploy your agentic workflows to production
https://docs.llamaindex.ai/en/stable/module_guides/workflow/deployment/
MIT License

Nested workflows execution as tool call with distributed services not routed to distributed service instance. #262

Open danielbrown-se opened 1 week ago

danielbrown-se commented 1 week ago

So I have a 'primary' workflow service that references a number of other agent-based workflows, which are also deployed as independent services using the llama-deploy system.

In the old llama-agents approach I could wire the 'workflow agent' up as a ToolService with metadata, and tool calls would go via the control plane and use the distributed service instance to run the tool call.

I don't seem to be seeing this behaviour using the new llama-deploy and workflows approach (though it's definitely possible I'm missing something obvious).

```python
agent.add_workflows(fact_finder_agent=fact_agent)
```

This does add the workflow, and I can manually call the run method from a step (or even a tool call); however, in my use it is called as a tool in a standard ReAct workflow process. This approach seems to call the sub-workflow in the same service process as the primary workflow. I expected it to work in a similar fashion to the old ServiceTool approach, where a new task would be created for the tool call and executed by the 'tool service' (which, for all intents and purposes in this example, is another deployed agent being used as a tool).

Could you confirm whether nested workflows are expected to run in-process, or should these generate new tasks in the same way the old 'TOOL_CALL' messages worked with ToolServices?

Apologies if I have misunderstood or am presuming features, and thanks for any help you may be able to offer.

Danny

logan-markewich commented 1 week ago

Hey Danny,

Yea, the entire UX has changed quite a bit from llama-agents to llama-deploy. I should probably just delete the extra services (I thought I'd find a use for them, but workflows are so general that I don't really see the value currently).

If you want to replicate the experience of the tool service, I would write one workflow for a ReAct agent and another workflow for calling tools.

Luckily for us, we have an example of a ReAct agent: https://docs.llamaindex.ai/en/stable/examples/workflow/react_agent/

To replicate the tool service, I would write a workflow like this:

```python
import json

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class ToolWorkflow(Workflow):
    # map tool names to async-callable tools (e.g. FunctionTool objects)
    tools = {"tool1": tool1_fn, ...}

    @step
    async def call_tool(self, ev: StartEvent) -> StopEvent:
        tool_name = ev.get("tool_name")
        tool_kwargs = json.loads(ev.get("tool_kwargs"))

        try:
            tool_to_call = self.tools[tool_name]
            result = await tool_to_call.acall(**tool_kwargs)
        except Exception as e:
            result = f"Error while calling tool {tool_name}: {e}"

        # since we intend this to run over a network, we need to serialize
        # the result -- could also use pickle
        return StopEvent(result=json.dumps({"result": str(result)}))
```
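The core contract here (dispatch by name, catch errors into a string, serialize the result) can be sketched with plain stdlib Python; `lookup_weather` and the payload fields below are made up purely for illustration, not part of any real API:

```python
import json


# hypothetical tool function standing in for a real registered tool
def lookup_weather(city: str) -> str:
    return f"Sunny in {city}"


TOOLS = {"lookup_weather": lookup_weather}


def call_tool(tool_name: str, tool_kwargs_json: str) -> str:
    """Dispatch a tool call and return a JSON-serialized result.

    Unknown tool names and bad arguments become an error string in the
    payload rather than a raised exception, so the caller always gets
    a well-formed response.
    """
    tool_kwargs = json.loads(tool_kwargs_json)
    try:
        result = TOOLS[tool_name](**tool_kwargs)
    except Exception as e:
        result = f"Error while calling tool {tool_name}: {e}"
    return json.dumps({"result": str(result)})


# success path
print(json.loads(call_tool("lookup_weather", '{"city": "Oslo"}'))["result"])
# error path: an unknown tool name is caught and reported, not raised
print(json.loads(call_tool("no_such_tool", "{}"))["result"])
```

The point of the JSON envelope is that the caller only ever handles strings, which is exactly what you want once the call crosses a process or network boundary.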

Then, in the agent example, we can modify the tool-calling step to use this workflow:

```python
    @step
    async def handle_tool_calls(
        self, ctx: Context, ev: ToolCallEvent, tool_workflow: ToolWorkflow
    ) -> PrepEvent:
        # call tools -- safely!
        for tool_call in ev.tool_calls:
            tool_result = await tool_workflow.run(
                tool_name=tool_call.tool_name,
                tool_kwargs=json.dumps(tool_call.tool_kwargs),
            )
            # the tool workflow returns a JSON string, so decode it
            tool_output = json.loads(tool_result)["result"]

            current_reasoning = await ctx.get("current_reasoning", default=[])
            current_reasoning.append(
                ObservationReasoningStep(observation=tool_output)
            )
            await ctx.set("current_reasoning", current_reasoning)

        # prep the next iteration
        return PrepEvent()
```

Now, if both the tool workflow and the agent are deployed, calls to the tool workflow will automatically be routed to the remote tool workflow service.
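As a rough sketch of what deploying the tool workflow as its own service might look like (assuming the `deploy_workflow` helper and config classes from llama_deploy at the time of writing; the host, port, and service name are placeholders, and a control plane is assumed to already be running):

```python
import asyncio

from llama_deploy import (
    ControlPlaneConfig,
    WorkflowServiceConfig,
    deploy_workflow,
)


async def main() -> None:
    # register ToolWorkflow with the (already running) control plane;
    # the agent service would be deployed the same way under its own name
    await deploy_workflow(
        ToolWorkflow(timeout=60),
        WorkflowServiceConfig(
            host="127.0.0.1",
            port=8002,
            service_name="tool_workflow",
        ),
        ControlPlaneConfig(),
    )


if __name__ == "__main__":
    asyncio.run(main())
```

This is a deployment fragment, not a runnable unit: it needs a live control plane and message queue, so treat the exact config fields as assumptions to check against the llama_deploy docs for your version.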