run-llama / llama_deploy

Deploy your agentic workflows to production
https://docs.llamaindex.ai/en/stable/module_guides/llama_deploy/

Add endpoint to API Server `deployments` resource to get event stream using client's `get_task_result_stream` method. #296

Closed · nerdai closed this 1 month ago

nerdai commented 1 month ago

(The control plane already consumes stream messages from the Workflow services and emits them as SSE using FastAPI's StreamingResponse.)
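For reference, a minimal sketch of that relay pattern: an async generator wrapped in FastAPI's StreamingResponse, framed as SSE. The route path, the stub client, and the arguments to `get_task_result_stream` are illustrative assumptions, not llama_deploy's actual internals.

import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


class _StubClient:
    # Hypothetical stand-in for the real client; yields fake events so the
    # sketch is runnable on its own.
    async def get_task_result_stream(self, deployment_name: str, task_id: str):
        yield {"event": "progress", "deployment": deployment_name, "task": task_id}
        yield {"event": "done"}


client = _StubClient()


@app.get("/deployments/{deployment_name}/tasks/{task_id}/events")
async def stream_task_events(deployment_name: str, task_id: str) -> StreamingResponse:
    async def event_source():
        # Assumed to be an async generator of JSON-serializable events.
        async for event in client.get_task_result_stream(deployment_name, task_id):
            # Each SSE frame is "data: <payload>" followed by a blank line.
            yield f"data: {json.dumps(event)}\n\n"

    return StreamingResponse(event_source(), media_type="text/event-stream")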

Implement the equivalent of what we have in workflows:

handler = workflow.run()
async for event in handler.stream_events():
    # do something with event
    pass

for Llama Deploy, exposed through the API Server as something like GET http://apiserver.local/my-deployment/events (see the sketch below).
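For comparison, a hedged sketch of how a caller might consume such an endpoint, assuming it emits SSE frames as in the relay sketch above; httpx and the URL are illustrative choices, not the actual client API.

import asyncio

import httpx


async def consume_events() -> None:
    # Illustrative URL from the example above; not a real deployment.
    url = "http://apiserver.local/my-deployment/events"
    async with httpx.AsyncClient(timeout=None) as http:
        async with http.stream("GET", url) as response:
            async for line in response.aiter_lines():
                # SSE payload lines carry a "data: " prefix.
                if line.startswith("data: "):
                    print(line[len("data: "):])


asyncio.run(consume_events())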

Acceptance Criteria