run-llama / llama_deploy

Deploy your agentic workflows to production
https://docs.llamaindex.ai/en/stable/module_guides/llama_deploy/
MIT License
1.86k stars 193 forks

fix: make SimpleMessageQueue server cancellable #394

Closed masci closed 1 day ago

masci commented 2 days ago

Several services don't shutdown properly, and while this is not a problem when you tear down the whole process, it's becoming an issue while we start moving towards a more distributed architecture, where services are expected to come and go (think of orchestrators).

This PR makes SimpleMessageQueue.launch_server() cancellable, more to come.

coveralls commented 2 days ago

Coverage Status

coverage: 72.924% (-0.06%) from 72.985% when pulling 8dc684a760f657c2f776ed2d8b0059bc26ff1317 on massi/simplequeue-shutdown into d396260d0167dd94dee165059af9c27d0f710ec7 on main.

masci commented 2 days ago

> Ah, so basically anything with a lifespan will need this change?

It would work either way, but unless the lifespan is also needed for other purposes, I'd say keeping the cancellation logic close to the "server launch" logic is easier to maintain.