celery / ceps

Celery Enhancement Proposals

Support for persisted callbacks / chaining #3

Open jmdacruz opened 6 years ago

jmdacruz commented 6 years ago

I've recently been looking at using Celery for chained tasks or tasks with callbacks, where one or more of the tasks in the chain interacts with an external service that supports asynchronous callbacks for long-running work (e.g., a REST API to which you provide a URL it will call back when results are ready). This is even more critical in workloads where Celery tasks spend most of their time interacting with these services, because having tasks wait for responses hurts scalability: the longer the external service takes to respond, the more Celery tasks accumulate doing nothing but waiting, so memory and CPU become the choke point.

We tackled this problem with Celery as-is. The approach we took was to break the chain of tasks into two pieces, a "head" and a "tail", where the "head" is the task performing the call to the external service, and the "tail" is the entry point for the callback. The "head" does two things: 1) it serializes the "tail" (basically, taking the signature of the Celery task acting as the callback and pickling it, parameters included), stores this serialized version in a datastore (e.g., Redis), and generates a URL that represents a callback to this "sleeping" task; 2) it calls the external service, providing the callback URL. A sketch of this "head" side follows below.
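A minimal sketch of that "head" side, under a few assumptions not stated in the thread: a plain `redis` client for storage, a `sleeping-task:{token}` key layout, and an `https://example.com/callbacks/{token}` URL scheme are all hypothetical choices for illustration, not part of Celery or the original design.

```python
import pickle
import uuid

import redis
from celery import Celery

app = Celery("example", broker="redis://localhost:6379/0")
store = redis.Redis()


@app.task
def tail(result, original_arg):
    # The "tail" task: entry point for the external callback.
    # `result` is whatever the external service posts back.
    ...


@app.task
def head(original_arg):
    # Build the tail's signature with its bound parameters and pickle it.
    sig = tail.s(original_arg=original_arg)
    token = uuid.uuid4().hex
    store.set(f"sleeping-task:{token}", pickle.dumps(sig))

    # Hand this URL to the external service; the actual HTTP call is omitted.
    callback_url = f"https://example.com/callbacks/{token}"
    return callback_url
```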

When the external service finishes its work, it calls this callback URL, and the code behind it retrieves the task from cold storage and invokes it with whatever the service sent. At this stage some things are common to any external service (retrieving the task from "cold storage", rehydrating it, and calling it), and some are specific to each service (e.g., manipulating or transforming the callback results).
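The callback side could look roughly like the sketch below. Flask is an assumed web framework here, and the key layout matches the hypothetical one above; this only illustrates the common "rehydrate and invoke" step, not any service-specific result handling.

```python
import pickle

import redis
from flask import Flask, request

web = Flask(__name__)
store = redis.Redis()


@web.route("/callbacks/<token>", methods=["POST"])
def resume(token):
    raw = store.get(f"sleeping-task:{token}")
    if raw is None:
        return "unknown callback", 404

    # Rehydrate the pickled "tail" signature and send it to the broker,
    # prepending the service's payload as the first positional argument.
    sig = pickle.loads(raw)
    sig.delay(request.get_json())

    store.delete(f"sleeping-task:{token}")
    return "", 204
```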

This solution allows us to have a very large number of tasks waiting for external services (millions), only limited by database capacity and not by memory or CPU usage. The question then is: would it make sense to make any of this part of the standard Celery workflow? I believe that this is different from asyncio support (#1), since the requirement here is that these tasks should be dropped from memory completely in order for the solution to scale, but I'm not familiar with asyncio and what's possible.

auvipy commented 6 years ago

Yes, #1 has a different goal. What you describe seems like a good default to include in core. Would you mind starting by sending a PR?

jheld commented 4 years ago

Has there been any consideration of building this support into the result backend code, or a similar concept?