Closed by JamesKunstle 7 months ago
Will fix #440, removing that issue.
I think that the following would be a good solution:
`index_callbacks.py/run_queries` should return a data structure mapping each query function to the job it spawned, e.g.:

```python
{f.__name__: job_id for f, job_id in zip(funcs, jobs)}
```
This mapping should then be an input to all of the background callbacks, so that each background callback can wait on the status of the job its data is coming from.
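A minimal sketch of what `run_queries` could return, assuming hypothetical query functions and an `enqueue_query` stand-in for the real task-queue call (these names are illustrative, not the project's actual API):

```python
def query_commits(repo_ids):
    """Placeholder for a real query function."""

def query_issues(repo_ids):
    """Placeholder for a real query function."""

def enqueue_query(func, repo_ids):
    """Stand-in for enqueueing `func` on the task queue; returns a job id."""
    return f"job-{func.__name__}"

def run_queries(funcs, repo_ids):
    # Enqueue every query, then map each function's name to its job id
    # so background callbacks can look up the job they depend on.
    jobs = [enqueue_query(f, repo_ids) for f in funcs]
    return {f.__name__: job_id for f, job_id in zip(funcs, jobs)}

mapping = run_queries([query_commits, query_issues], repo_ids=[1, 2])
# {"query_commits": "job-query_commits", "query_issues": "job-query_issues"}
```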
The job can be in one of four states: `queued`, `started`, `succeeded`, or `failed`.
If the job doesn't exist in the queue, we can assume that it completed (or failed) and was forgotten in `wait_queries`. In that case, we can check that data exists in the cache for the repos that the background callback needs, and proceed if so. Otherwise, we know there was an asynchronous-programming glitch or an uncaught failure, and the background callback can fail gracefully.
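The waiting logic above could look roughly like this. `get_job_status` and `cache_has_data` are hypothetical helpers standing in for the real queue and cache lookups, and the timeout is an assumption added so the loop cannot block forever:

```python
import time

def wait_on_job(job_id, get_job_status, cache_has_data, repos,
                poll_interval=0.01, timeout=1.0):
    """Poll a job until it resolves; fall back to a cache check if the
    queue has already forgotten it. Returns True if data is usable."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_job_status(job_id)
        if status in ("queued", "started"):
            time.sleep(poll_interval)
            continue
        if status == "succeeded":
            return True
        if status == "failed":
            return False
        # status is None: the job finished and was forgotten. If the
        # data is in cache, the query succeeded; otherwise there was an
        # async glitch or uncaught failure, so fail gracefully.
        return cache_has_data(repos)
    return False  # timed out rather than waiting forever
```

For example, a forgotten job whose data is in cache is treated as a success: `wait_on_job("j1", lambda j: None, lambda r: True, ["repo"])` returns `True`.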
Will open a thread on the Plotly forum to figure out how to get Celery to retry background tasks.
This will wait forever if the waited-on query fails, so we need a way to cancel the callback on failure.
One idea: in `wait_queries`, we can pass values to a `dcc.Store` that acts as the signal-passing interface for the queries that fail. All of the background callbacks that rely on that query would register changes to that `dcc.Store` as cancellation triggers.
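A plain-Python sketch of the signal-passing idea, with a dict standing in for the `dcc.Store` (in the real app the store would be a Dash component and the check would be wired up as a cancellation trigger; `record_failure` and `should_cancel` are hypothetical names):

```python
# Failed queries write their names into a shared "store"; any background
# callback that depends on one of those queries treats a matching entry
# as its cancellation signal.
failure_store = {"failed_queries": []}

def record_failure(store, query_name):
    # wait_queries would call this when a job fails.
    store["failed_queries"].append(query_name)

def should_cancel(store, query_name):
    # A background callback depending on `query_name` would register
    # this check as its cancellation condition.
    return query_name in store["failed_queries"]
```

For example, once `record_failure(failure_store, "query_commits")` runs, `should_cancel(failure_store, "query_commits")` flips from `False` to `True`, which is exactly the state change the dependent callbacks would watch for.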