mecampbellsoup opened 3 months ago
Do you mean the exceptions within the jobs, or the exceptions of the worker itself? I think exceptions of the worker itself should already be propagated. And if we were to propagate the job exceptions, it would mean the worker couldn't survive an error, which would be very disruptive for the other tasks running in parallel.
One way you could do it as of today (assuming you're using async tasks) is to define a wrapper for your tasks that catches their exceptions and sends them to an `asyncio.Queue`. Where you launch the worker coroutine, you can use `asyncio.wait` or `asyncio.as_completed` (or something with `queue.get`) and re-raise the exception. Note: it will be your responsibility to stop the worker asyncio task.
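A rough sketch of that wrapper approach, using only plain asyncio (no procrastinate specifics; `capture_errors`, `run_and_reraise`, and the demo task are made-up names for illustration):

```python
import asyncio

def capture_errors(errors: asyncio.Queue):
    """Decorator: report task exceptions to `errors` instead of losing them."""
    def decorator(task_func):
        async def wrapper(*args, **kwargs):
            try:
                return await task_func(*args, **kwargs)
            except Exception as exc:
                await errors.put(exc)
        return wrapper
    return decorator

async def run_and_reraise(worker_coro, errors: asyncio.Queue):
    """Run the worker until a wrapped task reports an exception, then re-raise it."""
    worker = asyncio.ensure_future(worker_coro)
    failure = asyncio.ensure_future(errors.get())
    done, pending = await asyncio.wait(
        {worker, failure}, return_when=asyncio.FIRST_COMPLETED
    )
    # Stopping the worker asyncio task is our responsibility.
    for task in pending:
        task.cancel()
    if failure in done:
        raise failure.result()

async def demo():
    errors: asyncio.Queue = asyncio.Queue()

    @capture_errors(errors)
    async def flaky_task():
        raise ValueError("boom")

    async def worker():  # stands in for the real worker coroutine
        await flaky_task()
        await asyncio.sleep(60)  # the worker itself survives and keeps serving

    await run_and_reraise(worker(), errors)  # raises ValueError("boom")
```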
Ok, that sounds reasonable. Our use case is that in our integration test suite we want tests to fail, as a general rule, if any task performed during that test run encountered an exception of any sort. Currently the failures are silent, you might say, instead.
Oh, it's for the tests?
The best approach, in my humblest opinion, would be to run the worker with an `InMemoryConnector`, run with `wait=False`, introspect the results of your jobs after the worker has run (`assert job.status == "done"`), and capture the logs for details. (If you run with pytest, logs are captured automatically; if the test fails, you'll get the details of what was logged, which will contain all the tracebacks.)
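As a sketch, that test could look roughly like this. Assumptions: a recent procrastinate version (where `App.open_async`, `defer_async`, and the `install_signal_handlers` argument exist), `my_task` is a placeholder task, and the job status string is version-dependent (`"done"` in older releases, `"succeeded"` in newer ones):

```python
from procrastinate import App, testing

# Hypothetical app wired to the in-memory test connector.
app = App(connector=testing.InMemoryConnector())

@app.task
async def my_task():
    ...

async def test_all_jobs_succeeded():
    async with app.open_async():
        await my_task.defer_async()
        # wait=False: the worker exits once the queue is empty.
        await app.run_worker_async(wait=False, install_signal_handlers=False)
    # InMemoryConnector keeps the job rows as plain dicts in .jobs.
    for job in app.connector.jobs.values():
        # "done" in older procrastinate versions, "succeeded" in newer ones.
        assert job["status"] in ("done", "succeeded"), job
```

With pytest-asyncio, `test_all_jobs_succeeded` can be collected directly as an async test.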
In certain environments we run our procrastinate worker in-process using `App.run_worker_async`. We would like to be able to "propagate" exceptions encountered during processing of this worker to the parent process, i.e. the thread/process (I'm not sure if it's multi-threaded or multi-process, frankly) that kicked off the worker via `App.run_worker_async`. Is something like this currently possible?