At the moment we only check for schema errors in our spoolers. I would like to restructure the code so that other errors (especially runtime errors) can be handled as well. For this we can largely keep the structure of the code we already have, but we should split the tasks into more functions, so that when the spooler calls the add_job function it only performs calculations. That call can then be placed in a try block where we catch runtime errors. This way we can simply dump the runtime traceback into the status JSON, and the remote client will know what went wrong during the calculations by checking job_status.
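A minimal sketch of what I mean (the wrapper name run_job, the job_dict argument, and the status-JSON layout are my assumptions here, not existing code; add_job below is just a toy stand-in for a spooler's real calculation):

```python
import json
import traceback


def add_job(job_dict):
    # Toy calculation standing in for the spooler's real work;
    # the point is that it does ONLY calculations, no I/O.
    return {"result": job_dict["x"] * 2}


def run_job(job_dict):
    """Hypothetical spooler-side wrapper: calls add_job inside a try
    block and records any runtime error in the status JSON."""
    status = {"status": "running"}
    try:
        result = add_job(job_dict)
        status["status"] = "done"
    except Exception:
        # Dump the runtime traceback into the status JSON so the
        # remote client can see what went wrong via job_status.
        status["status"] = "error"
        status["error"] = traceback.format_exc()
        result = None
    return result, json.dumps(status)
```

With this split, a bad job (say, a missing key in job_dict) no longer crashes the spooler; the client just sees "status": "error" plus the traceback text.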
Then, after maintainer.py receives the results dictionary from the spooler, the upload to Dropbox should be done by maintainer.py as well. This also centralizes the code: this part is nearly identical across all spoolers, so there is no reason to have it multiple times. In addition, the spoolers then have nothing to do with Dropbox and can also be used standalone without installing the Dropbox package.
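One way this could look in maintainer.py (the class and function names here are assumptions for illustration; only the files_upload call is the real Dropbox SDK API). The dropbox import is done lazily inside the uploader, so spoolers never touch the package:

```python
import json


class DropboxUploader:
    """Hypothetical maintainer.py helper: the only place that talks
    to the Dropbox SDK, so spoolers stay Dropbox-free."""

    def __init__(self, token):
        import dropbox  # lazy import: only maintainer.py pays this dependency
        self._dbx = dropbox.Dropbox(token)

    def upload(self, results, path):
        # Serialize the results dict and push it to Dropbox.
        self._dbx.files_upload(json.dumps(results).encode(), path)


def publish_results(results, path, uploader):
    """maintainer.py receives the results dict from the spooler and
    hands it to the single shared uploader."""
    uploader.upload(results, path)
```

Because the uploader is passed in, tests (and spoolers running standalone) can supply a fake object with the same upload method instead of a real Dropbox client.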