askap-vast / vast-pipeline

This repository holds the code of the Radio Transient detection pipeline for the VAST project.
https://vast-survey.org/vast-pipeline/

Pipeline run status stuck on running with qcluster timeout or ctrl-c #594

Open ajstewart opened 2 years ago

ajstewart commented 2 years ago

If a run is killed or hits the qcluster timeout, the status of the run is left as 'running'. It really should be switched to 'error'.

It's possible to catch such events using signal handling, for example:

import logging
import signal
import sys
from functools import partial


def termination_signal_handler(sig, frame, pipeline, p_run) -> None:
    # Set the pipeline run to error and shut down.
    # logger is globally set.
    logger.warning('Pipeline terminated, shutting down...')
    pipeline.set_status(p_run, 'ERR')
    logger.debug("Pipeline set to 'Error' status.")

    # Now shut down the logging machinery.
    logging.shutdown()

    sys.exit()

def run_pipe(...):
    ...
    # register the terminate handler
    sigterm_handler = partial(
        termination_signal_handler,
        pipeline=pipeline,
        p_run=p_run
    )
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)

However, when I tried this approach, if the signal arrives while the Dask multiprocessing is taking place it really doesn't like it and still crashes out. I could not find a way to either gracefully wait for the child processes to finish or kill them early (see the sketch below). Outside of Dask it works ok, so it might be along the right track.
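Part of the problem is likely that forked worker processes inherit the parent's signal handlers, and a ctrl-c is delivered to the whole foreground process group, so every child reacts as well as the parent. With a plain multiprocessing pool the usual fix is an initializer that makes the children ignore SIGINT, leaving only the parent to run the handler; whether the same trick can be wired into Dask's multiprocessing scheduler is untested. A minimal standalone sketch (not pipeline code):

import signal
from multiprocessing import Pool


def ignore_sigint() -> None:
    # Runs once in each worker: children inherit the parent's handlers,
    # so reset SIGINT to 'ignore' and let only the parent react to ctrl-c.
    signal.signal(signal.SIGINT, signal.SIG_IGN)


def work(x: int) -> int:
    return x * x


if __name__ == '__main__':
    with Pool(processes=4, initializer=ignore_sigint) as pool:
        try:
            print(pool.map(work, range(10)))
        except KeyboardInterrupt:
            # Only the parent sees ctrl-c; stop the workers early.
            pool.terminate()
            pool.join()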

How feasible this is in the pipeline I'm not sure, but it's also possible to just accept this behaviour and require admins to sort the run out.
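If we go the 'admins sort it out' route, a small management command could at least make that painless. A rough sketch, with the caveat that the import path, field names and status codes ('RUN', 'ERR') are assumptions based on the snippet above:

from django.core.management.base import BaseCommand

from vast_pipeline.models import Run  # assumed import path


class Command(BaseCommand):
    help = "Mark pipeline runs stuck in the 'running' state as errored."

    def add_arguments(self, parser):
        parser.add_argument('run_names', nargs='+', type=str)

    def handle(self, *args, **options):
        for name in options['run_names']:
            # Flip only runs that are actually stuck in 'running'.
            updated = Run.objects.filter(
                name=name, status='RUN'
            ).update(status='ERR')
            msg = "Set run '%s' to error." if updated else "No stuck run named '%s'."
            self.stdout.write(msg % name)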

Side note: Django-q is also a bit of a pain because, as it stands right now, a run will always be retried at least once if it times out. So you could argue that leaving the status as 'running' is beneficial in this case, as the retry will just exit on the second attempt.
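For reference, the retry behaviour is driven by the Q_CLUSTER settings. Something like the following should bound it, although the values here are purely illustrative and 'max_attempts' is only available in newer django-q releases, so check the installed version before relying on it:

# settings.py (illustrative values only)
Q_CLUSTER = {
    'name': 'vast_pipeline',  # assumed cluster name
    'workers': 4,
    'timeout': 86400,         # seconds before a task is killed
    'retry': 86460,           # must be larger than timeout
    'max_attempts': 1,        # newer django-q only: give up after one attempt
}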