Since the main comment is busy:
I've tested this extensively on bystro-dev under a variety of workloads, doing my best to break it: using iptables to drop packets, restarting beanstalkd during submissions, using `conntrack -D -p tcp --dport 11300 --timeout N` to drop connections older than N seconds, etc. Everything works as expected. If workers ever find themselves concurrently processing the same annotation, which is only possible under severe network disruption, the losing worker fails the job. Under odd circumstances it should also be possible for the job to complete, but in either case we will never get corruption from concurrent writes (unless flock doesn't work on the filesystem). There is room for improvement here, but for now this is good enough for what should be an extreme corner case, and one that is practically impossible with a sufficiently long TTR and the `bystron_annotation.completed` file.

In the future we can relax this behavior after verifying that all of the expected files are present: we can write the list of completed files and their hashes into `bystron_annotation.completed`, have worker 2 verify that the files are present and have the expected contents, and then send back a completed job with a qualification (e.g., put "Duplicate submission, results verified and not modified" in the message log for the job's BystroJobs DB record).

This is live on bystro-dev with a very aggressive 40s TTR (to increase the chance of race conditions for stress testing). Please try to submit jobs and break things.
This change makes it possible to have multiple annotation workers competing for jobs, even in the presence of network instability, and removes race conditions that could result in double submission and in parallel workers writing to the same directory. It also makes the Python beanstalkd workers more robust to network communication issues, such as connection interruptions that application load balancers enforce on idle TCP connections.
- Enables annotation jobs to run indefinitely, while also allowing those jobs' TTR (time to run; see https://github.com/beanstalkd/beanstalkd/blob/master/doc/protocol.txt) to be as short as a few seconds. We now use a keep-alive mechanism on job submissions, which makes it possible to retry a job sooner if the worker that picked it up times out (see the worker sketch after this list).
- `bystro-annotate.pl` and the `Seq/` packages now consistently output debug messages to STDERR, and you can now pipe the output from `bystro-annotate.pl` for downstream processing; the output is a JSON message containing the result paths and result summary.
- Semaphores make double writes impossible: `Seq.pm` takes an exclusive lock on a `bystro_annotation.lock` file in the target output directory. If it cannot, annotation fails. This file lock is held until annotation is completed. (A sketch of this locking pattern follows this list.)
- A `bystron_annotation.completed` file is written before the exclusive lock is released. The presence of this file is checked before annotation begins (after the exclusive lock is acquired), to ensure that we do not accidentally re-attempt processing of an annotation after completion. This matters because, for submissions through the Bystro API server / UI, the output files are used for automatic ancestry, search, and (potentially) PRS calculations after the job completes. If an annotation worker re-attempts to write the annotation while the indexing/ancestry/PRS (and, in the future, other) workers are processing that data, we have introduced a race condition that can lead to corruption (for instance, the annotation files could be in an incomplete state just before the indexer attempts to read them, or they could be modified mid-read).
- Removes the Perl Bystro annotation server (`bystro-server.pl`) in favor of calling the `bystro-annotate.pl` command line program from the Python beanstalkd worker. This consolidates beanstalkd communication code and makes debugging easier.
- `python/beanstalkd/worker.py`: now runs `handler_fn` using a `ProcessPoolExecutor` (instead of a `ThreadPoolExecutor`), so as to not conflict with Ray's signal handler and to make it possible to kill all `handler_fn` child processes.
- `python/beanstalkd/worker.py`: kills the child processes associated with `handler_fn` if the job `touch()` fails with `NOT_FOUND` while the `handler_fn` child processes are still running. `NOT_FOUND` indicates that the job is no longer available to the worker for processing: at this point the worker should attempt to stop job processing (whatever is running in `handler_fn`), and must ensure that any other worker of the same kind that picks up the job supersedes it.
- `python/beanstalkd/worker.py`: finer-grained error handling; the worker will not die on client timeout errors. Instead we sleep for 1s and re-attempt the connection, indefinitely (or until the process is killed).
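As a rough illustration of the locking scheme above: the real implementation is in Perl (`Seq.pm`), but the Python sketch below shows the same shape, i.e., an exclusive `flock` on the lock file, a completed-marker check after the lock is acquired, and a completed-marker write before the lock is released. Only the file names come from this PR; everything else (function names, messages) is hypothetical.

```python
import fcntl
import os
import sys

LOCK_FILE = "bystro_annotation.lock"
COMPLETED_FILE = "bystron_annotation.completed"  # name as used in this PR


def annotate(output_dir: str) -> None:
    lock_path = os.path.join(output_dir, LOCK_FILE)
    completed_path = os.path.join(output_dir, COMPLETED_FILE)

    with open(lock_path, "w") as lock_fh:
        try:
            # Exclusive, non-blocking lock: if another worker holds it, fail the job.
            fcntl.flock(lock_fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            sys.exit("Another worker is annotating into this directory; failing the job")

        # After acquiring the lock, refuse to re-run a completed annotation.
        if os.path.exists(completed_path):
            sys.exit("Annotation already completed; refusing to overwrite results")

        run_annotation(output_dir)  # hypothetical: the actual annotation work

        # Write the completed marker before the lock is released (on file close).
        with open(completed_path, "w") as completed_fh:
            completed_fh.write("done\n")


def run_annotation(output_dir: str) -> None:
    ...  # placeholder for the actual annotation pipeline
```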
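And a minimal sketch of the worker behavior described above: keep-alive `touch()` calls while the handler runs, killing the handler's process when `touch()` fails with `NOT_FOUND`, and sleeping and reconnecting on connection errors rather than dying. This assumes the `greenstalk` client and uses a bare `multiprocessing.Process` where the actual worker uses a `ProcessPoolExecutor`; the tube name, intervals, and error/result reporting are hypothetical.

```python
import time
from multiprocessing import Process

import greenstalk  # assumption: any beanstalkd client with reserve/touch/delete works


def handler_fn(job_body: str) -> None:
    ...  # placeholder: the actual job handler (e.g., running bystro-annotate.pl)


def work_loop(address=("127.0.0.1", 11300), tube="annotation", touch_interval=5.0):
    while True:
        try:
            with greenstalk.Client(address, watch=tube) as client:
                job = client.reserve()

                # Run the handler in a child process so it can be killed if we lose
                # ownership of the job.
                child = Process(target=handler_fn, args=(job.body,))
                child.start()

                while child.is_alive():
                    child.join(timeout=touch_interval)
                    if not child.is_alive():
                        break
                    try:
                        # Keep-alive: refresh the TTR lease while the handler runs.
                        client.touch(job)
                    except greenstalk.NotFoundError:
                        # We no longer own the job; kill the child and let whichever
                        # worker now holds the job supersede us.
                        child.terminate()
                        child.join()
                        raise

                client.delete(job)
        except (OSError, greenstalk.NotFoundError):
            # Don't die on connection/timeout errors; back off and reconnect.
            time.sleep(1)
```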
Example of new `perl/bin/bystro-annotate.pl` output:
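As a hedged sketch of what the STDOUT/STDERR split enables (not the actual output, which is not reproduced here): a downstream consumer could capture and parse the JSON result roughly as follows. The wrapper function and its arguments are hypothetical, and the CLI's flags and JSON field names are deliberately left unspecified.

```python
import json
import subprocess


def run_bystro_annotate(annotate_args: list[str]) -> dict:
    """Hypothetical wrapper: run bystro-annotate.pl and parse its JSON result.

    annotate_args holds whatever arguments the CLI expects (not specified here).
    Debug messages go to STDERR, so STDOUT carries only the JSON message with
    the result paths and result summary.
    """
    proc = subprocess.run(
        ["perl", "perl/bin/bystro-annotate.pl", *annotate_args],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)
```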
Additional Background and Motivation:
Previously, there was a small chance that job operations like `delete` and `release` could stall the worker forever if the beanstalkd server became unavailable in the microseconds between establishing the job connection and issuing the `delete`/`release` (we always connect before attempting those operations, so the failure would have to occur between those two steps). Additionally, if the beanstalkd server became unavailable during the millisecond or so in which those operations were running, the worker could stall.
Perl MCE (Many-Core Engine), at least in our hands, could not launch a second parallel process for touching jobs (it conflicted with launching the process pool for annotation), making it necessary to fork the annotation process anyway.
By having the beanstalkd worker run `bystro-annotate.pl`, rather than calling `Seq.pm` directly, we ensure our command line interface gets exercised and that we discover usability improvements, as we have done in this PR.
Annotation jobs required long TTR (time to run) leases because the annotation workers did not periodically touch the job to refresh the lease. If the client became unresponsive, say due to a network outage, such that the job was never completed or failed (or communication from the worker to the beanstalkd server during the `delete`/`release` operations failed), the job would only be retried after the TTR lease expired; currently that lease is 48 hours. With this change, jobs can run as long as needed even with short TTRs, so a job whose worker becomes unresponsive is retried much faster (we could set the TTR to, say, 30 minutes).