hoffmangroup / segway

Application for semi-automated genomic annotation.
http://segway.hoffmanlab.org/
GNU General Public License v2.0

Explicitly limit the number of currently running Segway jobs #34

Open EricR86 opened 9 years ago

EricR86 commented 9 years ago

Original report (BitBucket issue) by Sakura Tamaki (Bitbucket: Tamaki_Sakura).


Currently, Segway submits a job directly to the cluster system as soon as all of the job's prerequisites have finished. This is reasonable behaviour under the assumption that Segway should take all the available resources on a cluster.

However, on most cluster systems, the computational resources Segway uses are also shared with other people, who might want to run a single job that takes a significant portion of the cluster's resources. Since each individual Segway job is small, the agile Segway can easily take over all of the cluster's resources for a considerable period, leaving other people's jobs perpetually on hold without enough resources to run.

Other ways to solve this problem without changing the Segway code include setting a per-user resource quota on the cluster system, but that is not agile enough, since quotas are normally available only to the cluster administrator. Another common solution (and the one currently used in our lab) is to force every job Segway submits to start in a held state, then use a job-monitor script to release some of them manually, keeping enough resources in reserve. But unless the script polls the cluster status very frequently (which it should not, because frequent polling itself consumes too many resources), it is very hard to make this efficient: my test showed it to be ~50% slower.
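
For concreteness, here is a minimal sketch of that monitor approach, assuming a Grid Engine cluster (qsub -h submits a job with a user hold, qstat -s r and qstat -s h list running and held jobs, and qrls releases a hold). The MAX_RUNNING limit, the polling interval, and the qstat parsing are illustrative, not part of Segway:

#!python

import subprocess
import time

MAX_RUNNING = 100  # leave the rest of the cluster for other users
POLL_SECONDS = 60  # polling more often than this wastes resources

def job_ids(state):
    """Return this user's job IDs in a given qstat state ("r" or "h")."""
    out = subprocess.check_output(["qstat", "-s", state], text=True)
    return [line.split()[0] for line in out.splitlines()[2:]]  # skip header

while True:
    # release held jobs until MAX_RUNNING jobs are running
    free_slots = MAX_RUNNING - len(job_ids("r"))
    for job_id in job_ids("h")[:max(free_slots, 0)]:
        subprocess.check_call(["qrls", job_id])
    time.sleep(POLL_SECONDS)

Every job has to be submitted with qsub -h for this to work, and the script cannot react faster than its polling interval, which is where the slowdown comes from.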

So what I am hoping Segway could do is explicitly limit the number of concurrently running jobs by limiting the number of jobs it sends to the cluster: it would stop submitting new jobs whenever the number of jobs already sent minus the number of jobs finished exceeds a specified amount. I am also hoping that this quota can be changed during a Segway run, perhaps through a shell environment variable.
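
By contrast, the requested behaviour could live in Segway's own submission loop, where no polling is needed because Segway already knows how many jobs it has submitted and how many have finished. A minimal sketch of what I mean, where the SEGWAY_MAX_QUEUED_JOBS variable name and the submit/num_finished callables are hypothetical stand-ins, not existing Segway options:

#!python

import os
import time

def current_quota(default=100):
    """Re-read the quota on every check so that it can be changed
    mid-run through a shell environment variable (hypothetical name)."""
    return int(os.environ.get("SEGWAY_MAX_QUEUED_JOBS", default))

def submit_throttled(jobs, submit, num_finished):
    """Submit each job, waiting while the number in flight
    (submitted minus finished) would reach the quota."""
    num_submitted = 0
    for job in jobs:
        while num_submitted - num_finished() >= current_quota():
            time.sleep(30)  # wait for some running jobs to finish
        submit(job)
        num_submitted += 1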

EricR86 commented 9 years ago

Original comment by Michael Hoffman (Bitbucket: hoffman, GitHub: michaelmhoffman).


Setting this kind of policy is mainly the job of the cluster system. Within that policy, we should use the cluster resources as much as possible. Keeping machines idle so that people can submit new jobs without waiting is wasteful. If you have a specific concern about your use, please email segway-internal.

That said, we already have a system that limits how many jobs are submitted at once in segway.cluster.__init__; it just isn't very easy to modify. This can be a low-priority enhancement.

EricR86 commented 9 years ago

Original comment by Sakura Tamaki (Bitbucket: Tamaki_Sakura).


The problem is that large jobs from other people can end up waiting a really long time, usually until Segway starts to run instance bundle jobs (since otherwise Segway is very likely to fill the cluster with training jobs), a wait of an hour on average.

Besides, from segway/cluster/__init__.py:

#!python

# these settings limit job queueing to 360 at once

really?

#!SHELL

[stamaki@mordor:~/groupfile/xzeng/runtime/results/20150612-1019]$ qstat -s r | grep -v ".93." | grep emt | wc -l
422

Besides, according to #28 each Segway job currently takes only about one core, yet I've seen lots of situations where Segway uses up to 450 cores.

Was it like 360 per instance?

EricR86 commented 9 years ago

Original comment by Michael Hoffman (Bitbucket: hoffman, GitHub: michaelmhoffman).


The 360 is an estimate from a calculation that seems to be inaccurate. An hour is a short time to wait in a batch queuing system. People should not expect that there will be idle machines waiting for their jobs at any time (except in debug queues).