sa9 closed this issue 9 years ago
Hi Saeed, I am not aware of a direct way to do this in COSMOS, since by default it delegates this to the DRM. However, you can set up a local installation of SGE and configure it the way you wish.
Yassine
Ah, good idea. I'll give it a shot. Thanks Yassine.
Hey guys, sorry I was away (was at Burning Man). Yes, you can set the max_cores parameter when you call execution.run() or cosmos.start() (can't remember which at the moment, and I'm on a phone). In your case, set it to 4. It works for both the local DRM and Grid Engine.
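(For readers landing here later: a minimal sketch of what this looks like, assuming the COSMOS 2.0 API of the time, where max_cores is a keyword argument to execution.run(); the database URL, workflow name, and output directory below are placeholders.)

```python
from cosmos.api import Cosmos

cosmos = Cosmos('sqlite:///cosmos.sqlite', default_drm='local')
cosmos.initdb()

execution = cosmos.start('my_workflow', 'out_dir')
# ... add your stages/tasks to the execution here ...

# Cap total concurrent core usage at the laptop's 4 cores, so the
# local DRM stops launching new jobs once that budget is consumed.
execution.run(max_cores=4)
```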
Awesome! I hope you enjoyed it :)
Thanks for providing the max_cores solution!
Is there a way to assign a different queue value to a specific task in the workflow? I made the default 'medium' for the whole workflow during Cosmos initialization, but for a few tasks, like GATK BaseRecalibrator, I'd like to run them through the 'BIG' queue, which has more RAM (but fewer slots available).
Yup, just define and pass your own get_submit_args function when you call execution.run(). See the default one for an example.
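(Concretely, a get_submit_args function takes a task and returns the string of flags appended to the DRM's submit command, e.g. qsub or bsub. A hedged sketch of defining and passing one; the parallel_env parameter is modeled on the default's signature, so check cosmos.api.default_get_submit_args for what your version actually expects.)

```python
def my_get_submit_args(task, parallel_env='smp'):
    # Return the extra flags for the scheduler's submit command;
    # here, just pin everything to one queue.
    return '-q medium'

execution.run(get_submit_args=my_get_submit_args)
```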
I already did this. I guess my point is: how do I use multiple queues within the same execution.run()?
Just put a condition in your get_submit_args:
    if task.mem_req > some_number:  # or task.stage.name in ['IndelRealigner']
        queue = 'large_mem'
    else:
        queue = 'normal'
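(Wrapped into a complete function, building on the sketch above: the memory threshold, its units, and the queue names are illustrative, while task.mem_req and task.stage.name are the attributes Erik references.)

```python
BIG_MEM_THRESHOLD = 30 * 1024  # illustrative cutoff; units follow task.mem_req

def my_get_submit_args(task, parallel_env='smp'):
    # Route memory-hungry stages to the big-memory queue,
    # everything else to the default queue.
    if (task.mem_req or 0) > BIG_MEM_THRESHOLD or task.stage.name in ['IndelRealigner']:
        queue = 'BIG'
    else:
        queue = 'medium'
    return '-q {0}'.format(queue)

execution.run(get_submit_args=my_get_submit_args)
```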
How did I miss that! Arggh, I shouldn't work on the weekend!
Thanks a lot Erik!
No problem :D
(Original issue) It seems Cosmos doesn't limit itself to the available number of CPUs when DRM='local': its task manager keeps executing all jobs. For example, I have a workflow with 10 samples, but since I have only 4 cores on my laptop, the whole system froze up after 30 seconds because it couldn't cope with all the jobs running at once.
In production this is not an issue, since it is usually managed by LSF et al. I wonder if there is an easy trick to let Cosmos respect the number of CPUs. Queues?