CoBrALab / qbatch


Default chunk_size and cores to ppj #79

Open pipitone opened 8 years ago

pipitone commented 8 years ago

Back in issue #49 we discussed having the chunk size default to --ppj, but we lost track of that issue, so I'm filing this.

I'm thinking the logic should be: unless overridden in the environment or on the command line, -c and -j both default to --ppj. That way, you default to running as many commands in parallel in a job as you have processors allocated, and the user only needs to change one option, --ppj, if they want to scale that up or down.

pipitone commented 8 years ago

Er, to be clear, setting --ppj on the command line should adjust -j and -c unless they are also set. This isn't the case right now.
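To make the intent concrete, here's a minimal sketch of the defaulting behaviour I have in mind. It is not the actual qbatch code, and the environment variable names (QBATCH_PPJ, QBATCH_CHUNKSIZE, QBATCH_CORES) are just placeholders for whatever we settle on:

```python
# Sketch only: -c and -j inherit --ppj unless explicitly set on the
# command line or via the environment.
import argparse
import os


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--ppj', type=int,
                        default=os.environ.get('QBATCH_PPJ', 1),
                        help='processors to request per job')
    parser.add_argument('-c', '--chunksize', type=int,
                        default=os.environ.get('QBATCH_CHUNKSIZE'),
                        help='commands per job (defaults to --ppj)')
    parser.add_argument('-j', '--cores', type=int,
                        default=os.environ.get('QBATCH_CORES'),
                        help='commands run in parallel per job (defaults to --ppj)')
    args = parser.parse_args(argv)

    # Only fall back to --ppj when neither the command line nor the
    # environment supplied a value, so one option scales the whole job.
    if args.chunksize is None:
        args.chunksize = args.ppj
    if args.cores is None:
        args.cores = args.ppj
    return args
```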

gdevenyi commented 8 years ago

Yes, definitely agree.

Sidenote: do we handle infinite chunk size yet? I think we said if chunks are "0" we allow arbitrarily large chunks?
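For reference, the semantics I'm thinking of would be something like this (just an illustration of the proposal, not current behaviour; the `chunks` helper is hypothetical):

```python
def chunks(commands, chunk_size):
    """Split commands into lists of at most chunk_size;
    a chunk_size of 0 puts every command into a single chunk."""
    if chunk_size <= 0:
        return [commands]
    return [commands[i:i + chunk_size]
            for i in range(0, len(commands), chunk_size)]
```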

pipitone commented 8 years ago

Gabe, can you have a look at this fix and let me know what you think?

gdevenyi commented 8 years ago

This fix looks okay to me, other than that I'd generalize the default class a bit further if possible.

pipitone commented 8 years ago

We could generalize, but then there might be too much metaprogramming black magic. I'll have a think about it this evening.


gdevenyi commented 8 years ago

Let's handle this via #85.