Closed — ColonelPanicks closed this issue 5 years ago
Yup, nice -- as "group" and "grouping" is getting a bit overloaded as a term, I'd suggest the syntax:

```
admin batch run-family -g computenodes compute
```

...and have an (optional) `family` configuration parameter in the `config.yaml` syntax.
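For illustration only, a minimal sketch of how that optional parameter might sit in a command's `config.yaml` -- the `family` key name and its value here are assumptions based on the suggestion above, not an implemented schema:

```yaml
# Hypothetical config.yaml fragment: tagging this command as a member
# of the "compute" family (key name assumed, not the real schema).
family: compute
```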
Note this may be multiple "batches", one for each script, so the output from the command is going to need some work -- or it might not be multiple batches, though I'm not sure how it would fit in the data model otherwise!
Can I just confirm: is this a feature you just want for `batch`, or would it also be wanted for `open`?
For `batch` only -- `open` is intended for more interactive commands, so having a sort of "stack" of them wouldn't really suit the idea of it
sweet, cheers :slightly_smiling_face:
Another question - how are you wanting arguments to work for this? I assume they'd be disabled, or otherwise we could have them apply to a specified command in the family?
Good question, I think it's probably best to have the arguments disabled in this case as it's a lot of bulk script running
okay, sound - can always add something later
@ColonelPanicks another question - if you want the families each command is a member of to be stored with the command in that command's config file, how do you want the order of the commands in that family to be specified?
I think the order should just consistently be alphabetical; we shouldn't really be too dependent on the orderings in a family and can then just manually control the ordering by prepending a number to the tool name (e.g. `01-dothisfirst`, `02-afterfirst`, etc.)
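A quick sketch of why zero-padded prefixes matter for that scheme -- plain lexicographic sorting is what determines the run order, so unpadded numbers can surprise you:

```python
# Family members run in plain alphabetical (lexicographic) order,
# so zero-padded numeric prefixes keep the intended sequence.
scripts = ["02-afterfirst", "10-cleanup", "01-dothisfirst"]
print(sorted(scripts))   # padding keeps "10-" after "02-"

# Without padding, "10-" sorts before "2-" lexicographically:
unpadded = ["2-afterfirst", "10-cleanup", "1-dothisfirst"]
print(sorted(unpadded))  # "10-cleanup" lands in the middle
```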
Ok, great, that's easiest for me anyway :smile:
And possibly final question @ColonelPanicks - would you prefer running all commands on one node then to moving on to the second node or running the first command on all nodes then moving on to the second command?
Dealers choice but if I had to make a decision I'd say a node at a time
:+1:
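The node-at-a-time choice agreed above could be sketched like this -- `run_family`, `run_remote`, and the node/script names are hypothetical placeholders, not the real implementation:

```python
# Sketch of node-at-a-time execution: run every script in the family
# on one node before moving on to the next node.
def run_family(nodes, scripts, run_remote):
    order = []
    for node in nodes:                   # outer loop: one node at a time
        for script in sorted(scripts):   # inner loop: alphabetical family order
            run_remote(node, script)
            order.append((node, script))
    return order
```

With two nodes and two scripts, this runs both scripts on the first node before touching the second, matching the "a node at a time" preference.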
With cluster customisation scripts there's consistency to where these are run, so it would be useful to be able to "group" scripts such that they can all be executed on a remote system from one command.

For example, on a cluster say we have 3 scripts (`script[1-3].sh`) and that `script1.sh` and `script2.sh` are customisations for `compute` nodes while `script3.sh` is a customisation for `gpu` nodes. Nodes in the `gpu` group are also considered `compute` nodes.

If I wanted to run `script1.sh` and `script2.sh` on compute nodes it would halve the number of commands to be able to do something like:

In turn the script groupings could be flagged in the config file for batch commands and then some wizardry can work out which scripts are to be run.
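The "wizardry" above might amount to something like this sketch -- the group hierarchy, the script-to-group mapping, and both function names are assumptions for illustration, not the tool's actual data model:

```python
# Hypothetical sketch: resolve which scripts apply to a node, given that
# membership in a child group (gpu) implies the parent group (compute).
GROUP_PARENTS = {"gpu": "compute"}   # gpu nodes are also compute nodes
SCRIPT_GROUPS = {
    "script1.sh": "compute",
    "script2.sh": "compute",
    "script3.sh": "gpu",
}

def groups_for(node_group):
    """Expand a node's group to include all ancestor groups."""
    groups = {node_group}
    while node_group in GROUP_PARENTS:
        node_group = GROUP_PARENTS[node_group]
        groups.add(node_group)
    return groups

def scripts_for(node_group):
    """Alphabetical list of scripts whose group applies to this node."""
    applicable = groups_for(node_group)
    return sorted(s for s, g in SCRIPT_GROUPS.items() if g in applicable)
```

Under these assumptions a plain `compute` node gets `script1.sh` and `script2.sh`, while a `gpu` node gets all three, matching the example above.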