Closed: pjvandehaar closed this issue 7 years ago.
Option 1 finished by commit https://github.com/statgen/pheweb/commit/9db09299fb2b68c9c2e4452659a8ff4e6cb3aa51 for `pheweb parse`, using `pheweb slurm-parse`.
Next up: `pheweb slurm <subcommand>`, which plugs into the architecture those commands are already using, or `pheweb slurm-<subcommand>` for each subcommand I want to SLURMify (option 3).
option 1: make `pheweb qq --phenos=1-100` and run that via SLURM. `/full/path/pheweb` uses the correct `PYTHONPATH`, I think. So `env PHEWEB_DATADIR=$(pwd) $(which pheweb) cmd --phenos=1-100 --local` should be correct, and I can use `sbatch --cpus-per-task=1 --mem=2048 --error=$(get_tmp_path()) --quiet --time=$((3600*24)) $(which pheweb) config data_dir={conf.data_dir} n_cpus=1 {cmd} --phenos=1-100`.
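A minimal sketch of how that `sbatch` invocation could be assembled in Python. The helper name `build_sbatch_argv` and the example paths are hypothetical; the flags mirror the command line above:

```python
import shlex

def build_sbatch_argv(pheweb_path, data_dir, cmd, phenos, err_path, n_cpus=1):
    # Hypothetical helper (not part of pheweb): build the argv for one
    # single-core SLURM job running a pheweb subcommand on a pheno range.
    return [
        'sbatch',
        '--cpus-per-task=1',
        '--mem=2048',
        '--error={}'.format(err_path),
        '--quiet',
        '--time={}'.format(3600 * 24),  # seconds: one day
        pheweb_path,
        'config', 'data_dir={}'.format(data_dir), 'n_cpus={}'.format(n_cpus),
        cmd,
        '--phenos={}'.format(phenos),
    ]

argv = build_sbatch_argv('/usr/local/bin/pheweb', '/data/pheweb', 'qq', '1-100', '/tmp/job.err')
print(shlex.join(argv))
```

Building an argv list (rather than one shell string) avoids quoting bugs when the command is eventually passed to `subprocess.run`.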
Using array jobs (docs): `pheweb slurm parse N` would create a shell script which can be `sbatch`ed and will make N single-core jobs with `--time=0`.
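A sketch of the kind of array-job script such a command might emit. This is a guess at the layout, not pheweb's actual output; the binary path, data dir, and the use of `$SLURM_ARRAY_TASK_ID` as the pheno selector are assumptions:

```python
def make_array_script(pheweb_path, data_dir, n_jobs):
    # Hypothetical sketch: a SLURM array of n_jobs single-core tasks,
    # each handling one pheno batch, with no time limit (--time=0).
    return '\n'.join([
        '#!/bin/bash',
        '#SBATCH --array=0-{}'.format(n_jobs - 1),
        '#SBATCH --cpus-per-task=1',
        '#SBATCH --time=0',
        'env PHEWEB_DATADIR={} {} parse --phenos=$SLURM_ARRAY_TASK_ID --local'.format(
            data_dir, pheweb_path),
    ]) + '\n'

script = make_array_script('/usr/local/bin/pheweb', '/data/pheweb', 10)
print(script)
```

The generated file would then be submitted once with `sbatch script.sh`, and SLURM fans it out into N tasks.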
option 2: make a master-worker architecture with message-passing. Submit `pheweb worker --connect {ip}:{port}` to SLURM a bunch of times. `{conf.data_dir}/tmp/mp/$(hostname)`. It'll require `.ssh/authorized_keys`, so assert that exists. Messages look like `{cmd:'augment-phenos', phenos:[0,1,2,3]}` or `{cmd:'exit'}`. Some packages to do a large portion of this:
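The message-passing loop above can be sketched end-to-end with plain sockets. This is an illustrative toy, not pheweb code: the master hands out `{"cmd": "augment-phenos", "phenos": [...]}` batches and finishes with `{"cmd": "exit"}`, and the worker processes batches until told to stop (here it just collects the pheno indices):

```python
import json
import socket
import threading

def master(server_sock, batches):
    # Accept one worker and send it newline-delimited JSON work messages.
    conn, _ = server_sock.accept()
    with conn, conn.makefile('w') as out:
        for batch in batches:
            out.write(json.dumps({'cmd': 'augment-phenos', 'phenos': batch}) + '\n')
        out.write(json.dumps({'cmd': 'exit'}) + '\n')

def worker(host, port, processed):
    # Connect to the master and handle messages until {"cmd": "exit"}.
    with socket.create_connection((host, port)) as sock, sock.makefile('r') as inp:
        for line in inp:
            msg = json.loads(line)
            if msg['cmd'] == 'exit':
                break
            processed.extend(msg['phenos'])  # stand-in for real augment-phenos work

server = socket.socket()
server.bind(('127.0.0.1', 0))  # any free port
server.listen(1)
port = server.getsockname()[1]

processed = []
t = threading.Thread(target=master, args=(server, [[0, 1], [2, 3]]))
t.start()
worker('127.0.0.1', port, processed)
t.join()
server.close()
print(processed)  # prints [0, 1, 2, 3]
```

In the real design each `pheweb worker` job would run the worker loop on a cluster node, connecting back to the `{ip}:{port}` the master advertises.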