Have a text file listing the expected branches (based on the input dataset, i.e., the availability of the MRI data in each session). `babs-status` and `babs-submit` should refer to this text file.
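A minimal sketch of generating such a file, assuming a hypothetical branch naming scheme (`job-<sub>-<ses>`) and a simple availability mapping; neither is an actual BABS convention:

```python
# Sketch only: the branch name pattern and the availability dict are
# hypothetical stand-ins, not actual BABS conventions.
availability = {
    "sub-01": ["ses-01", "ses-02"],  # sessions with usable MRI data
    "sub-02": ["ses-01"],
}

with open("expected_branches.txt", "w") as f:
    for sub, sessions in availability.items():
        for ses in sessions:
            f.write(f"job-{sub}-{ses}\n")  # one expected branch per line
```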
e.g., a `job_status` class. Bottom line: do not write it out as a series of Python pickles (with multiple functions?)...
Best to have a text (or CSV) file (probably not tracked by DataLad).
Every time `babs-submit` or `babs-status` runs, check this file and update it. This file should have columns of, e.g.:
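(The column list is left open in this issue.) A hedged sketch of reading/updating such a CSV; the columns `sub_id`, `ses_id`, `has_submitted`, `job_id`, `is_done` are placeholders, not a settled schema:

```python
import csv

# Placeholder columns; the actual schema is left open in this issue.
FIELDS = ["sub_id", "ses_id", "has_submitted", "job_id", "is_done"]

def load_status(path="status.csv"):
    """Read the plain-text status file into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def save_status(rows, path="status.csv"):
    """Write the (possibly updated) rows back out; this is what
    babs-submit / babs-status would do at the end of each run."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```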
Might try out: SGE array jobs — submit only one array, but one that contains all 1000 jobs; easier to submit, kill, and check, while still keeping the graininess of a job (i.e., can still check each job's stats).
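A sketch of what that submission could look like (assumes an SGE cluster where `qsub -t` is available and a `participant_job.sh` that uses `$SGE_TASK_ID` to pick its subject; names and paths are illustrative):

```python
import subprocess

n_jobs = 1000  # one array task per subject/session

# One qsub call submits the whole array; each task receives
# $SGE_TASK_ID in 1..n_jobs, which participant_job.sh can map back
# to a row in the subject list.
result = subprocess.run(
    ["qsub", "-t", f"1-{n_jobs}", "participant_job.sh"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. "Your job-array <id>.1-1000:1 (...) has been submitted"

# Killing/checking stays simple: `qdel <id>` removes the whole array,
# while `qstat` still lists each task individually (per-job stats).
```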
Testing:
- Only thing left: where to run? (CUBIC tmpdir, comp_space, etc.)
- Can add it in the config YAML file (e.g., a section `compute_job_working_dir`).
- When submitting the job(s), check the `where_to_run` argument: e.g., `${CBICA_TMPDIR}` or `/cbica/comp_space/$(basename $HOME)` (see the sketch after this list).
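A sketch of resolving that working directory from the config (assumes PyYAML; the `compute_job_working_dir` key comes from the note above, the rest is illustrative):

```python
import os
import yaml  # PyYAML, assumed available

with open("config.yaml") as f:
    config = yaml.safe_load(f)

# e.g. in the YAML:  compute_job_working_dir: "${CBICA_TMPDIR}"
#             or :  compute_job_working_dir: "/cbica/comp_space/$(basename $HOME)"
raw = config.get("compute_job_working_dir", "${CBICA_TMPDIR}")

# expandvars handles ${CBICA_TMPDIR}; the $(basename $HOME) form would
# normally need a shell, so approximate it here for illustration:
raw = raw.replace("$(basename $HOME)", os.path.basename(os.path.expanduser("~")))
working_dir = os.path.expandvars(raw)
print(working_dir)
```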
`participant_job.sh`:

- First check the `status.csv` file (see below); no need to check successfully finished jobs any more.
- Check if a subject's job has been submitted:
  - If submitted: re-submit (`qsub`) only if `--rerun` is requested.
  - Otherwise, the job hasn't been submitted yet: submit it (`qsub`)! (A sketch of this decision logic follows below.)
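The decision logic might look roughly like this (a sketch only: `maybe_submit` is a hypothetical helper, the row follows the placeholder `status.csv` schema above, and killing the old job on `--rerun` is an assumption):

```python
import subprocess

def maybe_submit(row, rerun=False):
    """Hypothetical helper: decide whether to (re)submit one subject's job."""
    if row["has_submitted"] == "yes":
        if not rerun:
            return  # already submitted; only act again if --rerun is requested
        # --rerun requested: kill the old job if it is still queued/running
        # (an assumption about the desired behavior), then resubmit below.
        subprocess.run(["qdel", row["job_id"]], check=False)
    # Not submitted yet (or rerun requested): submit it!
    out = subprocess.run(
        ["qsub", "participant_job.sh", row["sub_id"]],
        capture_output=True, text=True, check=True,
    )
    # SGE typically prints "Your job <id> (...) has been submitted".
    row["has_submitted"] = "yes"
    row["job_id"] = out.stdout.split()[2]
```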
Before writing `babs-status` and `babs-submit`, check existing efforts in:

- `qsub_rerun.sh` in the Slack channel
- `~/Curation/RBC/PennLINC/Generic`:
  - `get_qsub_calls_rerun_multises.py`
  - `get_qsub_calls_rerun.py`