I'm having trouble coming up with a truly minimal example, but on the latest commit (0bbed99), when I try to submit a job, the .sbatch file that gets generated is:
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 10
#SBATCH --time=05:33:00
#SBATCH --mem=61440
#SBATCH -p dict_keys([6792024, 6724208, 6724209, 6724766, 6724868, 6724878, 6724932, 6724986, 6724993, 6724998, 6725191, 6725409, 6725497, 6725501, 67255...
#SBATCH --workdir=/home/users/pcombs/HybridSliceSeq
#SBATCH -o /home/users/pcombs/HybridSliceSeq/logs/fit_and_eval.1374.e5f5d3cb._logistic_0000.out
#SBATCH -e /home/users/pcombs/HybridSliceSeq/logs/fit_and_eval.1374.e5f5d3cb._logistic_0000.err
mkdir -p $LOCAL_SCRATCH > /dev/null 2>/dev/null
if [ -f /home/users/pcombs/HybridSliceSeq/logs/fit_and_eval.1374.e5f5d3cb._logistic_0000.script ]; then
srun bash /home/users/pcombs/HybridSliceSeq/logs/fit_and_eval.1374.e5f5d3cb._logistic_0000.script
exit $?
else
echo "/home/users/pcombs/HybridSliceSeq/logs/fit_and_eval.1374.e5f5d3cb._logistic_0000.script does not exist, make sure you set your filepath to a "
echo "directory that is available to the compute nodes."
exit 1
fi
(though the #SBATCH -p line actually goes on a lot longer than that). Some of the relevant lines of code for creating the job are below. This then fails because sbatch cannot figure out which partition to submit to.
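As a minimal, hypothetical sketch of how a -p line like that can arise, assuming the generator interpolates whatever value the partition keyword holds; make_sbatch_header and tracked_jobs are illustrative placeholders, not the reporter's actual code:

def make_sbatch_header(partition):
    """Stand-in for the script generator: formats the partition directive verbatim."""
    return '#SBATCH -p {}'.format(partition)

tracked_jobs = {6792024: 'completed', 6724208: 'running'}  # job ID -> state

# Passing dict_keys where a partition name (a string) belongs reproduces the
# garbage directive shown above:
print(make_sbatch_header(tracked_jobs.keys()))
# -> #SBATCH -p dict_keys([6792024, 6724208])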
It turns out this was because I attempt to auto-typecast arguments. This is foolish behavior, and I am going to remove it from all branches now and throw an error instead.
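A minimal sketch of that fix direction, assuming a table mapping option names to their expected types; ALLOWED_TYPES and check_arguments are assumed names for illustration, not the project's actual API:

ALLOWED_TYPES = {
    'partition': str,
    'ntasks': int,
    'cpus_per_task': int,
    'mem': int,
    'time': str,
}

def check_arguments(kwargs):
    """Raise TypeError for any known keyword whose value has the wrong type,
    instead of silently coercing it into the generated script."""
    for name, value in kwargs.items():
        expected = ALLOWED_TYPES.get(name)
        if expected is not None and not isinstance(value, expected):
            raise TypeError('{!r} must be {}, got {}: {!r}'.format(
                name, expected.__name__, type(value).__name__, value))
    return kwargs

With a check like this, the bad call fails loudly at submission time (e.g. check_arguments({'partition': tracked_jobs.keys()}) raises TypeError: 'partition' must be str, got dict_keys: ...) rather than writing the value's repr into the .sbatch file.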