Open emarti opened 1 year ago
This looks great! Two quick checks:
--cpus-per-task on the cluster defaults to 1, so adding it doesn't change current workflows. However, when the requested memory is large (on our nodes, more than 16 GB), Slurm automatically assigns more CPUs. This change will fix that.
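For reference, this is the kind of directive pair involved — a minimal sbatch header sketch, not the repo's actual file; the 32G figure is just an illustrative value above the 16 GB threshold mentioned above:

```shell
#!/bin/bash
#SBATCH --mem=32G           # large memory request (> 16 GB on our nodes)
#SBATCH --cpus-per-task=1   # without this, Slurm may scale CPUs with the memory request
srun hostname
```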
Gotcha. In this case, we don't want to set a default to be 1, because it will break the default behavior. Let's keep the variable unset, and then only add it to the command if it's defined.
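The "only add it if defined" pattern could be sketched roughly like this (the variable handling matches the suggestion above; the `--mem` value and script name are illustrative, not the repo's actual code):

```shell
#!/usr/bin/env bash
# Build the sbatch command incrementally: leave CPUS_PER_TASK unset by
# default so existing behavior is untouched, and append the flag only
# when the user has defined it.
SBATCH_ARGS=(--mem=32G)
if [ -n "${CPUS_PER_TASK:-}" ]; then
  SBATCH_ARGS+=(--cpus-per-task="${CPUS_PER_TASK}")
fi
echo "sbatch ${SBATCH_ARGS[*]} jupyterlab.sbatch"
```

With CPUS_PER_TASK unset, this prints `sbatch --mem=32G jupyterlab.sbatch`, i.e. the current default command is unchanged.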
Two changes: First, I added jupyterlab.sbatch so we can use JupyterLab instead of the classic notebook interface. Second, I added a CPUS_PER_TASK parameter in param.sh so we can change the number of CPUs requested.
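A param.sh along these lines would expose the new knob (CPUS_PER_TASK comes from this PR; the other variable name and values are assumptions for illustration):

```shell
# param.sh — user-tunable job parameters
MEMORY=16G          # hypothetical existing parameter, shown for context
# CPUS_PER_TASK=4   # new knob: uncomment to request more CPUs per task;
#                   # leave it commented out to keep the current default behavior
```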