Currently we only have `slurm_memory_gigabytes_per_cpu` for controlling the Slurm memory allocation. Unfortunately, I don't think `--mem-per-cpu` is the most intuitive argument to configure. For one, Slurm might decide to give you double the memory you requested because of hyperthreading (see discussion here). Secondly, multithreading isn't enormously popular in R, especially when we already have parallelism provided by `targets`. Instead, I prefer `--mem`, which simply sets the total memory per job.
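To illustrate the difference between the two flags (values here are made up for illustration): the per-CPU form scales with the CPU count, while `--mem` requests one flat total.

```
# --mem-per-cpu scales with the CPU count:
# up to 4 x 10 GB here, and hyperthreading may round the request up
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=10G

# --mem requests a flat total for the whole job, independent of CPU count
#SBATCH --mem=40G
```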
Of course, you can currently configure this using:

```r
script_lines = c(
  "#SBATCH --mem 500G",
  ...
)
```
However, making it a first-class argument would be even more user friendly.
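As a sketch of what a first-class argument could look like (the name `slurm_memory_gigabytes` is hypothetical, not an existing crew.cluster argument):

```r
library(crew.cluster)

# Hypothetical argument name, shown for illustration only:
# it would translate to "#SBATCH --mem=500G" in the generated job script.
controller <- crew_controller_slurm(
  slurm_memory_gigabytes = 500
)
```

This mirrors the existing `slurm_memory_gigabytes_per_cpu` naming, just without the per-CPU scaling.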