nick-youngblut opened this issue 1 month ago
More generally, I don't see the point of setting `params.max_memory` and `params.max_cpus` lower than `process.memory` and `process.cpus`, since the latter values are what will be used to set the cluster job resources.
Also, resources are applied globally to processes, but steps like `STEP4b_matrix` don't set CPUs (or memory). Why not use the nf-core approach:
```groovy
def check_max(obj, type) {
    if (type == 'memory') {
        if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1) {
            return params.max_memory as nextflow.util.MemoryUnit
        }
    } else if (type == 'time') {
        if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1) {
            return params.max_time as nextflow.util.Duration
        }
    } else if (type == 'cpus') {
        if (obj > params.max_cpus as int) {
            return params.max_cpus as int
        }
    }
    return obj
}

process {
    cpus   = { check_max( 8, "cpus" ) }
    memory = { check_max( 32.GB, "memory" ) }
    time   = { check_max( 48.h, "time" ) }

    withLabel:process_low {
        cpus   = { check_max( 1, "cpus" ) }
        memory = { check_max( 8.GB, "memory" ) }
        time   = { check_max( 8.h, "time" ) }
    }
}
```
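For reference, the nf-core pattern pairs `check_max` with pipeline-level caps declared as parameters, so users can lower them from the command line (e.g. `--max_cpus 8`). A minimal sketch of what that would look like; the cap values below are illustrative, not taken from this pipeline:

```groovy
// Hypothetical default caps; users override these at runtime,
// e.g. --max_memory 64.GB --max_cpus 8
params {
    max_memory = 128.GB
    max_cpus   = 16
    max_time   = 240.h
}
```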
The `nextflow.config` includes `process.clusterOptions = '-S /bin/bash'`, but this throws the following error when run with `-process.executor slurm`: