I think we should limit the number of different JobResource subclasses used by scheduler plugins: each new subclass makes the schedulers behave differently, and it becomes harder for the user to know which resources to pass.
For this scheduler, we clearly need to specify the total number of cores.
Memory can probably be removed, as discussed in #7.
Do we need a different class, and in particular both num_mpiprocs and num_cores? Or can we just reuse the existing ParEnvJobResource (linked below), simply specifying tot_num_mpiprocs plus a parallel_env? The latter is a string; I imagine it would later be matched to the name of the allocation you want to run on (e.g. a GPU vs. a CPU allocation).
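To make the proposal concrete, here is a minimal, self-contained sketch of the two-field resource spec being suggested. This is an illustrative stand-in, not the actual ParEnvJobResource class from aiida.schedulers.datastructures; the field names match that class, but the validation shown is only an assumption of what the reused class would need to enforce.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ParEnvResourceSketch:
    """Illustrative stand-in for ParEnvJobResource: only two fields.

    parallel_env names the parallel environment as a string (which could
    later be matched to an allocation name, e.g. a GPU vs. CPU partition);
    tot_num_mpiprocs is the total number of MPI processes requested,
    avoiding the num_mpiprocs/num_cores duplication discussed above.
    """

    parallel_env: str
    tot_num_mpiprocs: int

    def __post_init__(self):
        # Hypothetical validation: the real class would need equivalent checks.
        if not isinstance(self.parallel_env, str):
            raise TypeError('parallel_env must be a string')
        if not isinstance(self.tot_num_mpiprocs, int) or self.tot_num_mpiprocs <= 0:
            raise ValueError('tot_num_mpiprocs must be a positive integer')


# Example: request 16 MPI processes on a hypothetical 'gpu' environment.
res = ParEnvResourceSketch(parallel_env='gpu', tot_num_mpiprocs=16)
```

With only these two inputs, a user writes the same resource dict for every scheduler that adopts the class, which is the point of limiting the number of subclasses.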
https://github.com/aiidateam/aiida-core/blob/ff1318b485a8b803e115b78946cc4593fc661153/aiida/schedulers/datastructures.py#L177