thehrh closed this 8 years ago
Is there a reason why the memory needs to be scaled?
The memory needs to be given in kB here.
Ah. That would be good to add to the code comments.
Come to think of it, I see no reason why this would be a problem specific to LSF (?). Would it make sense to implement a function in the submit base class that scales the requested memory for every derived class when this parameter is configured?
In general, we probably want to prevent people from implementing config parameters with the same functionality but different names across the various submit classes, right?
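Something along these lines, perhaps (a minimal sketch of the idea, not actual project code; the class and attribute names here are assumptions for illustration):

```python
class SubmitBase(object):
    """Base class for batch-system submit implementations."""

    # Derived classes override this when the scheduler expects memory in
    # units other than MB (e.g. 1024 for an LSF install that wants kB).
    mem_scale = 1

    def scaled_memory(self, mem_mb):
        """Convert a memory request in MB to the scheduler's native unit."""
        return int(mem_mb * self.mem_scale)


class SubmitLSF(SubmitBase):
    mem_scale = 1024  # this LSF takes memory requests in kB


class SubmitPBS(SubmitBase):
    mem_scale = 1  # PBS accepts explicit units, e.g. "-l pmem=2700mb"
```

That way the config parameter (and its name) lives in one place instead of being reinvented per submit class.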
Does LSF not have units? PBS usually allows one to specify units explicitly, e.g. `#PBS -l pmem=2700mb`.
If there is a way, I haven't found it. As far as I know, the user is dependent on the way the cluster has been configured.
If all LSF versions measure in kB, I'd be inclined to hard-code the scaling factor instead of leaving it up to the user. The user will eventually forget to set it and bad things will happen.
Reading a `bsub` man page (http://www.vub.ac.be/BFUCC/LSF/bsub.1.html), this appears to be the case. One could still make a scaling factor an option, but it should scale the memory safety margin rather than the kB-to-MB conversion.
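That is, hard-code the unit conversion and expose only the margin, roughly like this (a sketch; the function name and margin default are made up):

```python
LSF_KB_PER_MB = 1024  # fixed: this LSF expects memory requests in kB


def lsf_memory_request_kb(mem_mb, safety_margin=1.1):
    """Return the memory value to pass to bsub, in kB.

    Only the safety margin (extra headroom on top of the declared job
    memory) is user-configurable; the kB/MB conversion never is.
    """
    return int(mem_mb * safety_margin * LSF_KB_PER_MB)
```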
Please take a look at the changes I made above and let me know if this is along the lines of what you had in mind. Note the additional modification of the advertised memory for GPU Glideins with PBS when the requested memory exceeds the per-core limit: it seemed like the advertised value should also match the per-core limit in that case (?).
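For clarity, the intent of that PBS modification is roughly the following (an illustrative sketch; the names are not the actual variables in submit.py):

```python
def advertised_memory_mb(requested_mb, per_core_limit_mb):
    """Memory a GPU Glidein should advertise under a per-core limit.

    If the request exceeds what PBS will actually grant per core,
    advertise the granted per-core limit instead of the request.
    """
    return min(requested_mb, per_core_limit_mb)
```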
This last change allows overriding LSF memory unit defaults in the config.
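For example, a site whose LSF is configured to take memory in MB rather than kB might set something like this (hypothetical option name; check the actual config schema):

```ini
[Cluster]
scheduler = lsf
# override the default assumption that this LSF wants memory in kB
mem_scale = 1
```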
L332 and L335 in `submit.py` should use `self.config`, not `submit_conf`...

Brain fart, it is fine. Don't code before coffee.