Closed SalomeRonja closed 3 years ago
The example pipeline I'm using to test the changes is still running at the moment; once it finishes without any issues, we can merge this into the eoff rebalancing branch, if you're okay with the changes @SchroederB
This pull request introduces 4 alerts and fixes 3 when merging e5326e705069354b5b8ae243cfdad2ced10dff5f into 730b43ff425fe0991c310552f8c61fd389699f62 - view on LGTM.com
new alerts:
fixed alerts:
from my side, the merge is good to go :)
This pull request introduces 2 alerts and fixes 3 when merging 55f0e6dcd79f2cbe95178e877deec6218d66b660 into 730b43ff425fe0991c310552f8c61fd389699f62 - view on LGTM.com
new alerts:
fixed alerts:
This pull request introduces 1 alert and fixes 3 when merging b5bce836efe554dfcf5db86f54444f58c441dc06 into dd356491868a00142261783d8d5a4d9621f56cdf - view on LGTM.com
new alerts:
fixed alerts:
My jobs have been getting stuck in the Euler queue for increasingly long wait times recently, and I know that Benjamin has experienced the same issue. At the moment, we reserve replicas*nmpi_per_replica*1024 MB of memory for every submitted job, which is the per-core default on Euler. This pushes our jobs into the bigmem queue, even though we actually need much less memory in practice.
Example: for a production run of my 28 benzenes in water, I reserve 376'000.00 MB, while the maximum memory actually used during the simulation is only 10'280 MB.
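To make the scaling explicit, here's a small sketch of how the default request grows with the job size. The replica and MPI-rank counts below are hypothetical illustration values, not the actual settings of the run above:

```python
# Sketch of how the default memory reservation scales (hypothetical values,
# not the actual pipeline settings).
def default_mem_request_mb(n_replicas: int, nmpi_per_replica: int,
                           mem_per_core_mb: int = 1024) -> int:
    """Memory reserved by default: one 1024 MB slot per MPI rank."""
    return n_replicas * nmpi_per_replica * mem_per_core_mb

# e.g. 28 replicas with 8 MPI ranks each:
print(default_mem_request_mb(28, 8))  # 229376 MB requested
```

Because the request is multiplied by the total number of MPI ranks, large replica-exchange jobs blow past the threshold for the normal queues even when the per-core memory actually used is tiny.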
To reduce the amount of memory reserved for a job, I extended the option that already existed for steps a and b to the rest of the submission scripts. I also cleaned up and adapted the example pipeline accordingly.
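For reference, the idea boils down to passing an explicit per-core memory request to LSF instead of relying on Euler's 1024 MB-per-core default. A minimal sketch (the function name and values here are hypothetical, not the pipeline's actual API):

```python
# Hypothetical sketch: building a bsub command with an explicit per-core
# memory request, instead of relying on Euler's 1024 MB-per-core default.
def build_bsub_command(n_cores: int, mem_per_core_mb: int, job_script: str) -> str:
    # On LSF, -R "rusage[mem=X]" requests X MB of memory per core.
    return f'bsub -n {n_cores} -R "rusage[mem={mem_per_core_mb}]" {job_script}'

print(build_bsub_command(224, 100, "run_production.sh"))
# bsub -n 224 -R "rusage[mem=100]" run_production.sh
```

With a request like this, a 224-core job reserves 22'400 MB instead of ~229'000 MB, which keeps it out of the bigmem queue.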