Closed eirrgang closed 1 year ago
The original example hard-coded mdrun arguments in the Python script, which made the script valid only for thread-MPI GROMACS installations. This change adds a `--mdrun-arg` command-line option so that mdrun arguments can be supplied at run time instead.
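A minimal sketch of how such a repeatable `--mdrun-arg` option could be parsed and flattened into the argument list handed to mdrun. The names here (`parser`, `mdrun_args`) are illustrative, not the actual `rp_basic_ensemble.py` implementation:

```python
# Hypothetical sketch: collect "--mdrun-arg FLAG VALUE..." groups from the
# command line instead of hard-coding mdrun arguments in the script.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--mdrun-arg",
    action="append",   # option may be repeated for multiple mdrun flags
    nargs="+",         # each occurrence takes a flag name plus its value(s)
    default=[],
    metavar="ARG",
    help="An mdrun flag (without the leading dash) followed by its value(s).",
)

# Mirrors the invocation shown below: --mdrun-arg maxh 0.01
args = parser.parse_args(["--mdrun-arg", "maxh", "0.01"])

# Flatten the parsed groups into the list passed through to mdrun,
# restoring the leading dash on each flag name.
mdrun_args = []
for group in args.mdrun_arg:
    mdrun_args.extend(["-" + group[0], *group[1:]])

print(mdrun_args)  # ['-maxh', '0.01']
```

With `action="append"` the option can be given several times, so unrelated mdrun flags (e.g. a checkpoint interval alongside `maxh`) stay grouped with their own values.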
In a Docker container with 4 cores, I installed gmxapi 0.4.0 with an MPI-enabled GROMACS 2023, and ran the following.
python rp_basic_ensemble.py --resource docker.login --access ssh --venv $VIRTUAL_ENV/ --pilot-option cores=4 --procs-per-sim 2 --size 2 --mdrun-arg maxh 0.01 --log-level DEBUG
Based on the terminal output, I then checked the MD log to confirm the simulation ran successfully with 2 MPI ranks.
cat /home/rp/radical.pilot.sandbox/rp.session.2c419e24-bcec-11ed-9770-0242ac120003/pilot.e6f81a7e-d847-4e28-810e-24c632616aef/rp-basic-ensemble-1/mdrun_6d00f4b07eabc852c23b016d1c2ca339_i0_0/md.log
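Instead of eyeballing the log, the rank count can be scraped programmatically. This is a minimal sketch that assumes the md.log contains a line of the form "Using 2 MPI processes", as MPI-enabled GROMACS builds typically report:

```python
# Sketch: extract the MPI rank count reported in a GROMACS md.log.
# Assumes the log contains a line like "Using 2 MPI processes".
import re

def mpi_ranks_from_log(text: str):
    """Return the MPI rank count reported in md.log text, or None if absent."""
    match = re.search(r"Using (\d+) MPI process", text)
    return int(match.group(1)) if match else None

# Example fragment resembling md.log output from an MPI build.
sample = "Using 2 MPI processes\nUsing 2 OpenMP threads per MPI process\n"
print(mpi_ranks_from_log(sample))  # 2
```

Reading the file with `open(path).read()` and passing the text to this helper would confirm the 2-rank run without paging through the log.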
I believe this has now been confirmed with MPI-enabled GROMACS on bridges2. @wehs7661, do you have any feedback or requested changes, or should this be merged?