uabrc / uabrc.github.io

UAB Research Computing Documentation
https://docs.rc.uab.edu

GROMACS software update with MPI and GPU support on Cheaha #496

Open Premas opened 1 year ago

Premas commented 1 year ago

Based on ticket request #RITM0560743, GROMACS has been updated on Cheaha with GPU support. In addition to GPU acceleration, it also supports thread-level parallelism (thread-MPI and OpenMP).

The updated GROMACS version is available on Cheaha and can be loaded as a module with:

module load rc/GROMACS/2022.3-gpu
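
To confirm that the loaded module provides a GPU-enabled build, the version banner can be inspected; this is a minimal check, and the exact wording of the output may vary between builds.

$ module load rc/GROMACS/2022.3-gpu
$ gmx --version | grep -i "gpu support"    # expected to report CUDA for this build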

Optimization flags tested with this version, and a sample test case

-nb (offloads the non-bonded computation to the GPU)
-ntmpi (sets the number of thread-MPI ranks)
-ntomp (sets the number of OpenMP threads per rank)
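
The first flag selects where the non-bonded work runs, while the other two control CPU-side parallelism; as a rule of thumb, the product of -ntmpi and -ntomp should match the number of CPU cores allocated to the job. A minimal illustration, with an assumed 40-core allocation:

$ # 4 thread-MPI ranks x 10 OpenMP threads = 40 cores, non-bonded work on the GPU
$ gmx mdrun -nb gpu -ntmpi 4 -ntomp 10 -s topol.tpr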

A sample execution pipeline from the NVIDIA documentation was tested, as shown below.

$ DATA_SET=water_GMX50_bare
$ wget -c https://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
$ tar xf ${DATA_SET}.tar.gz
$ cd ./water-cut1.0_GMX50_bare/1536
$ gmx grompp -f pme.mdp
$ gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr
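
On Cheaha, the benchmark above would typically be wrapped in a Slurm batch script. The sketch below is a rough outline only; the partition name, GPU request, and resource sizes are assumptions and should be adjusted to the actual cluster configuration and allocation.

#!/bin/bash
#SBATCH --job-name=gromacs-water-bench
#SBATCH --partition=pascalnodes        # assumed GPU partition name; adjust as needed
#SBATCH --gres=gpu:1                   # request one GPU
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=40             # 4 thread-MPI ranks x 10 OpenMP threads
#SBATCH --time=02:00:00

module load rc/GROMACS/2022.3-gpu

cd ./water-cut1.0_GMX50_bare/1536
gmx grompp -f pme.mdp
gmx mdrun -ntmpi 4 -nb gpu -pin on -v -noconfout -nsteps 5000 -ntomp 10 -s topol.tpr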

Execution of gmx mdrun with 4 thread-MPI ranks, 10 OpenMP threads per rank, and GPU offload is shown below. This version runs gmx mdrun within a single node only. Usage of these optimization flags is covered in the GROMACS documentation.

(Screenshot: gromacs_run, showing the gmx mdrun output)

Note

This version does not support multi-node execution. Running mdrun across more than one node requires GROMACS to be configured with an external MPI library. The current module rc/GROMACS/2022.3-gpu is built from a Singularity container without external MPI support, so the gmx_mpi executable is not available inside the container.
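
For reference, multi-node execution with an externally built MPI version would typically be launched through the scheduler using the gmx_mpi binary. The sketch below is illustrative only, does not work with the current container-based module, and the launcher options are assumptions that depend on how such a build would be configured.

$ # Illustrative only: requires a future MPI-enabled build providing gmx_mpi
$ srun gmx_mpi mdrun -nb gpu -ntomp 10 -s topol.tpr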

Currently, the module rc/GROMACS/2022.3-gpu is recommended, with the expectation that hybrid parallelism (GPU acceleration plus thread-MPI and/or OpenMP) will yield good speedup. If this does not meet a researcher's scalability needs in the future, we can try building GROMACS against an external MPI library and testing it.
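
If an external-MPI build is attempted later, it would roughly follow the standard GROMACS CMake procedure. The sketch below is a hypothetical outline, not a tested recipe; the install prefix, toolchain, and job sizes are assumptions.

$ wget https://ftp.gromacs.org/gromacs/gromacs-2022.3.tar.gz
$ tar xf gromacs-2022.3.tar.gz && cd gromacs-2022.3
$ mkdir build && cd build
$ # GMX_MPI=ON builds the external-MPI gmx_mpi binary; GMX_GPU=CUDA enables GPU offload
$ cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2022.3-mpi
$ make -j 8 && make install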

Issues encountered while building the GROMACS software:

@uabrc/devops opted to deploy GROMACS with a Singularity container rather than building it from source or via EasyBuild, because the EasyBuild route requires OpenMPI, and OpenMPI failed to build due to a linking issue.

Also, the GROMACS Singularity container initially could not find the GPUs on the host. @uabrc/devops fixed this by setting LD_LIBRARY_PATH and binding the host CUDA library path into the container.
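
For illustration, exposing host GPUs to a Singularity container is typically handled roughly as below; the image name and CUDA library path are assumptions, not the exact configuration used by @uabrc/devops.

$ # Assumed image name and CUDA path; --nv exposes the host GPU driver libraries,
$ # -B binds the host CUDA libraries, and SINGULARITYENV_ passes LD_LIBRARY_PATH inside
$ export SINGULARITYENV_LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
$ singularity exec --nv -B /usr/local/cuda:/usr/local/cuda gromacs_2022.3.sif gmx mdrun -nb gpu -s topol.tpr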

A detailed discussion of the issues involved in building the GROMACS software can be found here: https://gitlab.rc.uab.edu/rc/devops/-/issues/57

Premas commented 2 months ago

We would prefer to add this as a tutorial page under https://docs.rc.uab.edu/cheaha/tutorial/.