sdsc / spack

A flexible package manager that supports multiple versions, configurations, platforms, and compilers.
https://spack.io

SDSC: PKG - expanse/0.17.3/cpu/a - Missing Amber (example application) #34

Closed: nwolter closed this issue 1 year ago

nwolter commented 1 year ago

This is an example application.

mkandes commented 1 year ago

@nwolter - Amber22 was built successfully, at least on the CPU side of things for now. Please test when you get a chance.

[mkandes@login02 ~]$ module purge
[mkandes@login02 ~]$ module load slurm
[mkandes@login02 ~]$ module use /cm/shared/apps/spack/0.17.3/cpu/a/share/spack/lmod/linux-rocky8-x86_64/Core
[mkandes@login02 ~]$ module load gcc/10.2.0
[mkandes@login02 ~]$ module load openmpi/4.1.3
[mkandes@login02 ~]$ module load amber/22
[mkandes@login02 ~]$ which pmemd.MPI
/cm/shared/apps/spack/0.17.3/cpu/a/opt/spack/linux-rocky8-zen2/gcc-10.2.0/amber-22-wpgzyekxtx76ssui4zkyvytpgni4y3g3/bin/pmemd.MPI
[mkandes@login02 ~]$
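
One way to double-check that the binary picked up this Open MPI build is to look at its shared-library links, e.g.:

ldd $(which pmemd.MPI) | grep -i libmpi
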
nwolter commented 1 year ago

Getting an error: "mca_bml_base_open() failed".

nwolter commented 1 year ago

Works. In the examples we have an mvapich2 version. Is this still coming? The example script needs to be modified to remove the reference to openib.

(an excerpt from ompi_info output confirms this)

. . .
MCA backtrace: execinfo (MCA v2.1.0, API v2.0.0, Component v4.1.3)
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.1.3)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.1.3)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.1.3)
MCA compress: bzip (MCA v2.1.0, API v2.0.0, Component v4.1.3)
. . .
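
The btl portion of that list can be regenerated with something along these lines, assuming the same openmpi/4.1.3 module is loaded:

ompi_info | grep 'MCA btl'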

mkandes commented 1 year ago

@nwolter - Yes, I actually ran a test this morning as well. This was my batch job script ...

#!/usr/bin/env bash

#SBATCH --job-name=amber-gin
#SBATCH --account=use300
#SBATCH --partition=ind-shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00
#SBATCH --output=%x.o%j.%N

module purge
module load slurm
module use /cm/shared/apps/spack/0.17.3/cpu/a/share/spack/lmod/linux-rocky8-x86_64/Core
module load gcc/10.2.0
module load openmpi/4.1.3
module load amber/22

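# Restrict Open MPI to the self and vader (shared-memory) BTLs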
export OMPI_MCA_btl='self,vader'

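# UCX: shared-memory plus RC/UD/DC InfiniBand transports on port 1 of the mlx5_2 HCA, single rendezvous rail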
export UCX_TLS='shm,rc,ud,dc'
export UCX_NET_DEVICES='mlx5_2:1'
export UCX_MAX_RNDV_RAILS=1

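# OpenMP: one thread per allocated CPU, pinned close on cores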
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"
export OMP_PLACES='cores'
export OMP_PROC_BIND='close'

printenv

time -p mpirun pmemd.MPI -O -i gin -c md12.x -o gbin.v22.out
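
A script along these lines goes in through sbatch as usual (the file name below is just a placeholder):

sbatch amber-gin.sb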

As for whether or not we also need an mvapich2 version, we can always install more versions. It's more a question of whether we need to do so. Idk. If we can get one version of everything we need installed before the software stack goes into production, that's a win in my book.

Anyhow, I think we can close this issue and revisit whether more versions are needed if more in-depth benchmarking finds a better combination. It's more important right now to get a GPU version installed and tested.