Closed: NilaBlueshirt closed this issue 2 months ago
#!/bin/bash
# 2024-05-23 by Nil with help from Alan
# the tarballs were copied from Sol amber/22v3
# install guide: https://ambermd.org/doc12/Amber24.pdf
# the clean_build script was edited to leave this build out
# the GCC compiler with cuda-12.4 didn't work;
# we need cuda < 12 and gcc <= 11, so the selected cuda module was built as
# cuda@11.8.0%gcc@11.2.0+allow-unsupported-compilers
module load cmake-3.27.7-tt \
intel-oneapi-compilers-2023.2.1-5l \
intel-oneapi-mkl-2023.2.0-me \
intel-oneapi-mpi-2021.10.0-gcc-12.3.0 \
cuda-11.8.0-gcc-11.2.0-66 \
mamba/latest
amber_home=/packages/apps/build/amber/amber22_v3/amber22_src
cd $amber_home
cd build
# on Sol, Amber 22 bundles Miniconda, and it's the default option
# MKL flags added
./configure_cmake.py --prefix /packages/apps/amber/22_v3 \
--compiler INTEL \
--mpi \
--cuda \
-mkl \
-mkl-multithreaded \
--no-gui
# after this step, edit the run_cmake script so its cmake call reads:
# cmake $AMBER_PREFIX/amber22_src \
#     -DCMAKE_INSTALL_PREFIX=/packages/apps/amber/22_v3 \
#     -DCOMPILER=INTEL \
#     -DMPI=TRUE -DCUDA=TRUE -DINSTALL_TESTS=TRUE \
#     -DDOWNLOAD_MINICONDA=TRUE
./run_cmake
make install
source /packages/apps/amber/22_v3/amber.sh
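After sourcing amber.sh, it is worth confirming that the expected executables actually landed in the install prefix. A minimal sanity-check sketch: the prefix `/packages/apps/amber/22_v3` is the one used in this build, but the binary list is an assumption (adjust it to whatever your configure flags actually enable):

```shell
# Sanity-check sketch: amber.sh sets AMBERHOME in a real session; fall back
# to this build's install prefix if it is unset. The executables checked
# below are typical Amber binaries (an assumption, not an exhaustive list).
AMBERHOME="${AMBERHOME:-/packages/apps/amber/22_v3}"
for exe in sander pmemd pmemd.MPI pmemd.cuda; do
  if [ -x "$AMBERHOME/bin/$exe" ]; then
    echo "found   $exe"
  else
    echo "missing $exe"
  fi
done
```

If pmemd.cuda is missing here, the CUDA part of the build silently failed and the cmake output is the place to look.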
Testing scripts: https://github.com/John-Kazan/MDGuide
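The cuda < 12 / gcc <= 11 constraint noted at the top of the script can be spot-checked before configuring. A hedged sketch using version-sort comparison; the version strings are hard-coded here for illustration (in practice take them from `gcc -dumpversion` and the `nvcc --version` output):

```shell
# Sketch: verify the toolchain satisfies gcc <= 11 and cuda < 12.
# ver_le A B is true when A <= B under version-sort (sort -V) ordering.
ver_le() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }

gcc_ver=11.2.0    # illustration only; in practice: gcc_ver=$(gcc -dumpversion)
cuda_ver=11.8.0   # illustration only; in practice: parse `nvcc --version`

if ver_le "$gcc_ver" "11.999" && ver_le "$cuda_ver" "11.999"; then
  echo "toolchain OK: gcc $gcc_ver, cuda $cuda_ver"
else
  echo "toolchain violates gcc<=11 / cuda<12 constraint" >&2
fi
```

Running this kind of check right after `module load` catches a wrong default module before a long cmake/make cycle does.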
#!/bin/bash
# step 3 - add the MPI support
module load cmake-3.26.5-gcc-11.2.0-ed \
gcc-8.5.0-gcc-11.2.0-kn \
intel-oneapi-mkl-2023.2.0-me \
openblas-0.3.24-5f \
openmpi/4.1.5 \
cuda-11.8.0-gcc-11.2.0-66
# edit the run_cmake script:
# -DCMAKE_INSTALL_PREFIX=/packages/apps/amber/24 \
# -DMPI=TRUE -DCUDA=TRUE
export CUDA_HOME=/packages/apps/spack/21.2/opt/spack/x86_64_v3/gcc-11.2.0/cuda-11.8.0-66eeijo
./run_cmake
make install
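Hard-coding CUDA_HOME to a Spack hash path works but breaks whenever the module is rebuilt. A hedged alternative sketch: derive the path from whatever `nvcc` the loaded cuda module puts on PATH, falling back to the Spack path used above (the fallback path is from this build; the derivation itself is an assumption, not part of the original script):

```shell
# Sketch: derive CUDA_HOME from the nvcc on PATH (e.g. provided by the
# loaded cuda module), falling back to the Spack path used in this build.
if command -v nvcc >/dev/null 2>&1; then
  # nvcc lives in $CUDA_HOME/bin, so strip two path components
  CUDA_HOME="$(dirname "$(dirname "$(command -v nvcc)")")"
else
  CUDA_HOME=/packages/apps/spack/21.2/opt/spack/x86_64_v3/gcc-11.2.0/cuda-11.8.0-66eeijo
fi
export CUDA_HOME
echo "CUDA_HOME=$CUDA_HOME"
```

Either way the exported value is an absolute directory, which is what run_cmake needs.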
Contact email: John.Kazan@asu.edu
ASURITE: ikazan
Software Name: Amber
Software version: 22
Notes: Spack only has v20 and it has CUDA conflicts. The user needs MPI and CUDA.
Cluster: Phoenix