TinkerTools / tinker-hp

Tinker-HP: High-Performance Massively Parallel Evolution of Tinker on CPUs & GPUs
http://tinker-hp.org/

v1.1: Regression of newinduce_pme.f #4

Closed · e-kwsm closed this issue 4 years ago

e-kwsm commented 4 years ago

Thanks @lhjolly for fixing #3, but I get another error like the following:

Fatal error in MPI_Irecv: Invalid communicator, error stack:
MPI_Irecv(170): MPI_Irecv(buf=0x2ab145551140, count=8192, dtype=0x4c000829, src=4, tag=5, comm=0x0, request=0x2ab13ad27d00) failed
MPI_Irecv(90).: Invalid communicator
Fatal error in MPI_Irecv: Invalid communicator, error stack:
MPI_Irecv(170): MPI_Irecv(buf=0x2ae6f689af40, count=8192, dtype=0x4c000829, src=20, tag=21, comm=0x0, request=0x2ae6eaff7d00) failed
MPI_Irecv(90).: Invalid communicator

The error comes from https://github.com/TinkerTools/Tinker-HP/blob/b1f4c8172933d1cf30cbc368b6109f5e5bb734d6/v1.1/source/newinduce_pme.f#L1434-L1436 and also https://github.com/TinkerTools/Tinker-HP/blob/b1f4c8172933d1cf30cbc368b6109f5e5bb734d6/v1.1/source/newinduce_pme.f#L1483-L1485
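
For reference, comm=0x0 in the trace means the Fortran communicator handle passed to MPI_IRECV is still zero, i.e. comm_rec was apparently never created on these ranks. A minimal sketch (not Tinker-HP code; MPI_REAL8 is assumed from the 8-byte dtype in the trace) that reproduces the same fatal error under an MPICH-derived MPI such as Intel MPI:

program invalid_comm
  use mpi
  implicit none
  integer :: ierr, req, comm
  real(8) :: buf(8192)
  call MPI_INIT(ierr)
  comm = 0   ! stands in for a comm_rec that was never set up: invalid handle
  ! aborts here with "Invalid communicator" under the default error handler
  call MPI_IRECV(buf, 8192, MPI_REAL8, 4, 5, comm, req, ierr)
  call MPI_WAIT(req, MPI_STATUS_IGNORE, ierr)
  call MPI_FINALIZE(ierr)
end program invalid_comm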

Again I tried replacing comm_rec with MPI_COMM_WORLD, but execution hung at https://github.com/TinkerTools/Tinker-HP/blob/b1f4c8172933d1cf30cbc368b6109f5e5bb734d6/v1.1/source/newinduce_pme.f#L1488-L1493
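
A plausible explanation for the hang (an assumption, not confirmed in this thread): comm_rec is a sub-communicator, and message envelopes only match within a single communicator, so a receive posted on MPI_COMM_WORLD never sees a send still posted on comm_rec, and the corresponding MPI_WAIT blocks forever. A two-rank sketch with hypothetical buffer sizes and tags (deadlocks by design; run with mpirun -np 2):

program comm_mismatch
  use mpi
  implicit none
  integer :: ierr, rank, comm_rec, req
  real(8) :: buf(4)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  ! a duplicate of MPI_COMM_WORLD stands in for Tinker-HP's comm_rec
  call MPI_COMM_DUP(MPI_COMM_WORLD, comm_rec, ierr)
  if (rank == 0) then
     call MPI_ISEND(buf, 4, MPI_REAL8, 1, 7, comm_rec, req, ierr)
     call MPI_WAIT(req, MPI_STATUS_IGNORE, ierr)
  else if (rank == 1) then
     ! different communicator: this receive can never match the send above
     call MPI_IRECV(buf, 4, MPI_REAL8, 0, 7, MPI_COMM_WORLD, req, ierr)
     call MPI_WAIT(req, MPI_STATUS_IGNORE, ierr)   ! blocks forever
  end if
  call MPI_FINALIZE(ierr)
end program comm_mismatch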


Tinker-HP: v1.1, commit b1f4c8172933d1cf30cbc368b6109f5e5bb734d6
MPI: Intel MPI 2018 Update 3

lhjolly commented 4 years ago

Hi,

There should not be any problem there. Can you provide us with your input files, the command line you used, and your execution setup?

Thanks

e-kwsm commented 4 years ago

Thanks @lhjolly for the quick reply.

I built Tinker-HP with the following configuration:

diff --git a/v1.1/2decomp_fft/src/Makefile.inc b/v1.1/2decomp_fft/src/Makefile.inc
index c03d26eede12..bb6d05f40c57 100644
--- a/v1.1/2decomp_fft/src/Makefile.inc
+++ b/v1.1/2decomp_fft/src/Makefile.inc
@@ -37 +37 @@ else ifeq ($(FFT),fftw3_f03)
-  FFTW_PATH=/usr/local/fftw-3.3.4/
+  FFTW_PATH=/path/to/fftw/3.3.8
@@ -55 +55 @@ endif
-F90=mpif90
+F90=mpiifort
@@ -60 +60 @@ CPPFLAGS=-cpp
-CRAYPTR=-fcray-pointer
+CRAYPTR=#-fcray-pointer
diff --git a/v1.1/source/Makefile b/v1.1/source/Makefile
index 5e0924696e7d..e0c168c792a4 100644
--- a/v1.1/source/Makefile
+++ b/v1.1/source/Makefile
@@ -4 +4 @@
-RunF77 = mpif90
+RunF77 = mpiifort
@@ -9 +9 @@ FFTDECOMP = -I$(FFTDECOMPDIR)/include -L$(FFTDECOMPDIR)/lib -l2decomp_fft
-BLAS   = -I$(MKLDIR)/include  -L$(MKLDIR)/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread #-lm
+BLAS   = -mkl
@@ -14 +14 @@ BLAS   = -I$(MKLDIR)/include  -L$(MKLDIR)/lib/intel64 -lmkl_intel_lp64 -lmkl_int
-FFLAGS = -O3 -fopenmp -ffast-math
+FFLAGS = -O3 -qopenmp

Then I ran:

$ cd v1.1/examples
$ ./dhfr.run

     ######################################################################
   ##########################################################################
  ###                                                                      ###
 ###            Tinker-HP  ---  Software Tools for Molecular Design         ###
 ##                                                                          ##
 ##                        Version 1.1  June 2018                            ##
 ##                                                                          ##
 ##     Copyright (c) Washington University in Saint Louis (WU)              ##
 ##                   The University of Texas at Austin                      ##
 ##                   Sorbonne Universites, UPMC (Sorbonne)                  ##
 ##                              1990-2018                                   ##
 ###                       All Rights Reserved                              ###
  ###                                                                      ###
   ##########################################################################
     ######################################################################

License Number : PUBLIC_LICENSE

Cite this work as :

   Tinker-HP: a Massively Parallel Molecular Dynamics Package for Multiscale
   Simulations of Large Complex Systems with Advanced Polarizable Force Fields.

   Louis Lagardere, Luc-Henri Jolly, Filippo Lipparini, Felix Aviat,
   Benjamin Stamm, Zhifeng F. Jing, Matthew Harger, Hedieh Torabifard,
   G. Andres Cisneros, Michael J. Schnieders, Nohad Gresh, Yvon Maday,
   Pengyu Y. Ren, Jay W. Ponder and Jean-Philip Piquemal,

   Chem. Sci., 2018, 9, 956-972   doi: 10.1039/c7sc04531j

 3D Domain Decomposition
Nx =     2  Ny =     2  Nz =     2
 In auto-tuning mode......
 factors:            1           2           4           8
 processor grid           1  by            8  time=  9.121298789978027E-004
 processor grid           2  by            4  time=  3.706514835357666E-004
 processor grid           4  by            2  time=  3.453195095062256E-004
 processor grid           8  by            1  time=  2.660751342773438E-004
 the best processor grid is probably            8  by            1

 ***** Using the FFTW (F2003 interface) engine *****

 Smooth Particle Mesh Ewald Parameters :

    Ewald Coefficient      Charge Grid Dimensions      B-Spline Order

          0.5446               64    64    64                 5
 3D Domain Decomposition
Nx =     2  Ny =     2  Nz =     2

 Molecular Dynamics Trajectory via r-RESPA MTS Algorithm
Fatal error in MPI_Irecv: Invalid communicator, error stack:
MPI_Irecv(170): MPI_Irecv(buf=0x2b7b79ffcd40, count=65536, dtype=0x4c000829, src=5, tag=6, comm=0x0, request=0x2b7b704ef520) failed
MPI_Irecv(90).: Invalid communicator
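
(Aside for readers of the log: the "auto-tuning mode" lines come from the 2decomp_fft layer, which times candidate processor grids when it is initialized without a fixed grid. A minimal sketch of that initialization, assuming the stock 2DECOMP&FFT API; Tinker-HP's actual call site may differ:)

program grid_autotune
  use decomp_2d
  use mpi
  implicit none
  integer :: ierr
  call MPI_INIT(ierr)
  ! p_row = p_col = 0 asks the library to benchmark the candidate grids
  ! (1x8, 2x4, 4x2, 8x1 on 8 ranks, as in the log) and keep the fastest
  call decomp_2d_init(64, 64, 64, 0, 0)
  call decomp_2d_finalize
  call MPI_FINALIZE(ierr)
end program grid_autotune
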
lhjolly commented 4 years ago

Hi again,

We messed up the different versions, but it should be OK now.

e-kwsm commented 4 years ago

Thanks, now I can run all the examples.