AMReX-Astro / wdmerger

A software package for simulating white dwarf mergers with CASTRO

wdmerger_collision crashes on 2nd-gen Xeon Phi (KNL) with Intel compilers #2

Closed · bcfriesen closed this issue 7 years ago

bcfriesen commented 7 years ago

The test problem wdmerger_collision crashes somewhere in the multigrid solver when built with the Intel compilers v16 and v17 (beta) targeting 2nd-gen Xeon Phi ("Knights Landing"). The exact point of the crash varies from run to run. Below are a few example outputs:

Example 1:

Initializing the data at level 1
Done initializing the level 1 data 
STEP = 0 TIME = 0 : REGRID  with lbase = 0
  Level 1   24 grids  393216 cells  5.555555556 % of domain
            smallest grid: 32 x 32 x 16  biggest grid: 32 x 32 x 16

... multilevel solve for new phi at base level 0 to finest level 1
Gravity::make_radial_phi() time = 0.3054199219
 ... Making bc's for phi at level 0 
Gravity::fill_multipole_BCs() time = 3.019109011
*** Error in `/global/u2/f/friesen/wdmerger/tests/wdmerger_3D/./Castro3d.intel.MPI.OMP.ex': free(): invalid next size (normal): 0x00000000033c1380 ***

Example 2:

Castro::numpts_1d at level  1 is 340
Initializing the data at level 1
Done initializing the level 1 data 
STEP = 0 TIME = 0 : REGRID  with lbase = 0
  Level 1   12 grids  393216 cells  5.555555556 % of domain
            smallest grid: 32 x 32 x 32  biggest grid: 32 x 32 x 32

... multilevel solve for new phi at base level 0 to finest level 1
Gravity::make_radial_phi() time = 0.007010936737
 ... Making bc's for phi at level 0 
Gravity::fill_multipole_BCs() time = 0.1369400024
 BOXLIB ERROR: fab_dataptr_bx_c: bx is too large
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
*** Error in `/global/u2/f/friesen/wdmerger/tests/wdmerger_3D/./Castro3d.intel.MPI.OMP.ex': free(): corrupted unsorted chunks: 0x0000000001e61050 ***
*** Error in `/global/u2/f/friesen/wdmerger/tests/wdmerger_3D/./Castro3d.intel.MPI.OMP.ex': double free or corruption (!prev): 0x0000000001e61620 ***
*** Error in `/global/u2/f/friesen/wdmerger/tests/wdmerger_3D/./Castro3d.intel.MPI.OMP.ex': free(): corrupted unsorted chunks: 0x0000000001e60a50 ***

Example 3:

Castro::numpts_1d at level  1 is 340
Initializing the data at level 1
Done initializing the level 1 data 
STEP = 0 TIME = 0 : REGRID  with lbase = 0
  Level 1   12 grids  393216 cells  5.555555556 % of domain
            smallest grid: 32 x 32 x 32  biggest grid: 32 x 32 x 32

... multilevel solve for new phi at base level 0 to finest level 1
Gravity::make_radial_phi() time = 0.008671045303
 ... Making bc's for phi at level 0 
Gravity::fill_multipole_BCs() time = 0.1363129616
0::Segfault !!!
See Backtrace.rg_0_rl_0.0 file for details
Rank 0 [Fri Nov 25 13:52:41 2016] [c10-4c0s0n1] application called MPI_Abort(comm=0x84000000, -1) - process 0
srun: error: nid11137: task 0: Aborted
srun: Terminating job step 3155671.7

Interestingly, when compiled with DEBUG set to TRUE, it fails with this error:

Gravity::make_radial_phi() time = 0.008027076721
 ... Making bc's for phi at level 0 
Gravity::fill_multipole_BCs() time = 0.1858928204
forrtl: severe (408): fort: (2): Subscript #1 of the array IN has value 2 which is greater than the upper bound of 1

I've been unable to figure out which array it's referring to (the DDT debugger doesn't capture errors from forrtl, so it can't show a call stack).
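
For reference, one way to get more detail on a forrtl severe error like (408) is to compile the Fortran sources with the Intel -traceback option, which makes the runtime print a call stack with file and line numbers and could help identify the offending array. The line below only illustrates the relevant flags; how they are injected depends on the local BoxLib GNUmakefile, and the source file name is a placeholder:

    # Illustrative Intel Fortran flags only; the file name is a placeholder and
    # the flags must be wired into the actual BoxLib build system.
    ifort -g -O0 -traceback -check bounds -qopenmp -c some_boxlib_source.f90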

This error does not occur with either the GCC or Cray compilers targeting 2nd-gen Xeon Phi.

Another data point: a Nyx user on a different 2nd-gen Xeon Phi system also sees segfaults when building with the Intel compilers. So the evidence suggests this may not be a Castro/wdmerger-specific problem.

bcfriesen commented 7 years ago

Further investigation has revealed the following additional data points:

  1. The problem exists only with OpenMP codes. The pure MPI version of wdmerger does not have this problem.
  2. The problem occurs for other OpenMP codes which use the multigrid solver, e.g., the DustCollapse problem in Castro (using the inputs_3d_poisson_regtest).
  3. Increasing the thread stack size via OMP_STACKSIZE (or KMP_STACKSIZE for the Intel OpenMP runtime) to arbitrarily large values does not seem to fix the problem.
  4. Trading OpenMP threads for MPI processes makes the problem go away, e.g., running the same wdmerger problem with 64 MPI processes and 1 OpenMP thread per process works without issue, while 1 MPI process with 64 OpenMP threads segfaults in multigrid (see the launch sketch after this list).
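
For concreteness, the two launch configurations from point 4 look roughly like this on a single 64-core KNL node under Slurm (the srun flags and the inputs-file name are illustrative, not the exact job script used here):

    # Works: 64 MPI ranks, 1 OpenMP thread each
    export OMP_NUM_THREADS=1
    srun -n 64 -c 1 ./Castro3d.intel.MPI.OMP.ex inputs

    # Segfaults in multigrid: 1 MPI rank, 64 OpenMP threads
    export OMP_NUM_THREADS=64
    srun -n 1 -c 64 ./Castro3d.intel.MPI.OMP.ex inputs
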
WeiqunZhang commented 7 years ago

At least one of the examples shows that a Backtrace.rg_0_rl_0.0 file was generated. Can you look at that file? You will need to run the following to find the file name and line number for the addresses in that file.

addr2line -Cfie Castro3d.intel.MPI.OMP.ex 0xaddress

Weiqun
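
If the backtrace file only contains raw return addresses (like the 0x... values shown later in this thread), they can all be decoded in one pass; the file name below matches the one reported above, and the grep pattern is just an assumption about the file's format:

    # Decode every hex address found in the backtrace file in a single call.
    addr2line -Cfie Castro3d.intel.MPI.OMP.ex $(grep -oE '0x[0-9a-fA-F]+' Backtrace.rg_0_rl_0.0)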

WeiqunZhang commented 7 years ago

Brian,

Could you show us the Slurm script you used for the DustCollapse problem?

Weiqun

bcfriesen commented 7 years ago

Hi @WeiqunZhang, I deleted the backtrace file but will regenerate it later today. I have run wdmerger repeatedly through DDT, and each time DDT says the segfault occurs in the deallocate(fb%p) statement on line 1010 of fab.f90, which is ultimately called from the destructor of MGT_Solver. Walking up the call stack from that deallocate() statement, it looks like the first sign of trouble is in the call to multifab_destroy() on line 1104 of cc_mg_cpp.f90. In particular, DDT says it cannot access the memory at the address pointed to by mgts%uu(i).

bcfriesen commented 7 years ago

@WeiqunZhang DustCollapse segfaults on the allocate() at line 221 of boxarray_f.f90 (descended from the MGT_Solver::solve() call). This was with 1 MPI process and 64 OpenMP threads.

bcfriesen commented 7 years ago

Running DustCollapse with 4 MPI processes and 16 OpenMP threads per process, it now crashes on the allocate() at line 729 of multifab_f.f90, which is ultimately called from MGT_Solver::build().

WeiqunZhang commented 7 years ago

I think this might be an Intel compiler bug. Both the Cray and GNU compilers work for the DustCollapse test.

Weiqun

bcfriesen commented 7 years ago

Another data point: if I compile and run on Xeon (Haswell) instead of Xeon Phi, a run with 2 MPI processes and 16 OpenMP threads quits with this error:


... old-time level solve at level 0
 ... solve for phi at level 0
 ... Making bc's for phi at level 0 
Gravity::fill_multipole_BCs() time = 0.07604002953
F90mg: Initial rhs                  =   11.472376    
F90mg: Initial residual (resid0)    =   75.184443    
F90mg: Iteration     1 Lev 1 resid/bnorm =   0.0000000    
 Converged res < rel_eps*max_norm   0.000000000000000E+000
  1.000000000000000E-010
F90mg: Final Iter.   1 resid/bnorm  =   0.0000000    
F90mg: Solve time:  0.578094E-02 Bottom Solve time:  0.584126E-03

 Solve Time =   5.311965942382812E-003
Gravity::solve_for_phi() time = 0.08549404144
... Entering hydro advance

BoxLib::Abort::1::State has NaNs in the density component::check_for_nan() !!!

... Leaving hydro advance

BoxLib::Abort::0::State has NaNs in the density component::check_for_nan() !!!
See Backtrace.rg_1_rl_1.0 file for details
See Backtrace.rg_0_rl_0.0 file for details
Rank 1 [Tue Nov 29 14:17:47 2016] [c0-0c0s2n2] application called MPI_Abort(comm=0x84000002, -1) - process 1
Rank 0 [Tue Nov 29 14:17:47 2016] [c0-0c0s2n2] application called MPI_Abort(comm=0x84000004, -1) - process 0
srun: error: nid00010: task 1: Aborted
srun: Terminating job step 3175288.6
slurmstepd: error: *** STEP 3175288.6 ON nid00010 CANCELLED AT 2016-11-29T14:17:47 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: nid00010: task 0: Aborted

If I use 32 MPI processes and 1 OpenMP thread per process, the error goes away. The backtrace file is as follows:

    If necessary, one can use 'readelf -wl my_exefile | grep my_line_address'
    to find out the offset for that line.

 0: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x545685]
    BLBackTrace::print_backtrace_info(_IO_FILE*)
    /global/homes/f/friesen/BoxLib/Src/C_BaseLib/BLBackTrace.cpp:82

 1: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5461b5]
    BLBackTrace::handler(int)
    /global/homes/f/friesen/BoxLib/Src/C_BaseLib/BLBackTrace.cpp:46

 2: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5cbc15]
    std::string::_Rep::_M_dispose(std::allocator<char> const&)
    /usr/include/c++/4.8/bits/basic_string.h:240
    std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
    /usr/include/c++/4.8/bits/basic_string.h:539
    Castro::check_for_nan(MultiFab&, int)
    /global/homes/f/friesen/Castro/Source/Castro.cpp:3258

 3: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5dc0d5]
    Castro::do_advance(double, double, int, int, int, int)
    /global/homes/f/friesen/Castro/Source/Castro_advance.cpp:214

 4: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5db264]
    Castro::advance(double, double, int, int)
    /global/homes/f/friesen/Castro/Source/Castro_advance.cpp:95

bcfriesen commented 7 years ago

Another update: the root cause is an Intel compiler bug. It appeared between v16.0.2.181 and v16.0.3.210 and persists in v17. Unfortunately, the oldest version available on Cori is 16.0.3.210, which means we cannot run ANY BoxLib code that uses F_MG on Cori if it is built with any of the available Intel compilers; we're stuck with CCE and GCC.
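
As a workaround sketch for Cori (assuming a standard Cray programming-environment module layout and a BoxLib/Castro GNUmakefile that selects the compiler via COMP; check the local makefile before relying on these exact variables):

    # Switch from the Intel to the GNU programming environment, then rebuild.
    module swap PrgEnv-intel PrgEnv-gnu
    make realclean
    make -j 8 COMP=gnu USE_MPI=TRUE USE_OMP=TRUE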

maxpkatz commented 7 years ago

Closed; if this is still an issue, it can be reopened on Castro.