Further investigation has revealed the following additional data points:
- The problem exists only with OpenMP codes. The pure MPI version of wdmerger does not have this problem.
- The problem occurs for other OpenMP codes that use the multigrid solver, e.g., the DustCollapse problem in Castro (using the inputs_3d_poisson_regtest inputs file).
- Increasing the thread stack size via OMP_STACKSIZE (or KMP_STACKSIZE for the Intel OpenMP runtime) to arbitrarily large values does not fix the problem (see the sketch after this list).
- Swapping MPI processes for OpenMP threads does make the problem go away: running the same wdmerger problem with 64 MPI processes and 1 OpenMP thread per process works without issue, while the same problem with 1 MPI process and 64 OpenMP threads segfaults in multigrid.
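For the record, here is a minimal sketch of the kind of job settings used when testing the stack-size hypothesis. The variable names (OMP_STACKSIZE, KMP_STACKSIZE) are the controls named in the list above; the specific values, thread count, and srun geometry are illustrative assumptions, not the exact script:

```
# Illustrative settings only; the values and node geometry are assumptions.
export OMP_NUM_THREADS=64
export OMP_STACKSIZE=512M   # standard OpenMP control for worker-thread stacks
export KMP_STACKSIZE=512M   # Intel-runtime equivalent; takes precedence under ifort
ulimit -s unlimited         # the master thread's stack obeys ulimit, not OMP_STACKSIZE
srun -n 1 -c 64 ./Castro3d.intel.MPI.OMP.ex inputs_3d_poisson_regtest
```

Even with these set arbitrarily large, the segfault persisted, which is what points away from a simple thread-stack overflow.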
At least one of the examples shows a Backtrace.rg_0_rl_0.0 file was generated. Can you look at that file? You will need to run the following to find out the line number and file name for the addresses in the file:
addr2line -Cfie Castro3d.intel.MPI.OMP.ex 0xaddress
Weiqun
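When the backtrace contains many frames, all the addresses can be translated in one pass. A minimal sketch, assuming the Backtrace file lists its raw addresses as 0x... tokens:

```
# Pull every hex address out of the backtrace file and symbolize them in one go.
# -C demangles C++ names, -f prints function names, -i expands inlined frames.
grep -oE '0x[0-9a-fA-F]+' Backtrace.rg_0_rl_0.0 \
  | xargs addr2line -Cfie Castro3d.intel.MPI.OMP.ex
```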
Brian,
Could you show us the Slurm script you used for the DustCollapse problem?
Weiqun
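For context, a minimal sketch of the kind of Cori KNL batch script in question; every directive and value here is an illustrative assumption, not the actual script from the run:

```
#!/bin/bash
#SBATCH -N 1            # one Knights Landing node
#SBATCH -C knl          # Cori KNL constraint
#SBATCH -t 00:30:00

export OMP_NUM_THREADS=64
export OMP_STACKSIZE=512M

# 1 MPI rank x 64 threads: the decomposition that segfaults in multigrid
srun -n 1 -c 64 --cpu_bind=cores ./Castro3d.intel.MPI.OMP.ex inputs_3d_poisson_regtest
```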
Hi @WeiqunZhang, I deleted the backtrace file but will regenerate it later today. I have run wdmerger repeatedly through DDT, and each time DDT says the segfault occurs in the deallocate(fb%p) on line 1010 of fab.f90, which ultimately gets called from the destructor of MGT_Solver. Walking up the call stack from that deallocate() statement, it looks like the first sign of trouble is in the call to multifab_destroy() on line 1104 of cc_mg_cpp.f90. In particular, DDT says it cannot access the memory at the address pointed to by mgts%uu(i).
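One way to push corruption like this back toward its first occurrence is to rebuild with the Intel Fortran runtime checks enabled. A sketch under stated assumptions: -g, -traceback, and -check all are standard ifort options, but the variable used here to inject them (F90FLAGS) through BoxLib's make system is an assumption:

```
# Illustrative rebuild with Intel runtime diagnostics; the flag plumbing may
# differ in BoxLib's GNUmakefile setup.
make realclean
make -j8 DEBUG=TRUE F90FLAGS="-g -traceback -check all"
```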
@WeiqunZhang DustCollapse segfaults on the allocate() statement at line 221 of boxarray_f.f90 (descended from the MGT_Solver::solve() call). This was with 1 MPI proc and 64 OpenMP threads.
Running DustCollapse with 16 OpenMP threads and 4 MPI procs, it now crashes on the allocate() statement at line 729 of multifab_f.f90, which is ultimately called from MGT_Solver::build().
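Since the crash site moves with the MPI/OpenMP decomposition, it can help to sweep the decomposition systematically at a fixed core count. A minimal sketch; the 64-core geometry is an assumption:

```
# Split 64 cores between MPI ranks and OpenMP threads and test each decomposition.
for t in 64 16 4 1; do
  n=$(( 64 / t ))
  echo "=== $n MPI ranks x $t OpenMP threads ==="
  OMP_NUM_THREADS=$t srun -n $n -c $t ./Castro3d.intel.MPI.OMP.ex inputs_3d_poisson_regtest
done
```

In this thread the thread-heavy end of the sweep crashes while the rank-heavy end runs clean, consistent with a threading-specific failure.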
I think this might be an Intel compiler bug. Both the Cray and GNU compilers work for the DustCollapse test.
Weiqun
Another data point: if I compile and run on Xeon (Haswell) instead of Xeon Phi, a run with 2 MPI procs and 16 threads quits with this error:
... old-time level solve at level 0
... solve for phi at level 0
... Making bc's for phi at level 0
Gravity::fill_multipole_BCs() time = 0.07604002953
F90mg: Initial rhs = 11.472376
F90mg: Initial residual (resid0) = 75.184443
F90mg: Iteration 1 Lev 1 resid/bnorm = 0.0000000
Converged res < rel_eps*max_norm 0.000000000000000E+000
1.000000000000000E-010
F90mg: Final Iter. 1 resid/bnorm = 0.0000000
F90mg: Solve time: 0.578094E-02 Bottom Solve time: 0.584126E-03
Solve Time = 5.311965942382812E-003
Gravity::solve_for_phi() time = 0.08549404144
... Entering hydro advance
BoxLib::Abort::1::State has NaNs in the density component::check_for_nan() !!!
... Leaving hydro advance
BoxLib::Abort::0::State has NaNs in the density component::check_for_nan() !!!
See Backtrace.rg_1_rl_1.0 file for details
See Backtrace.rg_0_rl_0.0 file for details
Rank 1 [Tue Nov 29 14:17:47 2016] [c0-0c0s2n2] application called MPI_Abort(comm=0x84000002, -1) - process 1
Rank 0 [Tue Nov 29 14:17:47 2016] [c0-0c0s2n2] application called MPI_Abort(comm=0x84000004, -1) - process 0
srun: error: nid00010: task 1: Aborted
srun: Terminating job step 3175288.6
slurmstepd: error: *** STEP 3175288.6 ON nid00010 CANCELLED AT 2016-11-29T14:17:47 ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: nid00010: task 0: Aborted
If I use 32 MPI procs and 1 OpenMP thread, the error goes away. The backtrace file is as follows:
If necessary, one can use 'readelf -wl my_exefile | grep my_line_address' to find out the offset for that line.
0: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x545685]
BLBackTrace::print_backtrace_info(_IO_FILE*)
/global/homes/f/friesen/BoxLib/Src/C_BaseLib/BLBackTrace.cpp:82
1: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5461b5]
BLBackTrace::handler(int)
/global/homes/f/friesen/BoxLib/Src/C_BaseLib/BLBackTrace.cpp:46
2: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5cbc15]
std::string::_Rep::_M_dispose(std::allocator<char> const&)
/usr/include/c++/4.8/bits/basic_string.h:240
std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()
/usr/include/c++/4.8/bits/basic_string.h:539
Castro::check_for_nan(MultiFab&, int)
/global/homes/f/friesen/Castro/Source/Castro.cpp:3258
3: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5dc0d5]
Castro::do_advance(double, double, int, int, int, int)
/global/homes/f/friesen/Castro/Source/Castro_advance.cpp:214
4: /global/u2/f/friesen/wdmerger/tests/wdmerger_collision/./Castro3d.intel.MPI.OMP.ex() [0x5db264]
Castro::advance(double, double, int, int)
/global/homes/f/friesen/Castro/Source/Castro_advance.cpp:95
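As a worked instance of the hint at the top of the backtrace file, a raw frame address can be mapped back to source. The address 0x5cbc15 is taken from frame 2 above; the commands are a sketch:

```
# Symbolize frame 2 and cross-check its entry in the DWARF line table.
addr2line -Cfie Castro3d.intel.MPI.OMP.ex 0x5cbc15
readelf -wl Castro3d.intel.MPI.OMP.ex | grep 0x5cbc15
```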
Another update: the problem is that the Intel compiler has a bug. It showed up between v16.0.2.181 and v16.0.3.210, and persists in v17. Unfortunately the oldest version available on Cori is 16.0.3.210, which means we cannot run ANY BoxLib code that uses F_MG on Cori if it was built with any of the available Intel compilers; we're stuck with CCE and GCC.
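On a Cray system like Cori, the practical workaround is to switch programming environments. A sketch using the standard Cray PE module names (assumed here, not quoted from the thread):

```
# Confirm that no pre-bug Intel compiler (older than 16.0.3.210) is installed.
module avail intel
# Build with a compiler that works for this code instead:
module swap PrgEnv-intel PrgEnv-gnu     # GCC
# module swap PrgEnv-intel PrgEnv-cray  # CCE
```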
Closed; if this is still an issue, it can be reopened on Castro.
The test problem wdmerger_collision crashes somewhere in the multigrid solver when using Intel compilers v16 and v17 (beta) compiled for 2nd-gen Xeon Phi ("Knights Landing"). The exact point where it crashes tends to vary. Below are a few example outputs:
Example 1:
Example 2:
Example 3:
Interestingly, when compiled with DEBUG set to TRUE, it fails with a forrtl error referring to an array, although I've been unable to figure out which array it's talking about (the DDT debugger doesn't capture errors from forrtl, so it can't show a call stack). This error does not occur with either the GCC or Cray compilers targeting 2nd-gen Xeon Phi.
Another data point: a user of Nyx on a different 2nd-gen Xeon Phi system also sees segfaults with the Intel compilers. So the evidence suggests this may not be a Castro/wdmerger-specific problem.