JhonCordova closed this issue 7 months ago.
Hi,
I applied formatting to your issue so it is easier to parse. Comments in input files are # and not //, so you should definitely fix that. Can you try not using the "isothermal" wall capability and just using periodic boundaries in x/y and plain walls in z? That will rule out some obvious mistakes. Then, when that works, you can remove the periodicity and see if your x/y BCs work. Finally, you can turn the isothermal wall capability back on. A step-by-step approach will let you build your case bit by bit and make sure that each step works. Let us know if you need any help with these steps.
Another option is to build the code with no optimization and with debug symbols, and run with amrex.signal_handling=1 amrex.fpe_trap_invalid=1 amrex.fpe_trap_zero=1 amrex.fpe_trap_overflow=1; that can help find the line numbers where NaNs first appear.
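For concreteness, here is a sketch of what that simplified configuration might look like in the inputs file (the key names follow your snippets and the usual AMReX/PeleC conventions; adapt the details to your case):

# Simplified test: periodic in x/y, plain walls in z, isothermal walls left off for now
geometry.is_periodic = 1 1 0
pelec.lo_bc = "Interior" "Interior" "NoSlipWall"
pelec.hi_bc = "Interior" "Interior" "NoSlipWall"
# The FPE traps can also be set in the inputs file instead of on the command line
amrex.signal_handling = 1
amrex.fpe_trap_invalid = 1
amrex.fpe_trap_zero = 1
amrex.fpe_trap_overflow = 1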
Hi @marchdf, I've followed your instructions to narrow down the source of the error. It appears that the errors occur with these options in the input file:
pelec.lo_bc = "UserBC" "NoSlipWall" "Interior"
pelec.hi_bc = "FOExtrap" "NoSlipWall" "Interior"
#pelec.do_isothermal_walls = true
#pelec.domlo_isothermal_temp = -1 422.5 -1
#pelec.domhi_isothermal_temp = -1 -1 -1
The condition being imposed at the inlet is a constant velocity profile, specifically associated with the freestream (U_inf, T_inf, rho_inf). This is being enforced through the following lines of code:
amrex::Real velx = 0.0, vely = 0.0, velz = 0.0;
if (idir == 0 && sgn == 1) {
  velx = PeleC::h_prob_parm_device->U_inf;

  amrex::Real massfrac[NUM_SPECIES] = {0.0};
  massfrac[O2_ID] = PeleC::h_prob_parm_device->massfrac_O2;
  massfrac[N2_ID] = PeleC::h_prob_parm_device->massfrac_N2;
  for (int n = 0; n < NUM_SPECIES; n++) {
    s_ext[UFS + n] = PeleC::h_prob_parm_device->rho_inf * massfrac[n];
  }

  auto eos = pele::physics::PhysicsType::eos();
  amrex::Real eint;

  // interior pressure
  amrex::Real p_int, rho;
  amrex::Real massfrac_int[NUM_SPECIES] = {0.0};
  for (int n = 0; n < NUM_SPECIES; n++) {
    massfrac_int[n] = s_int[UFS + n] / s_int[URHO];
  }
  eos.RTY2P(s_int[URHO], s_int[UTEMP], massfrac_int, p_int);

  s_ext[UTEMP] = PeleC::h_prob_parm_device->T_inf;
  eos.PYT2RE(p_int, massfrac, s_ext[UTEMP], rho, eint);

  s_ext[URHO] = rho;
  s_ext[UMX] = rho * velx;
  s_ext[UMY] = rho * vely;
  s_ext[UMZ] = rho * velz;
  s_ext[UEINT] = rho * eint;
  s_ext[UEDEN] = s_ext[UEINT] +
                 PeleC::h_prob_parm_device->rho_inf *
                   (eint + 0.5 * (velx * velx + vely * vely + velz * velz));
}
Do you have any suggestions regarding the imposed BC (UserBC)?
Hi @jrood-nrel, I have enabled debug mode in PeleC with the following settings in the GNUmakefile:
# AMReX
DIM = 3
COMP = gnu
PRECISION = DOUBLE
# Profiling
PROFILE = FALSE
TINY_PROFILE = FALSE
COMM_PROFILE = FALSE
TRACE_PROFILE = FALSE
MEM_PROFILE = FALSE
USE_GPROF = FALSE
# Performance
USE_MPI = TRUE
USE_OMP = FALSE
USE_CUDA = FALSE
USE_HIP = FALSE
USE_SYCL = FALSE
# Debugging
DEBUG = TRUE
FSANITIZER = FALSE
THREAD_SANITIZER = FALSE
# PeleC
PELE_CVODE_FORCE_YCORDER = FALSE
PELE_USE_MAGMA = FALSE
PELE_COMPILE_AJACOBIAN = FALSE
Eos_Model := Fuego
Chemistry_Model := air
Transport_Model := Sutherland
# GNU Make
Bpack := ./Make.package
Blocs := .
PELE_HOME := ../../..
include $(PELE_HOME)/Exec/Make.PeleC
The resulting executable has been run with the following command:
mpiexec -np 8 ./PeleC3d.gnu.DEBUG.MPI.ex inputs.inp amrex.signal_handling=1 amrex.fpe_trap_invalid=1 amrex.fpe_trap_zero=1 amrex.fpe_trap_overflow=1
This is what has been printed in the terminal:
--------------------------------------------------------------------------
PMIx was unable to find a usable compression library
on the system. We will therefore be unable to compress
large data streams. This may result in longer-than-normal
startup times and larger memory footprints. We will
continue, but strongly recommend installing zlib or
a comparable compression library for better user experience.
You can suppress this warning by adding "pcompress_base_silence_warning=1"
to your PMIx MCA default parameter file, or by adding
"PMIX_MCA_pcompress_base_silence_warning=1" to your environment.
--------------------------------------------------------------------------
Initializing AMReX (24.02-31-g97512176d8e5)...
MPI initialized with 8 MPI processes
MPI initialized with thread support level 0
Initializing SUNDIALS with 1 threads...
SUNDIALS initialized.
AMReX (24.02-31-g97512176d8e5) initialized
Starting run at 15:19:33 UTC on 2024-04-16.
Successfully read inputs file ...
PeleC git hash: 57d285f-dirty
AMReX git hash: 24.02-31-g97512176d
PelePhysics git hash: v23.03-98-g50b990b8
AMReX-Hydro git hash: v6.6.2
0 Advected Variables:
2 Species:
O2 N2
0 Auxiliary Variables:
Initializing EB2
Successfully read inputs file ...
Starting to call amrex_probinit ...
Successfully run amrex_probinit
Initializing the data at level 0
Done initializing level 0 data
...estimated hydro-limited timestep at level 0: 3.819262928e-09
PeleC::estTimeStep (hydro-limited) at level 0: estdt = 3.819262928e-09
TIME = 0 MASS = 0.0131625
TIME = 0 XMOM = 1276.367625
TIME = 0 YMOM = 0
TIME = 0 ZMOM = 0
TIME = 0 RHO*e = -32879765.95
TIME = 0 RHO*K = 61884684.3
TIME = 0 RHO*E = 29004918.35
TIME = 0 FUEL PROD = 0
TIME = 0 density MIN = 0.00013 MAX = 0.00013
TIME = 0 x_velocity MIN = 96970 MAX = 96970
TIME = 0 y_velocity MIN = 0 MAX = 0
TIME = 0 z_velocity MIN = 0 MAX = 0
TIME = 0 eint_e MIN = -2.4979879e+09 MAX = -2.4979879e+09
TIME = 0 Temp MIN = 65 MAX = 65
TIME = 0 pressure MIN = 24351.762 MAX = 24351.762
TIME = 0 massfrac MIN = 0.233 MAX = 0.767
TIME = 0 sumYminus1 MIN = 0 MAX = 0
INITIAL GRIDS
Level 0 12 grids 51840 cells 100 % of domain
smallest grid: 24 x 12 x 12 biggest grid: 32 x 12 x 12
PLOTFILE: file = plt00000
Write plotfile time = 0.270524077 seconds
...estimated hydro-limited timestep at level 0: 3.819262928e-09
PeleC::estTimeStep (hydro-limited) at level 0: estdt = 3.819262928e-09
[Level 0 step 1] ADVANCE at time 0 with dt = 3.819262928e-10
Erroneous arithmetic operation
... Computing diffusion terms at t^(n)
... Computing Godunov hydro advance
See Backtrace.4 file for details
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 4 in communicator MPI_COMM_WORLD
Proc: [[14250,1],4]
Errorcode: 8
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
These are the details in the Backtrace.4 file:
Host Name: Blue
=== If no file names and line numbers are shown below, one can run
addr2line -Cpfie my_exefile my_line_address
to convert `my_line_address` (e.g., 0x4a6b) into file name and line number.
Or one can use amrex/Tools/Backtrace/parse_bt.py.
=== Please note that the line number reported by addr2line may not be accurate.
One can use
readelf -wl my_exefile | grep my_line_address'
to find out the offset for that line.
0: ./PeleC3d.gnu.DEBUG.MPI.ex(+0x42fde4) [0x5b9aa2116de4]
amrex::BLBackTrace::print_backtrace_info(_IO_FILE*) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Base/AMReX_BLBackTrace.cpp:189
1: ./PeleC3d.gnu.DEBUG.MPI.ex(+0x42f896) [0x5b9aa2116896]
amrex::BLBackTrace::handler(int) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Base/AMReX_BLBackTrace.cpp:99
2: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x786c39242520]
3: /lib/x86_64-linux-gnu/libm.so.6(+0x1349c) [0x786c3985c49c]
?? ??:0
4: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xec214c) [0x5b9aa2ba914c]
pele::physics::eos::Fuego::RTY2Cs(double, double, double const*, double&) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Source/Eos/Fuego.H:115
(inlined by) pc_ctoprim(int, int, int, amrex::Array4<double const> const&, amrex::Array4<double> const&, amrex::Array4<double> const&) at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Utilities.H:252
(inlined by) operator() at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Diffusion.cpp:130
5: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xf079a8) [0x5b9aa2bee9a8]
decltype ({parm#1}(0, 0, 0)) amrex::detail::call_f<PeleC::getMOLSrcTerm(amrex::MultiFab const&, amrex::MultiFab&, double, double, double)::{lambda(int, int, int)#1}>(PeleC::getMOLSrcTerm(amrex::MultiFab const&, amrex::MultiFab&, double, double, double)::{lambda(int, int, int)#1} const&, int, int, int) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Base/AMReX_GpuLaunchFunctsC.H:30
6: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xf05835) [0x5b9aa2bec835]
void amrex::ParallelFor<PeleC::getMOLSrcTerm(amrex::MultiFab const&, amrex::MultiFab&, double, double, double)::{lambda(int, int, int)#1}>(amrex::Box const&, PeleC::getMOLSrcTerm(amrex::MultiFab const&, amrex::MultiFab&, double, double, double)::{lambda(int, int, int)#1}&&) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Base/AMReX_GpuLaunchFunctsC.H:167 (discriminator 3)
7: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xef9288) [0x5b9aa2be0288]
amrex::BaseFab<double>::array() at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Base/AMReX_BaseFab.H:399
(inlined by) PeleC::getMOLSrcTerm(amrex::MultiFab const&, amrex::MultiFab&, double, double, double) at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Diffusion.cpp:168
8: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xa952cd) [0x5b9aa277c2cd]
PeleC::do_sdc_iteration(double, double, int, int, int, int) at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Advance.cpp:303
9: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xa94c77) [0x5b9aa277bc77]
PeleC::do_sdc_advance(double, double, int, int) at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Advance.cpp:223 (discriminator 2)
10: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xa9395b) [0x5b9aa277a95b]
PeleC::advance(double, double, int, int) at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/Advance.cpp:39
11: ./PeleC3d.gnu.DEBUG.MPI.ex(+0x475cf2) [0x5b9aa215ccf2]
amrex::Amr::timeStep(int, double, int, int, double) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Amr/AMReX_Amr.cpp:2022
12: ./PeleC3d.gnu.DEBUG.MPI.ex(+0x476595) [0x5b9aa215d595]
amrex::Amr::coarseTimeStep(double) at /home/jhon/Work/PeleC/Submodules/PelePhysics/Submodules/amrex/Src/Amr/AMReX_Amr.cpp:2133
13: ./PeleC3d.gnu.DEBUG.MPI.ex(+0xeb6341) [0x5b9aa2b9d341]
main at /home/jhon/Work/PeleC/Exec/RegTests/TBL/../../../Source/main.cpp:165
14: /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x786c39229d90]
15: /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x786c39229e40]
16: ./PeleC3d.gnu.DEBUG.MPI.ex(+0x1fb15) [0x5b9aa1d06b15]
?? ??:0
Does it give you any clues about the issue?
https://github.com/AMReX-Combustion/PelePhysics/blob/80f2cfa0edd2e928a93bb9dff45e68d32d8699f7/Source/Eos/Fuego.H#L115 likely has a negative number in the sqrt. Would have to backtrack why that's happening.
s_ext[UEDEN] = s_ext[UEINT] +
PeleC::h_prob_parm_device->rho_inf * (eint + 0.5 * (velx * velx +
vely * vely +
velz * velz));
Am I missing something? It seems like there are two internal energies in here. I feel like the s_ext[UEINT] + term should be removed. And are you sure you want to use rho_inf and not rho?
s_ext[UFS + n] = PeleC::h_prob_parm_device->rho_inf * massfrac[n];
Also this: you are using rho_inf to set the rho*Y in the external state, but then you use rho when you set the other variables in the external state:
s_ext[URHO] = rho;
s_ext[UMX] = rho * velx;
s_ext[UMY] = rho * vely;
s_ext[UMZ] = rho * velz;
s_ext[UEINT] = rho * eint;
Feels inconsistent?
Here is an example of how you can do things: https://github.com/AMReX-Combustion/PeleC/blob/development/Exec/RegTests/EB-ConvergingNozzle/prob.H#L84
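Putting those two suggestions together, a minimal sketch of a consistent fill (this is just one way to do it, assuming the rest of your posted boundary code stays the same, and that the species loop is moved to after the PYT2RE call so it can use rho):

// Sketch only: use the (rho, eint) returned by PYT2RE for every conserved variable.
s_ext[URHO] = rho;
s_ext[UMX] = rho * velx;
s_ext[UMY] = rho * vely;
s_ext[UMZ] = rho * velz;
s_ext[UEINT] = rho * eint;
// Total energy = internal + kinetic; no separate rho_inf term, no double-counted UEINT.
s_ext[UEDEN] = rho * (eint + 0.5 * (velx * velx + vely * vely + velz * velz));
// Species partial densities use the same rho as the rest of the external state.
for (int n = 0; n < NUM_SPECIES; n++) {
  s_ext[UFS + n] = rho * massfrac[n];
}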
Now the case is running. I'll take a look at the nozzle case. Thanks a lot!
I'm attempting to simulate a zero-pressure-gradient flat plate, but I'm encountering these sorts of issues. Can someone please help me? The files I am using to run the case are the following:
inputs.inp
prob.cpp
prob.H
prob_parm.H