Closed mcgratta closed 1 year ago
Also, update your fds repo. I changed the location of the PROFile devices in the original case to better highlight the problem.
I'll look at it. I wonder if this is some kind of SPACING issue?
I think it is due to the redistribution of temperature when a grid cell shrinks. I think we should do this redistribution over layers rather than the entire solid.
After we do pyrolysis, we adjust each node size based on mass and density changes. This gives us an unrenoded set of node boundary locations. We then renode layer-by-layer. The boundary between layers does not change in this process. Pre-pyrolysis, say we have two layers, each with two nodes, with X locations of: 0, 0.1, 0.2, 0.3, 0.4
Post-pyrolysis, if we lose half the first node, we wind up with an unrenoded set of x locations of: 0, 0.05, 0.15, 0.25, 0.35
Then renoding may redistribute within each layer. The second layer won't change, as it had no change in mass / density: 0, 0.075, 0.15, 0.25, 0.35
The layer boundary at 0.15 doesn't change. INT_WGT should be picking up that in the old noding the node runs from 0.15 to 0.25, and in the new noding it is still 0.15 to 0.25, so there should just be an INT_WGT=1 for mapping node 3 back to node 3. Since it isn't, there is some kind of round-off error happening. Before just going the route of layer-by-layer in getting INT_WGT, I'd like to make sure that the root cause of the round-off isn't going to cause problems in some other set of circumstances.
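To make the expected behavior concrete, here is a small sketch (in Python, for illustration only; the function and variable names are mine, not the actual wall.f90 code) of how overlap weights like INT_WGT can be computed between an old and a new set of node boundaries. When a node's boundaries are untouched by renoding, it should map back to itself with weight exactly 1:

```python
# Hypothetical sketch of INT_WGT-style overlap weights between old and
# new node boundaries.  Illustrative only; not the FDS implementation.

def overlap_weights(x_old, x_new):
    """weights[(j, i)] = fraction of new cell j taken from old cell i."""
    weights = {}
    for j in range(len(x_new) - 1):
        lo_new, hi_new = x_new[j], x_new[j + 1]
        for i in range(len(x_old) - 1):
            lo_old, hi_old = x_old[i], x_old[i + 1]
            overlap = min(hi_new, hi_old) - max(lo_new, lo_old)
            if overlap > 0.0:
                weights[(j, i)] = overlap / (hi_new - lo_new)
    return weights

# Boundaries from the example above: unrenoded vs. renoded sets.
x_old = [0.0, 0.05, 0.15, 0.25, 0.35]   # unrenoded, after pyrolysis
x_new = [0.0, 0.075, 0.15, 0.25, 0.35]  # renoded within the first layer

w = overlap_weights(x_old, x_new)
# The 0.15-0.25 node is unchanged, so it maps back to itself with weight 1.
print(w[(2, 2)])  # -> 1.0
```

If the layer interface shifts even slightly (e.g. by an E-12 round-off), the third node picks up a sliver of the neighboring layer's cell, which is exactly the leaching described in this issue.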
In GET_WALL_COORDINATES we have a loop over each layer and an inner loop over the wall nodes in that layer. We do a running sum of X_S(II) = X_S(II-1) + DX_S. However, this means that we can accumulate some round-off error due to the STRETCH_FACTOR exponent, and the running sum of X_S at the layer interface after renoding may not equal X_S_OLD at the layer interface. It should be identical, though. I think the inner loop here needs to be DO I=1,N_LAYER_CELLS(NL)-1 rather than DO I=1,N_LAYER_CELLS(NL), and after the loop we just set X_S at the layer interface to X_S_OLD.
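The proposed fix can be sketched as follows (Python, for illustration; the function name and the exact stretched-cell formula are my assumptions, not the GET_WALL_COORDINATES source): run the sum only over the interior nodes, then pin the last node of the layer to the known layer boundary so the interface is reproduced bit-for-bit instead of via an accumulated sum:

```python
# Sketch of building stretched node boundaries with a running sum for the
# interior nodes and pinning the final node to the exact layer boundary.
# STRETCH_FACTOR-style cell sizes are assumed: dx_i proportional to s**i.

def layer_boundaries(x_start, thickness, n_cells, stretch=1.1):
    rel = [stretch**i for i in range(n_cells)]     # relative cell sizes
    total = sum(rel)
    dx = [thickness * r / total for r in rel]      # normalized to thickness
    x = [x_start]
    for i in range(n_cells - 1):                   # interior nodes only
        x.append(x[-1] + dx[i])
    x.append(x_start + thickness)                  # pin the layer interface
    return x

x = layer_boundaries(0.0, 0.15, 14)
print(x[-1])  # exactly 0.15, by construction rather than by summation
```

With the running sum carried all the way to the last node, the final boundary can differ from the true layer thickness by a few ulps; pinning it removes that source of mismatch with X_S_OLD.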
This slight shift in the layer interface after renoding was causing INT_WGT to pick up some of the 1st layer at the first node in the second layer.
I'll make that change and check on node weights.
Yes, it's definitely a small change in the density, but that is enough to trigger pyrolysis.
Should be fixed. I ran some of the other pyrolysis cases and things look OK. Since this slightly changes the noding when there is more than one layer, there are some small differences in results. For example, here is the liquid evaporation case. The slight noding changes cause tiny shifts in the times at which the different distillation steps occur, so the overall temperature profile is very close, but there are some differences when comparing time step to time step.
The original problem is fixed -- the case no longer fails in debug mode. However, if you look at ground_vegetation_load_prof_5.csv, there is still some char leaching into the second layer. Just make sure that is OK. It doesn't cause the case to fail.
Before, it leached within a couple of timesteps. As soon as char appeared in the last grass node (and an E-10 shows up right away), it showed up in the next timestep in the first soil node. I hadn't run the case all the way to 45 s. I ran for a bit and nothing was leaching right away like it was before. Running it all the way now. I am at 12 s now with 2E-2 in the last grass node and still 0 in the first soil node.
I see these values in the first soil node, according to the prof_5 file.
0.00E+00
1.97E-18
3.19E-15
1.12E-13
6.92E-13
1.69E-12
2.73E-12
3.47E-12
3.83E-12
3.98E-12
4.03E-12
I am seeing this. After 37 s it is still zero.
Compiled and running now on our cluster and it is still zero after 20 s.
I'll run it again.
The case failed for me
Fire Dynamics Simulator
Current Date : October 3, 2023 15:58:51
Revision : FDS-6.8.0-718-g0193f02-master
Revision Date : Tue Oct 3 14:43:40 2023 -0400
Compiler : ifort version 2021.7.1
Compilation Date : Oct 03, 2023 15:25:38
MPI Enabled; Number of MPI Processes: 2
OpenMP Disabled
Time Step: 100, Simulation Time: 4.40 s
Time Step: 200, Simulation Time: 6.25 s
Time Step: 300, Simulation Time: 7.17 s
forrtl: error (73): floating divide by zero
Image PC Routine Line Source
libpthread-2.17.s 00002B1BEF692630 Unknown Unknown Unknown
fds_impi_intel_li 0000000003093CC7 wall_routines_mp_ 2972 wall.f90
fds_impi_intel_li 0000000003026D9C wall_routines_mp_ 1967 wall.f90
fds_impi_intel_li 0000000002F5D42A wall_routines_mp_ 163 wall.f90
fds_impi_intel_li 0000000002F52D36 wall_routines_mp_ 57 wall.f90
fds_impi_intel_li 00000000040D380D MAIN__ 840 main.f90
fds_impi_intel_li 0000000000409B1D Unknown Unknown Unknown
libc-2.17.so 00002B1BF1BB8555 __libc_start_main Unknown Unknown
fds_impi_intel_li 0000000000409A36 Unknown Unknown Unknown
srun: error: burn001: task 0: Aborted
What git hash are you working with?
Do a git describe
That was my commit. 0193 is the post-merge.
OK, this is what I am running
FDS-6.8.0-718-g0193f029e
Debug or release? I just pulled down the post merge so now have 0193.
release mode:
C:\github\fds\Verification\WUI>\github\fds\Build\impi_intel_win\fds_impi_intel_win.exe ground_vegetation_load.fds
Starting FDS ...
MPI Process 0 started on RTP0922PW03G7X0
Reading FDS input file ...
WARNING: SPEC WOOD FUEL VAPOR is not in the table of pre-defined species. Any unassigned SPEC variables in the input were assigned the properties of nitrogen.
Fire Dynamics Simulator
Current Date : October 3, 2023 16:58:06
Revision : FDS-6.8.0-718-g0193f02-master
Revision Date : Tue Oct 3 14:43:40 2023 -0400
Compiler : Intel ifort 2021.10.0
Compilation Date : Tue 10/03/2023 04:49 PM
MPI Enabled; Number of MPI Processes: 1
OpenMP Disabled
MPI version: 3.1
MPI library version: Intel(R) MPI Library 2021.10 for Windows* OS
Job TITLE : Comparison of boundary models
Job ID string : ground_vegetation_load
Time Step: 1, Simulation Time: 0.05 s
Time Step: 2, Simulation Time: 0.10 s
Time Step: 3, Simulation Time: 0.15 s
Time Step: 4, Simulation Time: 0.20 s
Time Step: 5, Simulation Time: 0.25 s
Time Step: 6, Simulation Time: 0.30 s
Time Step: 7, Simulation Time: 0.35 s
Time Step: 8, Simulation Time: 0.40 s
Time Step: 9, Simulation Time: 0.45 s
Time Step: 10, Simulation Time: 0.50 s
Time Step: 20, Simulation Time: 1.01 s
Time Step: 30, Simulation Time: 1.51 s
Time Step: 40, Simulation Time: 2.02 s
Time Step: 50, Simulation Time: 2.52 s
Time Step: 60, Simulation Time: 3.02 s
Time Step: 70, Simulation Time: 3.44 s
Time Step: 80, Simulation Time: 3.79 s
Time Step: 90, Simulation Time: 4.10 s
Time Step: 100, Simulation Time: 4.40 s
Time Step: 200, Simulation Time: 6.25 s
Time Step: 300, Simulation Time: 7.17 s
Time Step: 400, Simulation Time: 7.88 s
It always worked in release mode. It fails for me in debug mode.
Ah. OK. Trying debug now. Will take a little time to get there
2972 in wall.f90 is:
LENGTH_SCALE = 1._EB/(SF%SURFACE_VOLUME_RATIO(LAYER_INDEX)*SF%PACKING_RATIO(LAYER_INDEX))
All my change did was pass LAYER_THICKNESS (SF% in read and ONE_D% elsewhere) as INTENT(IN) into GET_WALL_NODE_COORDINATES so that the last node in each layer can be set to exactly hit the layer thickness rather than carrying a little rounding error from the STRETCH_FACTOR exponent. That isn't something that should have changed any of the variables in line 2972.
Right. The problem is related to the special BOUNDARY_FUEL_MODEL. Usually with the BFM, the first layer is vegetation and the second layer is soil. SURFACE_VOLUME_RATIO and PACKING_RATIO are only defined for the first layer. In this case, we are accidentally processing the second layer because char and other materials of the first layer are not less than TWO_EPSILON_EB. That means that the inert second layer is going into the pyrolysis routine when it should just be kicked out.
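The failure mode described here can be sketched as follows (Python, illustration only; the names mirror wall.f90 but the function, the example values, and the density threshold logic are my assumptions): for an inert layer both SURFACE_VOLUME_RATIO and PACKING_RATIO are zero, so computing LENGTH_SCALE = 1/(SVR*PR) divides by zero unless the layer is kicked out first:

```python
# Hypothetical sketch of the epsilon guard being discussed.  An inert
# (soil) layer has svr = pr = 0, so 1/(svr*pr) divides by zero; a layer
# whose reacting-material density is below the epsilon threshold should
# never reach that computation.  Not the actual FDS code.

TWO_EPSILON = 4.4e-16  # roughly the value quoted later in this thread

def length_scale(svr, pr, reacting_density):
    if reacting_density < TWO_EPSILON:
        return None            # inert layer: kicked out before the divide
    return 1.0 / (svr * pr)    # divide-by-zero if an inert layer slips in

print(length_scale(4000.0, 0.05, 10.0))  # vegetation layer: finite scale
print(length_scale(0.0, 0.0, 1e-18))     # soil layer: None (kicked out)
```

The bug is that leached char pushes the soil layer's reacting density just above the threshold, so the guard no longer fires and the zero product is divided.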
We could fix this problem by changing the BFM set up, but I thought it might be a good idea to prevent this leaching of material components across layers. I would think something like this is going to happen again.
I get what you want done. My fix should stop this. I don't know why debug is doing one thing and release another. This will take some time to figure out.
Let's let firebot chew on it, and then take a look tomorrow.
Found the issue. This line in GET_WALL_NODE_COORDINATES worked in release but not debug. Debug still had a slight (E-12) error in the layer boundary in X_S:
X_S(II) = X_S(II-1) + LAYER_THICKNESS(NL) - DX_SUM
If I changed it to this it worked in both:
DX_S = LAYER_THICKNESS(NL) - DX_SUM
X_S(II) = X_S(II-1) + DX_S
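The difference between the two forms comes down to floating-point addition not being associative: (a + b) - c and a + (b - c) can round differently, so computing the increment first can land exactly on the intended boundary while the one-line expression misses by an ulp or two. A tiny illustration (Python, with arbitrary demonstration values, not values from the case):

```python
# Floating-point addition is not associative.  The one-line expression
# evaluates left to right as (a + b) - c, while computing the increment
# first evaluates a + (b - c); the two can round to different doubles.
a, b, c = 0.1, 0.2, 0.2

one_line = a + b - c        # (a + b) - c: carries rounding from a + b
two_step = a + (b - c)      # increment computed first: b - c is 0 exactly

print(one_line)  # differs from 0.1 in the last bits
print(two_step)  # exactly 0.1
```

In the renoding context, LAYER_THICKNESS(NL) - DX_SUM is a small, well-conditioned difference, so forming DX_S first and then adding it keeps the layer boundary consistent with X_S_OLD.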
Yikes. Good sleuthing.
This seems to have solved the problem. Thanks.
Run the case Verification/WUI/ground_vegetation_load.fds and look at the file ground_vegetation_load_prof_5.csv. This case compares two ways of describing vegetation, the particle method vs the boundary fuel method. The latter is demonstrated in Mesh 1. The BFM creates a two-layer surface. The first layer is vegetation with a given surface area to volume ratio and packing ratio. The second layer is just inert SOIL. The first layer is noded with 14 cells; the second with 15. This case fails when run in debug mode, but it does not fail in release mode. What happens is that some char formed in the first layer "leaches" into the second layer, ever so slightly. When the density of CHAR in the first cell of the second layer increases above TWO_EPSILON_EB (about 4.4E-16), the BOUNDARY_FUEL_MODEL is invoked for the SOIL layer and there is a divide by zero error that is only trapped in debug mode. But the error is there regardless of the compilation. I think the only way that char can find its way into the second layer is due to INTERPOLATE_WALL_ARRAY, which is invoked when there is a re-gridding of the solid nodes. If so, can we make it so that re-gridding is done layer by layer, so that it is impossible for a material component of one layer to appear in another, assuming we have no explicit diffusion mechanism. Note that this has nothing to do with the reworking of PROFile. I confirmed the problem via direct outputs of raw values in the code.
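The layer-by-layer re-gridding requested above can be sketched like this (Python, for illustration; function names, grids, and densities are mine, not the INTERPOLATE_WALL_ARRAY implementation): remap each layer's material densities using only that layer's own old and new cells, so no mass can cross a layer interface regardless of round-off in the boundary locations:

```python
# Illustrative sketch of per-layer conservative remapping.  Because each
# layer is remapped against only its own cells, a component of layer 1
# (e.g. char) can never appear in layer 2.  Not the actual FDS code.

def remap_layer(x_old, x_new, rho_old):
    """Conservatively remap cell densities within a single layer."""
    rho_new = []
    for j in range(len(x_new) - 1):
        mass = 0.0
        for i in range(len(x_old) - 1):
            overlap = min(x_new[j+1], x_old[i+1]) - max(x_new[j], x_old[i])
            if overlap > 0.0:
                mass += rho_old[i] * overlap
        rho_new.append(mass / (x_new[j+1] - x_new[j]))
    return rho_new

# Layer 1 (vegetation) is renoded; layer 2 (soil) is untouched.  Char
# density in the soil layer stays identically zero because the remap
# never looks outside the layer.
char_layer1 = remap_layer([0.0, 0.05, 0.15], [0.0, 0.075, 0.15], [100.0, 20.0])
char_layer2 = remap_layer([0.15, 0.25, 0.35], [0.15, 0.25, 0.35], [0.0, 0.0])
print(char_layer2)  # [0.0, 0.0]
```

The remap is still mass-conservative within each layer; only the cross-layer contamination path is closed off.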