firemodels / fds

Fire Dynamics Simulator
https://pages.nist.gov/fds-smv/

Mass Balance Disagreement with Particle Generation from Decomposing Surface #12013

Open jutrasi opened 1 year ago

jutrasi commented 1 year ago

Hi, I believe I may have found a bug in version 6.8. I am generating particles from a surface that has material properties and an ignition temperature. The particles released then decompose based on their material properties. The issue is that the theoretical starting mass (based on the surface density and surface area) does not agree with the amount of gas being released by the particles. One thing I noticed is that the weighting factor for the particles is always equal to 1. Additionally, if I use a reference temperature on the material line, as opposed to an ignition temperature on the surface line, the model will crash. Below is a summary of test cases that I have run varying parameters, as well as the error I receive when trying to use the reference temperature. My input files are attached for both the ignition temperature case (Decomposing_particle_flux-IT) and the reference temperature case (Decomposing_particle_flux-RT). Thank you for your help.

[attached images: test-case summary and error message]

Decomposing_particle_flux-IT.txt Decomposing_particle_flux-RT.txt
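For readers without the attachments, the IT setup boils down to something like the following. This is a hand-typed sketch, not the exact attached file; all names and values are illustrative, and particle geometry inputs are omitted:

```fortran
Reacting material carried by the particles (values illustrative):
&MATL ID='FUEL', DENSITY=500., CONDUCTIVITY=0.2, SPECIFIC_HEAT=1.0,
      N_REACTIONS=1, SPEC_ID='PROPANE', NU_SPEC=1.,
      REFERENCE_TEMPERATURE=300. /

Surface applied to the particles themselves:
&SURF ID='PART_SURF', MATL_ID='FUEL', THICKNESS=0.5E-3 /

&PART ID='FUEL_PART', SURF_ID='PART_SURF', SAMPLING_FACTOR=1 /

Generating surface; particles are released once it exceeds IGNITION_TEMPERATURE:
&SURF ID='SOURCE', MATL_ID='FUEL', THICKNESS=1.E-3,
      PART_ID='FUEL_PART', NPPC=1, DT_INSERT=0.1,
      IGNITION_TEMPERATURE=320. /
```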

ericvmueller commented 1 year ago

In your first approach you are directly dictating the number of particles generated per cell per second with NPPC and DT_INSERT. There is no connection between this generation rate and the pyrolysis of your solid, and there is no reason for these particles to have a weighting factor other than 1. You should be able to alter the weighting factor by adding a PARTICLE_MASS_FLUX input on the SURF line where you specify NPPC and DT_INSERT, but the source of this mass still won't be coupled to the pyrolysis (this method is a way to generate new mass in the form of particles from any surface; it does not have to be burning).
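A sketch of this first (uncoupled) route, with an illustrative flux value: when PARTICLE_MASS_FLUX is given, the weighting factor is adjusted so the inserted particles carry the prescribed mass flux, independent of any pyrolysis.

```fortran
&SURF ID='SOURCE', PART_ID='FUEL_PART',
      NPPC=1, DT_INSERT=0.1,
      PARTICLE_MASS_FLUX=0.01 /  kg/m2/s, prescribed, not tied to pyrolysis
```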

Your second approach of using NU_PART on the SURF line should be the proper way to convert the solid into a mass of particles and conserve mass. But I'm not sure what is going on with the memory error so we will have to look into it.
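A minimal sketch of the second route, with illustrative values (note that the FDS User's Guide documents NU_PART and the associated PART_ID as reaction-yield inputs on the MATL line):

```fortran
&MATL ID='FUEL', DENSITY=500., CONDUCTIVITY=0.2, SPECIFIC_HEAT=1.0,
      N_REACTIONS=1, NU_PART=1., PART_ID='FUEL_PART',
      REFERENCE_TEMPERATURE=300. /
```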

jutrasi commented 1 year ago

Thanks for the feedback. The second approach was what I was originally trying to get to work in version 6.7.9; however, I was seeing a similar issue with the conservation of mass. At that point, I tried running it in the latest version to see if the issue still existed and ran into the above issue.

drjfloyd commented 1 year ago

There is a bug. Running with a debug version gives an unallocated-variable error. I'm travelling this week and don't have a lot of time to dig into this, so it may be a couple of days before I can fix it.

ericvmueller commented 1 year ago

I think the memory issue is because we weren't updating the ONE_D pointer after allocating the particles in part.f90. So it was possible for it to no longer point to the correct wall cell when we use ONE_D%PART_MASS. I added this in my own branch and the case runs. If you agree Jason, I can run firebot and merge.

[attached image]

As for the mass issue... even when the case runs, things are not conserved. If I make the particles inert we recover most of the mass of the solid (particle mass of 9.33 kg from a starting mass of 9.6 kg).

[attached image]

However, when the particles react things disappear. It seems related to the overlap of the two reactions and how intense they are, because the numbers change if I adjust the pyrolysis parameters.

[attached image]

One other note @jutrasi: in your second file you specify PART_ID on both the SURF and MATL lines. This will produce particles directly from the surface, as I mentioned before, and also as a by-product of pyrolysis from the MATL. In your case you only want to specify PART_ID on the MATL line. The extra mass is pretty small, but it adds a lot of extra particles that you don't want, depending on your DT_INSERT and NPPC.
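In other words, the corrected layout looks roughly like this (illustrative names and values), with PART_ID appearing only as a pyrolysis yield on the MATL line:

```fortran
&MATL ID='FUEL', DENSITY=500., CONDUCTIVITY=0.2, SPECIFIC_HEAT=1.0,
      N_REACTIONS=1, NU_PART=1., PART_ID='FUEL_PART',
      REFERENCE_TEMPERATURE=300. /

&SURF ID='SOURCE', MATL_ID='FUEL', THICKNESS=1.E-3 /  no PART_ID, NPPC, or DT_INSERT here
```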

jutrasi commented 1 year ago

Thank you for the additional note and to both of you for looking into it.

drjfloyd commented 1 year ago

@ericvmueller Good find. Thanks.

ericvmueller commented 1 year ago

Some incremental progress, just on making sure the correct mass of particles is created (forgetting the second reaction for now)...

Something that I think can cause an issue: if the solid material is fully pyrolyzed while some mass remains in ONE_D%PART_MASS, that value can get set to zero in wall.f90 before it is converted to particles:

https://github.com/firemodels/fds/blob/452377ed066bb2f60587bb02787d8b9509bff09f/Source/wall.f90#L2344-L2357

This can be tested by introducing a second inert material layer behind the first one, so even when the outer layer is fully pyrolyzed the total THICKNESS does not go to zero. You can see how the mass is clipped before it can be produced as particles in the original case.
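The backing-layer test can be set up by adding a second, inert layer on the SURF line, e.g. (illustrative sketch; 'BACKING' is a hypothetical inert MATL with no reactions):

```fortran
&MATL ID='BACKING', DENSITY=500., CONDUCTIVITY=0.2, SPECIFIC_HEAT=1.0 /

&SURF ID='SOURCE', MATL_ID(1,1)='FUEL', MATL_ID(2,1)='BACKING',
      THICKNESS(1:2)=1.E-3,10.E-3 /
```

With the inert second layer in place, the total THICKNESS never reaches zero even after the outer layer is consumed, so the wall cell survives long enough for ONE_D%PART_MASS to be converted.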

[attached image]

Decomposing_particle_flux-RT_test.txt

drjfloyd commented 1 year ago

That may be trickier to deal with, as we would no longer have a wall surface at which to inject a particle in the part routines. Had this wall gone entirely to gas we would be seeing the same errors (we are setting the M_DOT_G_PP_X arrays to 0 as well).

drjfloyd commented 1 year ago

There are a lot of things making this a difficult case. There is a low REFERENCE_TEMPERATURE with no HEAT_OF_REACTION, the surface is 1 mm thick, the obstacle is being blasted on all sides with 180 kW/m2, and the FDS time step is sitting near 0.1 s.

In a fire simulation we wouldn't likely see such high uniform fluxes over a surface, and the time step would be a lot lower. We would expect different wall cells to lose mass at lower and different rates, and to start and end at different times. The particle insert clock is tied to the surface type: all wall cells of the same surface type inject particles at the same time. In a typical simulation we wouldn't have all wall cells going to zero thickness at exactly the same time, and we would likely have longer times to lose all the mass. We would still see error, but I don't think we would expect it to be quite so extreme.

ericvmueller commented 1 year ago

I agree this is a pretty extreme example. My only thought is that the error from zeroing PART_MASS could be much greater than that of, say, M_DOT_G_PP_X, because the user could set a large DT_INSERT, which allows a lot of mass to accumulate in this nebulous state before it gets deleted. But I don't really have a good solution, for the reason you mentioned, unless we could somehow keep the wall cell for one more step and then force any remaining mass to particles.

Something that came up while looking at this: are we sure that the matl_e_cons_9.fds verification case is demonstrating what we want? It doesn't seem to produce any particles when I run it.

drjfloyd commented 1 year ago

The point of the matl_e_cons cases is to show that the process for automatically determining material reference enthalpies is working correctly. There is no heat source in this case; we are just looking to see that the correct initial MATL enthalpy is obtained. It should be identical to case 8, where MATL M2 is still in the SURF.

ericvmueller commented 1 year ago

Gotcha. I had just done a quick search for cases testing NU_PART, and when I saw it wasn't actually generating particles I wasn't sure.