smenon / dynamicTopoFvMesh

Parallel Adaptive Simplicial Remeshing for OpenFOAM
http://smenon.github.com/dynamicTopoFvMesh/

Incompatible size before mapping error in Port 2.3.0 (parallel) #11

Closed tanyakarn closed 10 years ago

tanyakarn commented 10 years ago

Hi again Sandeep,

I pulled your latest commit and I'm no longer getting errors about patchFields; however, I'm now getting errors when fields are mapped during a parallel run. The first refinement works fine, but the second one fails. I've included the error message below:

ReOrdering edges...Done.
 Reordering time: 0.533109 s
void dynamicTopoFvMesh::mapFields(const mapPolyMesh&): Mapping fv fields.
[0] 
[0] 
[0] --> FOAM FATAL ERROR: 
[0] Incompatible size before mapping.
 Field: phi.air_0
 Field size: 12246
 map size: 15594
[0] 
[0] 
[0]     From function 

void topoSurfaceMapper::mapInternalField<Type>
(
    Field<Type>& iF
) const
[0] 
[0]     in file lnInclude/topoSurfaceMapperTemplates.C at line 49.
[0] 
FOAM parallel run aborting
[0] 
[1] 
[1] 
[1] --> FOAM FATAL ERROR: 
[1] Incompatible size before mapping.
 Field: phi.air_0
 Field size: 11375
 map size: 14965
[0] #0  Foam::error::printStack(Foam::Ostream&)[1] 
[1] 
[1]     From function 

void topoSurfaceMapper::mapInternalField<Type>
(
    Field<Type>& iF
) const
[1] 
[1]     in file lnInclude/topoSurfaceMapperTemplates.C at line 49.
[1] 

I've tested the same simulation in serial and everything works fine. I'm also including links to my case and solver (the links work this time, I promise):
https://dl.dropboxusercontent.com/u/3309795/cylinder.tar.gz
https://dl.dropboxusercontent.com/u/3309795/multiphaseEulerDyMFoam.tar.gz

Thank you, Tanya

smenon commented 10 years ago

Hello Tanya,

I've fixed this with commit: 3017b0cb933809b6c9dcc3418ba917b2bcd860c3

Your solver will have to be modified a little bit though.

You will need to move the line:

include "createPcorrTypes.H"

from multiphaseEulerDyMFoam.C (line 61), into createPhi.H, somewhere before pcorr is instantiated. The way your case is currently partitioned, there is a strong possibility that the number of processor patches will change at run-time (i.e., processors [0] and [3] talk to each other when they didn't at a prior time-step). In such a scenario, pcorrTypes cannot be a fixed list and will need to be re-initialized.
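
For reference, a minimal sketch of what a createPcorrTypes.H header typically looks like in OpenFOAM's dynamic-mesh solvers is shown below, assuming the pressure field is named p (your solver may carry p_rgh or similar; treat the exact names as assumptions). Because the list is sized from the current boundary field, re-evaluating it just before pcorr is constructed picks up any processor patches that appeared or disappeared during the topology change:

    // Sketch of a createPcorrTypes.H header; assumes the pressure field
    // is named p. Sizing the list from the *current* boundary field is
    // what lets it track a changing number of processor patches.
    wordList pcorrTypes
    (
        p.boundaryField().size(),
        zeroGradientFvPatchScalarField::typeName
    );

    forAll(p.boundaryField(), i)
    {
        if (p.boundaryField()[i].fixesValue())
        {
            pcorrTypes[i] = fixedValueFvPatchScalarField::typeName;
        }
    }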

Thanks for the report, and let me know if it works. Sandeep

tanyakarn commented 10 years ago

Hi Sandeep,

I assume you meant correctPhi.H? I moved the include into correctPhi.H and it now works when I comment out the turbulence model. From the stack trace, it seems the turbulence model is having problems with the number of processor patches changing. I guess I'll have to dig deeper into the turbulence model.

Thank you for your help, Tanya

smenon commented 10 years ago

Sorry - yes, I meant correctPhi.H.

Could you show me the stack trace?

tanyakarn commented 10 years ago

After adding the mass-flux correction after the mesh update whenever the mesh has changed, I'm no longer getting segfaults when the turbulence model is called. I originally had the solver include correctPhi.H only if the correctPhi option was enabled, but I figured it should always run if the mesh is changing.
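
For context, the change I describe amounts to something like the following in the solver's time loop; this is a sketch rather than the exact multiphaseEulerDyMFoam code, so the surrounding logic may differ in your version:

    // Inside the time loop, right after the dynamic mesh update.
    mesh.update();

    if (mesh.changing())
    {
        // Rebuild pcorrTypes and correct the face flux whenever the
        // topology has changed, not only when correctPhi is enabled.
        #include "correctPhi.H"
    }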

I do have a question about the mapping of fields after the mesh has changed. What happens to the value at a particular face/cell when mapping fails? Does it take on some default value? What I'm noticing right now is that the phase-sum fraction in my simulation does not add up to one. After the fields are mapped, there are areas where the volume fraction keeps decreasing, eventually becoming negative for one phase. I ran the same simulation without the mesh refinement and the phase-sum volume fraction stays at one.

MULES: Solving for alpha.gas
gas volume fraction, min, max = 0.118958 0.0594931 1
MULES: Solving for alpha.liquid
liquid volume fraction, min, max = 0.880686 0 0.989048
Phase-sum volume fraction, min, max = 0.999644 0.118582 1.13389

The min and max of the phase-sum volume fraction should both be around 1.

smenon commented 10 years ago

Hmm... Looks like you have a situation where you're trying to map an inherently discontinuous field, which I don't support yet. I'll ask you this - if you're given a cell that contains a phase fraction between 0 and 1, and you choose to subdivide it into two cells, how would you distribute the fraction between them?

When I map scalars conservatively, I use their gradients in conjunction with the geometric fractions to achieve second-order mapping accuracy. This works well for continuous fields, but as soon as they become discontinuous, the gradients will presumably give you over-shoots / under-shoots (which may explain the values you're seeing). One approach would be to selectively limit the gradient, or perhaps discard it altogether, but that still doesn't answer how you would distribute the fraction between the two cells after refinement.
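
To make that concrete: conservation alone only fixes the volume-weighted sum, alpha_P*V_P = alpha_1*V_1 + alpha_2*V_2, so some extra rule is needed to pick the split. A gradient-based choice of the kind described above could look roughly like the hypothetical helper below; the function name, arguments, and the crude clipping are illustrative and not the library's actual code:

    // Hypothetical sketch: second-order, gradient-based mapping of a
    // parent-cell value onto a child cell created by refinement.
    scalar mapToChild
    (
        const scalar parentValue,   // value in the parent cell
        const vector& parentGrad,   // cell-centred gradient in the parent
        const point& parentCentre,  // parent cell centre
        const point& childCentre,   // child cell centre
        const bool clipToUnitRange  // limit fields bounded in [0,1]
    )
    {
        scalar v = parentValue + (parentGrad & (childCentre - parentCentre));

        if (clipToUnitRange)
        {
            // Crude limiter: keep a bounded fraction (e.g. a phase
            // fraction) from over/under-shooting after reconstruction.
            v = max(scalar(0), min(scalar(1), v));
        }

        return v;
    }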

The most obvious (and involved) approach, to me, is to determine some sort of interface orientation within the cell (if this is a VOF-type fraction), geometrically reconstruct / preserve the interface on the two new cells, and determine the fractions from that. But in an Eulerian two-fluid model, I guess you don't really have a well-defined "interface" per se. So, if you tell me how you think it should be mapped, we can figure out an optimal way to do it.

tanyakarn commented 10 years ago

My original thought was that the values I'm seeing were the result of some values not being mapped (i.e., a mapping error on that cell/face). But after taking another look at the log file, the phase-sum fraction stopped adding up to one well before that error showed up.

My understanding is that in the Eulerian model all fields are continuous: unlike VOF, every phase is treated as a continuum. When I divide a cell into two, the values in the two new cells should come from interpolating the original value.

Regardless of what the interpolated value is, the solver recomputes the phase fraction after refinement. The multiphase solver uses MULES, which is explicit and only constrains the phase sum fraction on average, not per cell. This might be the source of what I'm seeing.