gregadzrodz opened this issue 1 year ago
To add: this is related to how decomposePar distributes the cells. In a single-core run no FOAM warning is given, on 2 processors the FOAM warning appears again, but then running on 4 it disappears again.
Are you sure the fluid properties are correct?
In reality, this is a micrometer-size system. In the simulation I scale it up by a factor of 10^6 in order to avoid precision errors with small cell volumes (on the order of (1e-6 m)^3). Basically it is a conversion to micrometer-based system units, going from m, kg, s to μm, pg, μs. All the material properties are scaled accordingly and are set up correctly.
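For reference, my own quick dimensional check of this unit system (my arithmetic, not taken from the case files): with μm = 10^-6 m, pg = 10^-15 kg and μs = 10^-6 s, the common fluid properties convert as

```math
\begin{aligned}
1~\mathrm{kg/m^3} &= 10^{-3}~\mathrm{pg/\mu m^3} && \text{(density)}\\
1~\mathrm{Pa\,s} = 1~\mathrm{kg/(m\,s)} &= 10^{3}~\mathrm{pg/(\mu m\,\mu s)} && \text{(dynamic viscosity)}\\
1~\mathrm{N/m} = 1~\mathrm{kg/s^2} &= 10^{3}~\mathrm{pg/\mu s^2} && \text{(surface tension)}\\
1~\mathrm{Pa} = 1~\mathrm{kg/(m\,s^2)} &= 10^{-3}~\mathrm{pg/(\mu m\,\mu s^2)} && \text{(pressure)}
\end{aligned}
```

so, for example, water at 1000 kg/m³ becomes 1 pg/μm³, which keeps the numbers near unity.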
The interface is not perfectly sharp; there are tiny bubbles in the fluid.
Nevertheless, that should not happen. But at this stage it is very hard to debug. Could you send the case where it crashes in the next time step, so I can debug it faster?
Here is a case, but much closer to the crash. If you run it, it should crash within a few seconds; see the output file. The interesting thing is that if I decompose it again (and I guess the cells are then split differently across the processors), the crash point moves a little. And if I do not run it in parallel, I do not get a crash at all. So it must have something to do with the parallelization of the RDF code. I am attaching the case with the decomposition into 18 processors added. Hopefully it is helpful.
As I understand it, you are not able to reproduce the crash?
Is there a way of setting up isoAdvector to avoid this smearing completely? One thing I have found is that if I tighten surfCellTol from 1e-05 to 1e-07, the smearing is reduced, but the aforementioned problem gets worse: crashes happen earlier and more frequently.
compressibleInterFlow with geometric reconstruction already performs massively better with regard to smearing than compressibleInterFoam. But this is quite a specific use case, an interface in a high-speed transonic environment, so the smearing is still present and apparently causing issues.
This is a special use case where there appears to be a problem with mass conservation (interface cells appear in the middle of the domain). You could try:

- adding `minIter 2;` to the `p_rgh` solver;
- `snapTol 1e-6;` in the alpha.water dictionary (snaps alpha values below 1e-6 to 0 and above 1 - 1e-6 to 1);
- lowering `surfCellTol 1e-6;` (a bit hacky);
- `nCorrectors 6;` for better mass convergence (the PISO algorithm needs at least one).
You could also try the transonic option (`transonic yes;` in the PIMPLE dict of fvSolution) and the momentum predictor.
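A minimal sketch of where these keywords would sit in `system/fvSolution` (only the commented entries are the actual suggestions; the solver choices and tolerances are generic placeholders):

```
// system/fvSolution (sketch)
solvers
{
    "alpha.water.*"
    {
        advectionScheme      isoAdvection;
        reconstructionScheme plicRDF;
        surfCellTol          1e-6;  // lowering this is a bit hacky
        snapTol              1e-6;  // snaps alpha < 1e-6 to 0 and > 1-1e-6 to 1
    }

    "p_rgh.*"
    {
        solver          GAMG;       // placeholder
        smoother        DIC;        // placeholder
        tolerance       1e-7;       // placeholder
        relTol          0.01;       // placeholder
        minIter         2;          // suggested above
    }
}

PIMPLE
{
    momentumPredictor   yes;        // suggested above
    transonic           yes;        // suggested above
    nCorrectors         6;          // better mass convergence
}
```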
Another possibility could be to limit the temperature via fvOptions.
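Assuming this refers to the standard OpenFOAM `limitTemperature` fvOption, a sketch of the corresponding `constant/fvOptions` entry could look like this (the bounds are case-specific placeholders):

```
// constant/fvOptions (sketch; min/max are placeholder bounds)
limitT
{
    type            limitTemperature;
    active          yes;
    selectionMode   all;     // apply everywhere in the domain
    min             273;     // lower temperature bound [K], placeholder
    max             2000;    // upper temperature bound [K], placeholder
}
```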
Hi, I seem to be running into a similar issue of interface cells in the middle of the domain, see the attached photo. It is an axisymmetric mesh, with the satellite bubbles preferring to form close to the axis. The problem seems to get much worse with mesh refinement. I have tried the recommended settings you gave above, and while they have improved the stability of my case, they have not stopped the unwanted formation of these bubbles. Here is a look at my fvSolution:

```
solvers
{
    alpha.water
    {
        nAlphaCorr      4;
        nAlphaSubCycles 4;
        cAlpha          1;
        MULESCorr       no;
        nLimiterIter    5;
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-8;
        relTol          0;

        advectionScheme      isoAdvection;
        reconstructionScheme plicRDF; //isoAlpha;
        vof2IsoTol      1e-8;
        surfCellTol     1e-6;
        snapTol         1e-6;
        writeVTK        true;
    }

    psiFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-7;
        relTol          0.00;
    }

    rhoCpLFinal
    {
        solver          diagonal;
        preconditioner  DILU;
        tolerance       1e-7;
        relTol          0.1;
    }

    rhoCpVFinal
    {
        solver          diagonal;
        preconditioner  DILU;
        tolerance       1e-7;
        relTol          0.1;
    }

    rho
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-7;
        relTol          0.1;
    }

    rhoFinal
    {
        $rho;
        tolerance       1e-7;
        relTol          0;
    }

    p_rgh
    {
        solver          GAMG;
        tolerance       1e-7;
        relTol          0.01;
        smoother        DIC;
        minIter         3;
        maxIter         15;
        nCellsInCoarsestLevel 100;
    }

    p_rghFinal
    {
        $p_rgh;
        tolerance       1e-7;
        relTol          0;
        minIter         3;
        maxIter         20;
    }

    "(U|h|T.*|k|epsilon|R)"
    {
        solver          smoothSolver; //PBiCGStab;
        smoother        symGaussSeidel;
        //preconditioner  DILU;
        tolerance       1e-7;
        relTol          0.0;
        minIter         15;
        maxIter         50;
    }

    "(U|h|T.*|k|epsilon|R)Final"
    {
        $U;
        tolerance       1e-7;
        relTol          0;
        maxIter         50;
    }
}

PIMPLE
{
    momentumPredictor           no;
    nCorrectors                 6;
    nNonOrthogonalCorrectors    2;
}

relaxationFactors
{
    equations
    {
        "h."    1;
        "U."    1;
    }
}
```
Could it maybe be due to excessive superheat? At the moment I am initializing a thermal boundary layer that might be a bit excessive, and I am going to investigate whether it has an impact. Any advice would be appreciated; thank you in advance.
Is the maxCapillaryNum in the controlDict smaller than 1? E.g. `maxCapillaryNum 0.5;`
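For context, a sketch of the time-step controls in `system/controlDict` (everything except `maxCapillaryNum` is a generic placeholder):

```
// system/controlDict (sketch)
adjustTimeStep  yes;
maxCo           0.5;     // convective Courant limit, placeholder
maxAlphaCo      0.5;     // interface Courant limit, placeholder
maxCapillaryNum 0.5;     // capillary time-step limit, keep below 1
maxDeltaT       1e-2;    // placeholder
```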
Try:

```
{
    $p_rgh;
    tolerance   1e-8;
    relTol      0;
    minIter     3;
    maxIter     20;
}
```
Use pointCellsLeastSquares as the gradient scheme, as sketched below.
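I take this to mean OpenFOAM's `pointCellsLeastSquares` gradient scheme; a sketch of the `system/fvSchemes` entry:

```
// system/fvSchemes (sketch; assuming pointCellsLeastSquares is the intended scheme)
gradSchemes
{
    default         pointCellsLeastSquares;
}
```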
Thank you for the quick reply! I will give your suggestions a try.
The attached simulation fails with a sigSegv. This is related to the RDF code, because if the reconstruction scheme is changed to gradAlpha, the simulation runs through the crash point easily. No setup in fvSolution seems to drastically improve the situation. This is most probably related to the way things are parallelized, because running on a single core the simulation gets past the crash point. Note that there is a FOAM warning at each time step when run in parallel, while this disappears when running on a single core.
I am attaching the log file along with my setup. I run on WSL with OpenFOAM v2112, on 18 processes in parallel. Can you tell me if you can replicate the issue?
RDFIssue.zip