philipcardiff opened 2 months ago
But we do not have elastoplasticity, which would be easy to include and nice to show: we could run Cook's membrane as elastoplastic too (@iBatistic thoughts?).
Why not, I will also add cases for linear-elastic material. Later, we simply do not include what we consider redundant.
I opened a new branch called ivan; when things accumulate, I'll open a PR. The things I added are:
Runs and gives reasonable answers (the scale seems off for this benchmark ... very large displacements... maybe E should be x1000)
The problem is that the domain depth is set to 1 mm instead of 1 m. To correct this, we need to scale the traction to 6.25.
Ah, OK.
But, since a traction is used (force per unit area), the solution is not a function of the depth (as the total force increases as the depth increases). If we instead used a force then scaling the depth would work (assuming I am not misunderstanding). Nonetheless, using 6.25 Pa for the traction will work and the problem is linear so the answers just scale linearly anyway.
You are totally right. We just need to set the domain depth to one and the solution is fine (checked). I'm currently working on Cook's membrane; later today I will have a PR which corrects this.
EDIT: I wrote above that changing the depth and leaving 6250 Pa for the load would give the right solution; that is wrong, as you explained.
The correct parameters should be a Young's modulus of 70 MPa and a load of 6.25 MPa. CoFEA's page has an incorrect case description, but I found the correct input files on their GitHub page, where the values match those mentioned above.
Philip, I noticed that when I run the cases, more than one core is being utilized. I suspect this is due to PETSc's internal multithreading.
On my installation, the solids4Foam command uses all available cores, while mpirun -np 1 runs on two cores and is slightly faster. EDIT: I tried another case and solids4Foam is using 3 cores.
Initially, I thought this might explain why the PETSc segregated solver is faster than the OpenFOAM solver. However, the PETSc segregated solver appears to be using only one core.
I also tested using export OMP_NUM_THREADS=1, and with this, PETSc uses just one core.
I tried with OMP_NUM_THREADS=1 on one case, and the clock time was the same as when multiple cores were being used.
Do you observe the same behavior with your installation?
UPDATE
I tested on my workstation, but I didn't encounter the described behavior. It appears to be related specifically to my laptop installation. Anyhow, I wanted to inform you so you can check this for your system as well.
Hi Ivan,
Yes, I have noticed this behaviour on some of my installations. I looked into it, and it depends on how PETSc is configured. If PETSc is configured with OpenMP then it seems to use all the available cores for at least some of the linear solvers, e.g. the MUMPS/LU direct solvers use all the cores.
But good point: we should be careful when reporting and comparing times that we are using only one core or the same number of cores.
@philipcardiff, please let me know if there's anything specific I can help with in the paper at the moment.
Thanks @iBatistic.
Section 4.1 is almost ready:
I am now starting to look at the remainder of the paper (Sections 4.2-4.7). We can probably use a combination of tables and figures to show the speed comparisons. Feel free to make suggestions. Once I start filling it out more, I may ask for help.
Hyperelastic and hyperelastoplastic: our results appear to be converging to a different answer than the reference results. Do you know why? Reviewers may question if our method is correct in these cases. I will run the cases in Abaqus and see what results it gives. The good thing is that these are quick cases.
I'm not sure why we have this discrepancy, but it's not related to SNES. As you said, it could potentially raise concerns with reviewers. Let's wait for the results from Abaqus, and based on that, we can decide whether to include this case in the paper.
Sure, ok
I think I found one reason for the difference: on the deal.ii page, I noticed they say "and a uniform traction load is applied at the other end such that the total force acting on this surface totals 1 Newton". This means the traction is applied per unit undeformed area, whereas solidTraction applies the traction per unit deformed area. This probably explains why our case predicts a different displacement than the deal.ii case.
Abaqus using ~1 mm quadratic elements gives:
solidTraction approach. I need to do a mesh study to see if it converges to the same value as we are getting.

OK, I looked further into the deal.ii page (... I am slowly writing this post as I watch TV...); in particular, I looked at how they calculate the stress for a neo-Hookean material. They calculate the volumetric Kirchhoff stress as
```cpp
SymmetricTensor<2, dim, NumberType>
get_tau_vol(const NumberType &det_F) const
{
  return NumberType(get_dPsi_vol_dJ(det_F) * det_F)
         * Physics::Elasticity::StandardTensors<dim>::I;
}
```
where
```cpp
NumberType
get_dPsi_vol_dJ(const NumberType &det_F) const
{
  return (kappa / 2.0) * (det_F - 1.0 / det_F);
}
```
so the volumetric Kirchhoff stress is (substituting the second expression into the first and replacing det_F with J)

```
(kappa / 2.0) * (J - 1.0 / J)* J
== (kappa / 2.0) * (J - 1.0)
== 0.5*K*(J - 1.0)
```
But we calculate it as

```
0.5*K()*(pow(J, 2.0) - 1.0)
```
So, they are using a different definition of pressure than us. I think this may also be a reason for the differences we are seeing.
By definition, "neo-Hookean" only defines the deviatoric response so they (and we) are not wrong in how we define the pressure/volumetric response, as we are free to choose whatever form we like (as long as it reduces to the small strain version in the limit). This is normally not an issue as neo-Hookean is often used in the incompressible or quasi-incompressible form, where $J \approx 1$ so the volumetric strains are small and all the definitions coincide.
So... what do we do... maybe the easiest is for us to:
I changed the pressure definition (but not solidTraction) and got:
30 -0.016383 0.0156883 0 0.0226832
where our original approach gave
30 -0.0141313 0.0141904 0 0.0200265
so it definitely makes a difference.
We are now overshooting the deal.ii value of 14.74 mm.
Hopefully, if I also use the undeformed areas in solidTraction it will bring us close...
OK, with the alternate pressure approach and solidTraction using the undeformed areas I get
30 -0.0156737 0.0151653 0 0.0218094
for mesh 4.
We are closer, but there is still a difference ... hmm. It is interesting that Abaqus also gives a different answer, but they seem to use another definition of pressure.
I will push the changes to the code and case. Feel free to have a look!
Looking back, I see I made a mistake in the second line of

```
(kappa / 2.0) * (J - 1.0 / J)* J
== (kappa / 2.0) * (J - 1.0)
== 0.5*K*(J - 1.0)
```
It should be

```
(kappa / 2.0) * (J - 1.0 / J)* J
== (kappa / 2.0) * ((J^2 - 1.0)/J)*J
== 0.5*kappa * (J^2 - 1.0)
```
which is actually the same definition we use for the Kirchhoff volumetric stress!
In any case, it would need to be kappa * (J - 1.0) (without the 0.5) to be consistent in the small strain limit.
If we only use the change in solidTraction, we get
30 -0.0136162 0.0137893 0 0.019379
which is further away from the deal.ii reference....
But actually, Abaqus gives 13.7725 mm using the undeformed area (for the single quadratic element I tried), which is very close!
I suspect we are misinterpreting something in the deal.ii case or material definition, but if Abaqus agrees with the mesh refinement limit, then we can use that.
I was about to go to bed, but then I found the following text on the deal.ii site:
Specifically, figure 5 of Miehe (1994) plots the vertical displacement at the midway point on the traction surface for the compressible 3d case.
The displacement is measured at the midway point on the traction surface, not the top corner.
Confusingly, the deal.ii page gives a table titled "Tip y-displacement (in mm)", but I think this is actually the midway point y-displacement.
I ran our model (using the undeformed area option) and it gives the following for the midway point:
30 -0.00917811 0.0147262 0 0.0173522
which is very close to the converged value given by deal.ii: 14.74 mm.
I think it now makes sense.
Time to sleep.
@philipcardiff, it was interesting going through these posts! 😄 I'm glad you managed to solve it. I apologize for not carefully reading the text earlier, which caused some unnecessary confusion and wasted your time. Typically, the literature provides results for the tip, so it's unclear to me why they gave it for the midpoint in this case.
Ha, no problem. It was an interesting journey!
I suspect the elastoplastic case has a similar journey, although hopefully shorter.
@philip in the latest commit (#fda35b2) in the Cook's membrane PR (benchmark repository), I have added a script to check the effects of the Rhie-Chow scaling factor. The AllrunMeshStudy script will run simulations with different scaling factor values on the specified mesh, while AllpostScalingStudy will extract time and memory data.
Inputs for the scripts are defined in USER_VARIABLE_NAMES.
The scripts are currently located in the narrowTmember case, but they can be used with any case to generate data for Table 6. If you'd like, let me know which case you want to use to assess the effect on accuracy, and I can modify or create a script for it accordingly.
Other comments: