dbaumgaertner opened this issue 2 years ago
Hey @dbaumgaertner :tada: Some brief thoughts from my side :)
* Do you think such a simulation is readily feasible with the current implementation?
I think so; at least I don't have anything in mind that would prevent it. The pyramid geometries were added only recently, and no solid elements were created based on them afaik. I think it should be straightforward (maybe @loumalouomega knows more). 10M DOFs is quite a bit; I don't have experience with that many (max ~0.5M, solving time ~20s iirc). You would need to see how well it scales.
* Is the orthotropic material a problem? Does anyone have an example in the Kratos repo, where an orthotropic material is used?
Orthotropic as in composite materials? I am not sure how it works with solids; @AlejandroCornejo added some functionalities recently.
* The corresponding solver type would be the `eigen_eigensystem`. Is that still referring to the wrapper of the solver from the library "Eigen"?
Thanks to @armingeiser we now have the Spectra eigensolver (check the readme in the LinearSolversApplication), which in my experience is ~10 times faster.
* Can I run such a modal analyses in parallel?
Shared memory yes (not sure how well it scales). MPI-parallel: not at the moment (could be implemented with some effort)
* Is it a problem if I mixed 3D solid and beam elements?
I think this should also work fine. I think @emiroglu did something in that direction.
> * Do you think such a simulation is readily feasible with the current implementation?
>
> I think so, at least I don't have anything in mind that would prevent it. The pyramid geometries were added only recently and no solid elements were created based on them afaik. I think it should be straightforward (maybe @loumalouomega knows more).
It is straightforward; it is on my TODO list.
> * Can I run such a modal analysis in parallel?
>
> Shared memory yes (not sure how well it scales). MPI-parallel: not at the moment (could be implemented with some effort)
Yes, by integrating SLEPc for example.
@philbucher @loumalouomega Thanks for the answers already! And interesting to hear about the Spectra possibility. That's a great point of departure.
About the orthotropic material: it's not about composite materials or any anisotropy from a layered structure, but rather about standard anisotropic (orthotropic) material characteristics for solid structures. As I am considering parts made only from solid silicon, this is actually a crucial requirement. Anyway, it should be only a matter of the material definition. So, any idea whether I can already specify orthotropic solid materials in Kratos? Or are you aware of any application of Kratos in this regard? Hope this is not a show-stopper.
Hi! Yes, we have two options: linear elasticity with a standard orthotropic law, in which you define the elastic tensor in the material properties. The other option is meant for the nonlinear regime, which is not your case of interest. You can find the orthotropic law inside the constitutive laws app; it's called something like "linear orthotropic law".
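For illustration, the 6x6 elastic tensor of an orthotropic material can be assembled from engineering constants by inverting the compliance matrix. A minimal numpy sketch (this is my own helper, not a Kratos API; the material constants are only silicon-like placeholder values, and the Voigt shear ordering is an assumption that must be matched to the target code's convention):

```python
import numpy as np

def orthotropic_elasticity_tensor(E1, E2, E3, nu12, nu13, nu23, G12, G13, G23):
    """Invert the orthotropic compliance matrix S to obtain the 6x6 elasticity tensor C."""
    S = np.zeros((6, 6))
    S[0, 0], S[1, 1], S[2, 2] = 1.0 / E1, 1.0 / E2, 1.0 / E3
    S[0, 1] = S[1, 0] = -nu12 / E1  # symmetry: nu12/E1 == nu21/E2
    S[0, 2] = S[2, 0] = -nu13 / E1
    S[1, 2] = S[2, 1] = -nu23 / E2
    # Shear ordering (23, 13, 12) is an assumption; adapt to the Voigt
    # convention used by the code that consumes the tensor.
    S[3, 3], S[4, 4], S[5, 5] = 1.0 / G23, 1.0 / G13, 1.0 / G12
    return np.linalg.inv(S)

# Illustrative (not authoritative) silicon-like constants in Pa:
C = orthotropic_elasticity_tensor(169e9, 169e9, 130e9,
                                  0.064, 0.36, 0.36,
                                  50.9e9, 79.6e9, 79.6e9)
```

The resulting matrix could then be pasted into a `ELASTICITY_TENSOR`-style material entry; symmetry and positive definiteness are quick sanity checks on the constants.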
@AlejandroCornejo Thanks! That's what I was hoping for ;). I consider only linear elasticity. Do you know of any example / test where I can check how to specify the material constants in such a case? It would be great if you could point me to something, if possible.
Sure, you can see the test here https://github.com/KratosMultiphysics/Kratos/blob/master/applications/StructuralMechanicsApplication/tests/cpp_tests/test_user_provided_linear_elastic_law.cpp, @rubenzorrilla do you have any example that he can use? :)
Here it is :wink:
```json
{
    "properties" : [{
        "model_part_name" : "Structure.Parts_Solid",
        "properties_id"   : 1,
        "Material" : {
            "constitutive_law" : {
                "name" : "StructuralMechanicsApplication.UserProvidedLinearElastic3DLaw"
            },
            "Variables" : {
                "ELASTICITY_TENSOR": [
                    [5.99E+11,  5.57E+11, 5.34E+11, 0,        0,        4.44E+09],
                    [5.57E+11,  5.71E+11, 5.34E+11, 0,        0,       -3.00E+09],
                    [5.34E+11,  5.34E+11, 5.37E+11, 0,        0,        9.90E+05],
                    [0,         0,        0,        1.92E+09, 9.78E+06, 0],
                    [0,         0,        0,        9.78E+06, 2.12E+09, 0],
                    [4.44E+09, -3.00E+09, 9.90E+05, 0,        0,        2.56E+10]
                ]
            },
            "Tables" : {}
        }
    }]
}
```
Hi @dbaumgaertner, I have an application with a combination of many element types, including tetrahedra, CR shells, trusses and point-mass elements. The model also has many MPCs defined. It has ~1 million DOFs in total. It has worked quite fine so far, but the simulation in general takes about 30-40 min for 6-8 eigenvalues with 8 OMP threads (it scales only in the build phase). The bugs found were also pushed to main. The only thing is that I only used the Eigen and Intel MKL libraries; I don't know the performance with Spectra. One more thing: it only worked with the block builder & solver. The reason was the MPCs, but I don't know if there are any updates in that area to make it work.
Thanks to all of you guys for the valuable hints! With that I was able to set up a minimal test case which already includes the physics and geometries I need. At this point I must say, the new installation and application process through pip works like a charm. That's really a big boost for users.
Regarding the modal analysis: I noticed that even for rather small problems (300k DOFs), the solution process for a simple beam built with only 20-node hex elements takes forever. It also takes a huge amount of memory (>32 GB). If I reduce that to only 120k DOFs, it suddenly takes only one minute and everything is fine. I use the Spectra eigensolver `spectra_sym_g_eigs_shift`. Here is my parameter file. That's some weird behavior. Do you have an idea why it takes that long in the first case? It almost seems like it is constructing some dense matrix, which in the first case hits the memory limit. Does anyone have experience regarding the performance in this context?
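To put rough numbers on the dense-matrix suspicion (my own back-of-the-envelope arithmetic, not from Kratos; the helper names are made up for the illustration):

```python
# A dense n x n double-precision matrix at these DOF counts cannot fit in
# 32 GB, while storing only k eigenvectors of length n is cheap.
def dense_gb(n):
    return n * n * 8 / 1e9  # 8 bytes per double

def eigvec_gb(n, k):
    return n * k * 8 / 1e9

print(f"{dense_gb(300_000):.1f} GB")      # ~720 GB: a dense matrix is hopeless
print(f"{dense_gb(120_000):.1f} GB")      # ~115 GB: still far beyond 32 GB
print(f"{eigvec_gb(300_000, 10):.3f} GB") # ~0.024 GB: eigenvector storage is fine
```

So if something truly densified the system matrix, even the 120k case would fail; whatever is allocated here must be something in between, e.g. fill-in from a direct factorization.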
As a side question, just in case: is it possible to get the system matrices (stiffness and mass matrix) through the Python interface, e.g., as numpy arrays, such that I can do the modal analysis somewhere else?
Did you check with a higher echo level? How many threads? Is the build or the solve the slower part? Maybe try first with an isotropic material?
Check this solver and the test to see how to get the matrices. Also check the related issues/PRs: #6111 & #6205
EDIT: I see you edited your comment. I didn't observe the behavior you describe. The dense matrices are the eigenvectors (both the global ones and those on the nodes); you can check the implementation here: https://github.com/KratosMultiphysics/Kratos/blob/fa9ab531a49e491f022b21e3b36b1fb92377e80c/applications/StructuralMechanicsApplication/custom_strategies/custom_strategies/eigensolver_strategy.hpp#L383
I tested it with isotropic and anisotropic material. No changes.
But I noticed that it gets stuck either here: https://github.com/KratosMultiphysics/Kratos/blob/fa9ab531a49e491f022b21e3b36b1fb92377e80c/applications/LinearSolversApplication/custom_solvers/spectra_sym_g_eigs_shift_solver.h#L116
Anyway, it's definitely in the solving stage, not the build stage. More precisely, I guess it's the first line (right now I only have the compiled Kratos available and can't debug). I guess so because, for the ~300k DOF model, it gets stuck in a process of allocating memory. If I choose a smaller model (~100k DOFs), then I indeed see that first a large, but feasible, amount of memory is allocated. In this step there is only little CPU load, so it does not seem to run on parallel threads. Then, once all memory is allocated, the CPU load peaks, which is, I guess, the compute step running on all threads.
Here is the minimal example I am looking at, with the two relevant models (small with 46k nodes, larger with 120k nodes): minimal_example.zip. I would really appreciate it if somebody ran that example and at least checked whether it also gets stuck. Any hint / help is welcome.
> It almost seems like it is constructing some dense matrix which in the first case hits the memory limit. Does anyone have experience regarding the performance in this context?
If the memory necessary for the initialization or computation is more than your physical memory, the system starts using the swap area, which is normally on your hard drive. After that, good luck with the waiting and with your electricity bill 🤣
You can actually check whether it goes there using the system monitor on Ubuntu.
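For a quick cross-check without a GUI, swap usage can also be read from `/proc/meminfo` on Linux; a small sketch (my own helper, nothing Kratos-specific):

```python
# Report how many kB are currently swapped out, or None when /proc/meminfo
# is unavailable (i.e. not running on Linux).
def swap_usage_kb():
    info = {}
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                key, _, rest = line.partition(":")
                info[key] = int(rest.split()[0])  # values are reported in kB
    except FileNotFoundError:
        return None
    return info["SwapTotal"] - info["SwapFree"]
```

Polling this while the eigensolver runs shows immediately whether the slowdown coincides with spilling into swap.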
Hi @dbaumgaertner,

> @armingeiser does `spectra_sym_g_eigs_shift` use / require a direct solver internally?

Spectra uses a direct solver internally (from MKL if available, from Eigen if not). I am actually not sure whether the release version has MKL available, but for this system size it would definitely be required.
From my notes on some old test case with solid elements on a system with 64 GB RAM:
| Nodes | Elements | Solver | Memory [GB] | Time [s] |
|---|---|---|---|---|
| 8000 | 6859 | pardiso_llt | 0.1 | 0.35 |
| 64000 | 59319 | pardiso_llt | 3 | 7.8 |
| 216000 | 205379 | pardiso_llt | 15 | 76 |
| 512000 | 493039 | pardiso_llt | killed | - |
| 1000000 | 970299 | pardiso_llt | killed | - |
The Spectra solver could eventually be updated to allow usage of an iterative solver, but since the Spectra library highly depends on Eigen, this requires a bit of custom code, which is also the reason that until now we did not make solvers other than Eigen (and MKL) available here. Also, I am not sure how efficient this would be, since the same system is going to be solved with several RHS, which is efficient with a direct solver because it has to be factorized only once. This is the code location that would have to be updated.
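The factorize-once argument can be sketched with SciPy (an illustration of the general shift-invert pattern, not the Spectra code; the toy matrices are assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# In shift-invert eigen-iterations the operator (K - sigma*M)^-1 is applied
# once per Lanczos/Arnoldi step, but with a direct solver the expensive
# sparse factorization is computed only once.
n = 200
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))  # toy stiffness
M = sp.identity(n)                                         # toy mass
sigma = 0.5

lu = spla.splu((K - sigma * M).tocsc())  # factorize once

rng = np.random.default_rng(0)
for _ in range(10):          # one solve per eigen-iteration
    rhs = rng.random(n)
    x = lu.solve(rhs)        # cheap triangular solves, no refactorization
```

An iterative solver would instead pay a full Krylov solve per RHS, which is exactly the efficiency concern raised above.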
According to the configure scripts for the wheels, it does not include MKL. @roigcarlo can you confirm please?
Currently no MKL is included in the wheels.
Mainly because of license issues. I do not know the situation now, but with MKL being under the ISSL and us distributing different packages, apparently there should be no problem.
If you are interested in having this, I can target it for 9.2.
MKL is a must-have for any larger structural example IMO. It is so much faster than anything from Eigen (~factor of 5 in my experience). Hence, adding it in 9.2 would be great!
I fully agree with @philbucher. From an application point of view, MKL is a must. On the license situation I cannot comment.
Maybe back to the eigenvalue analysis: I tried to run the above-mentioned 120k model on a machine with plenty of RAM and computing power. The analysis still gets stuck. It indeed remains stuck in a loop of allocating memory before actually calling the Spectra solver. More precisely, it seems to get exponentially slower the more memory is allocated. That's why it's fine for the small model but quickly becomes infeasible. I am not sure whether this is a consequence of the required direct solver or of an unfortunate matrix filling somewhere inside.
@armingeiser Do you know what direct solver it is using internally? A Kratos solver or something Spectra / Eigen internal?
> Maybe back to the eigenvalue analysis. I tried to run the above mentioned 120k model on a machine with plenty of RAM and computing power. Still the analysis gets stuck.
>
> @armingeiser Do you know what direct solver it is using internally? A Kratos solver or something Spectra / Eigen internal?
If you can generate the MatrixMarket files, I can try to run the analysis with SLEPc; then we can think about integrating PETSc/SLEPc in Kratos.
The matrices will be written in MatrixMarket format when setting the `echo_level` to 4:
https://github.com/KratosMultiphysics/Kratos/blob/aca8179a05296d55e44693447cf081026df0ace9/applications/StructuralMechanicsApplication/custom_strategies/custom_strategies/eigensolver_strategy.hpp#L417-L436
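Once those MatrixMarket files exist, they can be post-processed outside Kratos with SciPy. A self-contained sketch (the small stand-in matrices and file names below are assumptions for the demo; in practice you would `mmread` the files Kratos wrote):

```python
import numpy as np
import scipy.sparse as sp
from scipy.io import mmread, mmwrite
from scipy.sparse.linalg import eigsh

# Stand-in for the exported stiffness/mass pair: a small 1D model written
# and read back in MatrixMarket format.
n = 50
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
M = sp.identity(n, format="csr")
mmwrite("K_demo.mtx", K)
mmwrite("M_demo.mtx", M)

K2 = mmread("K_demo.mtx").tocsc()
M2 = mmread("M_demo.mtx").tocsc()

# Smallest generalized eigenvalues of K x = lambda M x via shift-invert at 0.
vals, vecs = eigsh(K2, k=4, M=M2, sigma=0.0)
freqs = np.sqrt(vals) / (2.0 * np.pi)  # eigenfrequencies, if K and M were physical
```

This also gives an independent reference for the eigenvalues, which is useful for cross-checking whatever SLEPc or Spectra produces.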
> the matrices will be written in MatrixMarket format when setting the `echo_level` to 4:
Can anyone generate them and attach them here?
I am just about to try. But obviously the analysis gets stuck before it reaches this point.
@loumalouomega, above I provided the ready-to-run example that I use ("minimal_example.zip"). So if you want and have the time, you can also download and run it to see where it fails (note that I use the release version 9.1 of Kratos).
> I tested it with isotropic and anisotropic material. No changes.
>
> But I noticed that it gets stuck either here: ... or here: ...
>
> Anyway, it's definitely in the solving stage, not the build stage. More precisely, I guess it's the first line (right now I only have the compiled Kratos available and can't debug). I guess so because, for the ~300k DOF model, it gets stuck in a process of allocating memory. If I choose a smaller model (~100k DOFs), then I indeed see that first a large, but feasible, amount of memory is allocated. In this step there is only little CPU load, so it does not seem to run on parallel threads. Then, once all memory is allocated, the CPU load peaks, which is, I guess, the compute step running on all threads.
>
> Here is the minimal example I am looking at with the relevant two models (small with 46k nodes and larger with 120k nodes): minimal_example.zip. I would really appreciate if somebody ran that example and at least check if it also gets stuck. Any hint / help is welcome.
>
> I am just about trying. But obviously the analysis gets stuck before it reaches this point.
But it calls `Solve` after writing the mm files.
Oh wait, my bad. I didn't realize at first glance that it requires the echo level to be equal to 4. Sorry for the noise. Now the matrices are written.
Description
Hi everyone,
I want to check whether I can use Kratos for some micro-mechanical applications that I have. The goal is fairly straightforward: to perform modal analyses of 3D structures, which I assume are discretized by standard small-displacement solid elements (hex, tet, prism or pyramid). Furthermore, I assume the material to be silicon (orthotropic linear elastic) and the shape functions to be quadratic. The model size may be as large as 10 million DOFs.
As I am not familiar with the latest developments, I have some questions for anyone involved before trying it myself:
* The corresponding solver type would be `eigen_eigensystem`. Is that still referring to the wrapper of the solver from the library "Eigen"?

Sorry for the many questions and thanks already in advance for your hints. Obviously I would really like to use Kratos, as you might guess. ;)