BAMresearch / fenics-constitutive

Complex constitutive models beyond the FEniCS UFL.
https://bamresearch.github.io/fenics-constitutive
MIT License

More efficient model/experiment setup #23

Closed: TTitscher closed this issue 5 months ago

TTitscher commented 2 years ago

Both Experiment (which creates the mesh) and LinearElastic (which creates the form) have a setup method that is (currently) always called whenever a parameter changes. In some cases that is necessary, in others it is redundant work.

How could that be solved automatically and efficiently?

eriktamsen commented 2 years ago

Can you maybe give some examples of the different cases? I would say when we have a function that refines the mesh, it is clear that we need to call both. In which cases would we only need to update the mesh but not the function spaces?

joergfunger commented 2 years ago

In an optimization/calibration run we might have a geometry parameter as a variable, so the mesh has to be rebuilt in each computation; but if the parameter is "just" the Young's modulus, then the meshing and function space generation only needs to be done once in the constructor (and not in the call method). So depending on the parameters that you would like to change, different procedures have to be called. Three options (the first is sketched below):

1. A standard constructor that builds the mesh/function space using separate methods such as build_mesh(mesh_parameter). In the call method, there is a check whether the parameters have changed compared to the ones the mesh was built with, and if so, this method is executed again.
2. Define this by the user, e.g. as a constructor parameter update_mesh_in_call=True, with a check that raises an exception if it is False and the mesh parameters have changed.
3. Separate the variables into a static and a dynamic list, the static ones being passed in the constructor, the dynamic ones in the call method.
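A minimal sketch of the first option, with hypothetical names (not the repository's API):

class Experiment:
    def __init__(self, mesh_parameters):
        self._mesh_parameters = dict(mesh_parameters)
        self.build_mesh(self._mesh_parameters)

    def build_mesh(self, mesh_parameters):
        pass  # stand-in for the actual (expensive) mesh generation

    def __call__(self, mesh_parameters):
        # Rebuild only if the parameters differ from those the mesh was built with.
        if dict(mesh_parameters) != self._mesh_parameters:
            self._mesh_parameters = dict(mesh_parameters)
            self.build_mesh(self._mesh_parameters)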

TTitscher commented 2 years ago

Can you maybe give some examples of the different cases?

Yes, have a look at the parameter study. There is a (very convenient) loop over all parameters and no distinction whether a parameter belongs to the model or the experiment, or whether a function space or mesh should be rebuilt when this parameter changes. So, to be on the safe side, we call setup on both the experiment and the problem.

Regarding Jörg's second option: fully automated processes (i.e. "What constitutive model can explain these data sets best?") require a way to access (e.g.) this model-specific update_mesh_in_call information, maybe as an additional parameter somewhere. But for a given model or experiment type, this parameter is actually a constant. As an example, changing the length parameter L in the uniaxial truss experiment always requires a mesh update. Logically, this update_mesh_in_call parameter would then belong inside the model. And that brings us to Jörg's first option, which I much prefer: the problem/experiment classes are the ones that know what internal steps a change of a certain parameter should cause.

Also, in most cases the user can be aware of the overheads: changing a mesh parameter will also require completely new function spaces, forms, sparsity patterns and so on. Thus, it will (intuitively) be much slower than just changing a constitutive parameter.

eriktamsen commented 2 years ago

I think I now understand the problem and the need for this better, thanks.

Would it be a solution, instead of collecting all properties in one object, to define each of our model parameters as a Python @property? When initializing a problem, we can still pass parameters, which are then initialized. If I understand this correctly, we can then define a setter for each property, in which we call the respective build/setup functions when the value changes (see the sketch below). My knowledge of this is limited, so it might not be that easy. The only disadvantage I see here is that if we would like to change two parameters, we would rebuild the problem twice.
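A rough sketch of that idea, with hypothetical names:

class Problem:
    def __init__(self, length):
        self._length = length
        self._rebuild()

    def _rebuild(self):
        pass  # stand-in for mesh/function space/form generation

    @property
    def length(self):
        return self._length

    @length.setter
    def length(self, value):
        self._length = value
        self._rebuild()  # note: changing two parameters triggers two rebuilds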

TTitscher commented 2 years ago

Off the top of my head, I would probably implement a way to see which parameters were changed. That could be a flag that is set when a parameter value is assigned, e.g. in a __setitem__. Then, roughly:

def setup(self, ...):
    rebuild_required = self.parameters.has_changed(self.parameters.that_require_rebuild)
    if rebuild_required:
        self.rebuild()
        self.parameters.reset_changed_flags()

But if we agree on solving that issue within the problem/experiment rather than leaving it to the user, those are "just implementation details".
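For illustration, a minimal parameter container with such change flags (hypothetical, not the repository's API) could look like:

class Parameters(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._changed = set()

    def __setitem__(self, key, value):
        if key not in self or self[key] != value:
            self._changed.add(key)  # flag set when a parameter value is assigned
        super().__setitem__(key, value)

    def has_changed(self, keys):
        return any(key in self._changed for key in keys)

    def reset_changed_flags(self):
        self._changed.clear()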

ajafarihub commented 2 years ago

I was wondering why the parameters of Experiment and Problem have been blended here. I would say one important advantage of separating the parameters of Experiment (structure + data) and Problem (model) is to keep the meaning/effects of each set of parameters. This way, we could check which class (Experiment or Problem) each given parameter belongs to and accordingly call the respective _setup method. IMO, by keeping a distinction between the categories of parameters (structure-related or model-related), the handling of parameter changes becomes more transparent. Or am I missing some aspect?

eriktamsen commented 2 years ago

My feeling is that if we can assure that changing an experimental parameter is handled correctly, e.g. as Thomas suggested, then I do not see the need to separate the two sets. I see your point, Abbas, that you could more easily identify which parameter belongs to which set, but why is that important if I do not need to worry about it, as they are handled automatically? I would say it just might make things more confusing, or give more possibilities for errors.

ajafarihub commented 2 years ago

One motive in my mind to distinguish between Experiment and Model parameters also regards automation and workflow (for example, prior to inference and for initiating different models). Suppose all parameters are blended and stored in a metadata YAML file, which is going to decide whether a model must be rebuilt or not (in FEniCS). If only a model parameter like "E" is changed, the workflow still considers the whole thing as changed and then re-initiates the mesh. An alternative could be to store the two sets separately, so that a change of a pure model parameter never triggers re-meshing.

It might look like a tiny advantage, but for very large meshes it becomes quite important.

Besides, I would find this separation "technically" transparent: IMO, we should be able to define/set the mesh (and data) totally independently of, and prior to, any modeling setup. I think this is also very much related to the fact that, ideally, we intend to introduce the same Experiment to different models, so that the best model in the sense of a better simulation can be selected.

TTitscher commented 2 years ago

IMO, we should be able to define/set the mesh (and data) totally independently of, and prior to, any modeling setup.

Already possible. Try:

parameters = bending_parameters(element_length=10.) # no model parameters!
experiment = get_experiment("Bending3Point2D", parameters)

Suppose all parameters are blended and stored in a metadata YAML file, which is going to decide whether a model must be rebuilt or not (in FEniCS).

That sounds like this separation is an absolute requirement for solving the issue of "when should we rebuild what". And it may indeed be one possible solution. But my previous comment outlines another way. Big meshes could be cached based on their parameters.
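Purely as an illustration (not existing API), such a cache keyed by the mesh parameters could be as simple as:

_mesh_cache = {}

def build_mesh(mesh_parameters):
    pass  # stand-in for the expensive mesh generation

def get_mesh(mesh_parameters):
    # Reuse a previously built mesh whenever the same parameter set reappears.
    key = tuple(sorted(mesh_parameters.items()))
    if key not in _mesh_cache:
        _mesh_cache[key] = build_mesh(mesh_parameters)
    return _mesh_cache[key]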

Joining both parameter sets is IMO advantageous, as the distinction should not matter (e.g. to a statistician who uses the modules as a black box and is unfamiliar with FEM). And sometimes it cannot even be clear: think of a uniaxial test where the parameter "thickness" would count as a constitutive parameter in 2D and 1D (just a constant factor in a form), but as a mesh parameter in 3D, where the thickness dimension must be meshed.

eriktamsen commented 2 years ago

I think the last point is especially valid. If there are parameters that (depending on other parameters) might not always be one or the other type, it does not make sense to separate them.

I have a small issue with the current setup, though it could be that the way I am using it is just not intended. I would like to hear your thoughts. For my personal use, I implemented a set of default parameters in my experiment and problem description. Practically this is of course not usually what you want: you should have to define your materials etc. However, I think it is advantageous to be able to "just run" something that works. In addition, a working set of parameters can at least give an idea of reasonable values/correct units. Certainly it is straightforward to do that without changing any interfaces. However, if we set up our problem as

experiment = get_experiment("Experiment", parameters)
problem = MaterialProblem(experiment, parameters)

the parameters defined in the experiment would not be available in the problem (which might not even be a problem, but it feels wrong not to have a full list). Certainly we can now go and write in the material problem

problem_parameters += experiment.parameters

but this now starts to become ugly and redundant, as we first pass the parameters to the experiment and then to the problem again. Should we only pass parameters to the experiment?
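A hypothetical alternative (the one-argument constructor is not the current interface) would be to pass the combined parameters only once and let the problem read them from the experiment:

experiment = get_experiment("Experiment", parameters)
problem = MaterialProblem(experiment)  # would internally use experiment.parameters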

ajafarihub commented 2 years ago

When I think about methods and attributes of classes, I think about whether something must always be prescribed or not. Thus, IMO, it would be better to avoid adding material parameters to an Experiment, since one should be able to define a certain experiment prior to any decision about which model is going to be selected for simulating it.

Regarding the case that Thomas mentioned, a way to go could be to give the constructor of the material parameters class an input argument that can depend on experiment parameters, e.g.:

class UniaxialExperiment1D:
    def __init__(self):
        self.L = 1
        self.A = 5
        self.F = 10
    # ... methods for mesh and data ...

class ElasticModel:
    def __init__(self, E):
        self.E = E
    # ... methods for variational forms, solvers, etc.

e = UniaxialExperiment1D()
m = ElasticModel(E=e.A)

These are my points of view and I still think separation of parameters makes things more transparent in the long-term :) , but I might be wrong.

ajafarihub commented 2 years ago

Nevertheless, let me also criticize my idea (the separation of parameters) from another perspective:

I can imagine a situation where a mesh (which now belongs to Experiment) should be adjusted to certain models, e.g. an elastic model might be happy with a relatively coarse mesh, whereas a damage model would require a finer mesh in certain regions.

On the other hand, this perspective would push me towards another setup, where we consider specialized meshes for different models, which could imply that the mesh should belong to the model rather than to the Experiment.