I would agree with Erik. Recomputing the complete solution for a sensor is certainly not a good idea; however, there might be cases where at least some processing steps have to be performed in order to get the measurement (e.g. if you are interested in the reaction force, you would have to compute the residual at the relevant nodes and sum them up). You could certainly always compute this residual and use it afterwards only as a postprocessing step in the sensor routine, but in cases where there is no such sensor this is overhead. Could we either group the sensors (such that this computation is performed once for e.g. all reaction force sensors and then passed to all of them), or have flags (set based on which sensors are used) that trigger the computation of global quantities once for all sensors, with the results stored on the problem level so they can then be reused? The first solution is more sensor specific and would not clutter the main file; the second solution allows reusing results for multiple sensor groups.
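For illustration, a minimal sketch of the grouping variant; ReactionForceSensor and problem.compute_residual() are hypothetical names, not existing code:

    from collections import defaultdict

    def postprocess(problem, sensors):
        # group sensors by type so shared quantities are computed once per group
        groups = defaultdict(list)
        for sensor in sensors:
            groups[type(sensor)].append(sensor)

        results = {}
        for sensor_type, group in groups.items():
            shared = None
            if sensor_type is ReactionForceSensor:  # hypothetical sensor type
                shared = problem.compute_residual()  # assemble R once for the whole group
            for sensor in group:
                results[sensor] = sensor.measure(problem, shared)
        return results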
Good points. How about: problem.solve() or problem.solve(t) does nothing but equilibrate the structure. problem.postprocess(sensors) can be called by the user and is implemented roughly like:

    result = {}
    for sensor in sensors:
        result[sensor] = sensor.measure(self)  # self == problem
    return result

ForceSensor would then, within its measure method, do the additional work of assembling the residual forces.

"ForceSensor would then, within its measure method, do the additional work of assembling the residual forces". Would it then store the result at the problem level such that another force sensor would be able to reuse this result?
I can not see the advantage of having problem.postprocess() called separately. If you only add the sensors to your problem that you are interested in, would you not want them to measure at each step (I guess special cases can exist, and these could also be dealt with)? With the same logic, we can check the list of sensors for force sensors, then compute the residual at the problem level and still pass the whole problem to all sensors. If the computation of the residual is performed by sensor.measure, then adding two force sensors would require two solves, right?
No, the first force sensor (in each time step) would compute the residual and store it at the problem level; the second one would reuse it. But there are some challenges (e.g. we would have to remove all temporary data, such as R, between load steps, and we would have to make sure that two sensor types do not use the same variables with different meanings).
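Roughly, such a problem-level cache could look like the following sketch; problem._cache, assemble_residual() and self.dofs are hypothetical:

    class ForceSensor:
        def measure(self, problem):
            # the first force sensor per step assembles R, later ones reuse it
            if "R" not in problem._cache:  # hypothetical per-step cache
                problem._cache["R"] = problem.assemble_residual()
            R = problem._cache["R"]
            return sum(R[dof] for dof in self.dofs)  # reaction force at the sensor dofs

    # the problem would have to clear the cache between load steps,
    # e.g. problem._cache.clear() at the start of each solve()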
"I can not see the advantage of having the problem.postprocess() called separately."
I thought that was the point of this issue -- to separate a solve() from sensor evaluation.
If your concern is that each individual sensor evaluation requires a solve, have a look here. Passing a list of sensors will solve once and then loop over all sensors. That is, however, not demonstrated in any of the examples.
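Presumably, solve(sensors) then looks roughly like this sketch (not the actual implementation; _assemble_and_solve is a hypothetical placeholder):

    def solve(self, sensors, t=1.0):
        self._assemble_and_solve(t)  # hypothetical: the single equilibrium solve
        # one solve, then each sensor only postprocesses the solved state
        return {sensor: sensor.measure(self) for sensor in sensors}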
Then there seems to be a misunderstanding; I will try to clarify my view. If I see it correctly, the current process is something like:
    sensor_object = SomeSensor()    # initializing sensor
    new_problem = SomeProblem(...)  # initializing problem

    # to measure something, the sensor is then passed to the problem
    new_problem(sensor_object)  # this now calls evaluate and would even do some timestepping...
    # -> calls new_problem.evaluate(...)
    # -> calls new_problem.solve()
From my point of view this feels counterintuitive to how I would control my problem. What I am suggesting is something along these lines:
    sensor_object = SomeSensor()    # initializing sensor
    new_problem = SomeProblem(...)  # initializing problem
    new_problem.add_sensor(sensor_object)  # the sensor is only added to a list, not triggering a solve

    # now I can solve my problem (in a time loop or not)
    new_problem.solve()
    # -> if a force sensor is detected, an additional solve for R can be performed
    # -> calls sensor.measure(new_problem)  # postprocess the evaluated fields
I don't think we need to totally separate problem.solve() and the sensors, but rather turn the control around.
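A self-contained sketch of that inverted control flow (all names here are illustrative, not from the existing code):

    class DisplacementSensor:
        def __init__(self):
            self.data = []  # one measurement per solved step

        def measure(self, problem):
            self.data.append(problem.u)

    class Problem:
        def __init__(self):
            self.u = None
            self.sensors = []

        def add_sensor(self, sensor):
            self.sensors.append(sensor)  # only stored, no solve triggered

        def solve(self, t=1.0):
            self.u = 42.0 * t  # placeholder for the actual FE solve
            for sensor in self.sensors:
                sensor.measure(self)  # sensors only postprocess the solved state

    problem = Problem()
    sensor = DisplacementSensor()
    problem.add_sensor(sensor)
    for t in [0.5, 1.0]:
        problem.solve(t)
    print(sensor.data)  # [21.0, 42.0]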
I am not sure if what I say contributes or not. A long time ago, I realized that in Python one cannot define a variable as a pointer to another variable (see here, for instance). How relevant is this, though?
If we have e.g. b = whatever_sensor_quantity_that_is_instantiated_later_on_and_changes_over_time, I would like to define something like a = pointer_to_b, so that in the future I can use it in my inference, e.g. my_sensor_values = a.get_pointed_variable(). But this is not possible in Python. After all, I came up with this helper class. I feel it can be helpful: we can define our desired sensor based on ANY attribute, even one that is not yet instantiated (basically, it is still None).
For example:

    problem = MyProblem(...)
    my_desired_field_changing_over_time = problem.desired_field
    sensor = CallToGetUpdatedAttribute(obj=problem, attributes_list=['desired_field'])
I totally agree with Erik; only one remark: if you are interested in two reaction forces, you would compute the auxiliary R twice if these two sensors are totally independent. But I think that can be handled by grouping sensors, so that e.g. all reaction force sensors are in one group that computes R beforehand.
I think we might have two approaches in general: 1) we attach a sensor to a problem within its class (I think this is the approach being discussed currently); 2) we borrow some attributes of the problem to establish a sensor object (this is related to my past comment). Personally, I would prefer the second approach, since it avoids adding more complications to the implementation of each problem class. Instead, we can develop separate classes for the sensors of our desire. I am curious to hear your comments on the up-/downsides of each approach.
I have some trouble understanding your points. Is it correct that ...

You propose individual calls to a problem.add_sensor(sensor) followed by solve, the current implementation directly uses solve(sensor_list) to accomplish the same. Or I am missing something... :S

And are you passing problem.u to a DisplacementSensor and destroying that link by reassigning problem.u = ...? Maybe a problem.get_u() / problem.u() method that is passed to the sensor would do (instead of metaprogramming...)?

Some other upsides of approach 2, IMO:
@TTitscher You are right: we focus on FEniCS, and of course problem.get_u() gives a reference to the same Python object all the time. But let me try to justify some tricky usefulness of that CallToGetUpdatedAttribute class. It is almost the same thing as you suggested (problem.get_u()), and only one difference exists, which actually happened to me: we might wish to define our sensor based on a problem object before having solved that problem. In this case, problem.get_u() might still be None, or something whose link will be lost after it is modified. So, in the end, this class is a SAFE way of borrowing our desired attributes of any object to define sensors (in approach 2). But still, I think this is a side aspect of approach 2, and that approach can be taken without including the CallToGetUpdatedAttribute class.
Hmm, especially with the idea of #22 in mind, the code may look like

    def measure(self, problem):  # self = DisplacementFieldSensor
        return problem.u

and accesses whatever problem.u is at the time of evaluation. Reassigning problem.u will not even cause problems.
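A toy demonstration of that late binding (names are illustrative):

    class DisplacementFieldSensor:
        def measure(self, problem):
            return problem.u  # looked up at call time, not at sensor creation

    class P:
        u = None  # minimal stand-in for a problem

    problem = P()
    sensor = DisplacementFieldSensor()
    problem.u = [0.0, 0.1]          # e.g. set by a first solve()
    print(sensor.measure(problem))  # [0.0, 0.1]
    problem.u = [0.0, 0.2]          # reassignment is picked up as well
    print(sensor.measure(problem))  # [0.0, 0.2]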
Here is a minimal code:

    class BaseModelSensor:
        def __init__(self, model_object, attrs, _name=None):
            self.model_object = model_object
            self.attrs = attrs
            if _name is None:
                _name = f"sensor_{model_object.__class__.__name__}"
            self.name = _name
            self._values = len(attrs) * [None]

        @property
        def values(self):
            for i, a in enumerate(self.attrs):
                self._values[i] = getattr(self.model_object, a)
            return self._values

    class MyProblem:
        def __init__(self):
            self.a = None

        def solve(self):
            self.a = 19

    if __name__ == "__main__":
        p = MyProblem()
        s = BaseModelSensor(p, ['a'])
        print(f"Value of sensor: {s.name} before solve: {s.values}")
        p.solve()
        print(f"Value of sensor: {s.name} after solve: {s.values}")
Quite a detailed discussion. Anyways, I don't see a benefit over a much simpler
    class SensorThatMeasuresA:
        def __init__(self, problem):
            self._problem = problem

        @property
        def values(self):
            return self._problem.a
It is indeed too detailed now :). Just one remark: we could use the same class BaseModelSensor for any kind of measurement we will need for any model, whereas we would need to define a separate class for each attribute 'a' of certain problems. So, the only benefit is to collapse all such per-attribute classes into a single more general one (BaseModelSensor), and this is thanks to the flexibility that getattr provides.
Not wrong in general. Just note that our (currently) 6 sensors (which cover almost all mechanics cases, I'd argue) perform different stuff in measure, so the BaseModelSensor would not be able to save any code.
Well..., on the other hand, I would not see any side effect in using BaseModelSensor, though. Or would you find any?
IMO, this class is more a workaround to overcome the fact that Python lacks the "pointer" concept. And I think a "sensor" fits this concept very well: it can be viewed as something pointing to a quantity which is stored/manipulated/computed/simulated through a Python variable.
Less/simpler code >> more/complex code. Both for reading and maintaining.
But the good thing about the interfaces we are talking about is that, as long as everyone implements them correctly, the internal details won't matter and you are free to use your version.
You two have lost me in the details of your discussion, so I will try to answer the question of @TTitscher instead: "You propose individual calls to a problem.add_sensor(sensor) followed by solve, the current implementation directly uses solve(sensor_list) to accomplish the same. Or I am missing something... :S"
My point is not that one would work on a technical level any differently than the other, but the one I proposed makes more sense to me intuitively.
Step 1: I define my problem, i.e. choose the experiment and the material model, and set up the required sensors (these could be included in the experiment). Step 2: I deal with solving my model (define load/time steps etc.). Step 3: I evaluate what my sensors have measured.
I feel this aligns better with how "real" experiments are set up. But as I said, this probably comes down to personal preference. I don't see a technical problem with the other approach; it just confused me when trying to follow the code.
I see your point! I was more coming from an automated point of view, where I mainly provide a model with valid parameters and let the experiment decide what sensors to evaluate (i.e. what data is available). An example would be a calibration of a single model w.r.t. three different experiments, each with (multiple) different sensors.
Just a matter of preference/definition and both ideas do not even exclude each other:
    problem = ...
    problem.add_sensor(DisplacementFieldSensor(...))
    problem.solve()  # or problem.solve(parameters)?

    # ...
    for exp in [exp1, exp2, exp3]:
        response = problem.solve(exp.sensors)  # ignores the one from .add_sensor
        response = problem.solve_with_sensors(exp.sensors)  # better name?

        # alternative, not my favorite, but possible
        problem.clear_sensors()
        problem.add_sensors(exp.sensors)
        response = problem.solve()
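A sketch of a solve() that supports both variants, with an explicit list taking precedence over the attached sensors (names are only suggestions; _equilibrate is a hypothetical placeholder):

    def solve(self, sensors=None, t=1.0):
        self._equilibrate(t)  # hypothetical: the actual FE solve
        if sensors is None:
            sensors = self.sensors  # fall back to those from add_sensor()
        return {sensor: sensor.measure(self) for sensor in sensors}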
I don't see a problem implementing both options. Contrary to my last post in #18, I guess we would at least all have to have an idea about how this probeye integration, or rather how the FEniCS integration into probeye, will look. From my experience, the example you presented would currently look something like the following when using probeye:
    problem = InferenceProblem("Some inference problem")

    # these are (more or less) the parameters that are being inferred
    problem.add_parameter('x', info=..., prior=...)
    problem.add_parameter('y', ...)

    # this is the defined forward model
    problem.add_forward_model("FEMModel", fem_model)

    for i, exp in enumerate([exp1, exp2]):
        problem.add_experiment(f'experiment_{i}',
                               fwd_model_name="FEMModel",
                               sensor_values={...})  # given parameters, experimental data etc.

    problem.add_likelihood_model(...)

    # example setup
    scipy_solver = ScipySolver(problem)
    inference_data = scipy_solver.run_max_likelihood()
    results = inference_data.x
The actual model definition, setup etc. happens inside the forward model. If you want different inputs for different experiments, you would currently need to define different forward models. So the FEniCS definitions, problem setup, solve, or whatever, need to be defined in the forward model, a specific probeye class.
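For illustration, a rough sketch of such a forward model wrapper. The exact probeye names (ForwardModelBase, interface vs. definition, response) vary between probeye versions, and SomeProblem is purely hypothetical:

    from probeye.definition.forward_model import ForwardModelBase
    from probeye.definition.sensor import Sensor

    class FEMModel(ForwardModelBase):
        def interface(self):  # called definition() in older probeye versions
            self.parameters = ['x', 'y']
            self.input_sensors = [Sensor("load")]
            self.output_sensors = [Sensor("displacement")]

        def response(self, inp):
            # the FEniCS problem setup and solve live here
            problem = SomeProblem(x=inp['x'], y=inp['y'])  # hypothetical problem class
            problem.solve()
            return {"displacement": problem.u}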
I would like to rethink the current sensor implementation. As far as I can follow the code, currently if I call a sensor, e.g. LinearElasticity.evaluate(sensor), the problem is solved. I feel this is backwards. I would suggest that all a sensor does is give a specified output for already computed data. One example: if I have multiple sensors, I would not want to solve the problem multiple times for the same states. I would prefer something like a function problem.add_sensor(sensor) when setting up my simulation (this could also already be included in the experimental setup if there are standard sensors for some experiment), and in problem.solve() a loop where each attached sensor is evaluated.