Closed joergfunger closed 2 years ago
For me, the time would be an input_sensor to the forward model, as in the multiple_sensors example. But it would be interesting to see some code for your example.
The problem with that is that it does not allow multiple sensors with different time grids. E.g. for the current Hannover demonstrator we had two measurement recording systems with different sampling frequencies, one for the laser and the forces and the other for the stereo data. If time is an input to the forward model (and thus the same for all sensors), the returned outputs do not have the same length as the experimental values. In addition, even with only a single dataset, the measurement data can be large (in the elastic test they sample at 10 Hz or more), so for 90 s we would need 900 forward model evaluations, even though a single evaluation (in the elastic case) plus interpolation would be totally sufficient. That is the extreme case, but usually an adaptive time integration scheme is really beneficial, and the interpolation error can be adjusted and reduced to small values depending on what is needed.
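To make the argument concrete, here is a minimal sketch (the function and variable names are my own, not probeye's API) of the interpolation idea: the forward model is solved once on a coarse solver grid, and each recording system's response is obtained by interpolating onto that sensor's own timestamps, so the output lengths always match the experimental data.

```python
import numpy as np

def coarse_model(t_coarse):
    # stand-in for an expensive forward model; for the elastic case the
    # response is (here) linear in time, so a coarse grid is sufficient
    return 2.0 * t_coarse

# ~10 solver steps instead of 900 forward model evaluations
t_solver = np.linspace(0.0, 90.0, 10)
u_solver = coarse_model(t_solver)

# two recording systems with different sampling frequencies
t_laser = np.arange(0.0, 90.0, 0.1)    # 10 Hz laser/force system
t_stereo = np.arange(0.0, 90.0, 0.5)   # 2 Hz stereo system

# interpolate the coarse solution onto each sensor's own time grid
u_laser = np.interp(t_laser, t_solver, u_solver)
u_stereo = np.interp(t_stereo, t_solver, u_solver)

# each output now matches the length of the corresponding measurement series
assert len(u_laser) == len(t_laser)
assert len(u_stereo) == len(t_stereo)
```

The interpolation error can then be controlled by refining the solver grid (or using an adaptive scheme) without ever tying the model evaluation to a specific sensor's sampling rate.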
I'm currently looking at a time-dependent problem with a fixed time discretization, where for each time step the PositionSensors are supposed to return a value. When thinking about how to implement this, there are actually several options.
For the first option, with the interpolation (my favorite), we could define this independently of the experiment (passing it directly to the constructor), or as an input_sensor (different for each experiment). I guess all of the options are valid, but when writing a TimePositionSensor (I have already done that in my local file), we are somehow fixing that choice. What is your opinion on that?