Closed: mstimberg closed this pull request 4 years ago
It looks nice. Note that in your example, you can't have a definition of both `k` and `penalty`. If you use `penalty`, then `k` is being optimized. You might then have an increasing factor for that penalty; e.g. `penalty = iteration*(k*ms*mV)**2`.

Did you have a look at Abarbanel's papers? It seems that he actually uses a `k(t)`. But that seems a bit complicated to implement (the entire function `k(t)` is being tuned, with some smoothness constraint).

I think the idea of the initial value of `iteration` is not bad.
> Note that in your example, you can't have a definition of both `k` and `penalty`. If you use `penalty`, then `k` is being optimized. You might then have an increasing factor for that penalty; e.g. `penalty = iteration*(k*ms*mV)**2`.
Yes, indeed, that did not make much sense. But I now realize that it makes everything a bit more complicated: since `k` is a fitted parameter, we cannot easily set it to 0 for another optimization run. So we'd need a second variable (e.g. a boolean) to deal with this.
> Did you have a look at Abarbanel's papers? It seems that he actually uses a `k(t)`. But that seems a bit complicated to implement (the entire function `k(t)` is being tuned, with some smoothness constraint).
I only had a cursory look, but I don't think we can implement this approach in our current framework. As you said, this fits a value for every time step with additional constraints and then uses specific optimization methods. This would certainly be something interesting to have, but it would be a major addition and would be completely tailored to this approach (e.g. it wouldn't use `nevergrad`).
Yes I think we should keep it simple.
I'd say the basic case would be that `k` is not fitted (but could depend on `iteration`).
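For concreteness, that basic case could look like a single subexpression in the model, along these lines (only a sketch; the particular decay is arbitrary):

```python
# Sketch: k is fully determined by the iteration number, so it is not a fitted
# parameter; the inverse power-law decay here is only a placeholder.
coupling_eq = 'k = 1/((1 + iteration)**2 * ms) : 1/second'
```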
This PR is not ready yet (at the very least it is missing tests and documentation), but I wanted to discuss the approach first. The general idea is to support dynamical coupling of the model to the target variable (#23). But since for now we don't have much experience in what the best approach is (how should the coupling decay over time, how should the error be adapted, etc.), I wanted to first lay the foundation by making "everything" possible. Later, when we are sure about the details, we can add convenient shortcuts.
Since #33, a model can already refer to the target values of a variable `x` as `x_target`. This branch adds two additional functionalities:

- an `iteration` variable, which simply contains the number of the current iteration
- a penalty mechanism, i.e. one of the model's equations can be declared as a penalty term that gets added to the error

With these two mechanisms, one could formulate a model like this:
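(The following is only an illustrative sketch: the neuron equations and the parameter names `g_L`, `E_L`, `C_m` are placeholders, as are the particular forms chosen for `k` and `penalty`.)

```python
# Illustrative sketch of a model using the new mechanisms; the equations and
# parameter names are placeholders, not a definitive example.
# 'I' is the input variable supplied by the fitter, 'v_target' the target trace.
model = '''
dv/dt = (g_L*(E_L - v) + I)/C_m + k*(v_target - v) : volt
k = 1/((1 + iteration)**2 * ms) : 1/second        # coupling decays with the iteration number
penalty = (k*(v_target - v)*second)**2 : volt**2  # equation to be used as the penalty term
g_L : siemens (constant)
E_L : volt (constant)
C_m : farad (constant)
'''
```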
The `k` term makes use of the `iteration` variable and scales the coupling as an inverse power law of the iteration (but that could also be `exp(-iteration/something)` or similar, of course). When creating the `Fitter`, or during the call to `fit`, one then additionally needs to specify `penalty='penalty'` to define which equation defines the penalty term.
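As a sketch of how this could look in practice (the data arrays, bounds, and fitter settings below are placeholders, and whether `penalty` is given to the constructor or to `fit` is up to the user):

```python
import numpy as np
from brian2 import *
from brian2modelfitting import TraceFitter, NevergradOptimizer, MSEMetric

# Placeholder data: 5 input/target traces with 1000 samples each
inp_traces = np.zeros((5, 1000))*nA
out_traces = np.zeros((5, 1000))*mV

fitter = TraceFitter(model=model, input_var='I', output_var='v',
                     input=inp_traces, output=out_traces,
                     dt=0.1*ms, n_samples=30,
                     penalty='penalty')  # name of the equation defining the penalty term
# (alternatively, penalty='penalty' could be passed to the fit call instead)
params, error = fitter.fit(n_rounds=10,
                           optimizer=NevergradOptimizer(),
                           metric=MSEMetric(),
                           g_L=[1*nS, 100*nS],
                           E_L=[-90*mV, -40*mV],
                           C_m=[50*pF, 500*pF])
```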
There are options to change the start value of the `iteration` variable of a `fit` call, so that one can run it again with a very high iteration value and therefore switch off the coupling. The same is done by default for calls to `generate` and also `refine`, because I wasn't sure that this coupling would play well with the gradient-based methods (which assume that running the model with the same parameters always gives the same result).

The "high iteration value to switch off coupling" approach feels a bit hacky, but it felt like the simplest option. Another approach would be to pass in some boolean variable that could be used with something like `+ int(use_coupling)*k*(v_target - v)` in the equations (a sketch follows at the end). One can already do this manually by (ab)using the parameter initialization mechanism.

Again, to be clear: in the long run it could be nice to just have a convenient "use coupling" switch that does all this automatically. For now, though, I do not want to commit to a specific implementation, but just make sure that the model has access to everything that is needed to implement the approach.
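To illustrate that alternative, a model using such a switch might look like the sketch below (the `use_coupling` name is hypothetical, and how exactly it gets initialized is left to the existing parameter initialization mechanism):

```python
# Sketch of the boolean-switch alternative; use_coupling is a hypothetical name.
# It would be initialized to True during fitting and to False to switch the
# coupling off again (e.g. for a final free-running evaluation).
model_with_switch = '''
dv/dt = (g_L*(E_L - v) + I)/C_m + int(use_coupling)*k*(v_target - v) : volt
k = 1/((1 + iteration)**2 * ms) : 1/second
use_coupling : boolean (constant)
g_L : siemens (constant)
E_L : volt (constant)
C_m : farad (constant)
'''
```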