Closed: nathanaelbosch closed this issue 3 years ago.
Good question. I think this is indeed a bug, but it should be fixable by replacing
```python
def postprocess(self, times, rvs):
    """Rescale covariances with sigma square estimate and (if specified), smooth the estimate."""
    rvs = [
        RandomVariable(
            distribution=Normal(rv.mean(), self.sigma_squared_global * rv.cov())
        )
        for rv in rvs
    ]
    odesol = super().postprocess(times, rvs)
    if self.with_smoothing is True:
        odesol = self._odesmooth(ode_solution=odesol)
    return odesol
```
with
```python
def postprocess(self, times, rvs):
    """Rescale covariances with sigma square estimate and (if specified), smooth the estimate."""
    rvs = [
        RandomVariable(
            distribution=Normal(rv.mean(), self.sigma_squared_global * rv.cov())
        )
        for rv in rvs
    ]
    self.prior.diffconst *= self.sigma_squared_global  # changed here!
    odesol = super().postprocess(times, rvs)
    if self.with_smoothing is True:
        odesol = self._odesmooth(ode_solution=odesol)
    return odesol
```
right?
Now that I think about it, it might be good to make the fix slightly more involved than this single line. There should be a test for the sigma change: take a test problem with an overly small prior sigma and assert that the sigma of the prior is larger after the solve, perhaps checking that the returned final RV is a (positively) scaled version of the last intermediate random variable. Is that too cryptic, or do you understand what I mean?
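For the record, the "positively scaled version" check could look roughly like this. This is just a plain-NumPy sketch, not tied to the actual probnum API; `cov_before` and `cov_after` stand in for the covariance of the last intermediate RV before and after calibration:

```python
import numpy as np


def is_positively_scaled(cov_before, cov_after, tol=1e-12):
    """Check that cov_after == c * cov_before for some scalar c > 1."""
    cov_before = np.asarray(cov_before, dtype=float)
    cov_after = np.asarray(cov_after, dtype=float)
    # Estimate the scale from the largest-magnitude entry to avoid dividing by ~0.
    idx = np.unravel_index(np.argmax(np.abs(cov_before)), cov_before.shape)
    c = cov_after[idx] / cov_before[idx]
    return bool(c > 1.0 and np.allclose(cov_after, c * cov_before, atol=tol))


# Example: calibration inflated the covariance by sigma^2 = 1e4.
P = np.array([[2.0, 0.5], [0.5, 1.0]])
print(is_positively_scaled(P, 1e4 * P))        # True: exact scalar multiple
print(is_positively_scaled(P, P + np.eye(2)))  # False: not a scalar multiple
```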
The fix you proposed seems correct to me, and might be a good solution for the current code. But I agree that we might want to start by designing a test that specifically targets this part of the "correctness" of the Kalman posterior. I didn't 100% understand what you meant though.
Also, for time-varying sigmas this fix would not be enough. That's not really relevant right now, of course, and I'm not sure how best to implement them with the current codebase anyway, but I hope that I have some success with them and will therefore want to add them at some point.
> I didn't 100% understand what you meant though.

For instance: prior sigma == 1e-8; solve; calibrate; test that the initial covariance is the input initial cov times a larg(ish) number. I guess it would work for all kinds of sigma adaptation strategies. Do you agree?
"initial covariance" is a bit misleading to me. As I understand it, this error only applies to the dense evaluation of the Kalman posterior. So the test I'd imagine after what you said would be (after starting with a small sigma and solving an ODE) to evaluate the covariance on some "in-between" point, once "by hand" (hard-coded) with the original sigma and once with the KalmanPosterior implementation, and then make sure that the latter returns a larger covariance?
Probably both? My proposal would check that the solver's rv-list is scaled, yours that the dense output knows the correct sigma.
Ok sure. I was just surprised, because the case you described is already working fine right now, isn't it? The discrete list of rvs is correctly scaled with the estimated global sigma, so it's not really part of this issue. But of course, more tests are always better :)
Sorry, that was miscommunicated on my side then. I suggested that as well because there is no test for this sigma change as far as I know. It is not directly related to the current issue though :)
I don't think we explicitly use it anywhere. Therefore, I would imagine that the used `Q` does not actually use the estimated sigma squared, and therefore the prediction does not actually correspond to the true posterior prediction. @pnkraemer Is this the case, or did I miss something? If this is indeed currently a bug, we should discuss how to best fix it.
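To make the suspected issue concrete: in a Kalman prediction step, the process-noise covariance enters as `P_pred = A P A^T + Q`, and with a calibrated diffusion it should be `A P A^T + sigma^2 * Q`. A generic sketch (plain NumPy, illustrative matrices, not the actual probnum code):

```python
import numpy as np


def predict_cov(A, P, Q, sigma_squared=1.0):
    """Kalman prediction covariance with an (optionally) calibrated
    diffusion: P_pred = A P A^T + sigma^2 * Q."""
    return A @ P @ A.T + sigma_squared * Q


A = np.array([[1.0, 0.1], [0.0, 1.0]])     # transition matrix (IWP-like, made up)
P = np.eye(2)                              # current filtering covariance
Q = np.array([[1e-3, 0.0], [0.0, 1e-3]])   # process noise of the prior

uncalibrated = predict_cov(A, P, Q)                   # sigma^2 silently ignored
calibrated = predict_cov(A, P, Q, sigma_squared=1e4)  # sigma^2 folded into Q

# If the posterior's dense evaluation uses the uncalibrated Q, its
# predictions are overconfident whenever the estimated sigma^2 > 1:
print(np.all(calibrated >= uncalibrated))  # True (elementwise, since Q is diagonal)
```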