Goal

Add more options to the feedback process on $\log R_t$, described here, which are efficient for longer-running inference.
Context
IMO, from basic principles, if we fit only on data from after one peak in hospitalisations and before the next, the action of the feedback term will be pretty unidentifiable. However, the obvious downside of running the inference over longer time periods is that the model would have to carry a very long history of infections inside `scan`.
From f2f discussion with @dylanhmorris we had a quick think about easier ways to model this.
IMO, the easiest way to have a long period of effect from feedback whilst avoiding heavy compute is exponential smoothing of past infections:

$$F(t) = \gamma \sum_{s=1}^{T_f} \lambda_f^{s-1} I(t-s),$$

where $\lambda_f \in [0,1]$ is a forgetting factor and $T_f = \infty$. Whilst exponential smoothing is pretty simplistic, it is handy because it has an on-line update rule:
$$F(t+1) = \gamma I(t) + \lambda_f F(t).$$
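As a quick sanity check (illustrative code, not part of the package), iterating the on-line update reproduces the closed-form geometric sum it implies:

```python
import numpy as np

# Numerical check (illustrative, not package code): iterating
# F(t+1) = gamma * I(t) + lambda_f * F(t) from F(0) = 0 reproduces the
# closed form F(T) = gamma * sum_{s=1}^{T} lambda_f**(s-1) * I(T-s).
rng = np.random.default_rng(0)
gamma, lambda_f = 0.3, 0.9
T = 50
infections = rng.uniform(size=T)  # I(0), ..., I(T-1)

feedback = 0.0
for incidence in infections:
    feedback = gamma * incidence + lambda_f * feedback

closed_form = gamma * sum(
    lambda_f ** (s - 1) * infections[T - s] for s in range(1, T + 1)
)
assert np.isclose(feedback, closed_form)
```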
This means that `scan` only has to carry forward the feedback strength $F(t)$, in addition to the recent infections required for the new-infection calculation, rather than a long vector of past infections.
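A minimal sketch of what that carry structure could look like under `jax.lax.scan` (all names and the toy renewal step are illustrative assumptions, not the package's API):

```python
import jax
import jax.numpy as jnp

# Sketch of a scan carry holding a scalar feedback strength F(t) plus a
# short window of recent infections (names are illustrative assumptions).
gamma, lambda_f = 0.3, 0.9

def step(carry, log_rt_unadjusted):
    feedback, recent_infections = carry
    # Feedback acts on log R_t.
    log_rt = log_rt_unadjusted - feedback
    # Toy renewal step: a real model would convolve recent infections
    # with a generation-interval pmf here.
    new_infections = jnp.exp(log_rt) * recent_infections.mean()
    # On-line update: F(t+1) = gamma * I(t) + lambda_f * F(t).
    feedback = gamma * new_infections + lambda_f * feedback
    recent_infections = jnp.append(recent_infections[1:], new_infections)
    return (feedback, recent_infections), new_infections

# Carry stays fixed-size however long the feedback memory is:
# scalar F(0) plus a 5-day infection window, not the full history.
init = (jnp.asarray(0.0), jnp.ones(5))
_, infections = jax.lax.scan(step, init, jnp.zeros(20))
```

The point of the exponential-smoothing form is exactly that the carry stays fixed-size regardless of how far back the feedback effectively reaches.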
NB:
This approach would very likely work better with infections on a per-capita scale, and $F(0)$ is likely to be a parameter that needs inferring.
@dylanhmorris pointed out that more complicated models than exponential decay are possible, e.g. a multi-stage Erlang decay.
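For illustration only (the staging scheme below is one assumption about what an Erlang-style decay could look like, not a spec), the same small carry works if the single smoother is replaced by a chain of $K$ stages, giving an Erlang-shaped rather than exponential feedback kernel:

```python
import numpy as np

# Illustrative K-stage chain: infections enter stage 1, each stage decays
# by lambda_f and passes (1 - lambda_f) of its mass to the next stage,
# and the feedback strength is read from the last stage, whose impulse
# response is then Erlang-like rather than exponential.
gamma, lambda_f, K = 0.3, 0.9, 3

def erlang_update(stages, new_infections):
    inflow = np.concatenate(
        ([gamma * new_infections], (1.0 - lambda_f) * stages[:-1])
    )
    return lambda_f * stages + inflow

stages = np.zeros(K)
for incidence in np.ones(500):  # constant unit incidence
    stages = erlang_update(stages, incidence)

# Under constant incidence every stage equilibrates at
# gamma / (1 - lambda_f) = 3, matching the single-smoother fixed point.
feedback = stages[-1]
```

The carry is a length-$K$ vector instead of a scalar, so the cost inside `scan` is still $O(K)$, not $O(t)$.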
Required features

`new_double_convolve_scanner`.