An incidence object is now made for the branching process as of cf651c480bad109935ca72904b6c98e63fee5b65.
The exponential growth model is implemented in 48122c72c77ba1b79b2a9d481f4c5583f882167e.
This is probably fine for now, until someone tries the doubling time approach on data that doesn't neatly follow exponential growth or decay. In that case it might be worth fitting to only the last serial interval or two of data, or using a weighting scheme that down-weights older observations (e.g. exponential decay with a half-life of one week), along the lines of the sketch below.
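A minimal sketch of that weighting idea, assuming a daily admissions series; `dat`, `day` and `cases` are placeholder names, not anything in the repo:

```r
# hypothetical daily admissions series; `day` and `cases` are placeholder names
dat <- data.frame(day   = 0:20,
                  cases = rpois(21, lambda = exp(1 + 0.05 * (0:20))))

# exponential-decay weights with a one-week half-life, so a count from
# seven days ago contributes half as much as today's count
half_life <- 7
dat$weight <- 0.5^((max(dat$day) - dat$day) / half_life)

# weighted Poisson GLM: log(lambda) = a + r * t
fit <- glm(cases ~ day, family = poisson(), data = dat, weights = weight)
```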
Currently only the final day's cases are used to seed the branching process outbreak. We could instead use the whole data set: fill in the unreported values, convert to an incidence object, and simulate the branching process from there, as in the sketch below.
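For example, assuming the {incidence} and {projections} packages; the dates, R value and serial interval PMF below are all made up for illustration:

```r
library(incidence)
library(projections)

# placeholder admission dates, one per case; the real data would come from
# the full report with unreported days filled in
onset_dates <- as.Date("2020-03-01") + c(0, 0, 1, 2, 2, 2, 4, 5, 5, 6)

# daily incidence object; days with no cases get zero counts
inc <- incidence(onset_dates, interval = 1)

# branching process simulated from the whole series rather than just
# the final day's cases
proj <- project(inc,
                R      = 1.5,
                si     = c(0.1, 0.3, 0.3, 0.2, 0.1),  # made-up serial interval PMF
                n_sim  = 100,
                n_days = 14)
```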
For the exponential change in cases under the doubling/halving time model, it may be worth estimating the constant term of a Poisson GLM with log(λ) = a + r t and then predicting the expected number of admissions on the final date, so that a sudden drop in the most recent counts doesn't leave a misleadingly low baseline for future projections (see the sketch below). We might need to go down the road of estimating the doubling time from a time series of cases if we're going to look at modelling.
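Something along these lines, reusing the placeholder `dat` from the weighting sketch above; the doubling time falls out of the slope as log(2)/r:

```r
# Poisson GLM on the hypothetical `dat` above: log(lambda) = a + r * t
fit <- glm(cases ~ day, family = poisson(), data = dat)

# doubling time in days implied by the slope; for negative r,
# log(2) / abs(r) is the halving time
r <- coef(fit)[["day"]]
doubling_time <- log(2) / r

# fitted admissions on the final date, used as the projection baseline
# instead of the possibly-noisy observed final count
baseline <- predict(fit,
                    newdata = data.frame(day = max(dat$day)),
                    type = "response")
```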
For the time being we can probably just read in all the data, check whether we're using the branching process, and then pass in only the most recent row if using doubling/halving, i.e. something like the dispatch below.
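A sketch of that dispatch, with `model` and `all_data` as stand-in names:

```r
# hypothetical dispatch: the branching process gets the full series,
# doubling/halving gets only the most recent row
model_data <- if (model == "branching") {
  all_data
} else {
  all_data[nrow(all_data), , drop = FALSE]
}
```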