Our calibration system uses approximate Bayesian computation (ABC) to identify 100 (by default) parameter sets that fit the data within a maximum error of 2 (also a default). If we picked just one parameter set, we would risk choosing one that fits the data but is, in fact, unrealistic. Instead, we simulate all 100 accepted parameter sets to get a consensus from the model, so we know that the majority of well-fitting parameter sets all do the same thing.
This works well, but we hit a stumbling block when it comes to simulating interventions. We have six interventions and simulate 10 strengths of each, giving 4,069 intervention combinations that must be simulated. Simulating all of these for all 100 parameter sets would take prohibitively long.
As a result, the decision was made to do one of the following:
1. Take the mean value of all accepted parameter sets (along with the mean initial values for the state compartments and the mean incidence values) and use that as the 'status quo' model.
2. Identify the 'best-fitting run', i.e. the parameter set that results in the smallest total error, and use the initial values and incidence estimates associated with it.
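The two options above can be sketched as follows. This is a minimal illustration, not the actual calibration code: the array names, parameter count, and randomly generated values are all placeholders standing in for the real accepted ABC output.

```python
import numpy as np

# Placeholder for the ABC output: each row is an accepted parameter set,
# and `errors` holds each set's total calibration error (capped at 2 by design).
rng = np.random.default_rng(0)
accepted = rng.normal(size=(100, 5))       # 100 accepted sets, 5 parameters (illustrative)
errors = rng.uniform(0.0, 2.0, size=100)   # total error for each accepted set

# Option 1: the 'status quo' model uses the mean of all accepted parameter sets.
mean_params = accepted.mean(axis=0)

# Option 2: the 'best-fitting run' is the single set with the smallest total error.
best_idx = np.argmin(errors)
best_params = accepted[best_idx]
```

In the real system the same averaging (option 1) or selection (option 2) would also be applied to the initial compartment values and incidence estimates associated with each accepted run.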
A comparison of the two methods is shown below; the plot illustrates how their fits differ between 2010 and 2015. The question now is: which do we pick, and is there an alternative that would speed up the optimisation algorithm at all?