Closed · DavAug closed 3 years ago
As discussed, it is probably best if we select the data from index 60 to 90 and normalise it to units of 100k incidences. If we choose priors similar to those in this notebook (happy to use truncated Gaussians instead), we should be able to find the optimum reliably: https://nbviewer.jupyter.org/github/SABS-R3-Epidemiology/seirmo/blob/optimiation-notebook/examples/optimisation.ipynb
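For reference, the slicing and rescaling could look like the sketch below. The column names (`time`, `incidence`), the synthetic values, and the population size are assumptions for illustration only, not taken from the actual flu dataset:

```python
import pandas as pd

# Hypothetical flu dataset; columns and values are made up for illustration.
df = pd.DataFrame({
    'time': range(100),
    'incidence': [i % 50 for i in range(100)],  # raw case counts
})

# Select the window from index 60 to 90, as discussed.
window = df.iloc[60:90].copy()

# Normalise to units of 100k incidences (assumed population of 1 million).
population = 1_000_000
window['incidence_per_100k'] = window['incidence'] / population * 100_000

print(window['incidence_per_100k'].iloc[0])  # 10 cases / 1M * 100k → 1.0
```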
The minimal optimisation app should probably fix the parameters in the same way as the notebook. As an extension, we could allow any parameter to be fixed (which may be used to illustrate that the model is not identifiable).
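The "fix any parameter" extension could be implemented with a small wrapper around the forward model. The helper `fix_parameters` below is hypothetical (it is not part of seirmo), and the toy simulator is a stand-in:

```python
def fix_parameters(simulate, fixed):
    """Wrap a simulator so that some parameters are held at fixed values.

    `simulate` takes a full parameter list; `fixed` maps parameter index
    to its fixed value. Returns a function of only the free parameters.
    """
    def wrapped(free):
        n_total = len(fixed) + len(free)
        it = iter(free)
        # Rebuild the full parameter vector, slotting in fixed values.
        full = [fixed[i] if i in fixed else next(it) for i in range(n_total)]
        return simulate(full)
    return wrapped

# Toy example: a "simulator" that just sums its parameters.
model = lambda params: sum(params)
fixed_model = fix_parameters(model, {0: 10.0})
print(fixed_model([1.0, 2.0]))  # 10 + 1 + 2 = 13.0
```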
Write an app that finds the maximum a posteriori (MAP) estimates for the flu dataset:

- an `add_problem` method that takes the data as a `pandas.DataFrame` and the model as a `seirmo.ForwardModel`
- an optimisation routine which runs `pints.CMAES` with an initial point drawn from the prior, log-transformed parameters, and otherwise default settings
- reuse of `update_simulation` from the simulation app
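The MAP step could be sketched as below on a toy problem. This is not the pints/seirmo implementation: the exponential-decay model, the Gaussian priors, and `scipy.optimize.minimize` (standing in for `pints.CMAES`) are all assumptions for illustration; the log-transformation of the parameters and the prior-drawn initial point follow the settings above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy forward model standing in for seirmo.ForwardModel: exponential decay.
def simulate(a, b, times):
    return a * np.exp(-b * times)

# Synthetic data with known ground truth and Gaussian noise.
times = np.linspace(0, 10, 30)
true_a, true_b = 5.0, 0.5
sigma = 0.1
data = simulate(true_a, true_b, times) + rng.normal(0, sigma, times.shape)

# Gaussian priors on the untransformed parameters (assumed, for illustration).
prior_mean = np.array([4.0, 0.4])
prior_sd = np.array([2.0, 0.2])

def neg_log_posterior(log_params):
    # Optimise in log space, so the parameters stay positive.
    a, b = np.exp(log_params)
    resid = data - simulate(a, b, times)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * np.sum(((np.array([a, b]) - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

# Initial point drawn from the prior, then log-transformed.
x0 = np.log(np.abs(rng.normal(prior_mean, prior_sd)))
res = minimize(neg_log_posterior, x0, method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 2000})
map_a, map_b = np.exp(res.x)
print(map_a, map_b)  # should land near the true values 5.0 and 0.5
```

With 30 well-resolved data points, the likelihood dominates the prior, so the MAP estimates sit close to the true parameters; with sparser data the prior's pull would be visible.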