kingaa / pomp

R package for statistical inference using partially observed Markov processes
https://kingaa.github.io/pomp
GNU General Public License v3.0

Pseudo-likelihood combining iterated filtering and probe matching #202

Closed: Fuhan-Yang closed this issue 6 months ago

Fuhan-Yang commented 9 months ago

This is more a question than a request. My data are noisy, with weak dynamical signals: although the fitted filtering median from mif2 looks good, the simulations are white noise, i.e., the model does not capture the dynamical signals in the data. Since probe matching is designed to capture dynamical features, I wonder whether it is possible to build a pseudo-likelihood that combines the likelihood from mif2 with the synthetic likelihood from probe matching. Is there a way to enable this in dmeasure? It might help to weight the filtering toward the summary statistics of interest.
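To make this concrete, here is a rough sketch of the kind of combination I have in mind, using the built-in `gompertz()` example as a stand-in for my model; the weight `w` and the choice of probes are arbitrary placeholders of mine, not anything pomp provides:

```r
library(pomp)

po <- gompertz()   # built-in toy example, standing in for my model

## hypothetical combined objective: 'w' is an arbitrary weight (my own
## invention); 'theta' is a named parameter vector
pseudo_loglik <- function (theta, w = 0.5, Np = 1000, nsim = 200) {
  ll_pf <- logLik(pfilter(po, params = theta, Np = Np))   # particle-filter log-likelihood
  ll_sl <- logLik(probe(                                  # synthetic log-likelihood
    po, params = theta, nsim = nsim,
    probes = list(mean = probe_mean("Y"), acf = probe_acf("Y", lags = 1:3))
  ))
  w * ll_pf + (1 - w) * ll_sl
}

pseudo_loglik(coef(po))
```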

kingaa commented 8 months ago

@Fuhan-Yang : it's an interesting question. Theoretically, the likelihood is a kind of "master summary statistic" in the sense that no other summary statistic can give information that adds to that given by the likelihood itself. If you find that the model you are proposing is not capturing the signals you think you see in the data, you might propose a better model. If it turns out to be very difficult to imagine a better model, then synthetic likelihood becomes an attractive option, since it allows you to pick out those features of the data that you want the model to match and to ignore the rest.
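To illustrate the last point: in probe matching, you choose the features to match via the `probes` argument. A minimal sketch, using the built-in `gompertz()` toy model and an arbitrary set of probes:

```r
library(pomp)

po <- gompertz()

## match only the features of interest: here, the mean, the standard
## deviation, and the lag 1-3 autocorrelations of the observable "Y"
pb <- probe(
  po, nsim = 500, seed = 1066,
  probes = list(
    mean = probe_mean("Y"),
    sd = probe_sd("Y"),
    acf = probe_acf("Y", lags = 1:3, type = "correlation")
  )
)
summary(pb)   # observed probes, simulated quantiles, synthetic log-likelihood
plot(pb)      # data probes against the cloud of simulated probes
```

All other features of the data are simply ignored.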

Fuhan-Yang commented 8 months ago

Thanks for the explanation! In my understanding, probe matching computes a synthetic likelihood by fitting a multivariate normal distribution to the summary statistics of the simulations and evaluating the data's summary statistics under that distribution. If this is correct, what are the potential issues in fitting the model with a fixed seed, i.e., without accounting for the variability across simulations? I guess this could be addressed by a sensitivity analysis (for example, rerunning the fit with different seeds). However, we may still want the uncertainty of the likelihood itself (as with pfilter). Can we get it by evaluating probe_objfun multiple times at the MLE without setting the seed?
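Roughly what I mean, as a sketch (again on the `gompertz()` toy model; I am assuming that `seed = NULL` leaves the RNG free, as I read the `probe_objfun` docs, and that the objective returns the negative synthetic log-likelihood):

```r
library(pomp)

po <- gompertz()

## objective function with no fixed seed, so each call uses fresh
## simulations; 'est' names the parameters treated as free
f <- probe_objfun(
  po, est = c("r", "sigma"), nsim = 200, seed = NULL,
  probes = list(mean = probe_mean("Y"), acf = probe_acf("Y", lags = 1:3))
)

## evaluate repeatedly at the (putative) MLE to see the Monte Carlo spread
theta_hat <- coef(po, c("r", "sigma"))
sl <- replicate(30, -f(theta_hat))   # negate to get the synthetic log-likelihood
sd(sl)                               # Monte Carlo standard error
```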

kingaa commented 8 months ago

@Fuhan-Yang : these are also good questions. One can approach the problem of maximizing a noisy function like the synthetic likelihood in two ways. First, as you suggest, one can fix the seed, turning the noisy function into a deterministic but at least somewhat rugged function. Deterministic optimizers can then be applied, but the ruggedness means that one has to beware of local maxima that are globally sub-optimal. In addition, as you say, one has to give some thought to the dependence of the surface on the random seed. If the summary statistics combine information from many observations---as they often do---then the differences due to the random seed will very often be quite small. In any case, the statistical uncertainty in parameter estimates is distinct from the Monte Carlo error.

The second approach is to use a stochastic optimizer on the noisy function. One might even be so bold as to throw a nominally deterministic but quite robust optimizer such as Nelder-Mead at the noisy function. This has been known to give good results.
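For concreteness, a sketch of the fixed-seed workflow on the `gompertz()` toy model; the seed, probes, and starting point are arbitrary:

```r
library(pomp)

po <- gompertz()

## fixing the seed makes the objective deterministic (though possibly rugged)
f <- probe_objfun(
  po, est = c("r", "sigma"), nsim = 200, seed = 5886730,
  probes = list(mean = probe_mean("Y"), acf = probe_acf("Y", lags = 1:3))
)

fit <- optim(par = coef(po, c("r", "sigma")), fn = f, method = "Nelder-Mead")
f(fit$par)   # evaluate once more to update the objfun's internal state
coef(f)      # full parameter vector at the putative optimum
```

To guard against globally sub-optimal local maxima, repeat from several starting points and with several seeds.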

Fuhan-Yang commented 8 months ago

I see. Since fixing the seed makes the function deterministic, the remaining uncertainty comes not from the Monte Carlo stochasticity but from estimation error. In that case, we can use standard methods (such as profile likelihood) to quantify the estimation uncertainty, while recognizing the extra uncertainty across simulations. I guess this would be fine for model comparison, but it may still cause issues in forecasting.

I'm trying the first approach (fixed seed), but with a stochastic optimizer: simulated annealing. Nelder-Mead (and MCMC, for that matter) could barely move from the initial parameters, which may indicate that the surface is rugged. So now I run simulated annealing over the entire parameter ranges, then narrow the ranges (by keeping the parameters with the top 5% of likelihoods) and repeat. It seems to be working, in that the likelihood reaches its maximum and stays there; simulated annealing behaves better than Nelder-Mead. I haven't tried a stochastic optimizer in the spirit of mif2 (one that simulates and estimates at the same time) for probe matching; if you have any experience with that, I would like to hear!
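Roughly what I have been doing, as a sketch (on the `gompertz()` toy model; the ranges, the number of starts, and the 5% cutoff are my own choices, and I use optim's "SANN" method for the annealing):

```r
library(pomp)
set.seed(2047)

po <- gompertz()

## fixed-seed objective, as before
f <- probe_objfun(
  po, est = c("r", "sigma"), nsim = 200, seed = 5886730,
  probes = list(mean = probe_mean("Y"), acf = probe_acf("Y", lags = 1:3))
)

## stage 1: simulated annealing from random starts over wide parameter ranges
starts <- replicate(20, c(r = runif(1, 0.01, 1), sigma = runif(1, 0.01, 1)))
fits <- apply(starts, 2, function (start) {
  fit <- optim(par = start, fn = f, method = "SANN", control = list(maxit = 500))
  c(fit$par, negloglik = fit$value)
})

## stage 2: keep the top 5% by likelihood, narrow the ranges, and repeat
keep <- order(fits["negloglik", ])[seq_len(max(1, ncol(fits) %/% 20))]
apply(fits[c("r", "sigma"), keep, drop = FALSE], 1, range)   # narrowed ranges
```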