dflemin3 / approxposterior

A Python package for approximate Bayesian inference and optimization using Gaussian processes
https://dflemin3.github.io/approxposterior/
MIT License

Checking GP predictive accuracy on-the-fly #22

Open dflemin3 opened 6 years ago

dflemin3 commented 6 years ago

One way to check the accuracy of the GP's predictions is to compute, for each of the m new points in parameter space identified by the GP, the GP's prediction before running the forward model. These predictions can be cached and then compared against the results of the forward model to estimate metrics like relative error.
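Something like this minimal sketch could work, assuming a george GP that has already been conditioned on the current training set via gp.compute; forward_model is a hypothetical stand-in for the expensive forward model, not part of the actual API:

```python
import numpy as np

def check_gp_accuracy(gp, y_train, theta_new, forward_model):
    """Cache the GP's predictions at the m new points, then run the
    forward model and compute the relative error of each prediction."""
    theta_new = np.atleast_2d(theta_new)

    # GP predictive mean and variance, cached *before* any forward
    # model evaluations are run
    mu, var = gp.predict(y_train, theta_new, return_var=True)

    # True values from the (expensive) forward model
    y_true = np.array([forward_model(t) for t in theta_new])

    # Relative error of the cached GP predictions
    rel_err = np.abs(mu - y_true) / np.abs(y_true)
    return mu, np.sqrt(var), y_true, rel_err
```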

dflemin3 commented 5 years ago

Accidentally closed the wrong issue!

dflemin3 commented 5 years ago

This task can be accomplished in a few ways:

1) k-folds cross-validation: At any point in the algorithm, I can partition the training set, {theta, y}, into k folds, training on k-1 folds and predicting on the kth fold. This procedure yields a decent estimate of the accuracy and uncertainty of the predictions that can be compared to the GP's own uncertainty from its conditional predictive distribution (see the sketch at the end of this comment).

2) on-the-fly: Before running the forward model at a point the GP has selected, I can use the GP to predict the function value there, with uncertainties under the GP model, and then assess the accuracy of that prediction once the forward model has run (as sketched in the first comment above).

Both of these methods should be implemented, and neither should be too computationally expensive: re-training the GP for a given set of hyperparameters is quick with george since, by design, our training set sizes are small.
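A minimal sketch of the k-folds option, assuming fixed kernel hyperparameters so that each per-fold "re-train" is just re-factorizing the kernel matrix via gp.compute; the ExpSquaredKernel and the use of scikit-learn's KFold here are illustrative assumptions, not necessarily what approxposterior will use:

```python
import numpy as np
import george
from george import kernels
from sklearn.model_selection import KFold

def kfold_gp_errors(theta, y, k=5, yerr=1e-8):
    """Hold out each of k folds in turn, condition the GP on the rest,
    and return held-out predictions, predictive stddevs, and truths."""
    ndim = theta.shape[1]
    kernel = kernels.ExpSquaredKernel(metric=1.0, ndim=ndim)  # assumed kernel

    mus, sigmas, truths = [], [], []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(theta):
        gp = george.GP(kernel)
        gp.compute(theta[train_idx], yerr)  # cheap "re-train": refactorize kernel matrix
        mu, var = gp.predict(y[train_idx], theta[test_idx], return_var=True)
        mus.append(mu)
        sigmas.append(np.sqrt(var))
        truths.append(y[test_idx])

    return (np.concatenate(mus), np.concatenate(sigmas),
            np.concatenate(truths))
```

Comparing |mu - y| on the held-out points against the returned predictive stddevs then indicates whether the GP's own uncertainty estimates are well-calibrated.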