pints-team / pints

Probabilistic Inference on Noisy Time Series
http://pints.readthedocs.io

Comparison matrix of models versus optimisers versus inference methods #229

Closed by martinjrobins 2 months ago

martinjrobins commented 6 years ago

I'm writing a repo that simply takes all the toy models in Pints and all the methods (optimisers and inference), and tests every method against every model. This will take a while, so it's all running on arcus-b (there is a lot of machine-specific stuff in there, so it isn't suitable to put into Pints itself).
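The every-method-against-every-model idea can be sketched as a simple cross-product loop. This is a minimal, self-contained illustration, not the actual benchmark repo: the two toy "models" and two optimisers below are hypothetical stand-ins for the Pints toy models and optimisation methods, sharing one call interface so they can be crossed freely.

```python
import random

# Hypothetical stand-ins for toy models: each maps a candidate
# parameter to an error score (lower is better).
def quadratic_model(x):
    return (x - 3.0) ** 2

def abs_model(x):
    return abs(x + 1.5)

# Two simple optimisers sharing one interface:
# (score_fn, n_evals, rng) -> best score found.
def random_search(score_fn, n_evals, rng):
    return min(score_fn(rng.uniform(-10, 10)) for _ in range(n_evals))

def grid_search(score_fn, n_evals, rng):
    step = 20.0 / (n_evals - 1)
    return min(score_fn(-10 + i * step) for i in range(n_evals))

def comparison_matrix(models, optimisers, n_evals=200, seed=1):
    """Run every optimiser on every model; return {(model, optimiser): score}."""
    rng = random.Random(seed)
    return {
        (m_name, o_name): opt(model, n_evals, rng)
        for m_name, model in models.items()
        for o_name, opt in optimisers.items()
    }

models = {"quadratic": quadratic_model, "abs": abs_model}
optimisers = {"random": random_search, "grid": grid_search}
scores = comparison_matrix(models, optimisers)
```

The resulting dictionary has one score per (model, optimiser) pair, which is exactly the shape needed for a heat map.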

When I'm comparing optimisers, I compare using the following criteria:

I might also average these results over multiple runs of the optimiser, since some of them will be stochastic.

I'm less sure how to compare the inference methods, perhaps:

What other criteria do you all think are necessary, @MichaelClerx @ben18785 @sanmitraghosh @mirams @chonlei? I'm hoping this will give a bunch of heat maps comparing the performance of all of our methods, and will go into the first paper.

mirams commented 6 years ago

A direct count of the number of forward solves involved in getting to an optimum/converged posterior is nice to have.
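Counting forward solves can be done without touching the optimiser at all, by wrapping the model in a counting proxy. A minimal sketch of that idea (the wrapped lambda below is a made-up model, not a Pints class):

```python
class CountingModel:
    """Wrap a forward model so every evaluation is counted.

    The optimiser or sampler sees the same callable; afterwards the
    count lets methods be compared by number of forward solves
    rather than by wall-clock time.
    """

    def __init__(self, forward_fn):
        self._forward_fn = forward_fn
        self.n_solves = 0

    def __call__(self, params):
        self.n_solves += 1
        return self._forward_fn(params)

# Usage: wrap a (hypothetical) model, then run any routine against it.
model = CountingModel(lambda x: (x - 2.0) ** 2)
best = min(model(x * 0.5) for x in range(10))
# model.n_solves is now 10
```

Because the wrapper only intercepts calls, the same trick works for optimisers and MCMC samplers alike.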

MichaelClerx commented 6 years ago

A direct count of the number of forward solves involved in getting to an optimum/converged posterior is nice to have.

Good point! Relates to #203

martinjrobins commented 6 years ago

Here are a couple of heat maps I did using the optimisers, just one run each, and 1% noise:

score_with_noise_0.pdf time_with_noise_0.pdf

MichaelClerx commented 6 years ago

Just discussing if, for optimisers, we want to show the mean score of multiple runs, or the best score.

@DavidGavaghan ?

mirams commented 6 years ago

I think you really want to see the distribution of optimiser results; it will have a big impact on use whether they always get the same answer or return a wide distribution of results.

martinjrobins commented 6 years ago

Yeah, I think you're right. I'm storing the results from all the independent runs of the optimisers, and plotting the mean and minimum scores and execution_time. But all the data is there, so we can post-process any other statistic you might wish for.
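Since every run's score is stored, other statistics can be derived afterwards. A small sketch of that post-processing step, using made-up scores (the optimiser names are real Pints methods, but the numbers are illustrative only):

```python
import statistics

# Hypothetical stored results: one score per independent optimiser run.
runs = {
    "CMA-ES": [0.12, 0.11, 0.13, 0.12, 0.50],
    "XNES": [0.30, 0.10, 0.90, 0.05, 0.70],
}

def summarise(scores):
    """Mean, best, and spread for one optimiser's repeated runs."""
    return {
        "mean": statistics.mean(scores),
        "best": min(scores),
        "stdev": statistics.stdev(scores),
    }

summary = {name: summarise(s) for name, s in runs.items()}
```

Plotting the spread alongside the mean and minimum captures the point above: a method with a tight distribution is more trustworthy in practice than one that occasionally hits a better score.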


MichaelClerx commented 3 months ago

@martinjrobins can this one be closed?