megalut / megalut


Get prediction errors via predicting several realizations #82

Open mtewes opened 10 years ago

mtewes commented 10 years ago

An appealing road to get there:

Develop the option "mode = all" of learn.run.predict(). From the doc:

If "all", it will predict all realizations (_0, _1, ...), and then use groupstats to compute statistics of the predictions coming from the different realizations.

Once this is done, we can get a single simulation catalog containing (1) predictions made on averaged measurements, and (2) standard deviations of the predictions obtained from the different realizations.
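A minimal numpy sketch of the per-realization statistics described above (the array names and shapes are hypothetical illustrations, not the actual megalut/groupstats API):

```python
import numpy as np

# Hypothetical per-realization predictions: one row per realization
# (_0, _1, ...) of the same set of simulated galaxies.
nrea, ngal = 10, 5
rng = np.random.default_rng(0)
preds = rng.normal(loc=0.3, scale=0.05, size=(nrea, ngal))

# groupstats-style statistics across the realization axis:
pred_mean = preds.mean(axis=0)  # (1) prediction per galaxy
pred_std = preds.std(axis=0)    # (2) scatter of the predictions across realizations

print(pred_mean.shape, pred_std.shape)
```

The per-galaxy standard deviation `pred_std` is the quantity that the second training stage would then learn to predict.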

Now simply train a second stage of machine learning to predict (2) based on (1), and apply this to real observations (using as features the predictions from the first stage) to get (statistical) error bars for every predlabel!
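To illustrate the second stage, here is a toy numpy-only sketch that stands in for the actual machine learning: it learns the mean per-realization scatter (2) as a function of the first-stage prediction (1) via simple binning, then reads off error bars for new predictions. All data and names here are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation catalog: first-stage predictions (1) and the
# per-realization standard deviations (2) as the regression target.
pred = rng.uniform(-1.0, 1.0, size=200)
pred_std = 0.1 + 0.05 * np.abs(pred) + rng.normal(0.0, 0.005, size=200)

# "Second stage": bin the predictions and record the mean scatter per bin
# (a crude stand-in for training an actual ML regressor on (1) -> (2)).
bins = np.linspace(-1.0, 1.0, 11)
idx = np.clip(np.digitize(pred, bins) - 1, 0, len(bins) - 2)
binned_std = np.array([pred_std[idx == i].mean() for i in range(len(bins) - 1)])

# Apply to "observed" first-stage predictions to get error bars:
obs_pred = np.array([0.05, 0.85])
obs_idx = np.clip(np.digitize(obs_pred, bins) - 1, 0, len(bins) - 2)
errbars = binned_std[obs_idx]
```

In the real pipeline the binning would be replaced by whatever regression method megalut already uses for the first stage, with the first-stage predictions as features.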

So far this sounds great to me. I might try to get something working this week, as I think that we should really show that we do have very nice error estimates (aka weights) for our predictions.

mtewes commented 10 years ago

Argh, sorry, wrong button -- reopening this issue.

mtewes commented 9 years ago

I'll attempt to implement this "mode = all" option of learn.run.predict() now. As making nrea predictions might be slow for large nrea, I'll maybe offer the possibility to run on only a subset of the realizations, rather than all of them.

I'll make a new branch for this (based on #78 = open pull request).

mtewes commented 9 years ago

It doesn't look all wrong -- here is a very first test of the new groupstats (pull request #87):

(screenshot: 2015-01-15 at 14:02)