spiralulam / bumblebo


Model uncertainty using resampling #5

Open spiralulam opened 2 years ago

spiralulam commented 2 years ago

Take N bootstrap samples and train N models (e.g. N random forest models) on them. These models may give different predictions for an unseen x, and the spread of those predictions can serve as the uncertainty measure.
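A minimal sketch of this bootstrap-ensemble idea, assuming a generic regression setting; `X_train`, `y_train` and `x_new` are placeholder names, not part of the bumblebo API.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

def bootstrap_ensemble(X_train, y_train, x_new, n_models=10):
    """Train n_models Random Forests on bootstrap samples and return the mean
    and standard deviation of their predictions for a single unseen point x_new."""
    preds = []
    for i in range(n_models):
        # Draw a bootstrap sample (sampling with replacement) of the training data.
        X_bs, y_bs = resample(X_train, y_train, random_state=i)
        model = RandomForestRegressor(random_state=i).fit(X_bs, y_bs)
        preds.append(model.predict(x_new.reshape(1, -1))[0])
    preds = np.asarray(preds)
    return preds.mean(), preds.std()
```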

spiralulam commented 2 years ago

A second idea would be to use N (e.g. N=20) DIFFERENT models from sklearn (rf, dnn, linear regression, ...) for the prediction instead of one model, and use the resulting prediction interval as the uncertainty measure. What do you think?
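A rough sketch of this heterogeneous-ensemble variant; the particular estimators (random forest, linear regression, MLP as a stand-in for the "dnn") and the min/max interval are illustrative assumptions, not a fixed design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def heterogeneous_ensemble(X_train, y_train, x_new):
    """Fit several different sklearn regressors and use the spread of their
    predictions for x_new as a rough uncertainty measure."""
    models = [
        RandomForestRegressor(random_state=0),
        LinearRegression(),
        MLPRegressor(max_iter=2000, random_state=0),
    ]
    preds = np.array([m.fit(X_train, y_train).predict(x_new.reshape(1, -1))[0]
                      for m in models])
    # The interval spanned by the ensemble predictions serves as the uncertainty.
    return preds.mean(), (preds.min(), preds.max())
```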

spiralulam commented 2 years ago

If "uncertainty" == "resampling", eval_acquisition_function should combine the model prediction with an uncertainty measure based on N models obtained via resampling (https://scikit-learn.org/stable/modules/generated/sklearn.utils.resample.html).
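A hypothetical sketch of how eval_acquisition_function could branch on the "uncertainty" setting; the signature, `kappa`, and `n_models` are assumptions, not the existing bumblebo interface.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

def eval_acquisition_function(x, X_train, y_train, uncertainty="resampling",
                              n_models=10, kappa=1.0):
    """Upper-confidence-bound style score: mean prediction plus kappa times
    the spread of n_models models trained on resampled data."""
    if uncertainty != "resampling":
        raise NotImplementedError(f"unknown uncertainty measure: {uncertainty}")
    preds = []
    for i in range(n_models):
        X_bs, y_bs = resample(X_train, y_train, random_state=i)
        model = RandomForestRegressor(random_state=i).fit(X_bs, y_bs)
        preds.append(model.predict(x.reshape(1, -1))[0])
    preds = np.asarray(preds)
    return preds.mean() + kappa * preds.std()
```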

bertiqwerty commented 2 years ago

What about adding model-specific uncertainty calculations besides resampling as the default? For instance, for NNs, instead of resampling and retraining, one can train only once and use dropout during prediction (Monte-Carlo dropout).
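A rough sketch of Monte-Carlo dropout in PyTorch; the torch-based network and layer sizes are assumptions, since the repo's NN stack is not specified here. The model is trained once, then dropout is kept active at prediction time and several stochastic forward passes are averaged.

```python
import torch
import torch.nn as nn

# Example network with a dropout layer; train it once as usual.
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(net, x, n_samples=50):
    """Mean and std over n_samples stochastic forward passes with dropout enabled."""
    net.eval()
    for module in net.modules():
        if isinstance(module, nn.Dropout):
            module.train()  # re-enable dropout only, keep everything else in eval mode
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```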

spiralulam commented 2 years ago

> What about adding model-specific uncertainty calculations besides resampling as the default? For instance, for NNs, instead of resampling and retraining, one can train only once and use dropout during prediction (Monte-Carlo dropout).

Sounds good!