Open spiralulam opened 2 years ago
A second idea would be to use N (e.g. N = 20) *different* model types from sklearn (random forest, neural network, linear regression, ...) for the prediction instead of a single model, and use the spread of their predictions (a prediction interval) as the uncertainty measure. What do you think?
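A minimal sketch of that idea, assuming a regression setting (the data, model choices, and N = 3 here are just illustrative): fit several heterogeneous sklearn models and use the spread of their predictions as the uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# hypothetical toy data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# N different model types (here N = 3 for brevity)
models = [
    RandomForestRegressor(n_estimators=50, random_state=0),
    LinearRegression(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(X, y)

X_new = np.array([[0.5], [2.5]])
preds = np.stack([m.predict(X_new) for m in models])  # shape (n_models, n_points)
mean = preds.mean(axis=0)   # ensemble prediction
std = preds.std(axis=0)     # disagreement across model types as uncertainty
```

One caveat: the spread here mixes model bias with genuine uncertainty, so strongly mis-specified members (e.g. the linear model on nonlinear data) can inflate the interval.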
eval_acquisition_function should combine the model prediction with an uncertainty measure based on N models trained on resampled data (https://scikit-learn.org/stable/modules/generated/sklearn.utils.resample.html), if `uncertainty == "resampling"`.
What about adding model-specific uncertainty calculations besides resampling as the default? For instance, for neural networks, instead of resampling and retraining one can train only once and keep dropout active during prediction (MC dropout).
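A sketch of the dropout-at-prediction idea in plain NumPy, assuming a tiny already-trained one-hidden-layer network (the weights here are random stand-ins, not a trained model): each stochastic forward pass drops a different subset of hidden units, and the spread of the repeated predictions serves as the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for an already-trained network: 1 input -> 32 hidden -> 1 output
W1 = rng.normal(size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def predict_with_dropout(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # dropout kept ACTIVE at prediction time
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return (h @ W2 + b2).ravel()

x = np.array([[0.5]])
samples = np.stack([predict_with_dropout(x) for _ in range(100)])
mean = samples.mean(axis=0)   # prediction
std = samples.std(axis=0)     # uncertainty from dropout variation
```

This trains once and only repeats cheap forward passes, versus retraining N models under resampling.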
Sounds good!
Take N bootstrap samples and train N models (e.g. N random forest models) on them; the models may then produce different predictions for an unseen x.
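A sketch of the bootstrap approach using `sklearn.utils.resample` as suggested above (data and N = 5 are illustrative): train one model per bootstrap sample and use the per-point standard deviation across the models as the uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

# hypothetical toy data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

N = 5  # e.g. N = 20 in practice; kept small here
models = []
for i in range(N):
    # draw a bootstrap sample (with replacement, same size as the data)
    Xb, yb = resample(X, y, random_state=i)
    models.append(RandomForestRegressor(n_estimators=25, random_state=i).fit(Xb, yb))

X_new = np.array([[0.0], [2.0]])        # unseen points
preds = np.stack([m.predict(X_new) for m in models])  # shape (N, n_points)
mean = preds.mean(axis=0)               # prediction
std = preds.std(axis=0)                 # disagreement across bootstrap models
```

`mean` and `std` are then the two ingredients an acquisition function would combine under `uncertainty == "resampling"`.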