Right now `predict.caretEnsemble` uses model disagreement to provide the uncertainty around individual predictions, but it calls these standard errors. They are most certainly not standard errors, and they do not reflect the total uncertainty in a prediction, which arises from both the uncertainty within the individual models and the uncertainty in the weighting of the models.

This needs to be documented. Later implementations can seek out a better measure of the uncertainty in the predictions, perhaps via a new function `se.caretEnsemble` that extracts an uncertainty estimate or bootstraps predictions from the component models.
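A minimal sketch of what such a bootstrap could look like for regression ensembles, assuming the ensemble object exposes its component caret models in `$models` and their linear weights in `$weights` (both field names are assumptions here, not the current object layout). It resamples the component models with probability proportional to their weights, so the resulting spread reflects uncertainty in the weighting, though it still ignores the uncertainty within each component model:

```r
# Hypothetical se.caretEnsemble: per-observation bootstrap spread of
# ensemble predictions, obtained by resampling component models.
se.caretEnsemble <- function(ensemble, newdata, B = 1000) {
  # n x M matrix of component predictions (regression only)
  preds <- sapply(ensemble$models, predict, newdata = newdata)
  w <- ensemble$weights

  # Each replicate: draw M models with replacement, weighted by the
  # ensemble weights, and average their predictions.
  boot <- replicate(B, {
    pick <- sample(seq_along(w), replace = TRUE, prob = w)
    rowMeans(preds[, pick, drop = FALSE])
  })

  # Per-observation standard deviation across bootstrap replicates
  apply(boot, 1, sd)
}
```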