When learning a model (whether offline or online), we can attach some notion of confidence to its performance predictions. This can be interpreted as an estimate of model uncertainty. For example, the model can report how confident it is in its prediction of the battery consumption of a particular configuration, and this confidence may vary from configuration to configuration. For configurations that we have actually measured, this uncertainty is zero, or close to zero depending on measurement inaccuracy. These confidence estimates could be mapped to probabilities in planning, which would be an interesting direction to explore for probabilistic planning at runtime.
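One way to obtain such per-configuration confidence is an ensemble of models: each member is trained on a bootstrap resample of the measurements, and the spread of the members' predictions serves as an uncertainty estimate. The sketch below is illustrative only; the data, the linear-model choice, and all names are assumptions, not a method from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured data: 30 configurations (3 options each) and
# their observed battery consumption. Values are synthetic.
X_measured = rng.uniform(0, 1, size=(30, 3))
y_measured = 2.0 * X_measured[:, 0] + X_measured[:, 1] + rng.normal(0, 0.05, 30)

def fit_member(X, y):
    # Fit one ensemble member on a bootstrap resample (least squares
    # with an intercept column appended to the features).
    idx = rng.integers(0, len(X), len(X))
    A = np.c_[X[idx], np.ones(len(idx))]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef

members = [fit_member(X_measured, y_measured) for _ in range(50)]

def predict_with_confidence(x):
    # Mean of member predictions is the estimate; their standard
    # deviation is the per-configuration uncertainty.
    a = np.r_[x, 1.0]
    preds = np.array([a @ c for c in members])
    return preds.mean(), preds.std()

# A configuration near the measured data should get low uncertainty;
# one far outside the measured range should get a larger spread.
mean_near, std_near = predict_with_confidence(X_measured[0])
mean_far, std_far = predict_with_confidence(np.array([5.0, 5.0, 5.0]))
```

A planner could then treat `std` as input to a probabilistic model, e.g. preferring configurations whose predicted consumption is both low and low-variance.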