Open velezbeltran opened 1 year ago
Given that uncertainty quantification is often needed in settings where confidence in predictions is required, it would be nice to include a task in FLAML that tackles this setting.
There are a couple of questions about how to do this.
- Should we create a separate task, or is it enough to give instructions on how to use the uncertainty estimates from models like CatBoost?
The latter is preferred. Some code changes in FLAML are required to get uncertainty estimates from CatBoost.
- If we create another task, is it necessary to implement other validation metrics, like proper scoring rules (e.g., log-likelihood, CRPS)?
- Should we focus only on CatBoost, or would it be helpful to implement other methods for uncertainty quantification?
It depends on whether we can have a unified API for all the libraries. If we can, supporting multiple libraries through a unified API would be additional value for users.
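On the metrics question: the proper scoring rules mentioned above have simple closed forms when the model outputs a Gaussian predictive distribution (a mean and a variance per sample), which is the shape of CatBoost-style uncertainty estimates. A minimal, library-agnostic sketch, assuming Gaussian predictions; the function names here are illustrative and not part of FLAML's or CatBoost's API:

```python
import math

def gaussian_nll(y, mu, sigma):
    """Negative log-likelihood of observation y under N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

def gaussian_crps(y, mu, sigma):
    """Closed-form CRPS of observation y under N(mu, sigma^2)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal density at z
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # standard normal CDF at z
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Both rules are proper: in expectation they are minimized by the true
# distribution, so either could serve as a validation metric for an
# uncertainty-quantification task.
```

Since FLAML already supports custom metric functions for tuning, metrics like these could be averaged over a validation set and plugged in without a separate task, which is one argument for the instructions-only route.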