Description
This PR improves the calibration of intent classification probabilities. Having calibrated probabilities improves interpretability.
There are two cases:
1) intent classification with no intents filter
The binary logistic regression is naturally well calibrated (see this for more details), so we want to use the One-vs-All probabilities directly, without renormalizing them.
2) intent classification with intents filter
In this case, the prior over intents is no longer uniform, as the scope of intents that the input can belong to is reduced. However, the logistic regression probabilities, as they are, do not leverage this prior information, and the One-vs-All probabilities are likely to be very poorly calibrated.
A way to improve calibration in this case is to renormalize the One-vs-All probabilities after setting the probabilities of out-of-scope intents to 0.0.
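The renormalization step for the filtered case can be sketched as follows. This is a minimal illustration, not the actual implementation from this PR; the function name and argument layout are hypothetical.

```python
import numpy as np

def renormalize_with_filter(probs, intent_names, allowed_intents):
    """Zero out One-vs-All probabilities of out-of-scope intents,
    then renormalize the remaining ones so they sum to 1.

    probs           -- One-vs-All probabilities, one per intent (need not sum to 1)
    intent_names    -- intent label for each entry of `probs`
    allowed_intents -- the intents filter (set of in-scope intent names)
    """
    probs = np.asarray(probs, dtype=float)
    # Set out-of-scope probabilities to 0.0
    mask = np.array([name in allowed_intents for name in intent_names],
                    dtype=float)
    filtered = probs * mask
    # Renormalize over the remaining in-scope intents
    total = filtered.sum()
    if total > 0:
        filtered = filtered / total
    return filtered
```

With no filter (all intents allowed), the One-vs-All probabilities would instead be returned as-is, since renormalizing them is exactly what this PR avoids in case 1.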