microsoft / responsible-ai-toolbox

Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
https://responsibleaitoolbox.ai/
MIT License

Threshold scrollbar for Fairness Dashboard #479

Open michaelamoako opened 3 years ago

michaelamoako commented 3 years ago

Rather than pass in predictions, it would be useful if I could pass in the confidence scores of a binary classification model and use a scrollbar to vary the threshold within the dashboard itself (to see how the metrics change as a result).

Perhaps a threshold parameter could set the default value (where the scrollbar starts).
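To make the request concrete, here is a minimal sketch of what such a scrollbar would compute behind the scenes: given per-sample confidence scores, sweep a decision threshold and recompute a group metric at each step. All names below are illustrative and assume nothing about the Fairness Dashboard's internals.

```python
# Illustrative sketch (not the Fairness Dashboard API): sweep a decision
# threshold over confidence scores and recompute a per-group metric.

def binarize(scores, threshold):
    """Turn confidence scores into 0/1 predictions at a given threshold."""
    return [1 if s >= threshold else 0 for s in scores]

def selection_rate(preds):
    """Fraction of samples predicted positive."""
    return sum(preds) / len(preds)

def sweep_thresholds(scores, sensitive, thresholds):
    """Selection rate per sensitive-feature group at each candidate threshold."""
    groups = sorted(set(sensitive))
    results = {}
    for t in thresholds:
        preds = binarize(scores, t)
        results[t] = {
            g: selection_rate([p for p, s in zip(preds, sensitive) if s == g])
            for g in groups
        }
    return results

scores = [0.1, 0.4, 0.6, 0.8, 0.3, 0.7, 0.9, 0.65]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(sweep_thresholds(scores, sensitive, [0.25, 0.5, 0.75]))
```

A dashboard scrollbar would effectively pick one `t` from this sweep interactively and redraw the metrics for that threshold.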

michaelamoako commented 3 years ago

Added component: Ability to choose threshold as the value on the X-axis

riedgar-ms commented 3 years ago

Not quite what you're asking, but you can start up a dashboard with y_pred as a probability, rather than a class. You'll get a different set of metrics, which might help until what you're describing can be implemented.
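To illustrate the workaround in spirit: when `y_pred` holds probabilities rather than class labels, the metrics that make sense are regression-style ones such as mean absolute error per group. The helper below is a hand-rolled illustration, not the dashboard's internal logic.

```python
# Illustrative only: the kind of per-group metric that applies when
# predictions are probabilities instead of hard class labels.

def group_mean_error(y_true, y_prob, sensitive):
    """Mean absolute error between labels and probabilities, per group."""
    out = {}
    for g in sorted(set(sensitive)):
        errs = [abs(t - p)
                for t, p, s in zip(y_true, y_prob, sensitive) if s == g]
        out[g] = sum(errs) / len(errs)
    return out

y_true = [0, 1, 1, 0]
y_prob = [0.2, 0.9, 0.6, 0.3]
sensitive = ["a", "a", "b", "b"]
print(group_mean_error(y_true, y_prob, sensitive))
```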

michaelamoako commented 3 years ago

The data contains binary labels and the model makes binary predictions; changing the predictions to probabilities would remove the notion of a threshold, which in my case is not possible.

riedgar-ms commented 3 years ago

Ahhh, I thought you might have access to predict_proba() or something like that, which would give you probabilities. It wouldn't get you everything, but it might help with alternative plots.
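For context: scikit-learn-style classifiers expose `predict_proba()`, which returns one column per class, so the second column is the positive-class probability for a binary model. The stub class below stands in for a real trained estimator and is purely illustrative.

```python
# Stub mimicking the scikit-learn predict_proba() shape: [[p(0), p(1)], ...].
# A real estimator would be trained; this one is hard-coded for illustration.

class StubBinaryClassifier:
    def predict_proba(self, X):
        return [[1 - 0.1 * x, 0.1 * x] for x in X]

model = StubBinaryClassifier()
X = [1, 5, 9]
# Take the second column: positive-class confidence scores.
y_prob = [row[1] for row in model.predict_proba(X)]
print(y_prob)
```

These scores are what could then be passed to the dashboard in place of hard labels.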

romanlutz commented 2 years ago

This reminds me somewhat of https://research.google.com/bigpicture/attacking-discrimination-in-ml/

I read this like @riedgar-ms and assumed that you want to threshold on the probabilities (or as you called them: "confidence scores"). You can very much threshold on probabilities. That's in fact what many unfairness mitigation techniques do, see the link above or ThresholdOptimizer in fairlearn. I do agree, though, that we'd need to think about what gets passed in.