Trusted-AI / AIF360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
https://aif360.res.ibm.com/
Apache License 2.0

implement "double-corrected" variance estimator #334

Open kvarsh opened 2 years ago

kvarsh commented 2 years ago

The recent paper "De-Biasing 'Bias' Measurement" by Lum, Zhang, and Bower shows that fairness metrics which measure variation in model performance across more than two groups are, when computed naively, themselves statistically biased estimators of the true disparity. In Section 5 of their paper, they propose a "double-corrected" variance estimator that removes this bias and provides unbiased estimates, with uncertainty quantification, of the variance of model performance across groups. This issue is to implement that estimator. Doing so will require careful handling of protected attributes with many values, which should not be collapsed into privileged and unprivileged sets.
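
For context, here is a minimal sketch of the underlying idea in NumPy. This is not the paper's exact double-corrected estimator and not an existing AIF360 API; the function names (`naive_group_variance`, `corrected_group_variance`) are made up for illustration. The point it shows: the naive sample variance of per-group accuracies is biased upward because each group's accuracy estimate carries sampling noise, so a corrected estimator subtracts an estimate of the average within-group sampling variance.

```python
import numpy as np

def naive_group_variance(y_true, y_pred, groups):
    """Sample variance of per-group accuracy.

    Statistically biased upward, because each group's accuracy
    estimate includes its own sampling noise.
    """
    accs = np.array([
        np.mean(y_pred[groups == g] == y_true[groups == g])
        for g in np.unique(groups)
    ])
    return accs.var(ddof=1)

def corrected_group_variance(y_true, y_pred, groups):
    """Bias-corrected variance of per-group accuracy (illustrative sketch).

    Subtracts an estimate of the average sampling variance of the group
    accuracies from the naive across-group variance. The paper's
    "double-corrected" estimator additionally corrects the bias of the
    plug-in sampling-variance estimate itself; here we simply use the
    unbiased p*(1-p)/(n-1) form for a binomial proportion.
    """
    accs, samp_vars = [], []
    for g in np.unique(groups):
        mask = groups == g
        n_g = mask.sum()
        p_g = np.mean(y_pred[mask] == y_true[mask])
        accs.append(p_g)
        # unbiased estimate of Var(p_hat_g) for a binomial proportion
        samp_vars.append(p_g * (1 - p_g) / (n_g - 1))
    accs = np.array(accs)
    return accs.var(ddof=1) - np.mean(samp_vars)
```

Note that the estimator operates on the full set of group-wise performance values, which is why the protected attribute must be kept multi-valued rather than binarized into privileged/unprivileged sets.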