ModelOriented / fairmodels

Flexible tool for bias detection, visualization, and mitigation
https://fairmodels.drwhy.ai/
GNU General Public License v3.0

Orientation of Equal opportunity (FNR) #20

Closed jakwisn closed 4 years ago

jakwisn commented 4 years ago

After the last change, the orientation of the Equal opportunity ratio is inverted. Fewer false negatives is a good thing (at least from the perspective of a human who might be affected by AI), so the scale of this metric should be inverted to match the other metrics and be easily interpretable.

jakwisn commented 4 years ago

After a few tests, I realized that the better option is to leave it as is and change the documentation. It currently suggests that the unprivileged subgroups on the left side of the fairness check are the deprived ones, which is no longer the case.

jakwisn commented 4 years ago

Equal opportunity will now be based on TPR, as it originally should have been. This way, the properties stated before are satisfied.
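To illustrate the point of the resolution: since TPR = 1 - FNR, defining equal opportunity via TPR keeps the "higher is better" orientation of the other parity metrics, so a ratio below 1 still marks a deprived subgroup. Below is a minimal Python sketch of that TPR-based ratio (fairmodels itself is an R package; the function names `tpr` and `equal_opportunity_ratio` here are hypothetical, not the library's API):

```python
import numpy as np

def tpr(y_true, y_pred):
    # True positive rate: TP / (TP + FN). Equivalently 1 - FNR.
    positives = y_true == 1
    return np.mean(y_pred[positives] == 1)

def equal_opportunity_ratio(y_true, y_pred, group, privileged):
    # Ratio of each unprivileged subgroup's TPR to the privileged
    # subgroup's TPR. Values below 1 indicate the subgroup is deprived,
    # matching the orientation of the other fairness metrics.
    priv_mask = group == privileged
    priv_tpr = tpr(y_true[priv_mask], y_pred[priv_mask])
    return {
        g: tpr(y_true[group == g], y_pred[group == g]) / priv_tpr
        for g in np.unique(group)
        if g != privileged
    }

# Example: the privileged group 'a' has TPR = 1.0, group 'b' has
# TPR = 0.5, so the ratio for 'b' is 0.5 (deprived).
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equal_opportunity_ratio(y_true, y_pred, group, "a"))
```

Had the metric been left as an FNR ratio, the same deprived subgroup would have scored above 1, which is why the scale would otherwise have needed inverting.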