ModelOriented / fairmodels

Flexible tool for bias detection, visualization, and mitigation
https://fairmodels.drwhy.ai/
GNU General Public License v3.0

About the bar at right side (Fair models) #51

Closed Nehabtkk closed 1 year ago

Nehabtkk commented 1 year ago

Hi Jakub, I hope you are well.

I am using the fairmodels library for classification on my data and got the graph below. Can you help me interpret it? Specifically, what does a bar extending to the right of the 1.0 score mean? I know that if a bar enters the pink region we can say that some bias exists, but what about the bars on the right side?

Also, since I am returning to this library after a long time (about a year), have there been any advancements? Can fairness now be checked for the output feature (earlier it was only intended for input features)?

The graph is shown here.


[attached screenshot: fairness check plot]

jakwisn commented 1 year ago

Hi Neha, thanks for the question! Let me quote a section from the tutorial: https://modeloriented.github.io/fairmodels/articles/Basic_tutorial.html

> If bars reach the red field on the left, it means there is bias towards a certain unprivileged subgroup. If they reach the one on the right, it means bias towards the privileged subgroup.

In other words, every bar starts at 1 because that is the ideal ratio between a metric computed for the unprivileged subgroup and the same metric for the privileged one. If you divide the smaller metric value by the larger one, the ratio is below 1 and the bar extends to the left; otherwise the ratio is above 1 and the bar extends to the right.
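To make the ratio logic concrete, here is a minimal sketch in Python. This is not the fairmodels API (fairmodels is an R package); the function names and the 0.8 threshold (the common four-fifths rule, which fairmodels also uses as its default `epsilon`) are illustrative assumptions.

```python
# Hypothetical sketch, NOT the fairmodels API: how a fairness-check
# ratio determines which side of 1.0 a bar falls on.

def metric_ratio(unprivileged_metric: float, privileged_metric: float) -> float:
    """Ratio of some metric (e.g. TPR) for unprivileged vs privileged subgroup."""
    return unprivileged_metric / privileged_metric

def within_fair_region(ratio: float, epsilon: float = 0.8) -> bool:
    """Four-fifths-style check: ratios inside [epsilon, 1/epsilon] are
    considered acceptable; outside that interval the bar would enter
    the red/pink region of the plot."""
    return epsilon <= ratio <= 1 / epsilon

# Unprivileged subgroup scores lower -> ratio < 1, bar extends LEFT of 1.0.
left_bar = metric_ratio(0.60, 0.80)

# Unprivileged subgroup scores higher -> ratio > 1, bar extends RIGHT of 1.0.
right_bar = metric_ratio(0.90, 0.75)

print(left_bar, within_fair_region(left_bar))
print(right_bar, within_fair_region(right_bar))
```

So a bar to the right of 1.0 is not automatically a problem; it only signals potential bias (towards the privileged group) once it crosses into the shaded region, just as a left bar does for the unprivileged group.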

As I think I mentioned previously, you can pass an output variable to the package as long as it makes sense with the interpretation. But honestly, it is hard for me to come up with a good example of why you would do that, so unless you are completely sure, I would not recommend it. Fairness can be viewed through the lens of causality: we assume that something had an effect on something else, and our model should reflect that.

jakwisn commented 1 year ago

Closing for now; if it is not clear enough, please reopen.