christophM / interpretable-ml-book

Book about interpretable machine learning
https://christophm.github.io/interpretable-ml-book/

Update verbiage on bias example #220

Closed mhoeger closed 3 years ago

mhoeger commented 3 years ago

This might be my American take on it, but the phrase "protected groups" is confusing to me. In the U.S. at least, there are protected "qualities" like age, sex, national origin, ethnic background, etc., meaning that you cannot discriminate based on those qualities. I think all groups that are formed along the lines of those qualities are "protected."

I'm also proposing to add the phrase "that has been historically disenfranchised" to help explain why those biases may exist in a given dataset. (Sidenote: there's definitely a lot of irony in me feeling like there needs to be a "why" in this section :) )

christophM commented 3 years ago

Your formulation makes a lot more sense. Now that you've pointed it out, I realize it's protected attributes and not protected groups :facepalm: Thanks for this PR!