dashaasienga / Statistics-Senior-Honors-Thesis


Phil Thomas #7

Open dashaasienga opened 9 months ago

dashaasienga commented 9 months ago

@katcorr here is a transcript of my email conversation with Professor Phil Thomas (from UMass). He was really receptive to my email! I plan on looking at some of the resources he shared over the weekend/early next week, but I wanted to share it here for your reference as well. Also, his work on the Seldonian algorithm is very applicable to my thesis, and I am beginning to get a clearer vision for how we may perform a simulation using it. Really good stuff!

Dasha:

Dear Professor Phil Thomas,

I hope this email finds you well. I'm an undergraduate student at Amherst College, and I am currently writing a thesis in the Statistics department on statistical notions of fairness in ML. I've been reading about your work on the Seldonian algorithm and I am truly intrigued!

Professor Lee Spector pointed me to you and, in particular, to a talk you gave at Hampshire regarding conflicts among some fairness measures. Is there any way I can get access to that talk?

I would really appreciate any additional resources as well.

Best wishes, Dasha.

Phil Thomas:

Hi Dasha,

It's nice to hear about your interest in these topics! My slides are mostly pictures that don't make sense outside of the planned talk, but there are some very good (and clear, I think) papers discussing these issues. I think the first well-known paper on conflicting measures of fairness was by Alexandra Chouldechova, and discussed the notions of fairness in the relatively famous ProPublica article.

ProPublica article [if you haven't seen this, I recommend reading it]: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Alexandra's paper [I would read this after the next reference, if at all]: https://arxiv.org/abs/1610.07524

Subsequently, ML fairness researchers have found a few large classes of fairness definitions that conflict with each other. A nice high-level summary is here. I would start with this article. (Note: I think somewhere in this article there is a Y that should be \hat Y, or vice versa, so if at some point a line is confusing, this may be it.) Specifically, look through the parts on separation, independence, and sufficiency. There is then a list of three conflicts between these types of fairness. If you want an even more detailed take on these issues, there's a free book [link], and the chapter on classification discusses these definitions and conflicts.
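To make those three criteria concrete, here is a minimal Python sketch (not from the email or the linked article; the base rates and predictor are invented purely for illustration). It computes the group-wise quantities each criterion constrains on synthetic binary data: independence compares positive-prediction rates, separation compares true/false positive rates, and sufficiency compares positive predictive values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: group label A, true outcome Y, predicted label Yhat.
# All numbers here are made up for illustration.
n = 10_000
A = rng.integers(0, 2, size=n)                         # 0/1 group membership
Y = rng.binomial(1, np.where(A == 1, 0.6, 0.4))        # groups have different base rates
Yhat = rng.binomial(1, 0.7 * Y + 0.15)                 # same noisy predictor for both groups

def rate(numerator_event, denominator_event):
    """Estimate P(numerator event | denominator event) from the sample."""
    return numerator_event[denominator_event].mean()

for a in (0, 1):
    g = A == a
    ppr = rate(Yhat == 1, g)               # Independence:  P(Yhat=1 | A=a)
    tpr = rate(Yhat == 1, g & (Y == 1))    # Separation:    P(Yhat=1 | Y=1, A=a)
    fpr = rate(Yhat == 1, g & (Y == 0))    #                P(Yhat=1 | Y=0, A=a)
    ppv = rate(Y == 1, g & (Yhat == 1))    # Sufficiency:   P(Y=1 | Yhat=1, A=a)
    print(f"group {a}: positive-rate={ppr:.3f}  TPR={tpr:.3f}  FPR={fpr:.3f}  PPV={ppv:.3f}")
```

Because the predictor treats Y the same way in both groups, the TPR/FPR come out (approximately) equal across groups, but the differing base rates then force the positive-prediction rates and PPVs apart; that is the flavor of conflict the summary article formalizes.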

In my talk I think I discussed a different conflicting definition that came up when predicting student GPAs. This is different because it's in the regression setting while most of this other work is in the classification setting. Let Y be a student's GPA, \hat Y be the prediction of Y, X be features of their application to college, and G be their gender. The conflict I ran into was that the following two properties cannot hold at the same time:

  1. E[\hat Y | G=male] = E[\hat Y | G=female]. If this is violated, then just from knowing your gender, we know that on average we will predict a higher/lower GPA for you.
  2. E[\hat Y - Y | G=male] = E[\hat Y - Y | G=female]. This is violated if we over-predict for one gender and under-predict for the other (the data we had from a university in Brazil only had binary gender labels). It is also violated if we over-predict (or under-predict) more for one gender than for the other.

In our work we found that standard ML algorithms over-predicted for male applicants by 0.15 GPA points and under-predicted for female applicants by 0.15 GPA points, on average. We focused on definition #2, but noticed that we couldn't achieve #1 and #2 at the same time (this stems from the fact that male and female applicants have different average GPAs). Here's the open-access link to our paper: http://aisafety.cs.umass.edu/paper.html.
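To spell out why #1 and #2 can't both hold when the two groups have different average GPAs, here is the one-step algebra in Phil's notation (a restatement for clarity, not part of the original email):

```latex
% Suppose both properties held simultaneously:
% (1)  E[\hat{Y} \mid G=\text{male}] = E[\hat{Y} \mid G=\text{female}]
% (2)  E[\hat{Y} - Y \mid G=\text{male}] = E[\hat{Y} - Y \mid G=\text{female}]
\begin{align*}
E[Y \mid G=\text{male}]
  &= E[\hat{Y} \mid G=\text{male}] - E[\hat{Y} - Y \mid G=\text{male}] \\
  &= E[\hat{Y} \mid G=\text{female}] - E[\hat{Y} - Y \mid G=\text{female}]
     && \text{by (1) and (2)} \\
  &= E[Y \mid G=\text{female}].
\end{align*}
```

So (1) and (2) together would force the two groups to have equal average GPAs; since the groups in the data don't, at most one of the two properties can be satisfied.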

Best, -Phil

katcorr commented 9 months ago

This is so awesome! I agree -- very encouraging and helpful response! Great resources