daviddao / awful-ai

😈Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness
https://twitter.com/dwddao
6.93k stars · 229 forks

Is the gaydar AI actually accurate? #42

Open thatsofia opened 3 years ago

thatsofia commented 3 years ago

https://www.theregister.com/2019/03/05/ai_gaydar/

https://blogs.scientificamerican.com/observations/the-medias-coverage-of-ai-is-bogus/

What do you think? I took a quick peek at the website, and it was updated in 2020, but a Google search didn't bring up any 2020 articles, so I don't know if anything has changed.

(and good lord this list was super scary 😣😣 straight outta black mirror)

igorbrigadir commented 3 years ago

As far as I know, it was bogus back then and it's still bogus in 2020. I'm not aware of any newer replications; https://arxiv.org/abs/1902.10739 is the latest.

daviddao commented 3 years ago

Yes, I agree it was bad research, but the damage has been done to society, especially because it was conducted by a famous researcher at a top institution. I'm keeping it as an awful showcase of AI (the author's goal, I hope, was to raise awareness) for future discussion.

thatsofia commented 3 years ago

Well I think people have always thought about making AI gaydars. I'm sure people have attempted them before that paper. LGBTQ people already face tons of shit so if anything, I think something positive has come out of it. We've had some pretty good discussions on the ethics of AI and since it was bogus, it couldn't be used for discrimination. I think that there will be an accurate gaydar AI one day but I don't think this AI in particular is why.

You might want to add that it's not accurate to the readme.

michalkosinski commented 3 years ago

> As far as I know, it was bogus back then and it's still bogus in 2020. I'm not aware of any newer replications; https://arxiv.org/abs/1902.10739 is the latest.

Why do you think it is bogus? Also, what if you are wrong and the risk that we are warning against is real? Dismissing it is very dangerous.

michalkosinski commented 3 years ago

> Yes I agree it was bad research but the damage has been done to society.

Why do you think so, David? Have you read the paper you are dismissing as bad and damaging? Do you think that the damage has been caused by us making people aware of the risks that others have created? How is that different from you creating this (much needed) resource cataloguing the risks of AI?

villasv commented 3 years ago

The accuracy of the model is less relevant than the simple fact that its existence (or the attempt to bring it into existence) is unethical. Predicting sexual orientation is widely regarded by LGBTQ+ researchers as an unethical endeavor. For that alone it deserves to be in this catalog.

As for also documenting the accuracy of the models, I'm not sure it's worth the effort, because hell will be raised when discussing the accuracy measurement methodology. IMHO it would be better to just collect the examples and leave detailed analysis to be made individually.

michalkosinski commented 3 years ago

That's precisely the point of our research. We did not build a privacy-invading AI (others keep doing it). We studied existing facial recognition technologies, already widely used by companies and governments (e.g., https://patents.google.com/patent/WO2014068567A1/en) to see whether their claims are real. We found that they were, in fact, real and decided to - after much deliberation - sound the warning.

As you can see here (and elsewhere), people did not (and still don't) believe that this threat is real. It's just a replay of my past research on how people's privacy can be invaded based on their (then publicly available) Facebook likes. For years, no one took it seriously; then Cambridge Analytica happened, and then I was - somehow - blamed for it (despite the fact that, as in the context of facial recognition, I did not come up with the privacy-invading tool but merely warned against it; Facebook did come up with that one: https://patents.google.com/patent/US9740752B2/en).

It is quite dispiriting that few (if any) of the harsh critics have actually read the paper they are so keen to call bogus and its author unethical.

sdfordham commented 3 years ago

The research is unambiguously unethical; to quote the Leuner paper: "The advent of new technology that is able to detect sexual orientation in this way may have serious implications for the privacy and safety of gay men and women". It was all speculation before the W & K paper, but now it's a mainstream idea that gaydar can be artificially learned. But the nub of the matter somehow is that this is the type of AI research that is guaranteed to make the news. And I daresay that's all that matters!

michalkosinski commented 3 years ago

sdfordham: Did you read the W&K paper? Could you please explain to me what is so unambiguously unethical about it?

Developing facial recognition tech that can do it is extremely harmful. To quote W&K paper: "most importantly, the predictability of sexual orientation could have serious and even life-threatening implications to gay men and women and the society as a whole."

It wasn't all speculation before the W&K paper: companies and governments were developing, selling, and patenting such tech. The public did not know, however. Luckily, the research made the news; you, the general public, and policy makers took notice, and there is now much effort to regulate facial recognition technology.

villasv commented 3 years ago

@michalkosinski I think I finally understood what you're trying to say, tell me if I did.

I think we agree that predicting gender is an unethical use of technology, thus it belongs in this catalog, right? Then the point of disagreement is whether the cited articles (W&K originally and Leuner in this thread) should be cited as unethical research, because the implied goal of these articles is to demonstrate that this tech is already dangerous, not to actually advance its efficiency. That seems a valid point to me, if that's what you're saying.

This is complicated, though. I think it might be unfair to label them as unethical, given their purpose and conclusion, but their approach is still debatable: these articles still incorporate the hypothesis that the human face encodes useful information about sexual orientation, as opposed to stating that any association is bound to be caused by latent confounding (and likely biased) factors. In my opinion this is no trivial distinction, because by posing the problem this way these articles are dangerously close to phrenology.

Does this analogy seem fair? Would an article demonstrating a correlation between skull shapes and criminal behavior, but concluding that such technology is a threat to privacy and human rights, be considered fair research? Doesn't it still echo phrenology principles even if the goal is to warn that its efficacy is dangerous?

I guess this should be hardly surprising, as the article mentions the way this kind of research is viewed. In fact, it says it's rightly rejected. But the very same paragraph presents a defense that shapes the tone of the paper.

[image: screenshot of the cited paragraph from the paper]

Thus I'll rephrase my previous comment: the accuracy of the model is less relevant than the simple fact that its hypothesis is problematic. Instead of demonstrating that "physiognomy works, therefore it is dangerous", a more mindful discourse would approach it as "physiognomy is an illusion built on top of stereotypes that may unfortunately manifest predictive power".

And I get it, I honestly get it that the main intention is to raise concerns. But legitimizing the hypothesis behind the model doesn't serve that purpose, does it? Why the need to lay out a theoretical basis for the predictive power of a model that ought to be banned? For instance, the paper used a gender classifier to define facial femininity. Nowhere are the ethics of doing that discussed; instead, its accuracy (0.98 AUC) is slapped on as justification.
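(Side note for readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one, so 0.98 means near-perfect separation. A minimal sketch of the computation, using made-up labels and scores that are not from the paper:)

```python
def auc(labels, scores):
    """AUC = probability that a randomly chosen positive example
    receives a higher score than a randomly chosen negative one
    (ties count as half). Labels are 1 (positive) / 0 (negative)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores: one positive (0.4) ranked below
# one negative (0.6), so 8 of 9 pairs are ordered correctly.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
print(round(auc(labels, scores), 3))  # 0.889
```

The closer the value gets to 1.0, the fewer positive/negative pairs the model mis-ranks, which is why a reported 0.98 reads as a strong (and therefore alarming) result.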

michalkosinski commented 3 years ago

@villasv Thank you for your thoughtful response.

As we write in the fragment you cite, physiognomy was based on unscientific studies, superstition, anecdotal evidence, and racist pseudo-theories. Yet this does not automatically imply that physiognomists' claims were all wrong. While I agree that it is extremely upsetting, some of them may be correct, perhaps by mere accident.

For example, physiognomists were clearly wrong when they claimed that they could accurately judge character based on facial appearance. Modern scientific studies (including W&K paper) have shown that people are not very accurate at this task. Yet, the same studies consistently show that people - while inaccurate - are better than chance, revealing that faces/facial images contain at least some information about one’s character.

I completely and fully agree with you that if the links between faces and character traits exist, it is dangerous and very upsetting. I honestly hope and wish that this is not the case. Yet, in science, one does not legitimize or de-legitimize hypotheses based on one's ideology, preferences, or the vision of what the perfect world would look like. If there are links between facial appearance and character (and I wish and hope there are not) - it is still better that we know it.

Consider this analogy: As an Ashkenazi Jew, I am surely upset by the fact that, as recent studies show, I am at higher risk of a range of genetic diseases.

Following the reasoning in your post - if I understood it correctly - you would consider such studies as very problematic. This is because, to rephrase your post, "...they incorporate the hypothesis that Jews are genetically different from Gentiles, as opposed to stating that any differences between Jews and Gentiles are caused by latent confounding (and likely biased) factors (such as environment), which is no trivial distinction, because by posing the problem this way those studies are dangerously close to Nazism."

This is - as you will hopefully see - not the best way of thinking about it. Those findings are upsetting yet they are not antisemitic. Quite the opposite - I am grateful that those studies were conducted. I am hopeful that they inspire research on how to mitigate the risks I am facing, even if most of them did not suggest any immediate solutions to the problems they exposed.

What do you think?

michalkosinski commented 3 years ago

@villasv I am not sure I got the second criticism raised in your post ("the paper used a gender classifier to define facial femininity. Nowhere it is discussed the ethics of doing that, instead its accuracy (0.98 AUC) is slapped as justification.")

cestinson commented 3 years ago

The part of the criticism that seems to get skipped over is the jump from there being a correlation between appearance and sexual orientation to there being a biological or genetic cause of that correlation. It's blindingly obvious that appearance gives some clues about sexuality. Gold lamé shorts, rainbow tattoos, very short hair on women, etc., are all more-or-less reliable ways of guessing sexuality based on appearance, and in dating profiles much more so than in driver's license pictures, for example, you'd expect to see people signalling their sexuality through haircuts, beard trimming, make-up, poses, style of glasses, etc. I'd be much more surprised if there weren't any correlations to be found. So why frame this as being able to detect sexual orientation in biological aspects of faces? Why claim that it supports a theory that being gay is a birth defect? It should be obvious that that could be stigmatizing. It seems just as obvious that it's a conclusion that's not supported by this study. If the framing of the results had been that a machine can reliably tell if you're gay or not based on your beard and your eyeliner, the study would be listed under hilarious-AI.

michalkosinski commented 3 years ago

@cestinson the criticisms you refer to seem to come from those who did not read the paper. Have you read it? The paper does not jump to such conclusions.

The framing is that the algorithms - developed by others and examined by us - can reliably tell if you are gay from your social media picture. This may be hilarious to you, but not to gay men and women whose lives are put at risk by the developments in AI.

Also, how can you call being gay a "birth defect"? That is totally inappropriate.

cestinson commented 3 years ago

Yes, I have read the paper. I'm glad that you now realize that the link you drew between sexual orientation and biology was inappropriate.

villasv commented 3 years ago

> Following the reasoning in your post - if I understood it correctly - you would consider such studies as very problematic. This is because, to rephrase your post, "...they incorporate the hypothesis that the Jews are genetically different from Gentiles, as opposed to state that any differences between Jews and Gentiles are caused by latent confounding (and likely biased) factors (such as environment), which is no trivial distinction, because by posing the problem this way those studies are dangerously close to Nazism."

If the hypothesis were a connection between genetic diseases and a religious group, then yes, such a study would be problematic. But of course there are no such studies, because they explicitly state that they're studying the ethnic group. They're measuring correlation between a genetic pool and another genetic factor. If someone suddenly converts from atheism to Judaism, there should be no change to their risk profile for genetic diseases, right? It doesn't make sense to pursue the hypothesis of a link between a purely genetic feature and a purely cultural feature.

The problematic hypothesis given credit in the paper is that the accuracy of the model is explained by biology instead of cultural circumstance. How would this model fare in a culture where sexual orientation doesn't even work like it does today, e.g. Ancient Rome?