UChicago-Computational-Content-Analysis / Readings-Responses-2023


4. Exploring Semantic Spaces - [E1] 1. Caliskan, Aylin, Joanna J. Bryson, Arvind Narayanan. 2017. #38

JunsolKim opened 2 years ago

JunsolKim commented 2 years ago

Post questions here for this week's exemplary readings: 1. Caliskan, Aylin, Joanna J. Bryson, Arvind Narayanan. 2017. “Semantics derived automatically from language corpora contain human-like biases.” Science 356(6334):183-186.

ValAlvernUChic commented 2 years ago

I think this paper is so, so important because it highlights the extremely deleterious consequences of bias in machine learning models, recidivism scores in court adjudications being one of the most salient examples. I was wondering about the implications of actually using this type of model in communications (the media, PR, etc.) to "spot" words or phrases that could be perpetuating cultural biases. At the same time, I'd be interested to see whether these biases have the same qualities across domains or cultures.

hsinkengling commented 2 years ago

If I'm understanding correctly, the goal of the paper is to demonstrate that word embeddings can capture implicit associations, using well-known examples such as gender, age, names, and the pleasantness of flowers vs. insects. However, there is an infinite number of testable implicit associations. My question is: in these kinds of studies, how much evidence would amount to "enough" evidence that word embeddings work? Is it the job of the researchers to qualify the power of these tools by specifying the social realms in which these models do or don't work? Or is it enough for researchers to demonstrate that the model works for some of the most important, most studied phenomena?

LuZhang0128 commented 2 years ago

I wonder if this paper calls for broader methods of testing implicit human-like biases, for instance, biases in content such as photos, pictures, videos, or even recordings. With current methods, people are at least able to classify pictures and the skin color of the people in them. I wonder whether we can further explore this area.

chuqingzhao commented 2 years ago

It is a thought-provoking paper and critical for debiasing methods in word embeddings. The paper constructs a Word-Embedding Association Test (WEAT) based on cosine similarity measures and performs it on a large corpus. In a small corpus, the distances between word vectors could be smaller, and therefore the bias scores would be smaller. I wonder how to take corpus size, or differences between corpora, into consideration.
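
Since the WEAT comes up repeatedly in this thread, here is a minimal sketch of its effect-size calculation: the differential mean cosine association between two target word sets and two attribute word sets, normalized by the spread of the per-word associations. This assumes word vectors are supplied as a dict of numpy arrays; the function and variable names are illustrative, not the authors' code.

```python
# Minimal sketch of the WEAT effect size from Caliskan et al. (2017),
# assuming word vectors are supplied as a dict {word: np.ndarray}.
# Names here are illustrative, not taken from the paper's code.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vecs):
    # s(w, A, B): mean cosine similarity of w to attribute set A minus set B
    return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
            - np.mean([cosine(vecs[w], vecs[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vecs):
    # Cohen's-d-style effect size over the two target word sets X and Y
    s_X = [association(x, A, B, vecs) for x in X]
    s_Y = [association(y, A, B, vecs) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

Because the effect size is normalized by the standard deviation of the association scores, the absolute scale of the cosine similarities matters less than their relative pattern, though noisier vectors estimated from a smaller corpus would still make the measure less stable.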

97seshu commented 2 years ago

The word embeddings approach to identifying implicit biases is pretty cool. I can see its potential for helping us track changes in bias within cultures throughout history (or at least as far back as texts are available). My question is how we want to solve the problem once we know that biases live in texts. The authors briefly mention at the end that technologies that learn about the properties of language might also exhibit the same kinds of biases. How should we deal with that? Do we want to just remove the biases? As mentioned by @hsinkengling, since there can be an infinite number of associations or biases, is it plausible to get rid of them all? How would that change the properties of language?

AllisonXiong commented 2 years ago

An inspiring NLP version of the IAT! My questions are:

  1. The authors mention that the perpetuation of stereotyped attitudes may be 'simply explained by language'; however, is it possible that language use is to some extent the 'result' of social learning, rather than simply the source of bias? How can we validate the direction of cause and effect between the biases reflected in language and cultural stereotypes?
  2. Can the WEAT test be used to examine the strength of a certain stereotype across time (a hypothetical sketch follows below)? What can we expect from this more stratified study?
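
On question 2, one hypothetical way to operationalize a WEAT-over-time study is to reuse the `weat_effect_size` sketch above on embeddings trained separately on time-sliced corpora. The `vectors_by_decade` structure and the word lists below are illustrative assumptions, not anything from the paper.

```python
# Hypothetical follow-up: apply the WEAT sketch above to embeddings trained on
# time-sliced corpora (e.g., one model per decade). `vectors_by_decade` is an
# assumed data structure mapping decade -> {word: np.ndarray}.
def stereotype_trend(vectors_by_decade, X, Y, A, B):
    # WEAT effect size of the X/Y-vs-A/B association for each time slice
    return {decade: weat_effect_size(X, Y, A, B, vecs)
            for decade, vecs in sorted(vectors_by_decade.items())}

# Example target/attribute sets, modeled loosely on the gender-career IAT:
male   = ["he", "him", "his", "man", "male", "son", "brother"]
female = ["she", "her", "hers", "woman", "female", "daughter", "sister"]
career = ["executive", "management", "salary", "office", "business", "career"]
family = ["home", "parents", "children", "family", "marriage", "relatives"]
# trend = stereotype_trend(vectors_by_decade, male, female, career, family)
```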