UChicago-CCA-2021 / Readings-Responses


Deep Classification, Embedding & Text Generation (E3) - Guo and Aylin 2020 #36

Open HyunkuKwon opened 3 years ago

HyunkuKwon commented 3 years ago

Guo, Wei, and Aylin Caliskan. 2020. "Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases." arXiv:2006.03955.

lilygrier commented 3 years ago

I really enjoyed this paper, as it raises awareness of the risks of using word embedding models for tasks that have real consequences and impacts. I'm wondering what has been done to train models that are less intersectionally biased. My thought would be to intentionally curate a corpus of texts that reflect different perspectives, though it may be difficult to do this while still being "representative," given how many texts exhibit these biases. What do current efforts look like to de-bias language models while keeping the text needed for the relevant language tasks?
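For concreteness, one well-known line of de-biasing work (Bolukbasi et al. 2016) removes a bias direction from static embeddings by projection. Below is a minimal sketch of that idea in NumPy; the vectors and the he/she direction are purely illustrative placeholders, not the authors' method in this paper:

```python
import numpy as np

def debias(vec, bias_direction):
    """Remove the component of a word vector that lies along a bias direction
    (the 'hard de-biasing' projection step from Bolukbasi et al. 2016)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - np.dot(vec, b) * b

# Illustrative placeholders: in practice these would come from a trained
# static embedding model such as word2vec or GloVe.
v_he, v_she = np.random.rand(300), np.random.rand(300)
v_engineer = np.random.rand(300)

bias_direction = v_he - v_she          # a crude gender direction
v_engineer_debiased = debias(v_engineer, bias_direction)
```

Note this only applies to static embeddings, and later work (Gonen and Goldberg 2019) argues such projections mostly hide rather than remove bias, which speaks to the point about corpus curation.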

sabinahartnett commented 3 years ago

Similar to Lily's question, I'm wondering what can be (or is already being) done to mitigate some of the biases that are reinforced by word embedding models trained on biased data. With this reading, as with many of the others, I'm wondering how, or whether, we can ever really select a training dataset that is both unbiased and representative when training language models.

theoevans1 commented 3 years ago

This paper compares biases across models, noting for instance that ELMo is the most biased and GPT-2 the least biased for the categories explored. What factors contribute to those differences between models? And what does this mean for social science researchers who (as discussed last week) may want to identify and analyze biases?
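If I understand correctly, the quantity being compared across models is a WEAT-style effect size computed on each model's embeddings (which CEAT then aggregates over sampled contexts). A minimal NumPy sketch of that effect size, where X and Y are target-word vectors (e.g., names) and A and B are attribute-word vectors (e.g., pleasant/unpleasant), follows; this is a sketch of the standard WEAT formula, not the authors' exact code:

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Differential association of one target vector w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d: how much more targets X associate with attributes A
    (vs. B) than targets Y do, in standard-deviation units."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

Running the same test sets through each model's embeddings and comparing the resulting d values is, presumably, what grounds the "ELMo most biased, GPT-2 least biased" comparison.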

jinfei1125 commented 3 years ago

This article continues Caliskan's 2017 work showing that semantics derived automatically from language corpora contain human-like biases. Here she expands the tests to specific ethnic groups of women and reports that the probability of random correct identification in these tasks ranges from 12.2% to 25.5% for IBD and from 1.0% to 25.5% for EIBD, which makes me question the external validity of the study. So I am wondering how its external validity could be increased.

I also wonder whether future work can de-bias these AI judgments so that models make more neutral decisions. However, I think discrimination is a very complex phenomenon, and maintaining fairness requires more than simply using AI to correct it. Is there anything we should pay particular attention to in this de-biasing process?

k-partha commented 3 years ago

Our very own Prof. Allyson Ettinger demonstrated that BERT "struggles with challenging inference and role-based event prediction" and deals very poorly with negation. Given this weakness, how sceptical should we be of the claim that contextual embeddings demonstrate human-like "biases" (as opposed to something more benign, like "associations")?
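Ettinger's negation finding can be probed informally with a masked-LM fill-in. A minimal sketch using the Hugging Face transformers fill-mask pipeline; the model name and prompts are just illustrative choices:

```python
from transformers import pipeline

# Load a masked language model; 'bert-base-uncased' is an illustrative choice.
fill = pipeline("fill-mask", model="bert-base-uncased")

# Ettinger-style negation probe: per Ettinger (2020), BERT's top completions
# are largely insensitive to the negation in the second prompt.
for prompt in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(prompt)[0]
    print(prompt, "->", top["token_str"], round(top["score"], 3))
```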

Raychanan commented 3 years ago

I think their work is great, and one of the main improvements lies in eliminating the need to rely on pre-defined sets of attributes. In the past, bias measurement was more subjective, but now, with the help of CEAT, the situation is changing.

My question is about the random-effects model. According to the authors, their approach is based on a random-effects model to measure social bias in our language, and they emphasize its advantages, but I don't think they describe the model in much detail in the paper. Would it be possible to talk about the fundamental principles and rationale of this model and how it compares to other models? I really know very little about it.
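As far as I understand it, the general idea is that each sampled set of sentence contexts yields its own effect size with its own sampling variance, and the summary effect is an inverse-variance-weighted mean that also accounts for between-sample variance. A minimal sketch of a DerSimonian-Laird-style random-effects estimator (the standard textbook version, not necessarily the authors' exact implementation):

```python
import numpy as np

def random_effects_summary(effects, variances):
    """Combine per-sample effect sizes with a DerSimonian-Laird random-effects
    model, roughly as CEAT does across sampled sentence contexts."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w_fixed = 1.0 / variances                      # inverse-variance (fixed-effects) weights
    d_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    # Cochran's Q and the between-sample variance tau^2
    Q = np.sum(w_fixed * (effects - d_fixed) ** 2)
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)
    # Random-effects weights add tau^2 to each sample's own variance
    w = 1.0 / (variances + tau2)
    return np.sum(w * effects) / np.sum(w)
```

The contrast with a fixed-effects model is that the latter assumes all samples estimate one true effect, whereas the random-effects model allows the true effect to vary across contexts, which fits the paper's framing of a distribution of biases.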

egemenpamukcu commented 3 years ago

I would like to hear more about the differences between static and contextual word embeddings, the intuition behind each, and in what contexts each should be used.
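One way to see the difference concretely: a static model assigns one vector per word type, while a contextual model assigns a different vector to each token occurrence. A minimal sketch with Hugging Face transformers; the model name and sentences are just illustrative choices:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Contextual embeddings: the same word gets a different vector in each sentence.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    """Return the contextual vector of `word`'s first occurrence in `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]      # (tokens, hidden_dim)
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

v1 = word_vector("She sat on the bank of the river.", "bank")
v2 = word_vector("She deposited the check at the bank.", "bank")

# Cosine similarity is well below 1: the two "bank" tokens get distinct vectors,
# whereas a static model (word2vec/GloVe) would return the same vector for both.
print(torch.cosine_similarity(v1, v2, dim=0).item())
```

Roughly, static embeddings are cheaper and easier to interpret for corpus-level analyses, while contextual embeddings capture sense and syntax at the token level, which is also why this paper measures a distribution of bias effect sizes rather than a single one.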