UChicago-Computational-Content-Analysis / Readings-Responses-2023


7. Accounting for Context - [E2] 2. W. Guo, A. Caliskan. 2020. #19

JunsolKim opened this issue 2 years ago

JunsolKim commented 2 years ago

Post questions here for this week's exemplary readings: 2. W. Guo, A. Caliskan. 2020. “Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases.” 

ValAlvernUChic commented 2 years ago

It's interesting (and at first surprising) that the level of overall bias is negatively correlated with the level of contextualization in a language model. I would have expected the opposite: with more context, a model would aggregate more of the implicit bias in its training data and so appear more biased. Their explanation for why more contextualized models show higher variance in bias across contexts, and correspondingly lower overall bias, really flipped this assumption for me. They mention that "upper layers of contextualizing models produce more context-specific representations" - could I have an explanation for why?
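
One way to see what "more context-specific" means in practice is to measure a word's self-similarity: the mean pairwise cosine similarity of its vectors across different sentences, computed layer by layer. Below is a rough sketch of that probe, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (both my choices for illustration; the sentences are made up, not from the paper). If the claim holds, self-similarity should drop in the upper layers, i.e., the vectors become more context-specific.

```python
# Rough sketch: per-layer self-similarity of one word across contexts.
# Lower self-similarity = more context-specific representations.
import itertools
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Illustrative contexts; "doctor" is a single token in BERT's vocabulary.
contexts = [
    "The doctor signed the form.",
    "A doctor examined the patient carefully.",
    "My doctor recommended more exercise.",
]
word_id = tokenizer.convert_tokens_to_ids("doctor")

# For each sentence, collect the word's vector at every layer.
# hidden_states is a tuple of (num_layers + 1) tensors of shape (1, seq_len, 768).
per_layer_vectors = []
for sentence in contexts:
    inputs = tokenizer(sentence, return_tensors="pt")
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states
    per_layer_vectors.append([h[0, position] for h in hidden_states])

# Mean pairwise cosine similarity across contexts, layer by layer.
for layer in range(len(per_layer_vectors[0])):
    sims = [
        torch.nn.functional.cosine_similarity(a[layer], b[layer], dim=0).item()
        for a, b in itertools.combinations(per_layer_vectors, 2)
    ]
    print(f"layer {layer:2d}: self-similarity = {sum(sims) / len(sims):.3f}")
```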

Qiuyu-Li commented 2 years ago

I think it's a very interesting paper. May I ask an easy question...What does "contextualization" mean here?

hshi420 commented 2 years ago

> I think it's a very interesting paper. May I ask an easy question...What does "contextualization" mean here?

It means that the model learns sequence-level (sentence-level) embeddings. With static word embeddings (SWEs), a word's embedding stays the same across the whole document. With contextualized word embeddings (CWEs), a word's embedding changes based on the sentence surrounding it.
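
To make the SWE/CWE distinction concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (illustrative choices, not anything prescribed by the paper). The same surface word gets a different vector in each sentence, whereas a static model such as word2vec or GloVe assigns it one vector everywhere.

```python
# Minimal sketch of the SWE vs. CWE distinction with BERT.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_word(sentence: str, word: str) -> torch.Tensor:
    """Return the last-layer hidden state of `word` in `sentence`.
    Assumes `word` is a single token in BERT's vocabulary."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

# Same word, two different sentence contexts.
e1 = embed_word("She deposited the check at the bank.", "bank")
e2 = embed_word("They had a picnic on the river bank.", "bank")

# A CWE model gives two different vectors; a static embedding
# (word2vec, GloVe) would assign "bank" one vector regardless of context.
cos = torch.nn.functional.cosine_similarity(e1, e2, dim=0)
print(f"cosine similarity across contexts: {cos.item():.3f}")  # < 1.0
```

The printed cosine similarity comes out noticeably below 1.0, and that context sensitivity is exactly what the paper's bias measurements have to average over.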