UChicago-CCA-2021 / Readings-Responses


Exploring Semantic Spaces (E3) - Garg ...& Zou 2018 #31

HyunkuKwon opened this issue 3 years ago

HyunkuKwon commented 3 years ago

Post questions about the following exemplary reading here:

Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou's follow-on article. 2018. "Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes." Proceedings of the National Academy of Sciences 115(16): E3635–E3644.

Raychanan commented 3 years ago

For historical temporal analysis, we use previously trained Google Books/COHA embeddings, which is a set of 9 embeddings, each trained on a decade in the 1900s

Can you please talk about why they chose "one decade" rather than two or three decades? I know we often use techniques such as cross-validation to choose the best or most appropriate parameter for a model. Is the period the authors adopt also data-driven, or is "one decade" socially meaningful?

Bias in the embeddings, between two groups with respect to a neutral word list, is quantified by the relative norm difference

I think it's hard to say the list can accurately reflect bias when the neutral word list is created only from a modern perspective. Words' sentiments surely change over time. In particular, with the emergence and development of political correctness, some "neutral" words have already become negative.
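
For concreteness, the relative norm difference quoted above can be computed directly from the embedding vectors. Below is a minimal sketch in Python/numpy of my reading of the paper's metric; the `embeddings` dictionary and the word lists in the usage comment are placeholder assumptions, not the paper's actual data.

```python
import numpy as np

def relative_norm_difference(embeddings, group1_words, group2_words, neutral_words):
    """Bias of a neutral word list toward group 1 vs. group 2.

    Sketch of the relative norm difference from Garg et al. (2018):
    negative values mean the neutral words sit closer (in Euclidean
    distance) to group 1's average vector, positive values closer to
    group 2's.
    """
    # Average, then normalize, the representative vector for each group.
    v1 = np.mean([embeddings[w] for w in group1_words], axis=0)
    v2 = np.mean([embeddings[w] for w in group2_words], axis=0)
    v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)

    # Sum, over the neutral list, of the difference in distance
    # to each group's average vector.
    return sum(
        np.linalg.norm(embeddings[w] - v1) - np.linalg.norm(embeddings[w] - v2)
        for w in neutral_words
    )

# Hypothetical usage, with `vectors` mapping words to numpy arrays:
# bias = relative_norm_difference(vectors,
#                                 ["he", "him", "his"],
#                                 ["she", "her", "hers"],
#                                 occupation_words)
```

Note that the metric is only as good as the neutral list, which is exactly the concern raised above: the list is fixed from a modern vantage point while the words themselves drift.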

theoevans1 commented 3 years ago

How should we think about the selection of demographic categories, particularly in a historical study like this? I’m wondering because categories like ethnicity, race, and gender shift over time, not just in their associations but in how those categories are conceptualized or defined. Do methods like this risk assuming a false stability to the social categorizations being studied?

jcvotava commented 3 years ago

To add to @theoevans1's excellent question, I'm wondering how to think about "bias" here when both the category and the associations shift simultaneously over time. For instance, the paper tracks the Islam vs. Christianity bias score of discussions of terrorism in the New York Times. Suppose, however, that both the way journalists talked about Islam and the way they talked about terrorism changed over the relevant period. What exactly is the meaning of a "bias score" here?
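
To make this concrete, a per-decade bias trajectory of the kind the paper plots could be computed by reusing the `relative_norm_difference()` sketch above. The names `decade_embeddings`, `islam_words`, `christianity_words`, and `terrorism_related_words` are assumptions for illustration, not the paper's variables.

```python
# Hypothetical sketch: `decade_embeddings` maps each decade (e.g., 1910)
# to its own word-vector dictionary, mirroring the paper's nine
# decade-trained Google Books/COHA embeddings.
decades = range(1910, 2000, 10)
trajectory = {
    d: relative_norm_difference(decade_embeddings[d],
                                islam_words, christianity_words,
                                terrorism_related_words)
    for d in decades
}
# The word lists are held fixed across decades, which is exactly the
# assumption questioned here: if the meanings of both the group terms and
# the "neutral" terms drift, the score conflates the two shifts.
```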

romanticmonkey commented 3 years ago

This question has been lingering since I read the orienting reading: how do we deal with the gap between oral and written language when all we use is a written text corpus? I can see that the news corpora are closer to everyday speech, and some articles do contain conversation records, but don't this study and many others still research only written culture rather than language culture as a whole?

k-partha commented 3 years ago

Echoing some thoughts expressed on the Caliskan, Bryson, and Narayanan reading: I don't find most of the paper's findings surprising. But I do think it is important that the findings in both papers align with our strong intuitive expectation that real-world interactional biases spill into language, which significantly validates the word-embedding approach.

One thing that struck me about this paper is its framing of the work as 'quantifying' bias and stereotypes. We live in a time when certain threads of societal discourse on gender and race are becoming hyper-self-aware and are using linguistic devices laced with associations between particular attributes and specific races/genders (as a means to analyze these historical biases).

How can we disentangle the drivers behind the use of 'biased' language using empirical methods? Or is it a better strategy to always weave insights from qualitative research together with empirical findings, so that we understand the social reality that 'bias' actually reflects?

MOTOKU666 commented 3 years ago

I have the same question as @romanticmonkey. It seems that this paper still focuses on written culture. However, I would guess the reason they don't study language culture as a whole is that there would be too many things to specify, or too much noise once the information is aggregated.

zshibing1 commented 3 years ago

Though I'm not sure how feasible this is, if we could trace the basic demographics of the authors of these written documents, would an analysis that takes social stratification into account produce more meaningful results?

toecn commented 3 years ago

I also have a question about the rationale for the time selection and about the corpus size necessary for the analysis.

mingtao-gao commented 3 years ago

My question relates to the causality of historical events. The paper uses these events alongside the word embedding model to identify changes in gender stereotypes, but how much of that shift is actually caused by these historical events?

lilygrier commented 3 years ago

The authors state that a limitation of their approach is that the "embeddings used are fully black-box, where the dimensions used have no inherent meaning" (9). Their dimensions don't rely on sentiment or parts of speech. There is almost always a trade-off between interpretability and having significant results, but how can we do our best to ensure both are satisfactorily present?
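
One partial remedy sometimes used alongside this kind of analysis (not the paper's own method) is to construct a single human-interpretable axis from the difference of the two groups' average vectors and project words onto it, so that at least the dimension being measured has a nameable meaning. A minimal sketch, with a hypothetical `embeddings` dictionary:

```python
import numpy as np

def axis_projection(embeddings, group1_words, group2_words, word):
    # Build a one-dimensional, interpretable axis (e.g., a gender axis
    # from he/him/his vs. she/her/hers) as the normalized difference of
    # the two groups' average vectors.
    v1 = np.mean([embeddings[w] for w in group1_words], axis=0)
    v2 = np.mean([embeddings[w] for w in group2_words], axis=0)
    axis = (v1 - v2) / np.linalg.norm(v1 - v2)

    # Project the normalized word vector onto that axis.
    v = embeddings[word] / np.linalg.norm(embeddings[word])
    return float(v @ axis)  # > 0 leans toward group 1, < 0 toward group 2
```

This trades some of the full embedding geometry for a dimension we can actually name, which is one way of negotiating the interpretability/performance trade-off raised above.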