Computational-Content-Analysis-2020 / Readings-Responses-Spring

Repository for organizing orienting, exemplary, and fundamental readings, and posting responses.

Exploring Semantic Spaces (E1) - Caliskan, Bryson & Narayanan 2017 #28

Open HyunkuKwon opened 4 years ago

HyunkuKwon commented 4 years ago

Post questions about the following exemplary reading here:

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. “Semantics derived automatically from language corpora contain human-like biases.” Science 356(6334):183-186.

nwrim commented 4 years ago

Although I do not agree that the reaction time difference in the IAT and the location of words in vector space touch upon the same dimension of "bias", I think the results of this research are highly interesting and important. One question I had while reading the paper was whether we could run an inter-corpus comparison of these biases. In other words, can we sample different corpora under different sampling plans and see which biases appear in one corpus but not in others? (Maybe selectively sample texts from animal rights groups and compare them with texts from the general population to test the bias attached to "zoo"?) Could this be a tool for assessing groups that hold different views?
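A cross-corpus comparison like this seems doable with the paper's WEAT effect size. Below is a minimal sketch, assuming one gensim `KeyedVectors` embedding has already been trained per corpus; the word lists and the names `kv_activist` / `kv_general` are placeholders, not the stimuli used by Caliskan et al.

```python
# Minimal sketch: compute a WEAT-style effect size separately for embeddings
# trained on different corpora, then compare the two numbers.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, kv):
    # s(w, A, B): mean cosine similarity to attribute set A minus to attribute set B
    return (np.mean([cos(kv[w], kv[a]) for a in A])
            - np.mean([cos(kv[w], kv[b]) for b in B]))

def weat_effect_size(X, Y, A, B, kv):
    # Cohen's-d-style effect size as defined in Caliskan et al. (2017)
    s_X = [association(x, A, B, kv) for x in X]
    s_Y = [association(y, A, B, kv) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Hypothetical usage: kv_activist and kv_general are embeddings trained on an
# animal-rights corpus and on a general-population corpus, respectively.
# X, Y = ["zoo", "circus"], ["sanctuary", "reserve"]
# A, B = ["pleasant", "joy"], ["unpleasant", "agony"]
# print(weat_effect_size(X, Y, A, B, kv_activist),
#       weat_effect_size(X, Y, A, B, kv_general))
```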

WMhYang commented 4 years ago

I think this paper provides the fundamentals and rationale for doing projection with word embedding models. I am thinking of an application: after we have detected a bias, is it possible to design an algorithm that normalizes biased words toward unbiased ones, so that taste-based and statistical discrimination could be alleviated? For example, a black-sounding name may hurt a candidate's chances of finding a job. If we project his or her name onto one that does not carry a racial bias, it may improve his or her chances of matching white counterparts. A follow-up question would be: how should we deal with the ethical issues raised by such an algorithm?
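One concrete way to read "normalize the words with bias" is to project the bias component out of a word's vector, in the spirit of Bolukbasi et al.'s (2016) hard-debiasing idea rather than anything proposed in Caliskan et al. A minimal sketch, assuming a gensim `KeyedVectors` object `kv`; the word pairs and the name are illustrative only.

```python
# Sketch: estimate a bias direction from contrasting word pairs, then remove
# that component from a target vector (a rough "hard debiasing" step).
import numpy as np

def bias_direction(kv, pairs):
    # Average the difference vectors of contrasting word pairs and normalize.
    diffs = [kv[a] - kv[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def project_out(vec, direction):
    # Subtract the component of vec that lies along the bias direction.
    return vec - (vec @ direction) * direction

# Hypothetical usage with an illustrative race-association axis:
# pairs = [("european", "african"), ("white", "black")]
# d = bias_direction(kv, pairs)
# debiased_name_vec = project_out(kv["emily"], d)
```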

Lesopil commented 4 years ago

This is a very interesting article, and one sentence in particular caught my eye. The corpus they use is described as "containing 840 billion tokens (roughly, words). Tokens in this corpus are case sensitive, resulting in 2.2 million different ones" (2). I am wondering why the tokens are case sensitive in this instance, rather than following our standard approach of lowercasing everything. Obviously this has some impact on the corpus, and it seems to be a positive one. When should we consider using case-sensitive versus case-insensitive tokens?
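For intuition, here is a tiny sketch of what lowercasing collapses that case-sensitive tokenization keeps apart; the sentence is made up for the example.

```python
# Compare the vocabularies produced by case-sensitive vs. lowercased tokenization.
from collections import Counter

text = "Apple shipped the apple pie. US officials told us the May report may slip."

cased = Counter(text.replace(".", "").split())
lowered = Counter(text.replace(".", "").lower().split())

print(sorted(cased))    # keeps 'Apple'/'apple', 'US'/'us', 'May'/'may' distinct
print(sorted(lowered))  # collapses them, shrinking the vocabulary
```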

shiyipeng70 commented 4 years ago

The authors stress the importance of redressing harmful biases, but they also maintain that all biases are meaningful. It seems to me that even though some biases are harmful, they are still reflections of our cultural environment. Is it really good for us to deliberately erase these biases? Can we really change social inequality and discrimination by doing such work?

timhannifan commented 4 years ago

I'm interested in the technical aspect of "explicit characterization of acceptable behavior" and "explicit instruction of rules of appropriate conduct." What would this look like in practice? Is it a matter of excluding features or subsets of data, or maybe re-weighting vector importance?
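As one hypothetical reading of "excluding ... subsets of data," a pre-training filter could drop documents containing terms from a chosen exclusion list before fitting an embedding. This is a sketch under that assumption, not a procedure the authors describe; the exclusion list and corpus are placeholders.

```python
# Sketch: filter training documents against an exclusion list, then train
# an embedding on the remaining documents only.
from gensim.models import Word2Vec

exclude = {"placeholder_slur", "placeholder_stereotype_term"}

def filter_corpus(tokenized_docs, exclude_terms):
    # Keep only documents that mention none of the excluded terms.
    return [doc for doc in tokenized_docs
            if not exclude_terms.intersection(doc)]

# Hypothetical usage:
# clean_docs = filter_corpus(tokenized_docs, exclude)
# model = Word2Vec(sentences=clean_docs, vector_size=100, min_count=5)
```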

An extension of this analysis could look at court transcripts and case outcomes/sentences, as well as the statistical regularities in defendant profiles. Based on the authors' results, we might observe the same "unthinking reproduction of statistical regularities" in a different context.

pdiazm commented 4 years ago

As the authors note, eliminating bias is eliminating information; however, I don't think they elaborate on a scalable approach to discriminating between prejudice and bias so that we can eliminate or mitigate the former and use the latter. Are there examples of use cases in which researchers systematically remove prejudice, or in which they add or remove bias depending on the task?