uchicago-computation-workshop / Winter2021

Repository for the Winter 2021 Computational Social Science Workshop

01/14: Aylin Caliskan #1

Open smiklin opened 3 years ago

smiklin commented 3 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

ghost commented 3 years ago

If machines learn human-like biases, would you relate that more to statistical discrimination or to taste-based discrimination?

JadeBenson commented 3 years ago

Thank you so much for sharing this research - I think this will be one of the most important issues to address going forward in the field. My question is very similar to those that many have already mentioned, but I'm particularly interested in how to break down the institutionalized bias that these algorithms operate within. How do we convince the companies that use these types of algorithms that they are potentially harmful (even if profitable) and then motivate them to change their approach? What do you see as the role of consumers, policy makers, academics, and businesses in addressing this problem?

MegicLF commented 3 years ago

Thank you for discussing this interesting topic! I wonder if it is possible to reduce or even eliminate those observed biases introduced by human behavior in the pre-training data. Do you have any suggestions on doing so? Or is it possible for us to develop a model that can resolve this issue?

mingtao-gao commented 3 years ago

Thank you for sharing your work! It is a very interesting paper. My question is: how would the results be interpreted differently if we trained the model on corpora in languages with grammatical gender, like Spanish and French?

Qiuyu-Li commented 3 years ago

Thank you for presenting these outstanding findings. The topic is very interesting to me! I read an article a while ago about how AI assists a county Office of Children in deciding whether child abuse or neglect has occurred. The article highlights the fact that the machine reads in human-biased data and produces biased predictions that only reinforce those biases. Could you talk about how you see the future of AI's participation in decision-making? Is it just producing self-fulfilling prophecies of human prejudice?

jinfei1125 commented 3 years ago

First, thank you so much for sharing your excellent work with us! The finding that AI exhibits human-like biases after learning from corpora is very interesting and shocking. After reading your articles, my first thought was: is there any future work that could de-bias these judgments of AI, so that AIs can make neutral decisions free of bias? However, just as @Yutong0828 mentioned, I think discrimination is a very complex phenomenon, and more work is needed to maintain fairness than simply using AI to correct it. Do you have any opinions on this?

YijingZhang-98 commented 3 years ago

Thanks for your work! This is an interesting discussion on the application of AI, and I really enjoyed learning about it. I am curious about whether we could mitigate human-like bias. The bias comes from the training data, which is created by humans, so the trained model has also "learned" the bias. If we could double-check the classification, could we mitigate this problem? Moreover, is human-like bias necessarily a bad thing when applying ML to decision-making prediction? It bears noting that people are naturally biased, and the machine has simply learned their behavioral patterns correctly.

weijiexu-charlie commented 3 years ago

Thanks for your presentation. Given that the results of your study indicate that state-of-the-art pre-trained image models do produce human-like bias, I'm wondering whether you have any thoughts on how to address this problem. On the other hand, I'm curious whether the human-like biases that appear in these pre-trained machine learning models can help us better understand the biases held by human beings ourselves.

shenyc16 commented 3 years ago

Thank you for showing us this interesting research. I have a question relating to the human-like biases discovered in language corpora. Are those results merely statistical reflections of prejudices in the real world, or spontaneous relationships discovered by machine learning? Also, how do you think the conclusions derived in this research will affect social opinions on controversial topics?

kthomas14 commented 3 years ago

Thank you for sharing your research with our workshop! I think that trying to tackle something like human bias is a very difficult task. Would an ideal approach to eliminating human bias from machine learning models be to limit the training set in order to fit the desired outcome and prevent bias in a controlled manner, or to expand the data sample in order to increase exposure?

YileC928 commented 3 years ago

Thank you so much for sharing your wonderful work with us. What do you think are the practical implications of your findings for future AI implementations? How might those biases be mitigated: through the training and production processes, or through more ethical applications?

goldengua commented 3 years ago

Thank you for this fantastic research. Your papers raise important ethics questions for the many disciplines that use machine learning methods, and for society as a whole. I was wondering how we can build better computational models that minimize human-like bias. I really appreciate the idea of assessing ML models with psychological tests. In addition, I think we might be able to uncover bias in human society by learning the patterns in the corpora.

egemenpamukcu commented 3 years ago

Thank you for sharing your work. I would like to hear more about the next steps on how to tackle the problem and "cleanse" AI of human-like biases. Also, do you think that, as AI gets involved in our daily lives more and more over time, the negative effects of such biases will become more devastating? It seems to me that reinforcing these biases may have more subtle long-term implications too. Finally, do you think the bias is caused more by the algorithms or by real-world data? If it is the latter, would it be possible, or helpful, to tweak the algorithms in a way that deliberately reduces bias in the outcome?

MkramerPsych commented 3 years ago

Dr. Caliskan,

Thank you so much for sharing your research with us! I have done previous research in analyzing systemic biases in Deep Neural Networks, and I feel you are bringing very important issues to attention with the 2021 paper. I have two questions, one more specific to the paper and then a more general one.

  1. Is there a concern about "levels of bias" when developing a pipeline for bias detection in a dataset? For supervised learning, the bias between one label and another is the entire purpose of using machine learning in the first place. Could a pipeline be created such that detected bias is flagged only when it exceeds a threshold level, rather than being treated as all-encompassing?

  2. Your paper discusses the danger of bias invading transfer learning applications, but does not suggest any solution to the problem. Do you think the answer to reaching less biased models stems from controlling the initial datasets before they are used to train models or some form of bias detection and mitigation later in the deep learning pipeline?

Thank you for your time.

YaoYao121 commented 3 years ago

Thank you for sharing this fascinating research! I am very excited to read a paper that combines computational methods and social science so well. Your paper points out that machine-based predictions will also be biased if the original data is biased. I am curious how we could deal with this and get unbiased predictions. Thanks!

TwoCentimetre commented 3 years ago

Recently, I read a piece of news about an app that can generate original pictures according to a description given by the user. There is also an app that can generate fake human faces. If we combine these two things, would that mean we no longer need CGI or human actors, since we can make original fake human images? Also, I wonder if ML can learn from memes on the internet, which contain a lot of creative expression and sarcasm.

k-partha commented 3 years ago

Thanks for sharing your research! Your application of embeddings in this regard is both innovative and revealing of fascinating patterns. I was wondering whether this form of analysis can be standardized to explore biases in particular ecosystems - highlighting the variation of bias across social/academic ecosystems?

ddlee19 commented 3 years ago

Thank you for your presentation! My question is about the potential for AI to create unbiased knowledge. Is AI's learning process different from that of the human mind? And is this difference the source of the potential to create unbiased knowledge?

ttsujikawa commented 3 years ago

Thank you very much for such an intriguing paper; I am very excited to hear your presentation. The main focus of the paper has been widely discussed, since AI itself can only be purely objective when its training data is not biased or prejudicial. In other words, this fact poses critical questions for the usage of AI technologies. I am fascinated by the paper because it questions the credibility of natural language training data with empirical evidence. I believe the labeling process is where humans are most likely to introduce biases into training data, and these days that process is gradually being automated by AIs. My concern is that biases generated by labeling could become increasingly problematic in the future. If you have any thoughts on this, could you please share them?

romanticmonkey commented 3 years ago

Thank you for your presentation! I wonder if biases can be culture-specific. For example, might your model be a generalization of biases in American culture only? Can these biases be applied globally?

adarshmathew commented 3 years ago

(I can't come up with a question that hasn't already been covered by everyone here. Looking forward to this.)

Qlei23 commented 3 years ago

Thank you for sharing your work with us. It's eye-opening to learn of the existence of human-like bias in machine learning models. I wonder if there are ways to "clean" the pre-training data set, since it's harder to de-bias the model afterwards.

chuqingzhao commented 3 years ago

Thank you so much for sharing your fascinating research! I enjoyed the idea of identifying social biases in natural language processing techniques. I have two questions: 1) I am wondering how to debias the results: now that we can identify social bias, how do you think we should develop tools to reduce it? 2) Since you examined static word embedding models in the 2020 paper, I am curious whether we can also look into dynamic word embedding models, where, for example, biases could be identified by the trajectories of word pairs through time. Thanks!
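To make the trajectory idea concrete, here is a rough sketch of what I mean. It uses random vectors as stand-ins for decade-sliced pre-trained embeddings and a hypothetical word triple; it only illustrates tracking a pairwise association over time, not a method from the papers (in practice the embedding spaces would also need to be aligned across slices, e.g. with orthogonal Procrustes).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical decade-sliced embeddings: {year: {word: vector}}.
# In practice these would come from models trained on corpora from each period.
rng = np.random.default_rng(0)
slices = {year: {w: rng.normal(size=50) for w in ("engineer", "woman", "man")}
          for year in (1960, 1980, 2000)}

# Trajectory of the relative association of "engineer" with "woman" vs. "man".
for year in sorted(slices):
    emb = slices[year]
    delta = cosine(emb["engineer"], emb["woman"]) - cosine(emb["engineer"], emb["man"])
    print(year, round(delta, 3))
```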

tianyueniu commented 3 years ago

Thank you so much for sharing the work with us! I look forward to learning more about the biases further in your talk.

Anqi-Zhou commented 3 years ago

Thanks in advance for your inspiring lecture! I have two basic questions. First, can word embeddings capture all biases? If not, how can we deal with the biases that may not be captured? Second, can this model be applied to other languages, and if so, will the accuracy be much lower? I'm looking forward to hearing more about the detailed logic!

JuneZzj commented 3 years ago

Thank you for presenting. It is inspiring to know that we can use the image Embedding Association Test (iEAT) to address the abstraction of semantic representations from images. Does the rationale behind iEAT share similarities with the way a target concept is reflected by a single token in language? Thank you.
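To spell out the parallel I have in mind: once an image is mapped to an embedding vector, it can be plugged into the same cosine-based association tests used for word vectors, much as a single token is. Below is a rough sketch using a generic torchvision ResNet as the feature extractor; this is not the exact setup or the models used in the paper, the image paths are hypothetical, and it assumes torchvision >= 0.13.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Generic pre-trained backbone with the classification head removed,
# so the forward pass returns a feature (embedding) vector per image.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return an embedding vector for one image, analogous to a word vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

# Hypothetical stimuli; each image plays the role a single token plays in the word tests.
flower_vec = embed("stimuli/flower_01.jpg")
insect_vec = embed("stimuli/insect_01.jpg")
print(torch.nn.functional.cosine_similarity(flower_vec, insect_vec, dim=0))
```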

Raychanan commented 3 years ago

I think what you are discussing in your papers is a very important topic.

Discrimination may be partially motivated by people trying to save themselves time and effort. Likewise, I think companies use these biases to recruit and fire employees out of a desire to save the company's resources and time.

So I think it is possible, and even tempting, for companies to intentionally use these discriminatory algorithms, even if they already recognize the behavior as discriminatory. I think this objectively discriminatory behavior is already widespread at companies like Facebook; a living example is the ad recommendation algorithm.

What do you think we should do about this "intentional" use of discriminatory language algorithms by companies?

Rui-echo-Pan commented 3 years ago

Thank you for sharing! This topic reminds me of the COMPAS software, which assesses potential recidivism risk with algorithms (for those who may be interested, see the Wikipedia entry). The discrimination seems inevitable because the data we use for training comes from the real world, and thus reflects the characteristics of the very phenomenon we want to study (e.g., recidivism). Do you think this could be solved with technical methods, or is it in some sense naturally inevitable?

anqi-hu commented 3 years ago

Thank you for sharing your work with us. How feasible do you think it is for us to tackle the problem of aggregated bias on a more fundamental level, i.e., by being more cautious about, and perhaps improving the quality of, the data that we feed into such algorithms?

jsoll1 commented 3 years ago

Thanks for sharing your papers with us! Do you think that there are viable ways to train models so they don't pick up this kind of discrimination, or are we forced to take these models with a grain of salt?

hihowme commented 3 years ago

Thanks a lot for your presentation! This is a truly interesting paper. The bias you describe in your paper is really interesting; how do you think it will affect strategy for tech companies and for research? Thanks a lot!

a-bosko commented 3 years ago

Thank you very much for sharing your papers! I look forward to learning more about machine learning and understanding the role of social bias in computer models. Coming from a psychology background, I wonder what you believe we can do as humans to help reduce or eliminate bias in machines. Can we teach machines empathy, or is emotional intelligence limited to humans?

Thank you very much!

Panyw97 commented 3 years ago

Thanks for sharing! Could you please elaborate on how you quantified or measured the bias you mention? What kinds of bias measures are mainstream in today's social science research? Thank you so much!

FrederickZhengHe commented 3 years ago

Thanks very much for this marvelous paper. Do you think there is any feasible way to address such bias with AI?

FranciscoRMendes commented 3 years ago

Thank you for sharing your work. I was wondering whether the bias is more a consequence of the AI itself or of the data that we share with the algorithms, i.e., whether it reflects deep societal biases that need to be addressed in society rather than simply by changing the algorithm itself.

AlexPrizzy commented 3 years ago

It's interesting to see that machine learning can produce human-like bias in semantic associations, though I don't think this necessarily means that machine learning produces semantic associations in the same way a human does. Each person will process language differently due to individual differences such as fluid intelligence, working memory, and prior experiences. Would you expect to find drastically different results if these individual differences were implemented in computer models?

timqzhang commented 3 years ago

Thank you for your research! My question is quite general: to what degree should human beings intervene in the AI process across various projects? And how do we detect the potential bias?

chun-hu commented 3 years ago

Thanks for your presentation! I'm wondering how we can quantify bias in machine learning.

zixu12 commented 3 years ago

Thanks for sharing your interesting research. Indeed, it is a great concern that AI mimics and may even amplify already existing human bias. As many fellow students have already asked about possible ways to address or correct such bias, I have a further question: if there are indeed ways to 'eliminate' human bias through a machine training approach, do you think that is legitimate and justified in principle? Some people who are skeptical of AI may fear that this is a prelude to machine learning forcibly changing the way people lead their lives.

xzmerry commented 3 years ago

Thank you for sharing the research showing that machine learning can actually yield human-like bias. Could you explain more about how this exciting finding could be applied, beyond the examples you have provided? Does this indicate that machine learning and other computing technologies might generate new types of discrimination? If so, how can we prevent them? Thanks.

cytwill commented 3 years ago

Hi, Professor Aylin.

I am interested in your ValNorm paper; I also did some research on the intrinsic evaluation of word embeddings last quarter. Bias in word embeddings has long been discussed in other papers. In your paper, valence/pleasantness becomes the dimension along which bias is measured. From my perspective, the choice of a specific bias should be relevant to the task; for example, if we want to understand gender bias in different languages, we might use gender-related vocabulary in the evaluation. From this point of view, your approach seems to narrow the use of WEAT and WEFAT down to the scope of pleasantness/unpleasantness, so I am wondering how you would defend the generality and novelty of this method.
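For context on the statistic I am referring to, here is a minimal numpy sketch of a WEAT-style differential association with pleasant/unpleasant attribute sets, as in the valence-based tests. The word lists are toy examples and the vectors are random stand-ins for real pre-trained embeddings, so this only illustrates the general form of the measure, not the exact ValNorm procedure.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B, emb):
    """s(w, A, B): mean similarity of w to attribute set A minus to attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def effect_size(X, Y, A, B, emb):
    """WEAT-style effect size comparing target sets X and Y on attributes A vs. B."""
    sx = [assoc(x, A, B, emb) for x in X]
    sy = [assoc(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy word lists with random vectors standing in for real embeddings.
rng = np.random.default_rng(42)
words = ["flower", "insect", "rose", "spider", "love", "peace", "hate", "ugly"]
emb = {w: rng.normal(size=50) for w in words}

X, Y = ["flower", "rose"], ["insect", "spider"]   # target concepts
A, B = ["love", "peace"], ["hate", "ugly"]        # pleasant vs. unpleasant attributes
print(effect_size(X, Y, A, B, emb))
```

With real embeddings, a large positive effect size would indicate that the first target set is more strongly associated with the pleasant attribute words than the second.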

lyl010 commented 3 years ago

Thank you for your presentation! I would like to know more about the effects of bias in machine learning: why could it become a serious problem if left unsolved? Thank you!

minminfly68 commented 3 years ago

Thanks for sharing the presentation. We are very impressed by it and would like to know more about its applications from a social science perspective. Thanks!

chentian418 commented 3 years ago

Thanks for the interesting research and its careful designs! I'm puzzled by the human-like bias: do the word-embedding associations really indicate human-like bias? For example, given the strong association of European American names with pleasant terms, how can we argue that human-like bias in the social science sense helps account for these statistical associations? Thanks!

luyingjiang commented 3 years ago

Thank you for sharing. To my understanding, word embeddings are explicit representations of the contexts in which words appear, and hence the algorithm replicates the biases in the original corpus. Did you consider using pre-trained embeddings? And how can we make better computational models that minimize human-like bias?

Yiqing-Zh commented 3 years ago

Thank you for your presentation in advance! I am curious about whether we can completely eliminate bias in machine learning in the future and how social sciences can help in this process. Thank you!

ziwnchen commented 3 years ago

Thanks for the presentation! It is interesting to learn that image and word representation learning models also pick up human-like bias. It would be very helpful if you could also introduce us to some of the potential debiasing methods suggested by the research community. For example, is manipulating the training set an effective way of debiasing? Are there any methods for investigating whether an input dataset is harmful or harmless? One relevant cautionary example of this kind is the famous 'psychopath' artificial intelligence named Norman.

harryx113 commented 3 years ago

I think your research touches on some very important points. One of them is the accountability of AI. As humans, speakers take responsibility for what they say, but when models are trained, no one takes responsibility for what an AI says. Where do you see this area going?

j2401 commented 3 years ago

Thanks for sharing! Though I am a total layman when it comes to CS techniques, I'm still impressed by how you approach and address this problem and point out its potential for discussion in the social sciences.

yiq029 commented 3 years ago

Thanks for sharing! I also have a question about the bias caused by human behavior in the pre-training data. I am curious about how to deal with such a problem when we use machine learning techniques. Thank you.