uchicago-computation-workshop / Winter2021

Repository for the Winter 2021 Computational Social Science Workshop

01/14: Aylin Caliskan #1

Open smiklin opened 3 years ago

smiklin commented 3 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

nwrim commented 3 years ago

Thank you so much for presenting at our workshop. Both articles were truly interesting, and I think both have important implications for everyone working in computational social science or AI more generally. My questions below focus more on the 2021 article.

Upon reading the article, I kept thinking about supervised models, although your articles center on unsupervised models. Specifically, I wondered whether classification models based on supervised learning would contain similar biases as well. For example, would an image of a woman wearing a business suit be less likely to be classified as "woman" than an image of a woman wearing a bathing suit (an example largely inspired by Figure 3 and Section 7.3.3 in your 2021 article)?

In addition, I wonder whether the bias would be amplified if we used classification models to choose the stimulus set in the 2021 study. Instead of the carefully selected images used to conduct the iEAT, what would happen if we fed in a large number of images that a classifier had labeled as containing the object in question? Would the bias be amplified, suggesting that the problems compound even further when two AI models are combined?

Thank you again for presenting at our workshop.

rkcatipon commented 3 years ago

Dr. Caliskan, it's a pleasure to have you at our speaker series and I look forward to your presentation!

On page 3, the study mentions that the results support the "...distributional hypothesis in linguistics, namely that the statistical contexts of words capture much of what we mean by meaning." Out of curiosity, I wondered whether there is some way to measure the missingness of meaning, i.e., the meaning in a text that a model could not detect. For example, I recently heard of the Winograd Schema Challenge, in which a machine is tasked with intelligently identifying the subject of a sentence. The schema often relies on ambiguous pronouns: with a single switch of a word, the sentence's meaning changes. The classic example:

The city councilmen refused the demonstrators a permit because they feared violence.

The city councilmen refused the demonstrators a permit because they advocated violence. (Wiki, Winograd Schema Challenge)

A document-term matrix would easily identify "councilmen", "demonstrators", and "violence" as having a statistical likelihood of appearing together, and a model would deduce an association, but the relationship between the terms would be lost. Is there a way to quantify this loss of meaning?

Perhaps it does not matter if we're looking at large-scale corpora and using AI to detect common patterns? Perhaps the loss of meaning occurs at the single-sentence or paragraph level and matters less when looking at larger bodies of text? I'm not sure if this question makes sense, so more generally, I would love to know what you think of the limitations of NLP/computational linguistics methods in deriving semantics!
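To make the document-term-matrix point concrete, here is a minimal sketch (assuming scikit-learn is available; the word lists and printout are purely illustrative) showing that a bag-of-words representation assigns the two Winograd sentences nearly identical vectors, so the flipped referent of "they" leaves no trace:

```python
# Minimal illustration: a document-term matrix cannot distinguish the two
# Winograd sentences beyond the single swapped verb, even though the
# referent of "they" (councilmen vs. demonstrators) has changed.
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "The city councilmen refused the demonstrators a permit because they feared violence.",
    "The city councilmen refused the demonstrators a permit because they advocated violence.",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(sentences).toarray()

print(vectorizer.get_feature_names_out())
print(dtm)  # the two rows differ only in the "advocated"/"feared" columns
```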

bakerwho commented 3 years ago

Dr. Caliskan, thanks so much for presenting your work! I really enjoyed reading your 3 papers on 'human-like biases' - my own research interests and thesis are very much along these lines.

The Intersectional Bias Detection (IBD) test is very interesting to me because it seems like a great way to capture complex 'aggregated biases', such as a set of political attitudes forming an ideology. Especially in the highly polarized landscape of the present, I'm curious what your thoughts are on using IBD to measure polarization. Specifically, one might see polarized embeddings (in the American political context) show neat 'good/bad' divergences on political topics like 'tax', 'medicare', 'business', and 'welfare policy', but less stark differences on 'war', 'industry', 'growth', or 'China'.

How else can we think about bias in embeddings as a dynamic rather than a static process - i.e. polarization?
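As a rough illustration of the kind of measurement this question has in mind, here is a WEAT-style sketch of the differential-association effect size (a sketch in the spirit of the 2017 paper, not Dr. Caliskan's released code); the `emb` lookup and all word lists are hypothetical placeholders:

```python
# Sketch of a WEAT-style differential association: how much more strongly are
# the target words in X associated with attribute set A (vs. B) than the
# target words in Y? `emb` is assumed to map words to numpy vectors.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def effect_size(X, Y, A, B, emb):
    # Difference in mean association between the two target sets,
    # scaled by the pooled standard deviation (Cohen's d-style).
    s_X = [association(x, A, B, emb) for x in X]
    s_Y = [association(y, A, B, emb) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Hypothetical usage with the polarized vs. less-contested topics above:
# d = effect_size(["tax", "medicare", "welfare"],   # X: contested topics
#                 ["war", "industry", "growth"],    # Y: comparison topics
#                 ["good", "honest", "freedom"],    # A: positive attributes
#                 ["bad", "corrupt", "wasteful"],   # B: negative attributes
#                 emb)
```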

Lynx-jr commented 3 years ago

Dr. Caliskan, thanks so much for sharing your previous work as well and for giving this presentation! To the best of my knowledge, MoCo takes less time to train than iGPT-L and SimCLR; would it be worthwhile to explore the biases of MoCo in the future? And how many days did it take to train iGPT-L and SimCLR, respectively?

Yutong0828 commented 3 years ago

These are very inspiring studies! They give me a sense of how machine learning methods can add to psychology research by making it possible to study people's past thoughts and behaviors through historical records. I was wondering, though, how we should interpret the results. The correlation between the target words and attributes could reflect either facts or human attitudes and decisions, and in the former case it is not so convincing to still call it "bias" or "discrimination." I suppose my question is similar to a more general question about discrimination: sometimes a stereotype is rooted in facts and probabilities, and it can save people time when making decisions, but its existence can still hurt some people.

In AI's case, the same dilemma exists: while computer scientists hope to develop algorithms that help people make decisions and avoid risks, those algorithms may inevitably leave out some groups and create unfairness. What do you think about this problem? Can further development of AI help distinguish the useful, less harmful "correlations" from the more harmful ones? Thanks very much!

sabinahartnett commented 3 years ago

Thank you for sharing your research! At the end of both articles I found myself wondering what we can do about these biases.

The "potentially harmful biases" discussed in both the semantics-focused and image-focused papers appear to me as especially harmful when accepted as ‘fact’ (as many things regurgitated by computers are)... The unsupervised models described in both articles clearly pick up on (and have the threat of further propagating) these biases - so I am wondering if you see potential in interventions or supervised models informed by social psychology or cognitive science? For example, if the machine is recognizing a connection between ‘female’ and ‘family’ which is a stereotype also common in humans, would it be potentially advantageous to train that model on common stereotypes and biases to then flag those connections or cut those ties? How do you see the field best moving forward to address these biases (and end their reproduction by machines) in the most ‘fair’ way possible?

PAHADRIANUS commented 3 years ago

Thank you for presenting these outstanding findings. You argue persuasively for the existence of biases and prejudice in machine learning models, using both semantic and image-based examples, and I suppose similar phenomena can be observed in many other applications of machine learning. You also locate the origins of such bias with precision and caution us about the danger of using generative models without discretion. But as you point out, a large portion of the observed biases are introduced by human behavior in the pre-training data, and we have yet to find out whether the machine learning methods themselves also contribute to the problem. Another question is whether it is possible to build a model that purposefully resolves the issue, for instance one with adjustable features for tuning the level of bias, or one that detects and downplays biases automatically?

bowen-w-zheng commented 3 years ago

Thank you for sharing your research! The idea of adapting traditional tests from social psychology to quantify biases in algorithms is fascinating. I wonder if we could go further and borrow interventions from sociology/social psychology to help build less biased models.

wanitchayap commented 3 years ago

Thank you in advance for your presentation! Algorithmic bias is a very important topic these days, and human bias is likewise an important phenomenon to pay attention to. I believe that studying one side of this coin sheds light on the other side as well! I am very much looking forward to your talk 😄

I completely agree with @bakerwho 's question. Human bias can be very complex and dynamic, with many dimensions and facets across time, space, culture, etc. How can we ensure that word embeddings capture such nuances well enough for us to understand these biases holistically, especially since, as we add more of these nuances to our analyses, we may inevitably introduce confounding effects from the unrepresentativeness of the corpus into our results?

In addition, how do you see these biases, which could be very harmful to society, being ameliorated? Do you think it is possible to have an algorithm that recognizes its own biases and corrects them in an ethically desirable way? Or is it more likely that we will still need humans (who are themselves subject to these biases) to supervise the algorithm? Could an algorithm at some point be fairer than humans and thus serve as a remedy for the current biases in society?

skanthan95 commented 3 years ago

Thank you for sharing this fascinating and deeply relevant research! It's particularly striking to me that unsupervised models learned problematic human biases as well. I expected stronger prevalence in models that had worked with labelled data, but it's clear that these biases run so deeply that unsupervised models pick them up too. Given that these problems seem to surface regardless of whether a model uses labelled or unlabelled data, is there a way to gauge whether supervised or unsupervised models are more or less susceptible to this problem, or to compare the likelihood and extent of susceptibility? Are both kinds of models comparably likely to misclassify the image in the third panel below?

Or is it highly context-dependent (i.e., supervised models aren't necessarily always more susceptible to learning human biases than unsupervised models)?

[image referenced above]

boyafu commented 3 years ago

Thank you for sharing this fascinating paper! I am excited about the intersection of technology and traditional topics in social science. I was wondering whether the bias in these models can be fully addressed through technical optimization, or whether we need to bring in ideas from the social sciences. Thanks!

YuxinNg commented 3 years ago

Thank you for sharing! My question is the same as @boyafu 's and @bowen-w-zheng 's: can social science help reduce the bias? If so, how? It would be helpful if you could give more cases to demonstrate. Looking forward to your lecture! Thanks!

william-wei-zhu commented 3 years ago

As the (online) social world becomes increasingly aware of the importance of cultural, racial, and gender equity, will computational models trained on images and text posted recently (in the last two years) have less bias than models trained on images and text from 5 or 10 years ago?

mikepackard415 commented 3 years ago

Thanks very much for sharing your research with us. It seems intuitive to me that AIs trained on human-generated data (whether words or images) would exhibit historical and cultural human biases, particularly aligning with the biases of those who produce most of the data. If we want to avoid these kinds of outcomes, we are essentially asking the AI to be smart about some things and to ignore discovered patterns elsewhere. Do you see a productive path forward for AI that does all the cool things we hope it can do while avoiding human cultural and historical biases, or do you think biases in AI are inevitable so long as biases exist in the human population?

lulululugagaga commented 3 years ago

This topic is super important, and thank you for bringing the discussion here. My question is: do you see any reproducible work that effectively improves the situation? I've read papers on correcting biased recidivism algorithms, but I feel that other areas might be quite different, leaving the problem underestimated.

Jasmine97Huang commented 3 years ago

Thank you, Dr. Caliskan, for presenting this research. My question is related to the pre-training data you used in your 2021 study. Both models are pre-trained on ImageNet 2012. Would the representation of certain demographic or sociocultural categories in that dataset affect the resulting biases?

NikkiTing commented 3 years ago

Thank you for sharing your work! I have a similar question to @william-wei-zhu 's. I think it would be interesting to study the trend of how biases change over time based on your research. Have you observed any such patterns in your current work? I look forward to your presentation!

qishenfu1 commented 3 years ago

Hi Prof. Caliskan, thank you for sharing! You mentioned that these AI and machine learning applications can shape our society. Do you think they can help people correct some biases and stereotypes, thereby helping human beings overcome some limitations in our ways of thinking?

luckycindyyx commented 3 years ago

Thank you for sharing such interesting work with us. I also have a question about the trend over time: do you think social scientists will become more aware of this problem and adjust details like assumptions or algorithms so that the bias decreases significantly, or will no big change happen in this field? Thank you!

vinsonyz commented 3 years ago

Thank you for your presentation, Aylin! How can we properly combine social science intuition with artificial intelligence in social science research?

siruizhou commented 3 years ago

Thank you for bringing up this issue. It is concerning that machine learning technologies could amplify biases as they become popular tools in daily life. I'm interested in learning more about the field of fairness in machine learning.

fyzh-git commented 3 years ago

Thank you for presenting these fantastic ideas. They showed me the feasibility of using machine learning for semantic analysis in order to uncover non-obvious biases, a very smart way of exploiting the fact that text corpora and images capture semantics.

ginxzheng commented 3 years ago

Thank you for coming, Aylin! I wonder whether there is any possibility of having the machine self-correct its own biases. Would this be easier in text processing or in image recognition? Many thanks.

ydeng117 commented 3 years ago

Thank you for your presentation. The topic of bias from data is really crucial in today's development of AI. I think the true issue is that machine learning is amplifying our biases and presenting them to a broader set of users. The fundamental problem is not how carefully we choose our data but how human ethics can be taught.

afchao commented 3 years ago

Thank you for sharing your work with our group! My question is on the social end of these findings: is approaching the topic of bias in AI somewhat putting the cart before the horse? As some have already noted, it seems reasonable to expect that an AI trained on human-generated data would adopt human-like biases. Although this is obviously problematic for the blind application of such AI, resolving the biases in these tools seems like a small portion of the larger problem of resolving the biases in human society. I believe solutions to the latter problem would address the former along the way, whereas it's unclear to me that addressing bias in AI tools would influence the progression of tolerance in society. Part of this concern is also based on the assumption that people who pause to reflect on whether they're using biased tools are, almost by definition, already trying to minimize the amount of bias in their perspective, whereas people who aren't bothered by the prospect of incorporating bias into their decisions probably don't care whether they're using biased tools in arriving at those decisions.

yongfeilu commented 3 years ago

Thank you so much for the presentation! After reading your work, I am very curious about how you minimize the bias of your predictive models. Since data and statistics indicating prediction accuracy can be tortured to support a researcher's conclusions, can we find a way to identify such dishonesty in the future using AI or machine learning methods? Thanks again!

chrismaurice0 commented 3 years ago

Thank you for sharing and presenting your work with us! My question is about the responsibility to regulate this emerging technology. In addition to the bias that machine learning models replicate when learning from our own behaviors, which you uncover in your papers, there have been several cases of Black men wrongly accused and jailed for crimes they did not commit due to faulty facial recognition. Do you see your role as a researcher as solely bringing these issues to light, or are there other steps you and your colleagues are taking to show the problems with these models? Or does the responsibility to eliminate bias from models instead fall on the organizations creating them?

Yilun0221 commented 3 years ago

Thank you for the presentation! My question is about this paper: Toney&Caliskan(2020).pdf. In it, the algorithms achieved high accuracy on the translated words. I wonder whether the algorithms would still work when the texts are more literary and more complex, meaning that translations may vary greatly from person to person.

bjcliang-uchi commented 3 years ago

Very interesting research paper! Word embeddings essentially measure words' semantic meanings by their contexts, and thus an algorithm should by nature replicate the biases in the original corpus. So my question is: the current paper explores the yes-or-no question of whether stereotypes exist, but what about the change in stereotypes over the years and across different corpora? Also, I am wondering about your opinion on how to fix these biases in the policy-making process: should we better balance the training corpus or change the algorithm itself?
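One way to make the "change over the years and across corpora" part concrete would be to compute the same kind of differential-association score against embeddings trained on different time slices of a corpus. A self-contained sketch follows, where the gensim loader usage, the file names, and the word lists are all hypothetical placeholders rather than anything from the papers:

```python
# Hypothetical sketch: track a career/family gender association across
# embeddings trained on different decades of a corpus.
import numpy as np
from gensim.models import KeyedVectors

CAREER = ["executive", "management", "salary"]
FAMILY = ["home", "parents", "children"]
MALE = ["he", "him", "man"]
FEMALE = ["she", "her", "woman"]

def mean_sim(emb, targets, attributes):
    # average cosine similarity between every target and attribute word
    return np.mean([emb.similarity(t, a) for t in targets for a in attributes])

for decade in ["1960s", "1980s", "2000s"]:
    emb = KeyedVectors.load_word2vec_format(f"embeddings_{decade}.vec")  # hypothetical files
    bias = ((mean_sim(emb, CAREER, MALE) - mean_sim(emb, CAREER, FEMALE))
            - (mean_sim(emb, FAMILY, MALE) - mean_sim(emb, FAMILY, FEMALE)))
    print(decade, round(bias, 3))  # a shrinking value would suggest a weakening stereotype
```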

wanxii commented 3 years ago

Really impressive and intriguing studies! As your research has proceeded from words to images, I have been inspired to think about the possibility of examining potential stereotypes embedded in audio or even video (e.g., commercial advertisements). I understand those would require integrating more factors, but I wonder whether such studies are feasible and meaningful.

heathercchen commented 3 years ago

Thank you in advance for your presentation! It is very interesting to see artificial intelligence become "human-like" when we try to make it think and operate as humans do. What do you think causes this kind of bias? Do you think it is, in a way, a reflection of the advancement of today's technology?

yutianlai commented 3 years ago

Thanks for your presentation. I'm wondering how social science could help reduce bias.

Dxu1 commented 3 years ago

Thank you for two interesting papers. Both focused on unsupervised machine learning. I wonder whether certain undesirable biases (gender, race, etc.) could be mitigated through supervised models. Also, I am curious whether you looked into texts in other languages in your 2017 paper.

Tanzi11 commented 3 years ago

I look forward to your presentation! How can images be filtered or screened to prevent an unsupervised machine learning model from adopting social human biases from the data? Also, what is meant by "intelligent systems," and how do you foresee this occurring? Many thanks!

hhx2207061197 commented 3 years ago

Thanks for sharing. I just want to know how we can combine such AI techniques with research in economics.

ChivLiu commented 3 years ago

Thank you for sharing this great presentation with us! In my opinion, biased data reflects the diversity of human thought and cultural differences, but it might also create difficulties for training a general model. It is brilliant to have AI filter and organize the data. I wonder, though, whether those AI systems would themselves become biased when facing data from the same background as the previously biased data?

hesongrun commented 3 years ago

Thanks for the wonderful presentation! I am curious to learn how to quantify bias in machine learning and how we define the welfare of society. Can we adopt measures to mitigate the bias induced by machine learning? Thanks!

alevi98 commented 3 years ago

Thank you Professor Caliskan for sharing your work and presenting at the workshop! Your research is critical in a world where better understanding the path towards equity is so pressing. One question: have you ever tried to train an algorithm with text corpora that are explicitly anti-biased? For instance, maybe working in collaboration with an equity specialist, or seeking out content online in spaces that are likely to be less prone to bias?

luxin-tian commented 3 years ago

Thank you very much for your sharing. I wonder whether similar patterns exist in supervised learning models, and how we could quantify and reveal them, if any, across a wider range of machine learning algorithms.

linghui-wu commented 3 years ago

Welcome to our workshop, Professor Caliskan. I really love your work as it reminds me of studies on the gender discrimination behind Amazon's automated hiring tools. Looking forward to hearing more details about your research!

caibengbu commented 3 years ago

Thank you very much for your sharing. I wonder whether similar patterns exist in supervised learning models, and how we could quantify and reveal them, if any, across a wider range of machine learning algorithms.

RuoyunTan commented 3 years ago

Thank you for sharing your work. I agree that there may have been, and will be, unintended consequences in the applications of AI, and I really enjoyed reading your work. Looking forward to your presentation.

xxicheng commented 3 years ago

I am wondering whether the bias results depend on the content we use to train the model. If so, how should we reduce this bias? Looking forward to tomorrow's presentation :)

XinSu6 commented 3 years ago

Thank you so much for sharing this fascinating research. I am just wondering whether you think the models and methods mentioned can be used in other fields.

chiayunc commented 3 years ago

Thank you so much for your papers. They are fascinating reads. In the conclusions, you point out that these models are greatly impacted by the fact that the training data pools are not representative. The questions above have suggested algorithmic ways to ease these implications for future applications of these types of models, but what if we choose to tackle the problem at the root, i.e., increase the data's diversity to a level of reasonable representation? An extremely easy, though rough, method might be to include, for example, images retrieved using search queries in different languages. Do you think this might reduce the learned bias?

yierrr commented 3 years ago

Thanks for such intriguing research! This reminds me of another paper that applied text analysis to posts on the forum EJMR and found that popular topics about women were much less professional or academic than those about men. However, I have a question regarding text or content analysis work in general: how do researchers justify working with a probably biased sample? Thanks!

NaiyuJ commented 3 years ago

Thanks for sharing these wonderful works! I find that all four papers are related to bias. I'm wondering how you came up with these research ideas, step by step, from one paper to the next.

mintaow commented 3 years ago

Thanks so much for sharing this research. I appreciate these findings, as they could be crucial for building algorithms that learn from semantics, images, or videos free from bias.

I am particularly curious about the impact of the findings: is it possible to quantify the bias that humans might learn from the way people are portrayed in images or the way words are aggregated (so that we could take this into account in classification or decision processes)? In other words, would it be feasible to design an algorithm that could neutralize the bias effect?

Bin-ary-Li commented 3 years ago

Thank you for joining us in the seminar and sharing your wonderful work. I find your paper in Science particularly interesting; it also confirms a belief that many of us have held, i.e., that social biases buried deep in all sorts of human records can be learned by machine learning models. In recent years, many machine learning scientists have paid more and more attention to research on explainable neural network models and the explainability of ML models in general. What do you think about this trend? Are there any findings from this area that you find particularly exciting?

WMhYang commented 3 years ago

Thank you very much for the interesting papers. When I first came across the concept of "human bias," I kept thinking about how to alleviate the impact of these biases, especially the harmful ones. Hence, I was wondering whether, with the help of the ML models in the papers, it is possible to come up with policy implications that could encourage society to recognize the issue and try to attenuate it. Thanks.