Thinking-with-Deep-Learning-Spring-2022 / Readings-Responses

You can post your reading responses in this repository.

Week 3 - Possible Readings #18

jamesallenevans opened this issue 2 years ago

jamesallenevans commented 2 years ago

Pose a question about one of the following articles: “Norman, World’s first psychopath AI”, 2018. P. Yanardag, M. Cebrian, I. Rahwan; “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” 2018. Y. Wang & M. Kosinski. Journal of Personality and Social Psychology 114(2): 246; “Dissecting racial bias in an algorithm used to manage the health of populations.” 2019. Z. Obermeyer, B. Powers, C. Vogeli, S. Mullainathan. Science 366(6464): 447-453; “Semantics derived automatically from language corpora contain human-like biases.” 2017. A. Caliskan, J. J. Bryson, A. Narayanan. Science 356(6334): 183-186; “Aligning Multidimensional Worldviews and Discovering Ideological Differences.” 2021. J. Milbauer, A. Mathew, J. Evans. EMNLP.

JadeBenson commented 2 years ago

The Obermeyer et al. article “Dissecting racial bias in an algorithm used to manage the health of populations” is so inspiring. I think this article tackles an incredibly important issue with clarity, extensive research, empathy, and solutions. Their explanation of the “problem formulation,” especially in health fields, particularly caught my attention. They thoroughly explore why future cost is an unfair label to use, even though it was a reasonable choice by the algorithm’s developers. They also explain how catastrophic health care utilization, or the number of chronic health conditions, could be used as alternative outcomes. I would love to discuss how we approach choosing labels in our own research and future work. I worry about this in my upcoming job. I would not have initially thought that using future health care expense as the label could result in inequitable treatment for Black Americans by a third! How do we make sure that our own work doesn’t have these sorts of consequences? Do we test all possible labels and explore their differential effects by relevant factors (i.e., race, class, sex)? How can this be done on tight time frames? How do we conceptualize variables that are not plagued by these biases, or recognize how they may be and adjust accordingly? I was also wondering whether the number of diagnosed health conditions is itself dependent on health care utilization and costs, since people who do not often see doctors will likely have fewer diagnoses yet may be worse off because of the severity of their conditions by the time they seek care. This seems like one of the most important aspects of our work as data scientists, and it is important to handle these complexities carefully. How can we best handle this problem formulation?
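
Not an answer, but a minimal sketch of the kind of audit the paper itself runs, which may help with the "test their differential effects" question. Everything here is hypothetical: the file and the column names (`risk_score`, `race`, `n_chronic_conditions`) are just placeholders for whatever data one actually has.

```python
import pandas as pd

# Hypothetical patient-level data; column names are illustrative only.
df = pd.read_csv("patients.csv")  # risk_score, race, n_chronic_conditions

# Bin patients by the algorithm's predicted risk, then compare realized health
# (here: number of chronic conditions) across racial groups within each bin.
df["risk_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
audit = (
    df.groupby(["risk_decile", "race"])["n_chronic_conditions"]
      .mean()
      .unstack("race")
)
print(audit)  # At equal predicted risk, do groups differ in actual illness?
```

If one group is systematically sicker at the same predicted risk, the label choice is doing the kind of damage the paper describes.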

thaophuongtran commented 2 years ago

“Norman, World’s first psychopath AI”, 2018. P. Yanardag, M. Cebrian, I. Rahwan is a very interesting project: it creates a digital double trained on captions annotating pictures of human death. The project shows the importance of the quality of training data, and of understanding its bias, before feeding it into deep learning/ML models. While data is everywhere, collecting and gathering data can still be quite a resource-intensive process. What are some measures to check the bias of your data? I assume you would need to check for bias along every single dimension/feature of your data. For example, the Current Population Survey data are biased more heavily toward urban areas and coastal geographies. Even though there are population-adjusted weights, estimates produced for states with few observations, like Wyoming, are not reliable. When you find that your collected/training data are biased, what are some steps you can take?
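
One simple check along a single dimension is to compare the sample's distribution with a known benchmark, before and after weighting. A rough sketch, where the files and column names (`survey.csv` with `state` and `weight`, plus a census share table) are all assumptions for illustration:

```python
import pandas as pd

df = pd.read_csv("survey.csv")                      # hypothetical: state, weight
census = pd.read_csv("census_shares.csv", index_col="state")["share"]

# Unweighted vs. weighted share of respondents per state
raw = df["state"].value_counts(normalize=True)
weighted = df.groupby("state")["weight"].sum() / df["weight"].sum()

check = pd.DataFrame({"raw": raw, "weighted": weighted, "census": census})
check["gap"] = check["weighted"] - check["census"]
print(check.sort_values("gap"))  # large gaps flag under/over-represented states
```

Repeating this for each dimension you care about (geography, age, race, etc.) at least makes the bias visible, even if it can't always be corrected.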

borlasekn commented 2 years ago

I read “Semantics derived automatically from language corpora contain human-like biases." The article is a valuable discussion on the trouble that comes with training data being semantically biased. I was wondering though, are there scenarios where it can be important to have this natural semantic bias? I am thinking that this could be valuable if one is attempting to model bias in culture (thus training a model that mimics this bias could be important in anticipating where bias comes from in society). However, I would say that in general building biased models is problematic.
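
For concreteness, the bias the paper measures boils down to a word-embedding association score (the WEAT effect size). A minimal sketch, assuming `emb` is a word-to-vector dict and the target/attribute word lists are supplied by the reader:

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B, emb):
    """WEAT-style effect size: how much more strongly target set X
    associates with attribute set A vs. B, relative to target set Y."""
    def s(w):
        # differential association of one target word with the two attribute sets
        return np.mean([cos(emb[w], emb[a]) for a in A]) - \
               np.mean([cos(emb[w], emb[b]) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# e.g. flowers vs. insects as targets, pleasant vs. unpleasant words as attributes:
# effect = weat_effect_size(flowers, insects, pleasant, unpleasant, emb)
```

Whether you then treat a large effect size as a defect to remove or as a cultural signal to study is exactly the question raised above.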

sabinahartnett commented 2 years ago

Similar to @borlasekn, while reading Wang & Kosinski, 2018 and Caliskan et al., 2017, I worried about the re-creation of biases in models (even knowingly) and the potential dangers of training algorithms to further propagate those issues. As @JadeBenson included in her response, Obermeyer et al., 2019 inverts this by acknowledging which predictive factors in models can be most harmful in the real world. This brought about a few more philosophical questions: can there be truly 'neutral' training data for these models? Can we neutralize models with hand-crafted features (like penalizing a model for making biased predictions)? Who will be the gatekeepers of that data / who determines 'neutrality'?
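
On the hand-crafted neutralization point: one known post-hoc technique (hard debiasing in the spirit of Bolukbasi et al. 2016, not something from this week's readings) estimates a bias direction from definitional word pairs and projects it out of supposedly neutral words. A rough sketch, with the embedding dict and word pairs assumed:

```python
import numpy as np

def bias_direction(pairs, emb):
    """Estimate a bias direction from definitional pairs, e.g. ('he', 'she')."""
    diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
    # First right singular vector of the difference matrix as the direction
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def neutralize(word_vec, direction):
    """Remove the component of a word vector lying along the bias direction."""
    direction = direction / np.linalg.norm(direction)
    return word_vec - (word_vec @ direction) * direction

# g = bias_direction([("he", "she"), ("man", "woman")], emb)
# emb["engineer"] = neutralize(emb["engineer"], g)
```

Whether such projections actually remove the bias or merely hide it is itself contested, which loops back to the gatekeeping question.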

ValAlvernUChic commented 2 years ago

Adding onto the thread by @JadeBenson, @sabinahartnett and @borlasekn, I was thinking about how much of the creative (text, video, audio, etc.) data we have is produced overwhelmingly by communities that have access to a given form of expression, and how models trained on that data might in turn act upon groups that are not part of that production (whether because of lack of access, institutional barriers, etc.). For example, in Singapore (not exclusively, of course), foreign domestic workers don't have access to the same discursive or rhetorical spaces that are afforded by default to everyone else, meaning that the lens through which we might want to study their experience will necessarily not be owned by them. Beyond managing for bias from within the corpus, I was wondering what methods/strategies could be used to "include" voices from these underrepresented communities in our training data?

pranathiiyer commented 2 years ago

I guess everyone who has read “Semantics derived automatically from language corpora contain human-like biases" has mostly covered my questions. My question is a little more philosophical. Given that certain communities are underrepresented in STEM and find the field inaccessible, could having those voices and communities as part of the development of these models make the models more inclusive and sensitive to bias? Because ultimately the problem of bias is more systemic than machine-learned.

egemenpamukcu commented 2 years ago

As I browsed through the project “Norman, World’s first psychopath AI”, I thought about the "alignment problem" and the potential existential risks posed to our planet and species. Obviously, this model was trained specifically to imitate psychopathic human behavior, which makes it different from most of the concerns I have read about an existential threat from a super-intelligence, which seem to revolve more around the non-alignment of human and machine goals. I would like to hear your thoughts on the other, much more human and familiar form of threat posed by intelligent systems that are specifically trained to inflict physical/psychological damage. It seems to me inevitable that as we make these models more human-like, intelligent, and accessible, we become more vulnerable to their existential harms (beyond already commonplace phenomena like algorithmic bias and discrimination).

javad-e commented 2 years ago

Reading the discussion of unsupervised cultural analysis methods by Milbauer et al. (2021) was informative. I was recently doing a literature review on using machine learning techniques to analyze text written in non-Latin alphabets. I encountered researchers using a wide range of methods, including Google-translating all the text, tokenizing on their own and then applying the available packages, or creating their own packages. Reading about the unsupervised cultural analysis methods in this paper, I was wondering what our approach should be for running a similar analysis in other languages?

isaduan commented 2 years ago

Adding to some friends' questions on “Semantics derived automatically from language corpora contain human-like biases": how could we leverage the discovery of those biases to help not only AI alignment, but also "human alignment"?

min-tae1 commented 2 years ago

The use of misalignment in "Aligning Multidimensional Worldviews and Discovering Ideological Differences" was interesting, and I was wondering whether it would be possible to understand misalignments within communities rather than between communities. While the paper analyzes subreddits such as r/the_donald as single entities, I believe there would be different ideologies and age groups within those communities as well. Singling out those groups and finding out how those differences play out in community politics would be interesting. Also, I do think certain subgroups in a community such as r/the_donald could share many commonalities with subgroups in another community that misaligns with the former, such as r/politics. Finding those similar subgroups would also be a way to find commonality within a polarized cyberspace.

BaotongZh commented 2 years ago

Just a question regarding “Semantics derived automatically from language corpora contain human-like biases,” which many people read: is the bias embedded in a text corpus beneficial or not when we are training a model? And how do we handle that bias in a way that serves our purpose?

y8script commented 2 years ago

Relating to “Semantics derived automatically from language corpora contain human-like biases", my question is also more of a conceptual one. Apart from ethically problematic biases, what is the nature of the ethically neutral biases? Are these biases bound to certain knowledge we have (e.g., insects are more dangerous than flowers)? As we would want to mitigate ethically problematic biases, is it possible that we also mitigate ethically neutral biases? If we do that, do we get a machine mind that is non-humanlike and totally unbiased, or a model that lacks a large proportion of human knowledge?

zihe-yan commented 2 years ago

Adding to @javad-e's question on "Aligning Multidimensional Worldviews and Discovering Ideological Differences", I'm also interested in the potential cross-language applications of such a word-embedding-based method.

One thing that occurred to me when reading this is that people in different countries use the same phrase to talk about things that are totally opposite (for example, in Mainland China "right" describes a completely different political view from the one it describes in Western liberal countries), which is a problem similar to that of polarized subreddits. These different interpretations can be problematic in the communication process itself as well as affect a country's foreign policymaking. But when this problem is approached from a comparative perspective, we have to take multilingual corpora as our data. Some languages are similar in structure, but some are not.

So my question is: would those differences in syntactic structure be a concern in applying this type of method to comparative analysis? If so, what can we do to generalize this word embedding method to corpora in two or more languages that are drastically different in syntax and also come from completely different cultural contexts?
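
For what it's worth, a common baseline for exactly this cross-lingual setting is to learn an orthogonal map between two independently trained embedding spaces from a small seed dictionary (Procrustes alignment, as in the MUSE line of work), which is in a similar spirit to the paper's community-to-community alignment. A sketch with assumed inputs (`src_emb`, `tgt_emb` as word-to-vector dicts, `seed_pairs` as translation pairs):

```python
import numpy as np

def procrustes_align(seed_pairs, src_emb, tgt_emb):
    """Learn an orthogonal matrix W so that W @ src ≈ tgt on the seed dictionary."""
    X = np.stack([src_emb[s] for s, _ in seed_pairs])   # source-language vectors
    Y = np.stack([tgt_emb[t] for _, t in seed_pairs])   # target-language vectors
    # Orthogonal Procrustes solution via SVD of Y^T X
    u, _, vt = np.linalg.svd(Y.T @ X)
    return u @ vt

# W = procrustes_align(seed_pairs, zh_emb, en_emb)
# mapped = W @ zh_emb["右"]   # compare to en_emb vectors by cosine similarity
```

Since the alignment operates only on word vectors, syntax differences enter only indirectly, through how each language's embeddings were trained in the first place.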

ShiyangLai commented 2 years ago

About paper "Semantics derived automatically from language corpora contain human-like biases", I am wandering how should a 100% percent neutral model looks like. Prejudices are undeniably enhanced over time since our decisions are always based on historical information which is already biased. If we can omit all the biases in the model, the thing leftover might be a very simple model without strong predictive power. Is that really what we want?

Hongkai040 commented 2 years ago

I have a question regarding the function of the IRB and the ethical considerations around “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images". The authors got approval from Stanford University's IRB, asked many experts for revisions, notified several leading international LGBTQ organizations in advance, and finally published their findings in a journal to enable transparency-based accountability. But many people still denounced the research as immoral. I am interested in the stance of Stanford's IRB and in where the boundary of ethical research lies.

yujing-syj commented 2 years ago

“Norman, World’s first psychopath AI" is a very interesting project to me. It shows us how to train a psychopathic AI on biased data. It also shows the importance of controlling and measuring bias when training the model. However, considering that we can mimic the behavior of a psychopath, I am wondering whether this could be applied to analyze the pathological mechanisms of psychopathy or other mental illnesses, and whether ML methods could be used to control for the factors that influence the deterioration of mental health?

Yaweili19 commented 2 years ago

The article “Semantics derived automatically from language corpora contain human-like biases." has truly been an inspiration to me and makes me ponder further before I take on any kind of natural language processing. I do wonder, though: how are the authors' methods themselves not affected by those biases, since they too rely on automatically derived semantics to an extent?

yhchou0904 commented 2 years ago

Combining the article "Dissecting racial bias in an algorithm used to manage the health of populations" with “Semantics derived automatically from language corpora contain human-like biases”: both deal with the problem that there seems to be bias or inequality in the decisions made by models or methods that have "learned" from real-life data. However, I am still confused about what this means for us. When we can reduce the bias and improve the performance at the same time, it is definitely worth adjusting the model. But while we all know that equity matters more than anything, the models are simply learning from what they've got, which usually contains historical or discriminatory bias. I guess what we could do is be aware of the data we use and the design we choose. Or is there a commonly accepted standard for us to build on?

Emily-fyeh commented 2 years ago

From the reading "Semantics derived automatically from language corpora contain human-like biases", as many previous comments have mentioned, I keep thinking about how we should view these human-caused biases in models. On some occasions, we do need to preserve these biases to represent reality. A more conceptual question would be: how do we define a 'universal' standard of discrimination and bias when values are ever-changing?

sudhamshow commented 2 years ago

The paper 'Aligning Multidimensional Worldviews and Discovering Ideological Differences' by J. Milbauer et al. was an incredible read, with novel approaches to problem solving with word embeddings, and it also provided references to several other seminal papers through an exhaustive methodological survey. A few questions about the paper: 1) How do you adjust for differences in the frequency of word usage across subreddits (different genres) when trying to find a transformation function that aligns the language of two communities? Wouldn't it be unreasonable to fit on words that are used only a couple of times in the entire history of a subreddit? Also, how are mutually exclusive (but highly distinguishing) words handled? 2) It makes sense to align other subreddits to a more comprehensive language like the one in r/askreddit. Does the context (the alignment of word embeddings) in the reference language have to be uniform? Would it impact the alignment of words from other "languages" (here, different subreddits) if the reference vocabulary's embeddings are not well dispersed? (Could it lead to under-alignment of the other languages?) 3) Since the approach still relies (after preprocessing) on particular words to align the languages around, is this a truly unsupervised learning approach? 4) I was also wondering how good a fit linear transformations can achieve in the context of aligning word embeddings, given that they only perform a linear shift. Were nonlinear transformations (other kernels, like Gaussian) considered?
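
On question 1, one pragmatic safeguard (not necessarily the paper's exact procedure) is to fit the map only on anchor words that are reasonably frequent in both communities, optionally weighting anchors by frequency so rare words barely influence the fit. A sketch with assumed inputs (`emb_a`, `emb_b` as word-to-vector dicts; `freq_a`, `freq_b` as word counts):

```python
import numpy as np

def shared_anchor_words(freq_a, freq_b, min_count=50):
    """Keep only words common to both communities and frequent in each."""
    return [w for w in freq_a
            if freq_a[w] >= min_count and freq_b.get(w, 0) >= min_count]

def weighted_linear_map(words, emb_a, emb_b, freq_a, freq_b):
    """Least-squares map from community A's space to B's, with each anchor
    weighted by its (log) frequency in the rarer of the two communities."""
    X = np.stack([emb_a[w] for w in words])
    Y = np.stack([emb_b[w] for w in words])
    w = np.sqrt(np.log1p([min(freq_a[t], freq_b[t]) for t in words]))
    Xw, Yw = X * w[:, None], Y * w[:, None]
    W, *_ = np.linalg.lstsq(Xw, Yw, rcond=None)
    return W  # maps row vectors: emb_a[word] @ W ≈ emb_b[word]
```

Words used only a handful of times are either dropped by the threshold or contribute almost nothing to the fit, which is one way to keep the alignment from chasing noise.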

mdvadillo commented 2 years ago

"Norman, World’s first psychopath AI " is an interesting project and also a little disturbing. Since the neural networks used to train Norman are akin to a human brain, could it be possible for the medical community to further understand how much negative (in this case death-related) stimuli is necessary to create a psychopath? I am thinking that testing certain hypothesis on AI can help us understand human behavior in contexts where otherwise testing would be unethical

linhui1020 commented 2 years ago

I am really inspired by “Norman, World’s first psychopath AI", which trains a model that works like a human brain and shows its decision path. I wonder whether such a method could be used to understand why certain mental conditions, such as autism and depression, arise. In addition, could such a model be adapted for different age groups? For example, children and the elderly might have different mechanisms.

chentian418 commented 2 years ago

I am impressed by the accuracy of predicting sexual orientation from facial images in the paper "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images". Beyond a classifier that identifies sexual orientation, I was wondering whether deep neural networks could help identify the causal chains behind the phenomenon: does sexual orientation cause such facial patterns because of moods common within the groups, or are people who inherently show specific facial expressions more frequently more likely to have a certain sexual orientation in a later stage of life?