Thinking-with-Deep-Learning-Spring-2024 / Readings-Responses

You can post your reading responses in this repository.

Week 3. Apr. 5: Sampling, Bias, and Causal Inference with Deep Learning - Possibilities #6

JunsolKim opened this issue 3 months ago

JunsolKim commented 3 months ago

Pose a question about one of the following articles:

“Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” 2018. Y. Wang & M. Kosinski. Journal of Personality and Social Psychology 114(2): 246.

“Dissecting racial bias in an algorithm used to manage the health of populations.” 2019. Z. Obermeyer, B. Powers, C. Vogeli, S. Mullainathan. Science 366(6464): 447-453.

“Semantics derived automatically from language corpora contain human-like biases.” 2017. A. Caliskan, J. J. Bryson, A. Narayanan. Science 356(6334):183-186.

“Prediction of Future Terrorist Activities Using Deep Neural Networks.” 2020. N. Zada, F. Aziz, Y. Saeed, A. Zeb, S. Atif, A. Shah, M. A. Al-Khasawneh, M. Mahmoud. Complexity.

“The moral machine experiment.” 2018. E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, I. Rahwan. Nature 563(7729): 59-64.

“Deep Neural Networks for Estimation and Inference.” 2021. M. H. Farrell, T. Liang, S. Misra. Econometrica 89(1). An approach that creates bounds and inference within standard feedforward neural networks for a range of social-scientifically relevant outcomes. This demonstrates the utility of neural networks for classic, but challenging, problems of inference.

“BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization.” 2020.

“Show and Tell: A Neural Image Caption Generator.” 2015.

maddiehealy commented 3 months ago

Lots(!) of thoughts on the Detecting Sexual Orientation from Facial Images article, but in the interest of space I am going to focus on my overarching question, which has to do with the future state of access to (and/or democratization of) deep learning tools.

Currently, if someone has access to a laptop, WiFi, and a dataset, they can run deep learning models. This was something I was pleasantly surprised by when I first began programming with these models. However, after reading this article, I wonder whether, as deep learning techniques expand into more controversial applications such as the digital "gaydar" in this article, we will see restrictions on who has access to these tools in the future.

Besides my larger hesitancy toward physiognomy in general, when reading this article I was also worried about what would happen if this technology landed in the wrong hands. For example, a homophobic employer could use this model to screen, and weed out, potential employees based on their sexuality. And given the claimed accuracy of these models, I think this is a pretty tame example of how this technology could be used with malicious intent. The authors reflect this sentiment too, acknowledging that well-being and safety can often depend on a person's ability to control when and to whom they reveal their sexuality. I can therefore see why rules around who can run these tests, and on whose images, would be useful (it makes sense that an employer should not be able to access these tools in the hiring example).

HOWEVER, this also threatens the current accessibility of deep learning technologies. What would happen if only one entity could access the most advanced DL technology? What would happen if you had to get your DL applications approved beforehand? How do we balance these grave consequences against the potential benefits of allowing anyone to practice with these models?

I don't have answers to any of these questions, but as with all discussions about the future of technology, I worry that solutions will need to be determined soon, given the rapid progress of DL applications.

guanhongliu2000 commented 3 months ago

In the article “Dissecting racial bias in an algorithm used to manage the health of populations”, I am curious about the potential long-term consequences for public health if biases in predictive algorithms are not addressed. How might these biases exacerbate existing health disparities, and what role do they play in the trust communities place in healthcare systems?

kceeyang commented 3 months ago

The survey conducted in “The Moral Machine Experiment” reminds me of the trolley problem in philosophy: participants were asked to choose an option in a set of moral dilemmas set in the context of autonomous vehicles. The researchers found that people have a stronger preference for “sparing humans over animals, sparing more lives, and sparing young lives,” suggesting that the collected data can be used to guide machine ethics and automated decisions. In particular, they highlighted the relationship between moral preferences and the cultural, social, and economic backgrounds of participants. I’m curious: does this research perhaps focus more on cultural/social acceptability than on the rightness of behavior in a dilemmatic setting? If so, are the authors suggesting that socially acceptable behavior is the moral thing to do? Also, is there a way to evaluate and verify whether a decision generated by the machine is ethical or unethical?

Pei0504 commented 3 months ago

The article titled "Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images" explores the capabilities of deep learning in identifying sexual orientation based on facial images. The study also investigated which facial features contributed to the classifications, finding both fixed (e.g., nose shape) and transient features (e.g., grooming style) significant. These findings support the prenatal hormone theory of sexual orientation and raise important privacy concerns due to the potential for misuse of such technology.

Based on the content and findings of this article, how do the fixed and transient facial features identified by the deep learning model correlate with the prenatal hormone theory of sexual orientation, and what implications might these correlations have for our understanding of the biological underpinnings of sexual orientation? Considering the privacy concerns raised by the ability of deep learning models to accurately predict sexual orientation from facial images, what ethical considerations should guide the development and application of such technologies, and how might policymakers and technologists work together to ensure these tools are used responsibly?

hantaoxiao commented 3 months ago

In the first paper, given the study's conclusion that deep neural networks can accurately predict sexual orientation from facial images, what are the ethical implications of such technology in societies with varying levels of acceptance towards LGBTQ+ individuals? How do we navigate the balance between technological advancement and the right to privacy, especially in regions where being a part of the LGBTQ+ community might lead to discrimination, social stigma, or worse?

chenhuifei01 commented 3 months ago

In "Prediction of Future Terrorist Activities Using Deep Neural Networks", the research demonstrates the use of DNNs to predict various aspects of future terrorist activities with high accuracy. While these models can be invaluable tools for preemptive security measures, they also raise significant ethical concerns. Considering the predictive power of these models in determining the likelihood of an individual committing a terrorist act based on patterns and data, discuss the potential ethical dilemmas this presents. How should we balance the benefits of predictive policing in counterterrorism with the risks of infringing on individual privacy and potentially targeting innocent individuals based on algorithmic predictions?

Yu-TingWeng commented 3 months ago

In "Dissecting racial bias in an algorithm used to manage the health of populations", it highlights the importance of understanding and addressing label bias in algorithm development. The authors suggest that changing the label used in prediction tasks can reduce bias, as demonstrated by their experiments. Are there any existing regulatory frameworks and ethical guidelines for researchers to follow to address the issue of label bias in predictive algorithms in current approaches?

Xtzj2333 commented 3 months ago

“Semantics derived automatically from language corpora contain human-like biases.” 2017. A. Caliskan, J. J. Bryson, A. Narayanan. Science 356(6334):183-186.

The finding is very interesting; however, I wonder whether a model demonstrating associations of stereotyped biases necessarily implies that the model is undesirable. Imagine a model trained on readings in critical theory: it would probably demonstrate many biases on the IAT. However, this would not mean the model is discriminatory; the critical-theory readings it was trained on discuss discrimination and stereotypes precisely in order to dissect and critique them. In this example, IAT associations are perhaps not enough to draw conclusions about the desirability of the model; we also need to take the broader context into account (i.e., the model demonstrates IAT biases exactly because it has learned from critical-theory readings). Is it possible to develop deep learning models with this kind of contextual awareness?
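For concreteness, the association score the paper measures (WEAT) is just a standardized difference of mean cosine similarities between word vectors. Here is a minimal sketch with random toy vectors standing in for real pre-trained embeddings such as GloVe; with real vectors, the word lists would be looked up in the embedding matrix.

```python
# Minimal WEAT sketch (Caliskan et al. 2017): target sets X, Y and
# attribute sets A, B; random vectors stand in for real embeddings.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to set A minus mean similarity to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Standardized effect size d comparing target sets X and Y."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy 50-d vectors; with real embeddings X/Y/A/B might be
# flower/insect and pleasant/unpleasant word lists.
rng = np.random.default_rng(0)
X, Y, A, B = (list(rng.normal(size=(8, 50))) for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```

A large effect size here measures association, not endorsement: the same score would flag a corpus that mentions stereotypes precisely in order to critique them, which is exactly the contextual gap raised above.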

risakogit commented 3 months ago

“Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.”

The photos used in the experiment may include selfies as well as photos taken by others. Depending on the photographer, the impression of the face can therefore vary significantly (for example, selfies are often shot from below, which makes the chin appear larger). Was this possibility considered in the experiment? If the model does not account for or exclude it, could the experimental results be misleading?

uc-diamon commented 3 months ago

“Deep neural networks are more accurate than humans at detecting sexual orientation from facial images”

What policy implications does this have in the face of the ethical concerns it raises?

Marugannwg commented 3 months ago

Looks like all the papers this week are trying to push us social scientists onto the front line of hot ethical debates... Imagine a world where your preferences and habits in speech, your sexual orientation, your health information, and more can be indirectly but accurately evaluated with algorithms (even if neither you nor the person who developed the method intended this).

I wonder what you would do upon realizing we are wielding such power. People can argue indefinitely about what is correct and how to regulate it, but what I'm more interested in is what you are doing right now to keep yourself consciously ethical and cautious. Any simple guidelines or personal beliefs?

HongzhangXie commented 3 months ago

I am intrigued by the discussion of convergence rates in "Deep Neural Networks for Estimation and Inference." In the routine process of building deep learning models, there are numerous ways to measure a model's performance, such as splitting the data into training and testing sets to assess predictive accuracy. This leads me to wonder about the importance of convergence rates in deep learning models: with only a limited amount of data available, how stably can a deep learning model achieve relatively accurate predictions? I speculate that there is a trade-off between convergence rates and accuracy across different models and methodologies. Some models may exhibit high accuracy with large datasets but perform poorly with smaller samples; others might converge faster but be less accurate on large datasets. I am curious which factors within a model's structure affect its convergence rate, and what trade-offs in computational cost and accuracy we must make to achieve better rates.
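An empirical way to probe the stability question is to refit the same small network on growing subsamples and track both the mean and the spread of test accuracy. A minimal sketch on synthetic data; all settings are illustrative and unrelated to the paper's theoretical rates.

```python
# Minimal sketch: test accuracy and its across-seed variability as a
# function of training-set size, for a fixed small network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for n in [100, 500, 2_000, 10_000]:
    scores = []
    for seed in range(5):  # refit to see stability, not just accuracy
        idx = np.random.default_rng(seed).choice(len(X_tr), n, replace=False)
        clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                            random_state=seed).fit(X_tr[idx], y_tr[idx])
        scores.append(clf.score(X_te, y_te))
    print(f"n={n:>6}: mean acc={np.mean(scores):.3f}, sd={np.std(scores):.3f}")
```

The shrinking standard deviation across seeds as n grows is the empirical shadow of the convergence rates the paper characterizes theoretically.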

HamsterradYC commented 3 months ago

[Dissecting racial bias in an algorithm used to manage the health of populations]

The authors have scrutinized a prevalent healthcare algorithm and uncovered its inherent racial bias against Black patients, illustrating how reliance on historical data can amplify preexisting disparities. What innovative mechanisms do we need to establish for reviewing and evaluating such systems to prevent algorithmic biases from exacerbating historical inequalities?

erikaz1 commented 3 months ago

In Prediction of Future Terrorist Activities Using Deep Neural Networks, Uddin et al. (2020) train NN models on a unique dataset of terrorist-attack records from UMD, curated by verifying various news media reports of attack incidents. Given the nature of this dataset, the "prediction" task of these DNNs is a retrospective, intellectual exercise: the models cannot actually "forecast" attacks, since they rely on a posteriori knowledge. I think this is an interesting reminder of how data choices can limit the generalizability of a prediction.

anzhichen1999 commented 3 months ago

How does the reliance on health care costs as a proxy for health needs contribute to implicit racial bias in algorithms used for managing population health, and what are the implications of such bias on the equitable distribution of health resources among Black and White patients?

MarkValadez commented 3 months ago

Reading: Semantics derived automatically from language corpora contain human-like biases

As far as I understand, there is analytical reasoning for why a unified language system, or unified representation system, cannot be formalized, and hence we are left with languages full of ambiguity and context dependence. We have seen recent examples, such as ChatGPT vs. Stockfish, of how these systems fail when taken out of their training context (e.g., ChatGPT materializing a knight on top of the opponent's queen). This mode of analyzing language (embeddings) seems to provide a method for generalizing the interpretation of communication beyond specific contexts. My question is: how can things such as "inside jokes" or "internal references" be addressed in this framework? (Although even people suffer from a lack of context, which would appear to be a natural barrier to interpreting communication for both humans and machines.)

XueweiLi1027 commented 3 months ago

The Moral Machine experiment found that, in the context of morality for autonomous vehicles, the strongest moral preferences were for sparing humans over animals, sparing more lives, and sparing young lives, which could serve as building blocks for machine ethics. Significant cultural variation was also observed, with countries clustering into Western, Eastern, and Southern groups based on their moral preferences, which makes a lot of sense. Further analyses confirmed that moral preferences correlate with individualism, rule of law, economic inequality, and gender equality at the country level.

Given that ethical problems are a major concern across the many application contexts of deep learning (including the autonomous-vehicle context in this study), I wonder how we can come up with regulations on the use of deep learning that hold across different cultural settings.

kangyic commented 3 months ago

“Deep Neural Networks for Estimation and Inference”: the idea of generating basis functions directly from the data is still confusing to me. The approach is nonparametric and uses no heuristics; how could the result ever be a basis function?
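One way to unpack the "basis function" language: a single-hidden-layer network can be written as a linear combination of learned basis functions,

```latex
f(x) = \sum_{k=1}^{K} \beta_k \, \sigma\!\left(w_k^{\top} x + b_k\right)
```

where each hidden unit \sigma(w_k^{\top} x + b_k) plays the role of a basis function \varphi_k(x). Unlike a polynomial or spline basis, the weights w_k and offsets b_k are estimated from the data, so the basis itself adapts to the sample. "Nonparametric" here means the basis is not fixed in advance (and the width K can grow with the sample size), not that the model has no parameters.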

beilrz commented 1 month ago

“Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” 2018. Y. Wang & M. Kosinski. Journal of Personality and Social Psychology 114(2): 246.

I recall reading this paper last year and criticizing its potential ethical implications. In the age of deep learning, what are some potential ways of preventing these models from revealing private, detrimental information about us? For example, I recall protesters at recent events often wearing masks or facial gear to prevent facial recognition.
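The mask analogy has a digital counterpart: adversarial perturbations that are barely visible to people but flip a classifier's output. Below is a minimal sketch of the fast gradient sign method (FGSM), with a random stand-in model and image rather than any real face-recognition system.

```python
# Minimal FGSM sketch: nudge an image in the direction that increases
# the classifier's loss so its prediction no longer reveals the label.
# The linear "model" and random "image" are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in classifier
model.eval()

x = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in face image in [0, 1]
y = torch.tensor([0])                             # attribute the model would expose

loss = F.cross_entropy(model(x), y)
loss.backward()                                   # gradient of loss w.r.t. pixels

eps = 0.03                                        # small, barely visible budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```

Whether the flip succeeds depends on the model and the budget; the broader point is that such defenses put privacy into an arms race with recognition systems rather than settling the question.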

00ikaros commented 1 month ago

What are the key factors contributing to the limited adoption of deep learning in social sciences such as economics, despite its acknowledged predictive power? How does the formal proof of valid inference using deep learning for first-step estimation enhance its potential utility? Additionally, what practical implications and opportunities do the authors identify for embedding deep learning into standard econometric models and other economic settings?

Carolineyx commented 1 month ago

The Moral Machine experiment provides comprehensive insights into global moral preferences regarding autonomous vehicle decisions. Given the significant cross-cultural differences identified, what are the main challenges in establishing universal ethical guidelines for autonomous vehicles that are acceptable across diverse cultural contexts? Additionally, how can policymakers and car manufacturers effectively balance these cultural differences to develop ethical frameworks that ensure both global acceptance and ethical integrity in autonomous vehicle decision-making?

icarlous commented 1 month ago

How do convergence rates impact accuracy with limited data? What trade-offs exist between convergence rates, computational cost, and model accuracy?

Brian-W00 commented 1 month ago

The article “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images” explores the application of deep learning techniques to identify private attributes of individuals, such as sexual orientation. From a legal and ethical perspective, how should we balance the use of this technology with the protection of personal privacy?