GabeNicholson opened 2 years ago
Hi Prof Todorov, thanks for sharing your work. The paper mentioned that the participants were majority white and from North America. Since face perception biases are likely shaped by participants' own racial backgrounds and life experiences, I was wondering how you think the attribute inferences would differ between racial groups and between people with different levels of exposure to racially diverse populations. I would also be interested in hearing your thoughts on tying the group-level and individual-level analyses together, to better understand which face features contribute most to each bias across individuals within the same group, as well as the mechanisms of bias formation.
Hi Professor Todorov,
I am very impressed by the accuracy of the deep learning models. However, I also noticed your point that studying aggregated judgments masks the role of stable individual differences in social judgments. As a beginner in computational social science, I wonder how deep learning addresses the issue of idiosyncrasy versus consensus. Do you think deep learning has the potential to handle idiosyncrasy in a more powerful way, or do you think it emphasizes consensus and ignores individual differences? Thank you!
Dear Prof. Todorov,
Many thanks for presenting your work in our workshop. I found your research on producing synthetic face images that vary along perceived attributes particularly intriguing.
I am aware of another example of synthetic image production: Forensic Architecture's (FA) attempts to create synthetic images of ammunition (from bullets to vehicles). FA used these synthetic images to train classifiers (due to the lack of a large number of real ammunition images). The classifiers were subsequently used to identify ammunition in video footage and photographs that were part of the group's investigations.
Of course, the case of ammunition is completely different from that of human faces, as there is a plethora of real face images that we can actually connect to human attributes (e.g., we might have data on the real age of the image's owner). Yet, inspired by both your research and the above-mentioned use of synthetic images as input to other models, I wonder whether your synthetic images could actually increase the accuracy of existing classifiers. I understand that the synthetic images described in your paper only capture the perception of such attributes, not their existence. For example, an image of someone characterized as "smart" does not necessarily mean that the person depicted is smart (well, the person depicted is in any case not a... person). However, I wonder whether, with appropriate manipulation of these synthetic images (e.g., correlating them with real attributes), they could successfully be used in classification applications. That would of course raise further ethical concerns beyond the ones already discussed in your article (e.g., exposure of sensitive attributes).
Many thanks once again and looking forward to your presentation on Thursday.
Kind regards, Loizos
Hello Professor Todorov, thank you for sharing your work and taking the time to present in the workshop today. Your topic sounds very interesting, and it is interesting to see how the annotations in the pictures we saw today are created systematically. As you mentioned, face attribute inference can drive a proliferation of techniques for the scientific modeling of faces, so future studies are important and necessary. While reading your paper "Deep models of superficial face judgments," I wondered how the participants were recruited. How do you avoid potential ethical issues, such as privacy and bias with respect to race and gender, when using those participants' images? Another question: I understand that the model was trained mostly on data from North America and, in general, on male faces, but in order to make the model applicable to other demographics, do you have any suggestions for people studying the correlation between personality traits and face attributes in these vector spaces?
Professor Todorov, thank you for sharing this intriguing research. How people perceive others upon first impression has many important implications for how we might mitigate the biases present. I wonder if you have any insight into whom people perceive as "trustworthy" as we age. For example, do children perceive women to be more trustworthy? What about people above 80? Do men and women differ in the types of people they view as trustworthy? How can we account for variability across life stages in the perception of facial expressions, while controlling for sociodemographic factors such as race and gender? You mention that these attributes are likely related, so I wonder how we can further develop the analysis to see which factors people view as trustworthy, in relation to their own demographics as well as the demographics of the image they are viewing.
Hello Professor Todorov, thanks a lot for giving us this presentation. I am curious: humans sometimes cannot be fully explained by science, so does this study show that we can use the model to accurately predict what a person is like from their face? Thanks
Hi Professor Todorov,
You mention in your paper that the ability to have such a large set of realistic yet controlled stimuli can be very useful for behavioral experiments in the future. However, do you see any applications to industry? For example, in advertising, companies may want a person to endorse their product who seems trustworthy and smart. Granted, they already do this, but this technology may further strengthen this effect. What are the ways that you perceive (or have seen) this being used in the non-academic setting, either for benevolent or malicious purposes?
Thank you for your presentation, Sam
Hi Prof Todorov, thank you so much for presenting this interesting paper! In the general discussion section, you point out that although developing a more generalised model is a primary goal, it would make interpretation more challenging. I am wondering if you could suggest some remedies or future directions of research that could be conducive to the interpretation of the model. Thanks!
Hi Professor Todorov, thank you for sharing your research with us! It is so interesting that deep learning models can model inferences of more than 30 attributes over a comprehensive latent face space. My question is: what do you think is the trade-off between modeling power and human interpretability?
Hi Prof., thanks for sharing your work. It is always fascinating to see different kinds of contributions in ML.
From your "Materials and Methods" section, I see that the majority of participants whose ratings went into the One Million Impressions dataset identify as "White" (~70%). It is also natural to assume that there are different cultural ideas of, for example, trustworthiness, beauty, and familiarity. Do you think this participant pool would yield a "ground truth" that is robust across these different preferences?
Hi Prof. Todorov, thank you so much for sharing your work. Do you expect the deep network's representation of the latent variables to bear any resemblance to biological neural circuits? Would you be interested in probing the internal representations and computations of the DNN, and what methods might you use to investigate them?
Hi Prof Todorov, thank you so much for presenting this paper! As a beginner in computational social science, I do not have so many questions. I just can’t wait to learn some important knowledge about deep learning.
Dear Professor Todorov:
Thank you very much for sharing this interesting study of facial impressions and machine learning! I have some questions about the 34 perceived social and physical attributes.
First, I found that these attributes can be divided into several very different groups. Why were these 34 attributes selected? In other words, what would change if some attributes were added or removed?
Second, some of the attributes are easily distinguished, but others may not have strict criteria. For example, different people may respond differently to the attribute "looks like you." How can we measure and estimate such attributes?
Third, because the model is high-dimensional, ridge regression is used to reduce the risk of overfitting. I think the problems of high-dimensional models and multicollinearity will become more and more common with the development of big-data analysis tools. When faced with these problems, what should we pay attention to, and how can we solve them? Many thanks!
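To make the regularization point above concrete, here is a minimal, self-contained sketch of closed-form ridge regression in the more-features-than-samples regime, using purely synthetic data (the dimensions, penalty, and data are all hypothetical illustrations, not the paper's actual pipeline):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n, d = 50, 200                      # more features than samples: plain OLS is ill-posed
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

w_ridge = ridge_fit(X, y, lam=10.0)
print(np.linalg.norm(w_ridge))      # penalized fit keeps coefficients modest
```

Because the penalty adds lam*I to X'X, the system stays invertible even when features are collinear, and the coefficient norm shrinks monotonically as lam grows, which is what stabilizes the fit in high dimensions.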
Hi Professor, thanks for sharing such an interesting paper with us. I'm wondering whether using people's facial data in modeling could raise ethical problems. Besides that, I'm very curious about the empirical applications of this mechanism. Thank you.
Hi Professor Todorov, thank you for bringing us such great work. I was just wondering how we could observe latent attributes (not defined by humans) that can only be learned through this model? Thanks!
Hi Prof. Todorov, thanks for sharing your work. Do you think humans can judge real faces and AI-generated faces accurately, and how can facial features tell us not just about the face itself but about the characteristics associated with a person? For example, several studies have employed deep models to investigate the association between sexual orientation and facial features. The significance of these studies actually raises ethical concerns: what is the ethical boundary of using human faces, or other unique attributes, as part of a study?
Hi Professor Todorov, thank you for sharing your work. I am curious whether, if we group attribute inferences into objective, socially constructed, and fully subjective categories, correlations would be stronger within the same group or across different groups. In addition, the artificial face images were transformed along continuous attribute scales, and I wonder whether human raters could detect the subtle differences without directly comparing two adjacent images on the scale.
Hi Prof Todorov, thank you so much for presenting this paper! I wonder if utilizing face data from real individuals for modeling may lead to ethical issues. If so, can you share how you dealt with this issue? Thanks a lot!
Hi Prof. Todorov, thanks for sharing such an interesting paper. Regarding studies that apply deep learning to face evaluation, I am curious what the practical uses of face-image analysis are. Would real-life applications, especially the generation and prediction of faces, raise ethical issues?
Hello, Prof. Todorov. Thank you very much for writing this paper! I'm curious how you deal with the ethical concerns that come with using people's facial data in modeling. In addition, how do you think attribute inferences differ depending on ethnic group and level of exposure to racially diverse areas? Thanks so much!
Hi Professor Todorov,
Thanks for sharing your research with us. In "Deep models of superficial face judgments," you mention that impressions of what other people are like, based solely on how their faces look, have real-life consequences ranging from hiring decisions to sentencing decisions. Since I'm interested in deep learning applied in the real world but not so familiar with superficial face judgments, could you please provide more real-world examples from your area?
Hi Professor Todorov, thanks for coming to our workshop. The research is super cool. My question is perhaps more general, regarding the policy implications of your research: if a company used your model to claim that faces with certain characteristics appear more trustworthy, would the next step be to manipulate how we look?
Hello Professor Todorov, thanks for sharing such exciting research. I wonder what the industry use cases are for your model's ability to manipulate inferences about arbitrary face photographs or to generate synthetic photorealistic face stimuli.
Hi Professor Todorov, thank you for sharing your work! I'm wondering how we can prevent the model from being abused by authorities?
Hello Professor Todorov, thank you for sharing this amazing work with us! I am surprised that deep learning models can perform quite accurately in capturing and predicting the nuances in human facial expressions. I noticed that the sample is predominantly from a US cultural background. While facial expressions can differ across cultures, I believe that training models on data within these cultures can still lead to accurate predictions. However, I am interested in whether there are universal traits/patterns in human expressions; if so, are there any established theories/studies on this topic, and how well can algorithms capture such commonality? Thank you!
Hi Prof. Todorov,
This research has some obvious ethical issues such as companies/governments screening people, or politicians/public figures manipulating their photos to appear more trustworthy etc. However, I feel that it shouldn't discourage such research because more information is almost always better. If we are aware that such methods to predict impressions from faces exist, we can be more mindful of our biases.
Hi Prof. Todorov, I appreciate your research team's effort to construct a racially representative sample of faces, but I also noticed that the judges are predominantly (white) North American, which can likely explain why they would rate faces of non-white races as "unfamiliar," as in Figure 4. I'm concerned that the model inherits racial biases common in the white North American population. What are your thoughts on this issue of inherited bias? Would it alleviate the extent of bias if judges from different geographical and racial backgrounds were incorporated into the research?
Dear Professor Todorov, thank you for coming with your fascinating research! Can you please elaborate on the social science implications, for fighting inequality and stereotypes, of the fact that "statistical modelling shows that stable idiosyncratic preferences contribute to more than 50% of the variance"?
Dear Prof Todorov, thank you for sharing this work with us! You mentioned that idiosyncratic preferences contribute more than 50% of the variance for complex impressions. In these cases, how much of the variance can be explained by the intrinsic attributes of the face stimulus in general (by implementing an analysis similar to the one you did for idiosyncratic preferences, but at the image level)? Also, I'm very curious about the details of your potential models that aim at recognizing the idiosyncratic preferences of each individual. Are they similar to what psychology researchers have done to measure individual trait differences? Is it possible to use these models to achieve better measurements of other preferences?
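As a rough illustration of the kind of variance partitioning this question asks about, here is a toy sketch that decomposes simulated ratings into a stimulus (image-level) component versus everything else. All effect sizes and counts here are arbitrary choices for illustration, not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_imgs, n_raters = 100, 30

# Simulated ratings = image effect + rater idiosyncrasy + noise
# (the standard deviations below are hypothetical).
img_effect = rng.normal(0.0, 1.0, size=(n_imgs, 1))
rater_bias = rng.normal(0.0, 1.0, size=(1, n_raters))
ratings = img_effect + rater_bias + rng.normal(0.0, 0.5, size=(n_imgs, n_raters))

# Share of total variance carried by per-image mean ratings: a crude proxy
# for how much of the signal lives at the stimulus level.
stimulus_share = ratings.mean(axis=1).var() / ratings.var()
print(f"stimulus-level share of variance: {stimulus_share:.2f}")
```

Averaging over raters washes out the idiosyncratic terms, so the per-image means isolate (approximately) the stimulus component; comparing that variance to the total gives the image-level share, with the remainder attributable to rater differences and noise.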
Hi Professor, thank you very much for presenting us with your work! As many would have, my question is: how will you address the ethical concerns involved with the research?
Hello Prof Todorov,
With the training of this GAN, and some measure of its ability to create and rate faces, could this be integrated to assess the accuracy or biases of new diffusion-based image generators like Stable Diffusion and DALL-E 2? If so, what kind of information might we be able to glean from such an exercise?
I know they're a hot topic right now, but quantitative or even qualitative examinations of what may become a core tool in image manipulation seem very lacking at the moment, and this looks like a possible method to begin that exploration.
Hi Prof Todorov, thank you for sharing your interesting study! My question is about the application of your trained GAN model. Could you elaborate more on the potential usages of the model in the real world? Many thanks.
Hi Professor Todorov, thank you for sharing the study. The crowdworkers in your paper (the MTurkers) are mostly White, and few of them are ethnic minorities. I wonder whether this undersampling of ethnic minorities will induce bias in training. Thanks!
Hello Professor Todorov, thanks for sharing your work! As you mentioned in the paper, one possible reason for the larger gap between reliability and model performance for the "Black" attribute is sampling bias. Since the majority of the participant population is white and North American, I wonder whether reducing the sampling bias before training the model would lead to a more convincing conclusion.
Hello Professor Todorov,
Thank you for sharing your work. I am curious about depth in the scientific modeling of faces. From my understanding, generative adversarial networks (GANs) model faces from large corpora of photographs. Since faces are not two-dimensional renderings but three-dimensional topographies, how well do these deep generative image models extend from the digital to the physical world? Put differently, is it possible to incorporate depth and move beyond (mostly) frontal, centered representations of faces?
Hello Professor Todorov,
This is a really interesting topic; thank you so much for sharing your work with us. I was fascinated by your Figure 3, as it's the first time I've seen a plot like this, but I'm also a bit confused by the message it's trying to convey. Can you please elaborate? Also, I previously worked on a facial-expression classification project, where I observed that color images did not improve performance. For your analysis, why did you choose color images, and what are their advantages?
Professor Todorov, thank you for sharing your paper with us. I am wondering whether there are any scientific computing strategies you could take advantage of when doing image analysis with neural networks. Would that make image analysis more efficient? Thanks
Dear Dr. Todorov,
Thank you for sharing your research with us! Near the end of your paper, you briefly discussed the ethical implications of using machine learning to edit pictures of human faces. In that discussion, you talked about how this topic has importance for defamation law. Specifically, you compared the methods that you were using to DeepFakes. I am familiar with DeepFakes and can obviously see how they can be used to defame or slander an individual. However, I am having a difficult time seeing how your methods of adjusting photos could have similar negative impacts. Can you give an example of how your methods might be misused?
Hi Professor Todorov, thank you for sharing your work with us! You write: "Our model can be used to predict and manipulate inferences with respect to arbitrary face photographs or to generate synthetic photorealistic face stimuli that evoke impressions tuned along the modeled attributes." I see how this is exactly what deepfakes exploit today, and I personally see several avenues for misuse of this application. How does the academic research community work toward balancing the trade-off between the benefits and harms of innovation, given that, unlike industry, a lot of academic work attempts to be open source and can easily be misused?
Hi Professor Todorov,
I appreciate that, early in "Deep models of superficial face judgments," you and your coauthors add the very clear and important caveat that "...these attribute inferences, especially those of the more subjective or socially constructed attributes, have no necessary correspondence to the actual identities, attitudes, or competencies of people whom the images resemble or depict (e.g., a trustworthy person may be wrongly assumed to be untrustworthy on the basis of appearance)." My question is: do you have any suggestions about how we might find more attributes to study that don't have this problem? I realize that doing so would be a project unto itself, and that your work purposefully studies attributes related to the superficial judgments humans make about physical appearances (judgments that are themselves full of subjectivity, bias, and between-person inconsistency), but wouldn't it be useful to find more attributes with high construct validity (e.g., perhaps a larger set of attributes related to the objective features you mention, like the presence vs. absence of glasses)?
Hello Professor Todorov, thank you for sharing your paper on determining human attributes from facial images using deep neural networks.
I am not very familiar with machine learning and DNNs, but I just read a paper discussing the efficiency and accuracy of predicting sexual orientation from people's facial images. That paper caused much ethical controversy because its target is very personal and sensitive. I believe your study, which analyzes psychological attributes from realistic face photos, raises fewer concerns, but can you tell us what ethical issues you encountered while conducting this study?
Also, I am curious about the particularly large sample you chose to use, specifically over 1 million judgments to model inferences of more than 30 attributes. Can you elaborate on the necessity of doing so? Thank you so much, and I look forward to hearing your presentation on Thursday.
Hello Professor Todorov,
Thank you so much for your paper. I was wondering why you opted to use synthetic images instead of actual images for the experiments. Is it just a matter of cost, or was there a deeper methodological reason for this?
Hello Professor Todorov,
Thank you for your interesting contribution and for coming to speak to us. I have a question for you. You've shown convincingly that we can use AI for analysing facial judgements, but at what cost?! The question I have is whether we SHOULD use AI for analysing facial judgements. Have you seen 2001: A Space Odyssey? It didn't go very well there...
Thanks, Elliot
Hi Professor Todorov, thank you so much for sharing your wonderful work with us. Impressions are the result of different (weighted) factors such as facial features, sex, body shape, and age. I'm just wondering what you think about these factors and whether they would affect the contribution of your paper. Thank you!
Hello, Professor Todorov. Although your findings cannot imply a causal relationship, the descriptive evidence is still very intriguing. You mentioned that your auditing approach has several major limitations; I am just wondering whether there are other approaches you would like to try in the future that might perform better than the auditing approach.
Hi Professor Todorov,
Thank you for sharing your research with us. Your study specifies the "everyday context" in which the photos are collected and appraised. Do you think it would be worthwhile for future studies to investigate the effect of the context the appraised person is in, as well as the effect of the context the appraiser is situated in when making the appraisal? Your study's results are interesting as a focus on the more intrinsic properties of the face that people use to make judgements, and they make me curious about how additional contextual information affects the decision-making behavior of appraisers. For example, in the context of negotiation, how do interactants weigh facial impressions? Are objective values higher if negotiators feel their counterparts are more trustworthy? Would we then see the same effect in the context of a friendship?
Looking forward to your presentation tomorrow!
Hi Professor Todorov, thank you so much for sharing your research with us! Your research on building and explaining individual models of face perception is impressive. I was wondering, in your opinion, to what extent moving along attribute directions in the latent space will change people's inferences about the features?
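For readers unfamiliar with the latent-space idea this question alludes to, here is a toy numpy sketch of the general recipe: fit a linear "attribute direction" in a latent space, then morph a latent code along it. Everything here is simulated and hypothetical; it is not the paper's actual GAN pipeline, where the latent codes would come from a trained generator and the ratings from human judges:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 16, 500                    # toy latent dimensionality and sample size

# Simulated latent codes Z and per-image attribute ratings y, assumed to be
# linearly related (a common assumption in linear-readout analyses).
Z = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = Z @ w_true + 0.3 * rng.normal(size=n)

# Fit the attribute direction with ridge regression (lambda is arbitrary here).
lam = 1.0
w_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(dim), Z.T @ y)
direction = w_hat / np.linalg.norm(w_hat)

# "Morphing": step a latent code along the direction. The predicted attribute
# score grows linearly with the step size alpha.
z0 = rng.normal(size=dim)
scores = [(z0 + alpha * direction) @ w_hat for alpha in (-2.0, 0.0, 2.0)]
```

Because the predicted score is linear in the step size along the fitted direction, moving a code along that direction changes the predicted attribute monotonically; in a real pipeline, each morphed code would be decoded back into an image and shown to raters.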
Hello Professor Todorov, thank you for sharing your interesting research. It was interesting to see that the computer was able to identify subjective attributes of a person's image using a GAN. However, I cannot completely grasp the idea that "subjective" attributes can be captured as data. I suppose that having the attributes rated by 1,000 people makes the "subjective" features closer to "objective" ones, but I think there could still be sample bias among those 1,000 people. I wonder whether you think the results would differ if more people judged the images. Another concern is that this research could encourage the use of stereotypes in judging people; I wonder what you think about this concern.
Hi Professor Todorov, thank you for sharing your work with us. The idea of building models for superficial face judgment is really interesting. By building a system for face judgment, we have a chance to quantify people's feelings toward faces. The model demonstrates the ability of machines and algorithms, which is something people should be aware of. However, I am wondering whether this would entrench people's typical way of interpreting human expressions and narrow that path.
Comment below with a well-developed question or comment about the reading for this week's workshop!
If you would really like to ask your question in person, please place two exclamation points before your question to signal that you want to ask it.
Please post your question by Tuesday 11:59 PM. We also ask you all to upvote questions that you think are particularly good. There may be prizes for top question-askers.