
Spring2024

Repository for the Spring 2024 Computational Social Science Workshop

Time: Thursdays, 9:30 to 11:50 AM. Location: 1155 E. 60th Street, Chicago, IL 60637; Room 295

Sign up to meet with our speakers over lunch, dinner, or in small-group settings here

5/16 Jake Hofman is a Senior Principal Researcher at Microsoft Research in New York City, of which he is a founding member, and works in the field of computational social science. Prior to joining Microsoft, he was a Research Scientist in the Microeconomics and Social Systems group at Yahoo! Research. He holds a B.S. in Electrical Engineering from Boston University and a Ph.D. in Physics from Columbia University, where he is an Adjunct Assistant Professor of Applied Mathematics and Computer Science. He runs Microsoft's Data Science Summer School to promote diversity in computer science. His work has been published in journals such as Science, Nature, and the Proceedings of the National Academy of Sciences, and has been featured in popular outlets including The New York Times, The Wall Street Journal, The Financial Times, and The Economist.

An illusion of predictability in scientific results. In many fields there has been a long-standing emphasis on inference (obtaining unbiased estimates of individual effects) over prediction (forecasting future outcomes), perhaps because the latter can be quite difficult, especially when compared with the former. Here we show that this focus on inference over prediction can mislead readers into thinking that the results of scientific studies are more definitive than they actually are. Through a series of randomized experiments, we demonstrate that this confusion arises for one of the most basic ways of presenting statistical findings and affects even experts whose jobs involve producing and interpreting such results. In contrast, we show that communicating both inferential and predictive information side by side provides a simple and effective alternative, leading to calibrated interpretations of scientific results. We conclude with a more general discussion about integrative modeling, where prediction and inference are combined to complement, rather than compete with, each other. Contributing papers one and two.
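
The gap between inferential and predictive uncertainty is easy to see in simulation. Below is a minimal sketch (our illustration with made-up numbers, not code from the paper): the average treatment effect in a large two-group experiment is estimated very precisely, yet the treatment barely changes what happens to any given individual.

```python
# Minimal sketch (illustrative, not from the paper): a precisely estimated
# average effect coexisting with highly variable individual outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                    # large sample: tight inferential uncertainty
effect, sigma = 0.2, 1.0      # small true effect, large outcome noise

control = rng.normal(0.0, sigma, n)
treated = rng.normal(effect, sigma, n)

# Inference: the average treatment effect and its 95% confidence interval.
ate = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"ATE = {ate:.3f}, 95% CI = [{ate - 1.96 * se:.3f}, {ate + 1.96 * se:.3f}]")

# Prediction: how often does a random treated person actually do better than
# a random control person? (Theoretically about 0.56 here, barely better than
# a coin flip, despite the definitive-looking interval above.)
p_sup = (rng.permutation(treated) > rng.permutation(control)).mean()
print(f"P(treated outcome > control outcome) = {p_sup:.3f}")
```

Reporting only the first line of output is the inference-only presentation the experiments show to be misleading; adding the second is the side-by-side alternative the abstract advocates.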

Pose your questions here!

5/9 Grant Blank is a sociologist who explores the social and cultural impact of the Internet and new media. He is currently a Fellow at the Center for Advanced Internet Studies (CAIS) at Ruhr University Bochum, Germany, and has served on the faculty of the Oxford Internet Institute (OII) since 2010. He received his Ph.D. in sociology from the University of Chicago. He recently received the William F. Ogburn Career Achievement Award from the Communication, Information Technology and Media Sociology section of the American Sociological Association in recognition of sustained contributions to research on communication technology.

Smartphone dependencies shape internet use and outcomes. The distinctive characteristic of smartphones is the flexibility with which they can be personalized to their owners' needs, goals, and lifestyles. How they are personalized can lead different people to depend on them to attain very different goals. Drawing on media system dependency theory, I describe three routine uses of smartphones: orientation, play, and escape dependency. These dependencies are associated with different subpopulations, and they are major contributors to the amount and variety of internet use. All three also shape internet outcomes, but in different ways: orientation dependency has a positive influence on the benefits of use, while play and escape dependencies have a negative influence. The results show that the ways in which people incorporate smartphones into their lives have a strong impact on how they use the internet and what benefits they enjoy. The implications for a future theory of smartphone use are explored. The following readings support the presentation: echo chambers are overstated.pdf; smartphone dependencies.pdf

Pose your questions here!

5/2 Xuechunzi Bai is an incoming Assistant Professor of Psychology at the University of Chicago and currently a postdoctoral scholar at Princeton University, working across psychology, cognitive science, and statistics and machine learning in the Department of Psychology and the School of Public and International Affairs. She studies dynamic social minds: the interplay between individual decision processes and societal phenomena in social cognition, such as the origins of social stereotypes.

Multidimensional Stereotypes Emerge Spontaneously When Exploration is Costly. Stereotypes of social groups have a canonical multidimensional structure, reflecting the extent to which groups are considered competent and trustworthy. Traditional explanations for stereotypes – group motives, cognitive biases, minority/majority environments, or real group differences – assume that they result from deficits in humans or their environments. A recently proposed alternative explanation – that stereotypes can emerge when exploration is costly – posits that even optimal decision-makers in an ideal environment can inadvertently create incorrect impressions. However, existing theories fail to explain the multidimensionality of stereotypes. We show that multidimensional stratification and the associated stereotypes can result from feature-based exploration: when individuals make self-interested decisions based on past experiences in an environment where exploring new options carries an implicit cost, and when these options share similar attributes, they are more likely to separate groups along multiple dimensions. We formalize this theory via the contextual multi-armed bandit problem, use the resulting model to generate testable predictions, and evaluate those predictions against human behavior. In particular, we evaluate this process in incentivized decisions involving as many as 20 real jobs, and successfully recover the classic warmth-by-competence stereotype space. Further experiments show that intervening on the cost of exploration effectively mitigates bias, further demonstrating that exploration cost per se is the operative variable. Future diversity interventions may consider how to reduce exploration cost, for example by introducing bonus rewards for diverse hires, assessing candidates using challenging tasks, or randomly making some groups unavailable for selection. The following reading supports the presentation: BaiGriffithsFiske.pdf
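
The core mechanism is simple enough to sketch: a decision-maker who exploits past experience and never pays the cost of exploring again can lock in an unlucky early impression of a group, even when the groups are truly identical. The paper formalizes this with a contextual multi-armed bandit over job features; the sketch below (our simplification, not the authors' model) strips it down to a plain two-group bandit with purely greedy choice.

```python
# Minimal sketch (our simplification): greedy choice under costly exploration
# preserves an unlucky first impression of a group that is truly identical.
import numpy as np

rng = np.random.default_rng(42)
true_mean = {"A": 0.5, "B": 0.5}   # the two groups are objectively identical
est = {"A": 0.0, "B": 0.0}         # decision-maker's running quality estimates
n_obs = {"A": 0, "B": 0}

def choose_and_update(group):
    """Pick someone from `group`, observe noisy quality, update the mean."""
    reward = rng.normal(true_mean[group], 0.3)
    n_obs[group] += 1
    est[group] += (reward - est[group]) / n_obs[group]

for g in ("A", "B"):               # one forced look at each group...
    choose_and_update(g)

for _ in range(500):               # ...then pure exploitation, no exploration
    choose_and_update(max(est, key=est.get))

print(est, n_obs)
# In a typical run, whichever group drew the worse first sample is never
# chosen again, so its unluckily low estimate persists indefinitely:
# an incorrect impression without any real group difference.
```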

Pose your questions here!

4/25 Uri Hasson is a Professor of Psychology at Princeton University who studies the neural basis of brain-to-brain human communication, natural language processing, and language acquisition. He aims to develop new theoretical frameworks and computational tools to model the neural basis of cognition as it materializes in the real world, inspired by the success of deep learning in modeling natural stimuli. He and his team are searching for shared computational principles and inherent differences in how the brain and deep neural networks process natural language, with findings that suggest how deep language models provide a new computational framework for studying the neural basis of language.

Deep language models as a cognitive model for natural language processing in the human brain. Naturalistic experimental paradigms in cognitive neuroscience arose from a pressure to test, in real-world contexts, the validity of models we derive from highly controlled laboratory experiments. In many cases, however, such efforts led to the realization that models (i.e., explanatory principles) developed under particular experimental manipulations fail to capture many aspects of reality (variance) in the real world. Recent advances in artificial neural networks provide an alternative computational framework for modeling cognition in natural contexts. In this talk, I will ask whether the human brain's underlying computations are similar to, or different from, the underlying computations in deep neural networks, focusing on the neural processes that support natural language processing in adults and language development in children. I will provide evidence for some shared computational principles between deep language models and the neural code for natural language processing in the human brain. This indicates that, to some extent, the brain relies on overparameterized optimization methods to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models as speakers try to convey new ideas and thoughts. Finally, I will discuss our ongoing attempt to use deep acoustic-to-speech-to-language models to model language acquisition in children. The following readings support the presentation (read and comment on either or both): Hasson_et_al_2020; Goldstein_et_al_2022.
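
The modeling objective being compared to the brain is autoregressive next-word prediction. A minimal sketch of that objective with an off-the-shelf pretrained model (GPT-2 via the Hugging Face transformers library; our illustration, not the speaker's code):

```python
# Minimal sketch: a causal language model's probability distribution over
# the next token, the objective the talk compares to neural prediction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = "The results of the experiment were"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Distribution over the next token, given everything seen so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```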

Pose your questions here!

4/18 Nilam Ram is a professor of Communication at Stanford University who studies the dynamic interplay of psychological and media processes and how they change moment-to-moment and across the life span. This workshop is VIRTUAL only; join online.

Modeling at Multiple Time-Scales: Screenomics and Other Super-Intensive Longitudinal Paradigms. A decade ago, we used newly emerging smartphone technologies to obtain multiple time-scale data that facilitated study of new intraindividual variability constructs and how they changed over time. The recent merging of daily and digital life further opens opportunities to observe, probe, and modify every imaginable aspect of human behavior – at a scale we never imagined. Using collections of intensive longitudinal data from survey panels, experience sampling studies, social media, laboratory observations, and our new Screenomics paradigm, I illustrate how methodological invocation of zooms, tensions, and switches (ZOOTS) is transforming our understanding of human dynamics and development. Along the way, I develop calls for more flexible definitions of time, fluidity and diversity of methodological approach, and engagement with science that adds good to the world. Two short, related papers available here and here.
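
One concrete ingredient of multiple time-scale modeling is re-aggregating the same intensive data stream at different temporal resolutions, so that different constructs (diurnal rhythms, day-to-day variability) become visible at different zoom levels. A minimal sketch with simulated per-minute screen-use data (our illustration, not the Screenomics pipeline):

```python
# Minimal sketch: one intensive stream viewed at several time-scales.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
minutes = pd.date_range("2024-04-01", periods=7 * 24 * 60, freq="min")  # one week
screen_on = pd.Series(rng.random(len(minutes)) < 0.25, index=minutes)   # ~25% of minutes on screen

# The same behavior at two zoom levels:
hourly = screen_on.resample("h").sum()   # minutes of use per hour (diurnal view)
daily = screen_on.resample("D").sum()    # minutes of use per day (weekly view)

print(hourly.head(3))
print(daily)
```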

Pose your questions here!

4/11 Ashton Anderson is an Associate Professor in the Department of Computer Science at the University of Toronto, broadly interested in the intersection of AI, data, and society. He runs the Computational Social Science Lab at the University of Toronto and has made major advances in the large-scale understanding of online communities, polarization, and AI designed to collaborate with humans.

Generative AI for Human Benefit: Lessons from Chess. Artificial intelligence is becoming increasingly intelligent, kicking off a Cambrian explosion of AI models filling thousands of niches. Although these tools may replace human effort in some domains, many other areas will foster a combination of human and AI participation. A central challenge in realizing the full potential of human-AI collaboration is that algorithms often act very differently than people, and thus may be uninterpretable, hard to learn from, or even dangerous for humans to follow. For the past six years, my group has been exploring how to align generative AI for human benefit in an ideal model system, chess, in which AI has been superhuman for over two decades, a massive amount of fine-grained data on human actions is available, and a wide spectrum of skill levels exists. We developed Maia, a generative AI model that captures human style and ability in chess across the spectrum of human skill, and predicts likely next human actions analogously to how large language models predict likely next tokens. The Maia project started with these aggregated population models, which have now played millions of games against human opponents online, and has grown to encompass individual models that act like specific people, embedding models that can identify a person by a small sample of their actions alone, an ethical framework for issues that arise with individual models in any domain, various types of partner agents designed from combining human-like and superhuman AI, and algorithmic teaching systems. In this talk, I will share our approaches to designing generative AI for human benefit and the broadly applicable lessons we have learned about human-AI interaction. Paper: "Designing Skill-Compatible AI: Methodologies and Frameworks in Chess," by Karim Hamade, Reid McIlroy-Young, Siddhartha Sen, Jon Kleinberg, and Ashton Anderson. In ICLR 2024.
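
The analogy to language models is concrete: where an LLM outputs a distribution over next tokens, a model like Maia outputs a distribution over legal next moves given a board position. The sketch below shows only that interface, using the python-chess library with a toy hand-written heuristic standing in for the trained network (the heuristic is entirely our invention for illustration):

```python
# Minimal sketch: P(next move | position), with a toy stand-in for the policy.
import math
import chess  # pip install python-chess

def score_move(board: chess.Board, move: chess.Move) -> float:
    """Toy stand-in for a trained policy network: prefer captures and checks."""
    s = 1.0 if board.is_capture(move) else 0.0
    board.push(move)
    if board.is_check():
        s += 0.5
    board.pop()
    return s

def move_distribution(board: chess.Board) -> dict[str, float]:
    """Softmax over per-move scores: a distribution over legal next moves."""
    moves = list(board.legal_moves)
    exps = [math.exp(score_move(board, m)) for m in moves]
    z = sum(exps)
    return {board.san(m): e / z for m, e in zip(moves, exps)}

board = chess.Board()
for san in ("e4", "d5"):
    board.push_san(san)   # a position where White can capture (exd5) or check (Bb5+)

for move, p in sorted(move_distribution(board).items(), key=lambda kv: -kv[1])[:5]:
    print(f"{move:>6s}  {p:.3f}")
```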

Pose your questions here!

3/28 John Wixted is a Distinguished Professor of Psychology at UCSD whose research focuses on understanding episodic memory. His work investigates the cognitive mechanisms that underlie recognition memory, often drawing upon signal detection theory. He also investigates how episodic memory is represented in the human hippocampus, based mainly on single-unit recording studies performed with epilepsy patients. In recent years, his research has also investigated the applied implications of signal-detection-based models of recognition memory for the reliability of eyewitness memory.

Emerging Insights into the Reliability of Eyewitness Memory. Eyewitness misidentifications have contributed to many wrongful convictions. However, despite expressing high confidence at trial, eyewitnesses often make inconclusive misidentifications on the first test conducted early in a police investigation. According to a new scientific consensus, it is important to focus on the results of the first test because, if the perpetrator is not in the lineup, the test itself leaves a memory trace of the innocent suspect in the witness’s brain. Thus, all subsequent tests of the witness’s memory for the same suspect constitute tests of contaminated memory. Unfortunately, when evidence of an initial inconclusive identification is introduced at trial, the rules of evidence provide a witness with an opportunity to explain their prior inconsistent statement. In response, witnesses often provide an opinion about why they did not confidently identify the suspect on the initial test despite doing so now (e.g., “I was nervous on the first test”). However, witnesses lack expertise in—and have no awareness of—the subconscious mechanisms of memory contamination that have been elucidated by decades of scientific research. The combination of a sincerely held (false) memory and a believable (but erroneous) explanation for a prior inconsistent statement is often persuasive to jurors. This is a recipe for a wrongful conviction, one that has been followed many times. These wrongful convictions, which have long been attributed to the unreliability of eyewitness memory, instead reflect a system that unwittingly prioritizes false memories elicited at trial over true memories elicited early in a police investigation. The Federal Rules of Evidence were enacted almost a half-century ago, and it may be time to revisit them in light of the principles of memory that have been established since that time. Related paper on The Mechanisms of Memory vs. the Federal Rules of Evidence sent by email.
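
The signal-detection framing mentioned in the bio can be made concrete: from a hit rate (identifications of guilty suspects) and a false-alarm rate (identifications of innocent suspects), the equal-variance Gaussian model yields a discriminability index d' and a response criterion c. A minimal worked example with made-up rates (not data from the talk):

```python
# Minimal sketch: signal-detection measures from hit and false-alarm rates.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit: inverse of the standard normal CDF

def sdt(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) under the equal-variance Gaussian model."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical first-test lineup performance: 70% hits, 20% false alarms.
d_prime, criterion = sdt(0.70, 0.20)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # d' ≈ 1.37, c ≈ 0.16
```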

Pose your questions here!