uchicago-computation-workshop / Spring2022

Repository for the Spring 2022 Computational Social Science Workshop

05/05: David Lazer #6

Open shevajia opened 2 years ago

shevajia commented 2 years ago

Comment below with a well-developed question or comment about the reading for this week's workshop. These are individual questions and comments.

Please post your question by Wednesday 11:59 PM, and upvote at least three of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can add other emojis in addition to the thumbs-up.

Thiyaghessan commented 2 years ago

Hi Professor Lazer,

Thank you for presenting your work with us today. I had the following questions.

  1. Is it possible that the lower prevalence of misinformation (p. 377) is a function of the platform itself? Figure 3 identifies conservatives as being at greater risk of exposure, and most conservative and far-right content creators are more active on platforms such as Facebook and YouTube. I would therefore expect Twitter to see less sharing of misinformation from conservative sources. Additionally, older users, who are more susceptible to misinformation's pernicious effects, are more active on Facebook and on messaging apps like WhatsApp and Telegram, so I would suspect that exposure is greater there.

  2. Would better data privacy help reduce exposure to misinformation? Looking at figure 3, it seems like the greater likelihood of exposure for conservatives could also be a product of already increased susceptibility coupled with targeted advertising by publishers. With stronger protections, the efficacy of targeted ads could be reduced, thereby reducing exposure for this already susceptible group.

  3. You pointed out that individuals are likelier to share information that is belief-congruent. Should we be asking ourselves why some individuals find extremist/fake news belief-congruent? Many of these sources frequently devolve into conspiracy mongering, alleging that our institutions are under the control of an evil cabal. Sure, some people are by nature going to find such information appealing, but for many others this congruence is often the product of economic dislocation and the social alienation that comes with it. Is such a "root-cause" explanation valid, and should we be looking for policy solutions to tackle these causes?

pranathiiyer commented 2 years ago

Thank you for sharing your work with us, Professor! I had questions about how social media organisations could adopt some of your policy suggestions across different geographies, given their presence in many countries; social media users also tend to share associations and networks with individuals in other countries. Moreover, I'm guessing different events trigger different peaks of fake news within countries, whereas an issue like COVID is a fairly global one that could influence misinformation across nations. You suggest that social media platforms could partner with third-party organisations to monitor the spread of fake news; however, I wonder what you think of the complexity of these policies as they extend to more than one country. I also believe the prevalence of bots on social media could make the task of identifying these spreaders of fake news more challenging at a larger scale and in real time. Would love to hear what you think!

hsinkengling commented 2 years ago

Hi Prof. Lazer. Thanks for sharing your research with us.

I really like the organizational definition of fake news, since it takes the epistemic burden of proving that something is "false" off the researchers. However, I wonder if this choice may result in the labels capturing more of a news "style" (a bundle of content characteristics that could be driving the results) rather than the sheer fact of a piece of information being false. In terms of research and policy, would it be better to find another axis of distinction (e.g., conspiratorial)? Or should fakeness be the defining feature?

william-wei-zhu commented 2 years ago

Thank you Professor Lazer for sharing your work. What's your advice for Elon Musk and his new Twitter team to reduce the spread of misinformation while maintaining impartiality on social media?

chrismaurice0 commented 2 years ago

My question might be a cynical one: we know fake news exists, and we know it is targeted towards and affects a small number of people and is spread by a small number of actors. Do you feel like you are screaming into the void with your research on the effects of de-platforming on the spread of fake news, given that social media companies are reluctant to intervene on "free speech" issues?

JoeHelbing commented 2 years ago

Sort of combining the focus of both papers, and in reference to Musk's musings about "authenticating all real humans": to what extent do you think the anonymity Twitter affords producers and consumers of misinformation exacerbates the issue? Just spit-balling on the psychology of it, I imagine that, even just on the consumption side, removing anonymity might reduce fake news consumption irrespective of its effect on retweets or dissemination generally.

isaduan commented 2 years ago

I am curious about your definition and operationalization of fake news: "The attribution of fakeness is thus not at the level of the story but at that of the publisher." Do you expect your results to change, and if so how, if we looked at fakeness at the level of individual stories? How might we operationalize fakeness at that level?

hhx2207061197 commented 2 years ago

Hi Professor. Just one question: do we need to consider why some people find extremist/fake news belief-congruent? Some people naturally find such information appealing, but for many others the congruence is often the product of economic dislocation and the social alienation that comes with it. Thanks!

egemenpamukcu commented 2 years ago

Hi Professor Lazer, it is an honor to have you as a guest. I would like to use this opportunity to get your opinion on the recent acquisition of Twitter by Elon Musk. Some argue that Twitter, as a public company tasked with delivering financial value to shareholders, was destined to boost the dissemination of fake news and polarizing content, because such content attracts users and drives engagement. Do you think the societal problems arising from widespread adoption of social platforms by the general public can be mitigated while these companies remain public? What do you think are some possible avenues for aligning the goals of these companies with those of society in the long run?

a-bosko commented 2 years ago

Hi Dr. Lazer,

Thank you for sharing your work with us! As data scientists, a lot of us work with Twitter data, so it is very interesting and relevant to see how we can apply our skills across different fields!

In the article "Fake news on Twitter during the 2016 U.S. presidential election", the authors mention that only 1% of individuals accounted for 80% of fake news source exposures. This seems like a big deal to me! I believe this type of research opens the door for other fields to reduce the spread of misinformation. My first question is: how do you see this work being applied to other fields, such as healthcare or finance? My second question is: what kinds of strategies do you see social media platforms implementing to stop the spread of misinformation during elections? Do you think social media platforms will even try to stop the spread?

linhui1020 commented 2 years ago

Professor Lazer, thanks for your presentation! I am really interested in your paper "Using Administrative Records and Survey Data to Construct Samples of Tweeters and Tweets", where you mention that a survey may capture only a handful of the accounts that are actually the most important and active on Twitter, which could bias the results. Could this be mitigated by giving more weight to the active accounts in the survey?
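Just to make concrete what I mean by giving more weight to active accounts, here is a minimal post-stratification-style sketch (my own illustration, not from the paper; the activity strata, population shares, and column names are all hypothetical):

```python
import pandas as pd

# Hypothetical linked survey: one row per respondent, with an activity
# stratum (from their matched Twitter account) and a survey outcome.
survey = pd.DataFrame({
    "activity": ["low"] * 12 + ["medium"] * 3 + ["high"] * 1,
    "shared_fake_news": [0] * 11 + [1] + [0, 1, 0] + [1],
})

# Assumed population shares of each activity stratum on the platform,
# e.g. estimated from a full matched panel rather than the survey.
population_share = {"low": 0.60, "medium": 0.30, "high": 0.10}

# Post-stratification weight = population share / sample share, so strata
# that are under-represented in the survey (the very active accounts)
# count for more in the weighted estimate.
sample_share = survey["activity"].value_counts(normalize=True)
survey["weight"] = survey["activity"].map(lambda s: population_share[s] / sample_share[s])

unweighted = survey["shared_fake_news"].mean()
weighted = (survey["shared_fake_news"] * survey["weight"]).sum() / survey["weight"].sum()
print(f"unweighted: {unweighted:.3f}  weighted: {weighted:.3f}")
```

Here the highly active stratum is under-represented in the toy survey, so its responses receive a weight above one in the weighted estimate.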

jinfei1125 commented 2 years ago

Hi Professor Lazer, thank you so much for coming and sharing your work; I am really looking forward to your talk! The Twitter fake news paper is really engaging. As mentioned in the research, "fake news sources were extremely concentrated, only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared". What would be an effective approach for us to arrest the spread of fake news? Thank you!

MengChenC commented 2 years ago

Hi Professor Lazer,

Thank you for coming and sharing your work; I am really excited to hear the presentation tomorrow. Based on your research, you concluded that conventionally sized surveys are likely to lack the statistical power to study subgroups and heterogeneity within even highly salient political topics. I am curious whether we can remedy this deficit, since many surveys in the social sciences tend to be small. For instance, what would be the effect of introducing or linking more demographic data from different sources? Thank you.

Jasmine97Huang commented 2 years ago

Dear Professor Lazer, so excited to hear your presentation tomorrow! My question relates to your paper "Fake news on Twitter during the 2016 U.S. presidential election". I found your focus on fake news sources very interesting, since using individual fake news stories as the unit of analysis would also seem intuitive. I am wondering what the reason behind this design choice is. And how does your study distinguish between fake news sources that include some truthful information and fake news sources that consistently lack accuracy?

kthomas14 commented 2 years ago

Hello Dr. Lazer, and thank you for sharing your research with us! I found your paper about the dissemination and consumption of fake news during the 2016 election very insightful and affirming of the relationship the US population has with fake news. Your use of Twitter was a very interesting way to collect and analyze a sample of US voters. I would be interested to know more about the role of superconsumers and supersharers as political influences on users who are not as involved with sharing news on social media. Additionally, I would be interested to see a time series of the spread and consumption of fake news through the present day, with special attention to recent political events, or simply to hear your thoughts on these trends.

yierrr commented 2 years ago

Thanks for sharing your research! I am also interested in the use of social media surveys in research and how they may be improved.

Yutong0828 commented 2 years ago

Hi Professor Lazer, thank you for sharing your research with us! I found your paper about combining administrative records and survey data to construct social media samples very inspiring! I have been working with social media data for a while, and I agree that attaching demographic information to the data or integrating social-media-related questions into surveys can be very beneficial to researchers. A challenge, however, is how to handle privacy and make participants feel safe providing their social media accounts. A social media account can be a very personal thing, and these platforms are often anonymous, so I worry that some people may be reluctant to provide their accounts to researchers. This could introduce selection bias, since people who are willing to share their account information may be less polarized and more positive in their online behavior. My question is: how can we control for that problem in social science research? I am looking forward to learning more about your research, thanks!

JunoWuu commented 2 years ago

Hi, Dr. Lazer! I really like your comparison of the different methods you have identified. As a student in psychology, I am quite familiar with traditional survey-based methods but not so much with more recent data collection approaches. I therefore wonder whether the strengths you identify in these approaches are specific to particular areas of study or can be applied generally across fields.

PAHADRIANUS commented 2 years ago

Professor Lazer: thank you for sharing your team's progress on analyzing the political demographics of Twitter users, a topic I am quite interested in digging into myself and have been searching the literature on. I agree with your approach to filtering user information and measuring user behavior. I have a few concerns:

  1. Your work on the 2016 election decisively demonstrated that the ultra-right was more exposed to and more likely to share fake news reports, endangering the online discourse environment. Still, that was a fair portion of around 10 percent, far from encompassing the majority of that group. Would this be sufficient to justify the comprehensive actions taken against them? Where should the red line be drawn?
  2. Whereas the expulsion and voluntary exit of ultra-right users improved information accuracy and the discourse ambience on Twitter, their voices did not disappear, and their exodus to other, more extreme social media forums could further cement their biases and false beliefs. Rather than solving the problem, the intervention may simply have disguised it. Like a pressed spring, the ultra-right could easily bounce back at a convenient time.
atowey-uchi commented 2 years ago

Thank you for coming to speak with us! My question is about privacy and the connection between demographic information and social media profiles. I could foresee concerns if people were able to track down users they disagree with online and find their demographic information and geographic location (even though this information is currently publicly available). Additionally, would researchers need to obtain permission to use the demographic information of users even if they obtained it through publicly available means?

Raychanan commented 2 years ago

I have no specific question. We appreciate your central point regarding data linking for enhancing our understanding of sampling strategies and results. By developing samples that are linked to external benchmarks and whose members are tracked over time, researchers can examine whether the people who tweeted about a particular topic reflect the broader population of US adults on Twitter. A question of such importance weighs heavily on many researchers' minds as they draw their conclusions.

xxicheng commented 2 years ago

Echoing @chrismaurice0, I also have a "so-what" question. Misinformation has existed in some form for as long as human beings have; social media is just one of the platforms that spread fake news. Now that some patterns have been found, should we simply make them disappear ... like everything else we do not like? Also, who gets to decide what is fake, especially in politics? Could this become another form of ideological reinforcement by certain groups of people (who are already advantaged in other respects, such as SES or political leadership)? How should we distinguish misinformation blocking from censorship? Or should we? How do you think our lives could be made better by applying the results of your and others' misinformation studies?

wanxii commented 2 years ago

Thank you so much for sharing these interesting projects! Since people tend to believe what they want to believe, I wonder what you think the actual magnitude of the impact of fake news on voting behavior or people's political ideology is. Many thanks!

jiehanL commented 2 years ago

Hi Professor Lazer, thank you for sharing your work! My question is about the fake news paper: what techniques did you use to exclude potential social bots? And how did you estimate or evaluate the potential bias introduced by the presence of social bots? Many thanks!

NaiyuJ commented 2 years ago

Hi Professor Lazer, thanks for sharing your excellent work! The paper shows how researchers can link individuals' basic demographic attributes with their social media data in order to better understand political phenomena. However, in most situations we may not be able to directly link surveys to social media content at the individual level, especially when studying authoritarian regimes. Nevertheless, survey data can still complement social media data if we build an indirect link between public opinion and surveys at the aggregate level. How can we build this sort of indirect link and better combine these two sources to get a full picture of citizens' political attitudes and behavior? Thanks!

afchao commented 2 years ago

Thanks for sharing your work with our group! It's interesting to think about the kinds of media consumers who are victimized by misinformation. I think it's naive to imagine that they're all actually misinformed; rather, I suspect that there's a group of people in there who aren't dealing with news media in good faith. In other words: people who are looking to validate an existing belief rather than acquire information. If you grant the existence of people like this, I feel like any "corrective" mechanism on public discourse is only going to inflame these people's opinions and sense of disenfranchisement, especially if we're talking about the modern American right! Do you see any potential for social media (perhaps in conjunction with computational social science?) to be applied as a bridge to these people rather than the cudgel it's become?

ValAlvernUChic commented 2 years ago

Thank you for sharing your work with us, Professor. While the paper covered the exposure of individuals to fake news, I was wondering how we might extend this to investigating resulting changes/entrenchment of belief systems and opinions. There seems to be a complicated mapping process from exposure to actual belief in what they're being exposed to, especially with such large-scale data. I imagine it'd be centered around some sort of longitudinal analysis but would love to hear your thoughts!

mikepackard415 commented 2 years ago

Hi Professor Lazer, thanks for coming to the workshop and sharing your research. The problem of misinformation is a really important one, and I suspect the deplatforming of insurrectionists provides somewhat of a "natural experiment" on what we might call the information ecology. The summary of your talk mentions "the capacity of social media platforms to control public discourse." I guess I'm wondering about the word control, and whether that's what we should want for social media platforms, even if it operated to reduce misinformation in this scenario. What does your research tell us about how we can develop healthy governance systems for the information ecology?

yutaili commented 2 years ago

Thanks for sharing your work, Professor Lazer. Given all the negative consequences of fake news and misinformation on social media platforms like Twitter, what is your standpoint on implementing strict censorship of content on Twitter? Would you say strict censorship undermines the right to free speech, or can it be justified by the protection of vulnerable groups who are easily affected by fake news and misinformation? Thanks.

taizeyu commented 2 years ago

Dear Dr. Lazer, thanks for sharing the research with us. My questions are: can we completely prevent fake news? What is the harm of fake news? And is fake news always bad?

cgyhumble0612 commented 2 years ago

Hi Professor, thanks for sharing two interesting papers with us. I am very interested in the research on fake news. What do you think of the role social media platforms play in mitigating misinformation? Also, could we measure the impact of fake news as some kind of concrete value to support quantitative work? Thank you so much!

Qiuyu-Li commented 2 years ago

Hi Professor Lazer, thank you for coming to our workshop. I guess my biggest concern is whether misinformation checking could lead to cancel culture. As I imagine it, before we can define something as mistaken, we have to establish what is right. Wouldn't it be dangerous to be so confident about right and wrong?

BaotongZh commented 2 years ago

Hi Professor Lazer, thank you for sharing your great work with us. Although the paper shows that the majority of people are exposed to popular non-fake news sources, there remains a possibility that fake news could significantly alter election results. I was wondering how we can prohibit fake news without hurting freedom of speech or giving the government too much power.

ChongyuFang commented 2 years ago

Hi Prof. Lazer, thanks for sharing your research with us. My question is: is there a possible way to combine individual-level data with aggregate-level data? Could there be cases where, once individual-level data are aggregated, the effects offset one another? I believe this might introduce bias into the analysis. Thanks!

AlexPrizzy commented 2 years ago

Thank you for presenting this work, Dr. Lazer. In the Twitter study following the 2016 presidential election, you mention the possible remedy of disincentivizing frequent posting to prevent flooding techniques, though this may not be in the interest of social media platforms and influencers, since frequent posting means financial gains for those parties. Do you think it might be possible to overpower fake news by flooding social media with real news, or would this turn into a "fighting fire with fire" situation?

zixu12 commented 2 years ago

Hi Professor Lazer, thank you so much for sharing your work! I am wondering how you distinguish disinformation from misinformation; some of the misinformation you discuss seems like disinformation to me. I am also wondering whether, and if so how, you plan to correct the "misinformation". Thanks!

helyap commented 2 years ago

Hi Professor Lazer,

Thank you for sharing your papers and your upcoming presentation with us. Do you find the correlations between disinformation or fake news sharing and exposure in your 2019 study (e.g., how political congruency affects individual engagement with posts) to be explainable by broader sociological or social-psychological mechanisms of decision-making and information systems? Also, although it's noted that "fake news" and misinformation have existed for quite some time, are there features of fake news sharing through the internet and social media platforms that are novel and part of new phenomena of our internet age?

Emily-fyeh commented 2 years ago

Hi Professor Lazer, thank you for coming to our workshop. I am curious about the definition of "exposures" in the paper "Fake news on Twitter during the 2016 US presidential election", and about how you define the "panels" of the Twitter population. Since changes in social media algorithms and in users' activity levels (which could potentially be weighted alongside population attributes) can change the estimate of "exposure" to fake news, the subsequent analyses could lead to different implications accordingly.

kuitaiw commented 2 years ago

Dear Professor Lazer, thanks for coming and sharing your work. I would like to know how we should define fake news so that we can avoid its harm. At the same time, I wonder whether we could effectively eliminate fake news by hiring people on the various social media platforms to moderate all news.

yiq029 commented 2 years ago

Thanks for sharing your work, Professor Lazer! I am really looking forward to hearing how you deal with confounding variables in a causal-effect analysis with such a highly complex data set.

koichionogi commented 2 years ago

Thank you so much for presenting your work, Dr. Lazer. I have a quick question regarding the sharp RDD that you use in your recent research. If you consider the January 6th insurrection as the event that creates the discontinuity, would the treatment be observing the event? Doesn't that happen to everyone, so wouldn't it be hard to select control and treatment groups? If the treatment were instead something like being influenced by the insurrection, wouldn't being treated or untreated carry some bias? That is, it would be hard to assume a smooth distribution of closeness to those who were eliminated, and there could be other factors that influence the effects. I would really appreciate it if you could share your thoughts on the research design. Thank you again for your presentation.
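For what it's worth, the way I currently picture the sharp design is that time is the running variable and everyone crosses the cutoff, so the estimand is the jump in the outcome at the deplatforming date rather than a comparison of separate treated and control users. A minimal sketch with entirely simulated data and made-up variable names (certainly not the paper's actual specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical daily series of misinformation sharing, centered on an
# assumed deplatforming cutoff at day 0.
days = np.arange(-30, 31)
df = pd.DataFrame({"day": days})
df["post"] = (df["day"] >= 0).astype(int)
# Simulated outcome: a gentle trend plus a level drop at the cutoff.
df["misinfo_shares"] = 50 - 0.1 * df["day"] - 8 * df["post"] + rng.normal(0, 2, len(df))

# Sharp RDD in time: everyone is "treated" once past the cutoff, so the
# estimand is the jump in the conditional mean at day 0 (separate slopes allowed).
model = smf.ols("misinfo_shares ~ post + day + post:day", data=df).fit()
print(model.params["post"])  # estimated discontinuity (level shift) at the cutoff
```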

LFShan commented 2 years ago

Thank you for your work, Professor. I would like to know your opinion on the power of social media platforms. Should a private company like Twitter be able to moderate online content? The ability to decide what content gets moderated is tremendously powerful; should it be regulated?

Hongkai040 commented 2 years ago

Hi Prof. Lazer, thank you for coming to the workshop! I am very interested in the findings of the paper 'Fake news on Twitter during the 2016 U.S. presidential election'. I am wondering whether findings like 'a cluster of fake news sources shared overlapping audiences on the extreme right' suggest that there are distinct groups on Twitter and that these groups are tied to misinformation. A follow-up question: what is the influence of the spread of fake news on emotional states on Twitter?

sdbaier commented 2 years ago

Reading your 2021 Public Opinion Quarterly piece got me thinking: wouldn't data linking of social media profiles via the survey method only be appropriate for uncontroversial settings or accounts? Particularly when looking at polarizing accounts, I would expect them to be much more reluctant to share demographic information, introducing a biased subset depending on the research context. The fake news sphere on Twitter, as illustrated in the 2019 Science piece, would constitute one such context. I am sure you thought through this when writing the 2021 piece or in your prior research. What is your take?

Side note: I have recently read your 2019 Nature Communications paper with Charles Gomez on diversity of ability and diversity of knowledge. Fascinating, particularly your approach of using ABMs. Thank you for making the code open access via Harvard’s Dataverse!

Yaweili19 commented 2 years ago

Hi Professor Lazer,

Thank you for coming to our workshop and sharing your amazing articles! I'm working in a related field and am finding them immensely helpful.

I wasn't especially clear on your definition and measurement of polarizing/polarized accounts; would you mind sharing more about them? And would a similar attempt be doable with less computational power?

YileC928 commented 2 years ago

Hi Prof. Lazer, thank you so much for coming to the workshop! My question may be a bit unrelated to the two papers you are sharing, but I really hope to hear your insight on studying misinformation and platform influence from a network science perspective.

sudhamshow commented 2 years ago

Dear Professor Lazer, thanks for presenting your work; it has been quite revealing! A couple of questions on the readings and research design: 1) How does one usually go about deciding when to ask for permission when involving individuals in research? If participants are made aware of the study, their survey responses could be biased (reactivity due to awareness of being studied), yet researchers often emphasise the importance of getting user consent (covered exhaustively by M. Salganik). Obtaining consent can seem infeasible, especially when studying users at the scale of your study (1.5M). What would you suggest to fellow researchers caught in this dilemma?

2) M. Salganik, in his book Bit by Bit, discusses innovative methods of collecting data, one of which he calls amplified asking, where the authors (e.g., Joshua Blumenstock) impute missing data using machine learning models and data available for other users. In your research you mention how critical survey data are and how non-response could bias results. I was wondering whether these data (intent and sentiment on particular topics) could be imputed from other available data like age, demographics, etc. (a rough sketch of what I mean is at the end of this comment).

3) In your second article you attribute the 'fakeness' of an article to the publisher rather than to the individual story. Don't you feel that social media platforms should not play the role of arbiter of truth? Pushing the onus of truth-keeping onto social media platforms is going to let perpetrators off scot-free. What is your take on this (and on the recent arguments about big tech gatekeeping)?
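To illustrate the kind of imputation I have in mind in question 2, here is a toy sketch in the spirit of amplified asking; the features, column names, and the 10% survey coverage are all made up for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical user table: everyone has observable features, but only a
# small surveyed subset has the attitude of interest.
n = 1000
users = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "log_followers": rng.normal(5, 2, n),
})
surveyed = rng.random(n) < 0.1  # ~10% answered the survey
users.loc[surveyed, "supports_policy"] = (users.loc[surveyed, "age"] > 45).astype(int)

# Amplified asking in miniature: fit on the surveyed subset, then predict
# the missing responses for everyone else.
features = ["age", "log_followers"]
model = LogisticRegression().fit(
    users.loc[surveyed, features], users.loc[surveyed, "supports_policy"]
)
users.loc[~surveyed, "supports_policy"] = model.predict(users.loc[~surveyed, features])
print(users["supports_policy"].mean())  # imputed population-level estimate
```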

97seshu commented 2 years ago

Hi Professor Lazer, I was reading your article about linking Twitter data with external datasets (e.g., surveys) to better measure behaviors. I do feel that combining the methods can give us more representative measurements, but I also think it can be more costly and time-consuming. What kinds of studies do you think would benefit the most from this mixed method? And for which would this procedure be harder to implement? Thanks.

Coco-Jiachen-Yu commented 2 years ago

Hello Professor Lazer, thank you so much for sharing your research with us! I'm very interested in the ethical discussion in your paper that used administrative records and survey data to construct samples of tweeters and tweets. Do you think the ethical considerations of your approach might vary when investigating topics that are more private or more subject to demand characteristics (e.g., family history, moral decisions)? In such cases, do you believe that people's willingness to consent to the use of their data might affect the representativeness of the data?

mdvadillo commented 2 years ago

Hi Professor Lazer, thank you for your presentation. In reading the paper on 'Constructing Samples of Tweeters and Tweets', I was wondering whether there is any additional advantage to researchers linking multiple social media accounts to one person, say, through a survey and with participants' informed consent, obtaining their LinkedIn and Facebook information and complementing the demographic data with participants' multiple profiles.