uchicago-computation-workshop / Winter2021

Repository for the Winter 2021 Computational Social Science Workshop

03/04: Renee DiResta #8

Open smiklin opened 3 years ago

smiklin commented 3 years ago

Comment below with questions or thoughts about the reading for this week's workshop.

Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. You need to use 'thumbs-up' for your reactions to count towards 'top comments,' but you can use other emojis on top of the thumbs up.

bjcliang-uchi commented 3 years ago

Thank you so much for delivering this presentation! I have two questions:

adarshmathew commented 3 years ago

We're so excited to have you talk at our workshop, Ms. DiResta! As a longtime follower of the SIO and your posts on the EIP, this report reads like a handbook on classic patterns to look out for in disinformation campaigns.

I have several questions:

  1. Collective sense-making & the polluted ecosystem: While you go on to distinguish between top-down and bottom-up campaigns, I wanted to take a step back and look at the ecosystem itself. As Ta-Nehisi Coates points out, the right-wing conservative base has spent decades de-legitimizing Democratic politics and office-holders: birtherism with Obama, corruption and satanism with Clinton, the spectre of communism with Bernie and AOC. In this regard, the creation and spread of (what we know to be) misinformation was inevitable: it has been cultivated for years through resentment and a cumulative weakening of faith in non-Republican institutions. If one thought the Democrats were corrupt or satanic or pedophiles or anti-American, one would latch on to any incident that confirms that bias, because it follows from what one already knows (and risks moral dissonance otherwise); these campaigns build and feed off this collective sense-making process. As such, all the measures you list in the report fall short, because they are limited to analyzing campaigns rather than the ecosystem and the history of embedded truths within it.

Based on your work with the SIO and info-ops research, how do you think we -- all stakeholders -- should go about dismantling this cultivated ecosystem, re-establishing a modicum of faith in institutions, and having the ecosystem converge on a minimal but common understanding of the truth? What is the knowledge equivalent of decarbonizing the environment? I ask this as someone who's studying QAnon, and the path to 'radicalization' seems to stretch back much further than the emergence of Q in 2017/18.

  2. The role of verified accounts, recommender systems, and friction: Verified status and volume of content seem to go hand in hand; the only verified accounts I can think of that rarely post content belong to celebrities, where the account exists to prevent misrepresentation, which points to the origin of the verification system. The blue check inevitably adds a veneer of legitimacy to the content these users generate, even if that isn't the intended purpose of the marker. Through a combination of perceived legitimacy, high-volume production, and a large follower base, it seems inevitable that their content would rise through the ranks of recsys results, even showing up high in Search results. It gains this status without any check on the user's veracity or history of accuracy. Do you think introducing friction into the recsys for users with a track record of mis/disinfo could help platforms, giving them time to verify content before letting the virality machine kick into hyperdrive, even if it gets in the way of their engagement-means-dollars business model? There has been some effort in this regard with COVID news and live events that the platform has identified (Twitter does this well). What do you think are the limitations of this scheme when we extend it to, well, any and all content?

Bonus Questions

On automation & cross-platform surveillance; feel free to repurpose them.

3. **Potential for automated ticket identification**: You have an excellent dataset at hand that tracks the origin of new narratives and how they evolve in this setting. I'm curious whether your team ever tried implementing automated ticket identification for new narratives, instead of having them reported by volunteers. And once you identified a new narrative, how did you go about distinguishing variants of the same narrative, with minor tweaks, at scale? I ask because this seems to be the question haunting platform moderation policies, and platforms seem to be stumbling quite a bit.
4. **Cross-platform laundering & surveillance**: I'm borrowing this term from your earlier work on the non-siloed nature of users across platforms. The examples in the report follow a pattern: narratives originate on the least moderated platform and make their way to the biggest and ostensibly most heavily moderated one; a deliberate pattern, as agents weigh audience size and virality against moderation strictness. This divide also mirrors a difference in resources: if Reddit had a moderation team as sophisticated as Facebook's (without the political restraints), it could have nipped these narratives in the bud, forcing them to find a different path onto the popular platforms. _Should more platform companies be monitoring content on other, less-moderated platforms to pre-emptively catch disinformation cascades?_ Given your work in the Valley, do you know of firms employing this beam-search approach?

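The variant-matching step in question 3 can be sketched concretely. A minimal illustration, assuming word-level Jaccard similarity as the matching criterion; the example narratives, the tokenizer, and the 0.5 threshold are all hypothetical assumptions for illustration, not the EIP's actual pipeline:

```python
# Hypothetical sketch of automated narrative-variant matching: compare an
# incoming post against known narrative "tickets" using word-level Jaccard
# similarity. The narratives and the 0.5 threshold are illustrative only.
import re

def tokens(text: str) -> set:
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a & b| / |a | b| of two token sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def flag_variants(known, incoming, threshold=0.5):
    """Return known narratives whose similarity to `incoming` meets the threshold."""
    inc = tokens(incoming)
    return [n for n in known if jaccard(tokens(n), inc) >= threshold]

known_narratives = [
    "Sharpie pens invalidated ballots in Arizona",
    "Dead people voted by mail in several states",
]
post = "Sharpie markers invalidated ballots in Maricopa County, Arizona"
print(flag_variants(known_narratives, post))
# A minor rewording still matches the original Sharpie narrative,
# while the unrelated narrative is not flagged.
```

A production system would need something scalable like MinHash/LSH rather than pairwise comparison, plus semantic matching for paraphrases, which is part of why the question above is hard.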
nwrim commented 3 years ago

Thank you so much for coming to our workshop! I think this is one of the most time-sensitive issues that we have to deal with, and I feel the problem is not even restricted to America (for example, "the election is stolen" kind of discourse is quite easily found in my home country, Republic of Korea).

I am by no means well versed in the literature on this topic, but personally, I am growing more skeptical of the notion that a centralized "fact-checker" can be an effective solution. I feel that people put the "biased" label on whatever institutions or individuals do not tell them what they already believe (the image of the FBI among both Republican and Democratic partisans flipping multiple times during the 2016 election may be a good example). Reading through the recommendation section in the summary of the report, I kept thinking, "even if all of this happens, would people believe the information and fact-checks from the 'reliable' sources?" Or, even worse, what if the party controlling the governmental agencies and other institutions actually amplifies the disinformation?

Reading the passage above, I do think I am overly pessimistic and skeptical; but essentially, what I am curious about is why you think the recommendations you came up with are the first step toward a better information space, both online and offline. I feel that my question is vague and maybe even arrogant, but I am really curious about your thoughts!

JadeBenson commented 3 years ago

Thank you so much for this fascinating and incredibly relevant research! I thought the recommendations were compelling, and I'll be interested to hear your further thoughts on them (as some have already mentioned). One group I thought was missing from those recommendations was the public. What are your recommendations for us to avoid this type of mis- and disinformation? How can we as individuals better recognize it, and perhaps help those we know avoid believing it?

sabinahartnett commented 3 years ago

Thank you for sharing your work with us! I'm really looking forward to your talk as well.

As this reading referenced mis/disinformation and some of the participating actors quite generally, I was thinking about how certain campaigns purposefully target specific groups with their messaging (e.g., capitalizing on 'morality' or 'patriotism' in the Christian nationalist movement), as well as the algorithmic biases revealed on many social media platforms that prioritize increasingly extremist content.

I would be really interested to hear more about any research you or your colleagues have conducted regarding targeted disinformation campaigns (included as your third goal) and user susceptibility (are there trends in user behavior prior to radicalization or belief in disinformation campaigns and might recognizing this be a good intervention strategy)?

chrismaurice0 commented 3 years ago

Thank you for speaking with us! I look forward to hearing your presentation tomorrow. Throughout Trump's presidency, he consistently discussed the need to revoke Section 230. While I think Trump may have been confused about what Section 230 actually does, the idea of revoking (or reforming) Section 230 enjoys broad bipartisan support in Congress. I am curious whether you think this is a path the country should go down, and whether you think reforming this legislation would bring about the changes necessary to compel social media companies to monitor the content published on their platforms.

k-partha commented 3 years ago

Thanks for presenting! There's a lot of excitement and entrepreneurial energy pushing for fully decentralized social media platforms powered by crypto. Given that the primary selling point of such technology - including cryptocurrency - is censorship resistance, do you see the development of these new, centrally ungovernable, virtual spaces as a looming disaster with respect to the health of social information dissemination, should they entirely disrupt current platforms? In general, how should we evaluate the potential impact of these technologies on misinformation and censorship debates?

ttsujikawa commented 3 years ago

Thank you very much for sharing your fascinating research; I am looking forward to hearing your presentation tomorrow! Reading this material on dis-/misinformation, I came to think it is almost impossible to avoid its involvement entirely. In this context, how do you think we should evaluate the health of social information and its ecosystem, to prevent certain entities from purposefully targeting a particular group? Thank you!

a-bosko commented 3 years ago

Thank you very much for sharing your work with us!

It was very interesting to learn more about misinformation and the 2020 election. The extent of false and misleading narratives is shocking. I believe that everyone should be very conscientious about online media and the sources that information comes from.

Do you believe that misinformation will only get worse from here? Or do you believe things will improve with increased awareness?

Thank you, and I look forward to your presentation tomorrow!

LFShan commented 3 years ago

Thank you for sharing your work. Since misinformation can compromise the integrity of the election process, do you think there is an effective way to counter or correct it? While Twitter tags some tweets as misinformation, that does not stop people from spreading or believing them. Why is this the case?

chiayunc commented 3 years ago

Thank you for sharing your work. Whenever I read about misinformation and the form of its circulation, I think of Professor Tim Wu's principle of net neutrality, which regulates internet providers to not discriminate between users and content. I wonder if this type of principle would help with the problem.

NaiyuJ commented 3 years ago

Thanks for sharing your fantastic project! I am thinking that if we want to deal with misinformation, we may first need to precisely detect and gauge it. So I am wondering what specific research techniques (e.g., particular survey techniques) might help you precisely "calculate" misinformation. And how do we use different research methods on different actors (the state, Congress, the masses)?

bakerwho commented 3 years ago

Thanks for presenting at our workshop, Professor!

Building on @adarshmathew's essay, I would love to hear more about your thoughts on automated ticket identification for cases of mis/disinformation. What theoretical frameworks would you employ in thinking about this? What methodological toolkits/algorithms?

jsoll1 commented 3 years ago

Thanks for sharing your work! I'm interested in how experts can better build up predictions of future misinformation, which was included in your recommendations.

MkramerPsych commented 3 years ago

Ms. DiResta,

Thank you for sharing your research with us! In your recommendations, I noticed that there may be a dissonance between recommendations of strong paper auditing of votes while simultaneously pushing for online communication between the government and voters. How do you suggest we develop equitable voting practices when dealing with those who are not regular internet users (rural Americans, who are more likely to lean conservative than liberal) without introducing some kind of bias to the election process?

Leahjl commented 3 years ago

Thank you so much for this fascinating and incredibly relevant research! I'm curious about the relationship between future misinformation and targeted groups.

xxicheng commented 3 years ago

Thank you for sharing your work with us. Have you thought about the relationship between misinformation and inequality?

william-wei-zhu commented 3 years ago

Thank you very much for sharing your work. I look forward to your talk tomorrow.

Lynx-jr commented 3 years ago

Thank you for sharing your work with us. I'm interested in how we can better forecast future misinformation, can you elaborate more on that? Thanks!

Yutong0828 commented 3 years ago

Hi, thanks for sharing your work with us! I have several questions about the EIP.

  1. Does the EIP collaborate with any official entities or mass media companies? For instance, who is best positioned to take action once misinformation is detected?
  2. Could you say more about how the EIP helps detect and debunk mis-/disinformation in real time? For instance, what process or procedure do you follow? Thank you very much!

Bin-ary-Li commented 3 years ago

Thank you for sharing your work. My question is: how do you view the relationship between the control of mis-/disinformation and the authoritative power of government? Do you think the former will, more or less, contribute to the making of the latter?

tianyueniu commented 3 years ago

Thank you so much for your presentation! I agree with the comment above in that I'm not sure whether people would believe information and fact-checks from "reliable" sources. Looking forward to hearing your comments on this issue!

j2401 commented 3 years ago

Hello,

As @bjcliang-uchi mentioned, people often migrate to platforms that are less regulated. Do you believe that regulations, censorship, or "fact-checkers" on those platforms actually intensify the transmission of dis-/misinformation? Some account holders might claim, "I disclosed the fact that they are trying to conceal," or make other statements like this. I wonder whether a "centralized" fact-checking process would be helpful in reducing dis-/misinformation of this kind. Thanks!

bowen-w-zheng commented 3 years ago

Thank you for sharing your work! I am very interested to hear your thoughts on the question raised by @k-partha. I would imagine a fully decentralized platform could exacerbate the issue by allowing people to misinform without punishment. But at the same time, for people with extreme ideologies who believe their voices are silenced by a centralized platform, a decentralized system might be the only legitimate proof that their ideologies are less popular than they think.

romanticmonkey commented 3 years ago

Thank you for your presentation! I am very curious about the network structure of (mis)information dissemination. Have you done any research on the spread of (mis)information specifically around the Jan 6 event? Does it differ from other fake-news events (e.g., the "Obama assassinated" hoax)?

Raychanan commented 3 years ago

Hi Ms. DiResta, I have a broad question.

The 2020 election demonstrated that actors—both foreign and domestic—remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy.

So in your opinion, is the current state of disinformation and misinformation in the 2020 election worse or better than in 2016?

Thank you very much for your presentation!

luckycindyyx commented 3 years ago

Thanks for sharing such an interesting paper! Much of the media thinks there is far more misinformation once mail-in voting becomes possible. I was wondering if you could go a bit further into what exactly could go wrong, and, if mail-in voting becomes a more popular choice in the future, whether anything should be adjusted or reformed? Thank you!

boyafu commented 3 years ago

Thank you for sharing your research! I am curious about the role that external research could play in mitigating misinformation.

mintaow commented 3 years ago

Thanks for sharing your research. I am also curious about the questions brought up by Yutong and Naiyu. Would you mind elaborating a bit more on the specific techniques you use to make a timely detection and evaluation of the misinformation?

Jasmine97Huang commented 3 years ago

Thank you Dr. DiResta for presenting your research. I feel like much attention has been paid to the spread of misinformation. However, the effect of misinformation is a little unclear/difficult to measure. What types of misinformation are deemed most credible/effective/persuasive?

lulululugagaga commented 3 years ago

It's an honor to have you. Misinformation is often hard to identify, and false information is likely to spread, especially among those with a particular political faith. For example, when Trump said disinfectant could combat Covid, even though many refuted the claim, some still believed it, because the words came from a person they trusted. What do you think we can do in this scenario?

yutianlai commented 3 years ago

Thanks for your presentation. I'm wondering how we could predict misinformation.

shenyc16 commented 3 years ago

Thank you for sharing this interesting research with us. I'm very curious about the specific model mentioned in the reading. Would you mind elaborating on how the distortion of misinformation or disinformation is measured according to the model? What is the structure of the input data?

vinsonyz commented 3 years ago

Thank you for your presentation. My question is how we can extend your conclusion to other fields in social science.

MengChenC commented 3 years ago

Thank you for sharing your work. Since misinformation keeps emerging and evolving, I am wondering what the trends and types of misinformation will be in the coming decades. Thank you.

mikepackard415 commented 3 years ago

Thank you for sharing your work! It seems like the fundamental problem with misinformation is that the amount of information available is quickly outpacing our ability to effectively make sense of it. In other words, the problem isn't that people have gotten dumber, but that it has become harder to make sense of an increasingly complex world and easier to pass along inaccurate information. The paper you shared has some great near-term recommendations, but I'm curious, what does the long-term solution look like? Do we need to train ourselves not to "pollute" the information ecology with bad info, educate ourselves to better discern good from bad info, a combination of both, or is there another path we should pursue?

Yilun0221 commented 3 years ago

Thanks for sharing! I wonder how you view the sequential influence of election-related policies on mis- and disinformation. Thanks!

AlexPrizzy commented 3 years ago

Thank you for coming to our workshop. Rapid sharing of information allows false information to spread quickly, requiring fact-checking to be as fast as the spread of information itself, essentially making censorship and online communication go hand in hand. Do you think the spread of misinformation is a natural flaw of social media?

cytwill commented 3 years ago

Thank you for your presentation. I am interested in how you would define the different types of misinformation, and what mechanisms or measures tech companies and social media platforms can deploy to handle these different forms of misinformation, according to your research conclusions.

minminfly68 commented 3 years ago

Thanks in advance for your presentation. I am wondering how we can apply it to other fields and how could that trend affect the whole society? Thanks.

WMhYang commented 3 years ago

Thanks for sharing your work. Looking at your policy recommendations, I did not come away with a clear impression of to what extent these will work, or how much they will cost. From an economics perspective, how could we provide incentives for the relevant authorities to act on their responsibilities? Thanks again.

mingtao-gao commented 3 years ago

Thank you for sharing your work! As the paper suggested and recommended, many social media platforms took actions against policy violations including spreading misinformation. My question is to what extent, do you think the platform should have control over the contents that users are posting and sharing? How can we clearly define what posts are "misinformation", or just expressions of thoughts?

NikkiTing commented 3 years ago

Thank you for sharing your work! What specific recommendations do you have for preventing the spread of mis- and disinformation on unmoderated platforms?

luyingjiang commented 3 years ago

Thank you for your presentation. I am wondering what is your opinion about the relationship between misinformation and inequality?

Dxu1 commented 3 years ago

Thank you for your work on such an interesting topic! What is your take on the line between platforms taking necessary steps to prevent the spread of misinformation and violating "freedom of speech"? Should the treatment differ based on whether the user is an influencer or not?

YanjieZhou commented 3 years ago

Thanks very much for your presentation. Disinformation is a really interesting topic to study, but I am wondering how to set the boundary when the context of information becomes complex?

YuxinNg commented 3 years ago

Thanks for your presentation. I share the same question as @lulululugagaga: what can we do, given that misinformation is hard to identify and likely to spread? Thank you!

ghost commented 3 years ago

What do you think about the future of misinformation?

FrederickZhengHe commented 3 years ago

Thanks very much for this summary. My question is: different political parties have their own patterns of misinformation and disinformation; are they quite similar, or how do they differ from each other?

Yaweili19 commented 3 years ago

It is our great honor to have you share your work. However, I am still a bit confused by the methodological part. Looking forward to your speech!