deholz / AreWeDoomed24


Week 5 Questions: Misinformation & Conflict #10

Open jamesallenevans opened 5 months ago

jamesallenevans commented 5 months ago

Questions for Carl Bergstrom based on the readings:

  1. Bak-Coleman, Joseph B., Alfano, Mark, and Barfuss, Wolfram. 2021. “Stewardship of global collective behavior.” PNAS 118 (27) e2025764118.
  2. Joseph Bak-Coleman, Carl T. Bergstrom, Jennifer Jacquet, James Mickens, Zeynep Tufekci & Timmons Roberts. 2023. “Create an IPCC-like body to harness benefits and combat harms of digital tech: Emerging information technologies, including ChatGPT, require proper stewardship. An intergovernmental panel to synthesize the evidence offers the best path forward.” Nature, 17 May 2023.
  3. Bergstrom, Carl T., & West, Jevin D. 2021. Calling bullshit: The art of skepticism in a data-driven world. Chapter 2, “Medium, Message, and Misinformation.” Random House Trade Paperbacks.
timok15 commented 5 months ago

In discussions of AI, I have seen the conjecture that the flood of low-quality AI-generated drivel, combined with the existing human-made drivel on social media, might make people abandon social media, or at least distance themselves from it. I am skeptical of this possibility, however: do you see any merit in it? Or do you think this drivel is primarily the “intellectual junk food” you talked about? Given the story of the fake Indian WhatsApp gangs, do you find it more likely that people will instead drown in unreality? And, although the year has only just begun, have you seen anything notable in the misinformation space, for good or ill?

lubaishao commented 5 months ago

Intergovernmental organizations always require a large bureaucracy and lengthy procedures. How can an intergovernmental approach (the IPCC-like body in the article) ensure that its oversight and governance mechanisms are agile enough to keep up with the evolving nature of misinformation and disinformation campaigns in the realm of digital technologies, while also fostering innovation and responsible use? And would governments really share all their information, which could accelerate the development of these technologies, for the sake of international governance?

miansimmons commented 5 months ago

Big technology companies first argued that they were not responsible for any content posted on their sites and have since evolved to provide some content regulation (e.g., codes of conduct, report features, bans). However, this regulation is minimal and mainly targets individual users on an ad hoc basis. Should content on social media sites be regulated and, if so, by whom? Should it be the tech companies themselves or an outside governing body? Which types of content should be targeted?

DNT21711 commented 5 months ago

'Stewardship of Global Collective Behavior' and 'Calling Bullshit' address, respectively, the difficulties that arise when digital communication feeds malicious or pervasive global collective behavior and the spread of misinformation in the digital world. What is the best way to integrate considerations from both works into strategies that not only improve public understanding of science-based information but also build the knowledge needed to combat the spread of misinformation in increasingly connected societies?

ldbauer1011 commented 5 months ago

Social media companies walk a fine line between ensuring their platforms are advertiser-friendly and claiming to be open forums that discourage excessive moderation. During Trump's initial candidacy in 2016, this conflict was brought to the fore by his embrace of conspiracy theories, which naturally increased discussion of those theories. In recent years, social media companies (apart from Musk's X) have largely followed the money and landed on the side of moderation to keep advertisers happy. Do you think the internet's potential as a free and unlimited forum for speech is dead? Is it irresponsible to believe that such a dream was ever achievable once free content was embraced?

M-Hallikainen commented 5 months ago

In recent years many social media platforms have implemented misinformation safeguards, such as fact-checkers and community corrections on posts. However, as these features become more prominent, they are also chafed against: many people already invested in misinformation narratives see fact-checks as a sign that the "truth is being suppressed" or that the "narrative is being controlled." How can platforms and users combat misinformation in a way that better engages users already bought into it?

lucyhorowitz commented 5 months ago

A while ago, many people involved in AI were seriously advocating for a "pause" in development while essential questions about safety were worked out. Has such a thing ever been suggested for social media? To my knowledge the answer is no, but why not? Were we unable to foresee the bad things that could (and did) happen, or were we immediately too invested in them? Was it too natural an extension of our older means and modes of communication to stop outright?

imilbauer commented 5 months ago

Do you see society as shaping our technology use, or our society as a product of our technology use? Or is it some of both? In "Medium, Message, and Misinformation," you seem to emphasize that technology has driven us to this point, writing that the rise of the internet has created a new informational environment that engenders "misinformation, disinformation, and fake news." If the way we engage with information is so much shaped by the technology itself, and the technology already exists, is it even possible to put the genie back in the bottle?

agupta818 commented 4 months ago

In Calling Bullshit, you mention that the internet democratized publishing so that anyone can contribute online, one benefit being that minorities and marginalized people now have a platform to share their stories. While I believe such spaces should exist for these benefits, how do we address the downsides, such as people who push for free-speech rights regardless of the truth of their statements (and even if something is fact-checked, not everyone will see the correction showing the original statement was incorrect)? Also, how do we as readers separate subjective and objective truth as we consume the information we find online?

cbgravitt commented 4 months ago

Many fields have been deeply affected by the spread of misinformation and disinformation. Which of these fields do you think has been, or will be, affected the most? In the context of existential risk, which area poses the greatest threat if inaccurate information continues to thrive?

oliviaegross commented 4 months ago

Based on the readings, what do you think the future of human agency looks like in relation to new informational technology?

WPDolan commented 4 months ago

One major contributor to the decline of quality information on the internet, as discussed in the book, is that social media companies are incentivized to serve maximally "engaging" content to generate ad revenue. Would alternative, non-corporate forms of social media like the fediverse alleviate some of these problems, or would they instead exacerbate them through the development of social media bubbles?

maevemcguire commented 4 months ago

Why do you question whether PeaceTechLab can operate independently?

mibr4601 commented 4 months ago

In your book, Calling Bullshit, you suggest that government intervention and technology are unlikely to counter the spread of misinformation on their own, and that education will likely do a better job. Do you think the government could build an incentive program to get tech companies to spend more time and effort on systems that properly counter misinformation? And even with an effective incentive system, do you think that if tech companies devoted more resources to anti-misinformation models or refined their algorithms, those systems would be functional at all?

madsnewton commented 4 months ago

Social media sites have begun flagging posts for potential misinformation. However, I still see many people online disagree with the fact-checking and call it a conspiracy to hide the “truth.” Considering that these people end up spreading the misinformation anyway, do you think there are any solutions for this? Or is a content warning on these posts enough to dissuade most people?

AnikSingh1 commented 4 months ago

Do you fear that regulation or mitigation of what people can say online (even if it is misinformation) could provoke intense backlash, ultimately leaving us with no workable solution to how the internet is weaponized? It is odd: people can say anything with online media, but it is others who ultimately interpret that garbage in the first place, so it becomes, as you mentioned, a cycle of misinformation.

emersonlubke commented 4 months ago

How do we prepare the next generation (I guess that includes us students) for the rising tide of AI-generated content and purposeful disinformation that is already taking over the internet? Should there be media-literacy classes in schools that teach critical thought and the ability to see through misinformation? With the internet at practically everyone's fingertips, we need some sort of education on proper usage, or misinformation will have profound impacts on our society, as we are already seeing.

Hai1218 commented 4 months ago

Given the potential for technologies like deepfakes to be politically exploited, what collective actions can international bodies, governments, and tech companies take to mitigate the use of such technologies, especially during sensitive periods like elections, when decision-makers are set up to reap the benefits of misinformation spread by these technologies?

kallotey commented 4 months ago

Thinking about the purpose of algorithms (seeking the most profit without regard to the kinds of information shared with users), can you see any ways to mitigate the spread of misinformation? Or rather, how could a user adjust their use of social media to "reset" the algorithms, or avoid falling too deep into the pools those algorithms pull them into?

jamaib commented 4 months ago

Your works mention the approach of governmental regulation of information, specifically essential information such as the news. In what form would you see this being most effective, and should it be the job of a separate entity? How involved should the chosen "moderator" be?

tosinOO commented 4 months ago

Considering the rapid advancement and integration of technologies like ChatGPT into our daily lives, how can we ensure that a panel created to oversee these advancements effectively balances the need for innovation against the ethical implications, particularly in terms of privacy, autonomy, and the potential for a digital divide in society?

briannaliu commented 4 months ago

The internet is a vast expanse of information, opinions, and people with an interest in giving biased information to sway others. Do you believe there is a future where we can eliminate misinformation?

What do you believe is the greatest source of misinformation today, and how can we stop it?

ghagle commented 4 months ago

Do you think that there are problems with social media that lie beyond disinformation and the overwhelming abundance of extreme standpoints? In particular, do you think that regulating just these features is enough to limit the dangers that come with the social media landscape we face today (depression rates, poor school performance, etc.)?

Daniela-miaut commented 4 months ago

As a student of social science, I am interested in using simulation to study misinformation, exploring its mechanisms and making predictions. I wonder if you can share your insights on this area of study.

AudreyPScott commented 4 months ago

I greatly appreciate your stance on science as not an end-all, be-all rule for society, but rather something shaped by the subjective experiences of cultures and individuals (something also of note when it comes to the ways science intertwines with power structures). You note on your website that one dependency of science -- and, in my view, one that can lead to misinformation, as was seen recently with the superconductor debacle -- is "prestige... [that may cause one to] shape their research practices in accord with the incentives created by community norms and institutions." To what extent do you see prestige as a modern motivation for misinformation, and what role might journal-ranking systems like your Eigenfactor play?

aaron-wineberg02 commented 4 months ago

What is your vision for how civic institutions can provide justice for the victims of disinformation? Consider the people who have been lured by extremist sects, scams, and conspiracy theories that have led people to horrible realities. I wonder if the courts and laws need to change pace to meet the needs of people being victimized. Do you have any reactions to large-scale change or should this come in the form of corporate and local decisions?

GreatPraxis commented 4 months ago

In considering the future of combating misinformation, several paths emerge. Do you think governments will resort to mandating solutions, or will they opt to incentivize companies to address misinformation? Alternatively, could social media platforms develop self-regulatory measures, such as dedicated misinformation-moderation teams or community-driven solutions like Twitter's/X's community notes? In all of these solutions, there is the concern that aggressive fact-checking might inadvertently exacerbate the problem by fostering the sense that someone is hiding the truth. Is there, then, a delicate line between vigilant fact-checking and eroding trust in fact-checking sources?

summerliu1027 commented 4 months ago

Your May 2023 Nature paper talked about creating an IPCC-like body to combat the harms of digital tech. If we were to create an IPCC-like organization, what do you think are the most important goals that this organization should aim to accomplish? How would it achieve this goal?

gabrielmoos commented 4 months ago

Networks now move at the speed of light, or at least the speed of your ISP. With social media already moving and spreading information so quickly, can networks go any faster? We have also started developing tools like community notes and fact-checking websites that attempt to mitigate the spread of disinformation. Are there any other effective roadblocks that could help slow networks down?