deholz / AreWeDoomed24


Week 5 Memos: Misinformation & Conflict #9

Open jamesallenevans opened 5 months ago

jamesallenevans commented 5 months ago

Reply with your memo as a Comment. The memo should be responsive to this week's readings on Misinformation from Carl Bergstrom, with 300–500 words + 1 visual element (e.g., figure, image, hand-drawn picture, art, etc. that complements or is suggestive of your argument). The memo should be tagged with one or more of the following:

origin: How did we get here? Reflection on the historical, technological, political and other origins of this existential crisis that help us better understand and place it in context.

risk: Qualitative and quantitative analysis of the risk associated with this challenge. This risk analysis could be locally in a particular place and time, or globally over a much longer period, in isolation or in relation to other existential challenges (e.g., the environmental devastation that follows nuclear fallout).

policy: What individual and collective actions or policies could be (or have been) undertaken to avert the existential risk associated with this challenge? These could include a brief examination and evaluation of a historical context and policy (e.g., quarantining and plague), a comparison of existing policy options (e.g., cost-benefit analysis, ethical contrast), or design of a novel policy solution.

solutions: Suggestions of what (else) might be done. These could be personal, technical, social, artistic, or anything that might reduce existential risk.

framing: What are competing framings of this existential challenge? Are there any novel framings that could allow us to think about the challenge differently; that would make it more salient? How do different ethical, religious, political and other positions frame this challenge and its consequences (e.g., “End of the Times”).

salience: Why is it hard to think and talk about or ultimately mobilize around this existential challenge? Are there agencies in society with an interest in downplaying the risks associated with this challenge? Are there ideologies that are inconsistent with this risk that make it hard to recognize or feel responsible for?

#nuclear / #climate / #bio / #cyber / #emerging: Partial list of topics of focus.

For one session over the course of the quarter, you may post a memo that reflects on a film or fictional rendering of an existential challenge. This should be tagged with:

#movie / #novel: How did the film/novel represent the existential challenge? What did this highlight; what did it ignore? How realistic was the risk? How salient (or insignificant) did it make the challenge for you? For others (e.g., from reviews, box office / retail receipts, or contemporary commentary)?

timok15 commented 5 months ago
[Image: book cover whose title contains a doubled "the"]

#misinfo, #framing

So that you, the reader, weren’t primed for that book title’s trick, I put the picture before the writing. (I’m talking about this book purely for the effect of the title, not its content.) A native English-speaking brain automatically expects only one “the” preceding a noun, so it auto-deletes the second one. While you yourself might have caught it on the first try, the title is so effective because it does work on many people at least the first time.

The combination of fake and fraudulent information with the modern info-torrent has certainly already proved itself detrimental, dangerous, and deadly. However, Bergstrom’s example article, “Israeli Defense Minister: If Pakistan Send Ground Troops to Syria on Any Pretext, We Will Destroy This Country with a Nuclear Attack,” has made me consider how misinformation intersects with the far older problem of language disjunction, involving both native speakers and second-language speakers.

I invoke the above book title's effect because the mistake I immediately caught in that fake article was the non-idiomatic (as in weird-sounding, but not ungrammatical*) “Destroy This Country with a Nuclear Attack.” The grammatical error, by contrast, slipped right past me: my brain expected “sends,” not the actually written “send,” so I didn’t catch it the first time. I myself can’t give a solid reason why the alternative phrasing “destroy Pakistan with a nuclear strike” sounds more correct. Thus, I am curious how much the fact that more exchanges in English now take place between non-native speakers than between native speakers affects the spread of dis- and misinformation through writing that is only recognizably false because it is non-idiomatic.

Unfortunately, the obvious solutions to this problem are almost certainly not workable. At first glance, an algorithm could be developed to flag such grammatical mistakes and non-idiomatic statements, warning people to tread lightly because dis- or misinformation might be afoot. Yet I would foresee such an algorithm wrongfully dinging native and non-native English speakers alike, whether because they are still learning and doing nothing wrong, or because they are wording something strangely on purpose for a joke or to make a point. This angle, fraudulent English-language articles that are difficult for their target audience to spot even if easily caught by another audience, is one I am very curious to see explored further. While I personally believe there are more pressing aspects to the threat presented by the modern info-torrent, I wanted to provide another angle on one aspect of the crisis that stood out to me and that the readings didn’t address.

*A good example of the distinction between idiomatic and grammatical would be saying “the book of John” when you mean “John’s book.” The first phrase is perfectly grammatical, yet with its biblical phrasing it suggests John is the writer rather than communicating that he is the owner.

lubaishao commented 5 months ago

#origin #risk

I think AI-driven misinformation has already undermined many procedures and principles we take for granted by giving more people the power to manipulate narratives and distort reality.

Argentina is arguably the first case of the weaponization of AI in a presidential election. Both candidates extensively utilized AI tools during the campaign. In a tweet posted on October 31st, Javier Milei shared an AI-generated image portraying his opponent as a Leninist communist. This tweet had a significant impact on the online sphere.

The elections in Argentina became the world's first testing ground for AI-driven political campaigns, with both final candidates and their supporters using this technology to manipulate existing images and videos or create new ones to attack their opponents. This included, but was not limited to, using AI-generated deepfakes to fabricate opponent information and to place opponents in famous movies and memes. Candidate Sergio Massa's campaign team developed an AI system capable of creating images and videos of various activities for many key election participants (candidates, campaign partners, political allies). Massa portrayed his opponent Milei as an unstable character, playing on his image as an extreme right-wing libertarian economist, by placing him in movies like "A Clockwork Orange" and "Fear and Loathing in Las Vegas."

The use of AI technology in Argentina's presidential election garnered significant attention. On one hand, AI reduced the costs of campaign activities for each candidate, saving expenses on services like graphics and advertising traditionally provided by personnel, and it increased the efficiency and the number of ways in which voters could connect with candidates. Lower campaign costs and improved efficiency for ordinary people essentially contribute to fairer elections by reducing candidates' dependence on interest groups and capital. On the other hand, AI also provided the conditions for defaming candidates and escalating online violence. Additionally, because AI-generated graphics are so easy and convenient to produce, political elections exhibited a trend toward entertainment.

In the end, should we consider this a further liberation of candidates' expression, or a degenerating and putrid democratic system brought on by the advancement of technology?

DNT21711 commented 5 months ago

#risk #solutions

One specific threat highlighted in both "Stewardship of Global Collective Behavior" and "Calling Bullshit" is the exponential proliferation of misinformation in the age of digital communication, particularly through social media. Misinformation, false information that is sometimes spread with the intent to deceive, is a severe and growing concern for public understanding and decision-making. It spreads so fast in the current digital age that it can lead to public misconception, panic, and inappropriate action, which can prove destructive particularly during crises such as pandemics or political unrest.

Digital communication platforms have a number of built-in features that increase the likelihood of misinformation spreading online. Most of these platforms adopt algorithms that surface content users will engage with, often with scant regard for its accuracy or factuality. This can lead to an amplification effect, maximizing the spread of more dramatic but less truthful information. Moreover, the sheer quantity of information and the speed at which it flows online make it practically impossible for individuals to critically analyze everything that comes their way. Finally, the echo-chamber effect can further solidify misinformation, since the views one is exposed to are most likely similar to what one already believes.

One specific solution to this risk is to tackle the issue from multiple angles through digital literacy education. This solution would go beyond the traditional norms of media literacy and cover critical thinking skills specifically attuned to the digital landscape. Digital literacy teaching would emphasize how to critically assess sources, how to decipher the intent and operation of algorithms, and how to identify the hallmarks of misinformation. This education should be integrated into school curricula at multiple levels and also made available to the broader public through community programs and online tools.

Besides education, digital platforms need better content-moderation policies and mechanisms. This involves a mixture of algorithmic and human oversight to identify and limit the propagation of demonstrably false information. While such moderation raises concerns about censorship and freedom of expression, a balanced approach will be imperative for targeting the clear cases of misinformation, especially those that pose significant harm to public health or safety.

This approach combines digital literacy education with proper content moderation: it tackles the risk by empowering individuals to be more careful consumers and sharers of information while minimizing the prevalence of misinformation in digital spaces. It recognizes that addressing misinformation is not just about removing false content but about building public resilience for navigating the complexity and diversity of information in the digital domain.


DNT21711 commented 5 months ago

#movie / #novel

"The Matrix," as a cinematic work, adeptly confronts viewers with an existential quandary, portraying a world where humans, unbeknownst to them, live in a simulated reality governed by artificial intelligence. This narrative delves into themes like the distinction between actuality and illusion, the balance between freedom and control, and the essence of human consciousness. It underscores the philosophical inquiry of what is considered 'real' in an era increasingly shaped by technology and virtual experiences.

This film adeptly brings the existential crisis of differentiating between authentic experiences and fabricated realities to the forefront. However, it somewhat overlooks the pragmatic and ethical ramifications of such a scenario, particularly concerning societal structures, the dynamics of human relationships, and the psychological effects on individuals who unearth the truth about their existence.

In terms of the realism of this risk, "The Matrix" enters the domain of science fiction. Although the swift progress in virtual reality and AI technologies slightly enhances the plausibility of the concept compared to its original release, the notion of humanity, en masse, being unwittingly ensnared in a virtual world remains an implausible hypothesis.

For a broad spectrum of viewers, "The Matrix" served as a stimulating examination of intricate philosophical concepts, rendering the existential challenge tangible at a conceptual level. It ignited conversations about the essence of reality and our interactions with technology. Yet, for some, especially those less inclined towards philosophical or speculative fiction, this challenge might have appeared more as an imaginative narrative element rather than a grave existential issue.

The film's influence is evidenced by its significant commercial success, critical recognition, and its enduring impact on popular culture. This suggests that it resonated profoundly with a diverse audience, not merely as a form of entertainment but also in its capacity to spark intellectual curiosity and discourse on the nature of reality and the role of technology in our daily lives. The lasting popularity of "The Matrix" demonstrates its effectiveness in bringing existential queries into the mainstream, prompting viewers to contemplate scenarios that were previously confined to philosophical thought experiments.


ldbauer1011 commented 5 months ago

#origin #risk #salience

During the COVID-19 pandemic, vaccine hesitancy was one of the main drivers of continued large-scale death after the introduction of mass vaccination. People felt, for a wide variety of reasons, that the vaccine was harmful to them and their families. Whether it was doubts about the efficacy of the vaccines or the harm of the disease itself, or theories ranging from systemic microchipping to targeted sterilization, deniers had their reasons. One common theme among these reasons was their lack of statistical, empirical backing and their reliance upon misinformation.

Before the pandemic, in 2019 the World Health Organization (WHO) had already classified misinformation as a threat to global health. An “infodemic”, or an overwhelming amount of both conflicting information from credible sources and misinformation from other sources, was identified as a potential impediment to combating the spread of disease. Because the Internet equalizes and democratizes speech, both conventional and unconventional sources of information are displayed the same way. People can therefore select the information that fits their prior knowledge and assumptions about society and the government, and be assured that the information gained is as legitimate as any other. Since that information feels “more convincing,” it is preferred to information that challenges preconceived notions.

Additionally, misinformation is a diffuse and organic phenomenon that arises through a wide variety of avenues. As a result, individuals who are predisposed to accept misinformation are also diffused across a wide range of media backgrounds and physical locations. This can produce a sort of collaboration, groups of like-minded people who reassure and reinforce each other’s beliefs. Specifically, when talking about vaccine hesitancy, three of these groups coalesced around specific narratives: medical concerns such as side effects, the speed of development at the cost of skipped safety procedures, and outlandish conspiracy theories involving secret societies and microchips. When collaborating, these groups form a dangerous and very vocal proportion of the Internet, one that believes very passionately in its causes and will use every trick in the book to “help” people “see the truth”.

[Image: Full_0921_Covid_Conspiracy]

lucyhorowitz commented 5 months ago

#origin

The problem of “disinformation” is in many ways a perversion of one of the most important “myths” (let’s say) in the American psyche: the myth of the everyman.

One of the most common stock characters in American stories is the country bumpkin who is actually smarter/knows better than the city slicker. This is obviously code/shorthand for “average joe” and “elite,” but the rural/urban divide is for another day. This conception apparently goes back to ancient Greece/Rome, but I don’t know much about that. What I do know is that America (or I guess, its founders) viewed themselves as direct inheritors of Greek/Roman democratic ideas, this included.

In many ways we are the stories we tell ourselves. An important example: Huck Finn and Jim outsmart almost everyone else. An “uncivilized” boy and a slave! Imagine that! Various presidents and presidential candidates up to and including Jimmy Carter emphasized their humble upbringings to their advantage. American democracy has always required that the average American be (or at least, be believed to be, and also this is subject to the contemporary standard of who counts as American/a person...) capable of governing themselves, making rational and informed decisions about their own lives. We often forget that this is not at all the view that most other cultures have/had. Rigid class system in England! Serfs in Russia! And so on.

So it is with a little bit more optimism than I had last week that I say I think it might actually be possible to have a capable and educated populace. If these companies and the ideologues want our attention, clicks, and screen time so badly, it might be because it is actually valuable. They work very hard to take us out of “reality,” so to speak, and occupy our awareness with “mis/disinformation.” Of course, we have to be careful about who decides what’s “real” or “true.” But maybe, just maybe, this isn’t purely a cash grab. Maybe it’s really an ideological one, not just something using ideology as a tool, and if so that could imply that what Americans believe and what they are able to reason about for themselves is actually meaningful and has an effect on the way the country is run.


M-Hallikainen commented 5 months ago

#misinfo, #framing, #salience

I chafed against a lot of what was in the reading this week, and not because I disagree with its central arguments. I think online misinformation poses a major threat, potentially even an existential one, and that platforms and stakeholders have not done enough to address it, in part because misinformation pays. Where I take major issue is the conflation of "inane fluff" and misinformation, particularly in the "Inadequacy of the Unvarnished Truth" section of the Bergstrom reading. It paints an image of the pre-internet mediascape as a pristine prelapsarian world of media integrity, where The New York Times and Walter Cronkite were indicative of the general editorial standard and commitment to truth, and the internet as a vapid, empty hellscape of surface-level drivel. The fact that the author has the foresight to say "Every generation thinks that its successor’s lazy habits of mind will bring on a cultural and intellectual decline" and then, instead of using that moment to reflect, says "it's our turn now, and we’re not going to miss the opportunity" frustrates me to no end.

To the first point, pre-internet media was no bastion of truth. Tabloids and bunk journalism have existed for as long as the legitimate stuff, and sensationalized news was starting trouble (and even wars) as early as the 1800s. The idea that it was all Woodward and Bernstein until clickbait got involved is not only ridiculous, but hinders our wider understanding of news misinformation and its causes.

Secondly, I take issue with the idea that most of the internet is "mental junk food." I am showing my hand a bit here, but in my own research I study social media communities, particularly those dedicated to the kind of "empty calorie" content so maligned in the reading. It's not run past an editorial board and it would certainly be a bad idea to use it as your primary news source, but to dismiss it as noise or even misinformation is to miss the social, emotional, communicative, and artistic meanings it does carry, particularly for those people who have lacked a platform in the eras of analog media. To call back to the author's own words, it's not unlike protesting the works of Ovid because they "crowd out" the Bible.

The exact mechanisms that allowed the internet to become the printing-press-level communicative revolution it has been are the same ones that prevent it from being entirely grounded in editorial standards. Democratization and decentralization mean lowered barriers and more "fluff." If we are to address misinformation in the digital age we need to understand that fluff is not a bug but a feature: an inseparable part of the ecosystem with innate value (even if it lies outside the journalistic sphere). To pretend otherwise and consider it all noise or "verbal excrement" obscuring some gold standard of truth from a nonexistent past is not only counterproductive to the fight against misinformation, it sets up an impossible challenge.

[Image: An 1898 headline on the explosion of the US ship, the Maine. Misinformation by the press suggesting the explosion was caused by a bombing is cited as a leading cause of the Spanish-American War, highlighting that low-quality journalism and its disastrous consequences are hardly exclusive to the internet.]

M-Hallikainen commented 5 months ago

#movie, #inequality

Apologies for the somewhat off-topic film memo. I started working on this back when the week’s topic used to be social inequality. Snowpiercer is currently available on Netflix and I highly recommend you watch it before reading, as this memo will spoil the entire film. It is among my favorite movies and is really best experienced blind.

Snowpiercer is a series of French graphic novels, an ongoing American television show, and a Korean film by esteemed director Bong Joon-ho. In a way, it is the perfect encapsulation of this course, covering nuclear winter, pandemic, climate change, geoengineering, and civil unrest across its various adaptations, but constant to all of them is the setting: a world frozen over and devoid of life barring the denizens of the Snowpiercer, a thousand-car train perpetually circling the globe. The cause of the apocalypse is so fluid between the adaptations in part because the story is not about the apocalypse, or even the post-apocalypse, but about how existential crisis foments and maintains social inequality. The train is a stratified class hierarchy placed on its side and sent rolling on its predestined rails.

The film follows Curtis, a survivor of the global geoengineering attempt that accidentally froze the planet, living in squalor in the back of the train. Egged on by cryptic messages sent from the front, Curtis leads a revolution from the back of the train all the way to the front, where the train's sole owner, Wilford, rules. Car by car, Curtis sees both the progressively opulent quality of life the train's other passengers enjoy and the train's decline: machines breaking down, security whose guns ran out of bullets generations ago, classrooms where children are taught about the deaths of previous revolutionaries. When he reaches the front with only two members of his crew left, Curtis finds out that his revolution was planned by Wilford, a dual-purpose exercise in culling the population and selecting an heir to the engine. Curtis refuses, instead setting off an explosive that derails the train, killing all but a teenager and a young child, who depart from the wreckage into the white unknown.

Hierarchy and social inequality are built into the DNA of Snowpiercer. Beyond the stratified structure of the train, the apocalypses that ushered it in were products of the same inequality. As we have discussed in class and across our readings, the climate disaster that prompted the use of geoengineering is in many ways a crisis brought on by the globally wealthy and felt most by the global poor. The geoengineering that froze the planet was a similar executive order, with the film explicitly stating that it was deployed over the protests of developing nations. At each stage the crisis deepens, the need for action grows, and the seats of power controlling that action shrink, until all that remains is Wilford and his total rule of the Snowpiercer. Wilford too is fighting a perceived apocalypse, this time of the Malthusian population-bomb variety, culling the lowest classes to maintain an idea of balance.

However, across the film we see that Wilford’s efforts to preserve humanity are doing little besides perpetuating the status quo as it sinks towards entropy. The film repeatedly makes reference to things “going extinct”: bullets, cigarettes, spare parts. Even the engine, a supposed perpetual-motion machine, now requires children to be abducted to work between the gears. The train is hurtling towards an inevitable death. What Wilford is aiming to do is not save it but perpetuate its society (where some people are preordained “hats” and others “shoes”) until that point. Even the armed revolutions are subsumed into its mechanisms, culling the lower classes and acting as parables for the rest. The first truly revolutionary act of the film is the bombing in its last act.
Before that point, the responses to existential threats had always been helmed by the heads of society, the conductors of the societal engine even before it was a literal engine. Their responses prioritized their position at the top, not only necessitating positions at the bottom but ensuring that further, more dire existential threats would emerge in the future. As with climate change, as with geoengineering, as with the Snowpiercer, the goal was not merely to address the threat but to perpetuate the hierarchy, and short of that, to maintain the hierarchy until the whole thing collapses. The bombing seeks not to perpetuate the status quo under new management but to quite literally derail society from its preordained tracks. It understands the social inequality of the train not as a necessary cost of survival but as a symptom of an ongoing disaster, a social order that will continually hoard power and foment existential threats until there is no one left. In one sense the wreckage of the Snowpiercer is another in a string of apocalypses, the end of what's left of the world. In another, it's the first genuine response to existential threat, one that seeks to face the crises of the time and tear down the systems that brought them to pass.

The society dies in the crash, but humanity lives on. In facing our own existential crises, there is no single bombing that will solve our problems (regardless of what any accelerationists might say). What the film argues for is not necessarily violence in the face of existential threats, but an acknowledgment that social inequality and existential threats are byproducts of the same systems. It wants us to see these systems, like the engine of the Snowpiercer, not as divine and preordained, but as malleable and mortal. It asks us to see the apocalypse as the end of the line on our current societal tracks, one that we can only avoid by derailing the hierarchies we live under.

[Image: The Hat and the Shoe, an idea brought up throughout Snowpiercer, built on the notion that class is preordained and that upper classes, lower classes, and the inequality that separates them are needed to sustain society. Within the Snowpiercer, the idea becomes a matter of apocalyptic survival in the face of existential threats.]

imilbauer commented 5 months ago

#cyber #framing #origins

In these three texts, the authors aim to understand the social side of technological change and open worthwhile avenues of thought. Underlying the multiplicity of social possibilities discussed are two sociological issues that are essential to framing the problem and that are under-theorized by the texts. What is the future of human agency in relation to new information technology? And what is the future of social meaning-making in relation to new information technology? Regarding the first issue, the Bak-Coleman texts emphasize the possibility that humans can socially engineer better informational environments. However, an ecological approach of “stewardship” or social engineering is not fully grounded in feasible or ethical understandings of political organization and human agency. Regarding social meaning-making, Bergstrom opens worthwhile avenues for considering the role of apathy and meaninglessness in our information environment.

Bak-Coleman et al.’s under-theorization of human agency begins with their abstract, where they write “we argue that the study of collective behavior must rise to a 'crisis discipline' just as medicine, conservation, and climate science have.” The study of collective behavior comes through many channels: sociology, economics, anthropology, data science, public policy, and social welfare. These disciplines contain the origins of the type of thinking this paper engages in, and they contain intense debates about the sort of ecological modeling of human behavior that Bak-Coleman et al. seem partial to. For example, the sociologist Gabriel Tarde suggested ecological notions of the spread of human ideas in the 19th century. However, these disciplines also offer words of caution for the ecologically minded social thinker. Ecological models can lead to social Darwinism and the assumption that some groups are better than others. While the powerful computing approaches of ecological disciplines may be able to contribute to the social sciences, the robust literature on human agency should be kept in mind. Even as our tools to predict human behavior improve, the evidence is clear that humans can act in spontaneous and surprising ways. Thus, the first problem with large-scale social engineering is that it cannot fully predict human behavior.

The other issue with Bak-Coleman et al.’s notion of stewardship is that it is vague and potentially undemocratic. Bak-Coleman et al. write “ecological models suggest strategies such as establishing protected areas and using ecological cascades to manage deteriorating ecosystem. A similar approach can be adopted to study issues arising from human communication technology.” In this metaphor, if human behavior is the equivalent of an ecosystem, this “crisis discipline” would suggest the social control of information systems by an unspecified “manager.” Who gives this “manager” authority? Who determines how to control information in a way that doesn’t violate free speech?

Although Bergstrom is also an ecologist, he approaches the agency question with more nuance. He is of the view that technology can steer human behavior as much as humans have agency over technology. He doesn’t suggest we can predict human behavior or socially engineer ourselves out of our information crisis. I am drawn to how he describes our information environment as creating a decay in social meaning.
He describes the meaningless content we consume as “mental junk food.” (22) Humans have long consumed bad and misleading content but without the current volume and potential geopolitical ramifications. As addictive scrolling replaces social interaction, garbage content is not only fueling political strife but also a crisis of anomie.

Theory is important, but too much theorization may limit action. Perhaps Bak-Coleman et al.’s paper has received many citations because it yearns for a bold framework for action. But action may be less grand than the management of global “collective behavior.” Probably, companies will have to be regulated, schools will have to improve their teaching of digital citizenship, and cyber experts will have to root out bad algorithms and actors.

[Picture: Gabriel Tarde, one of the first thinkers interested in modeling the spread of ideas and information, though he spoke of change as proliferating through small interactions.]

aidanj5 commented 4 months ago

#solutions #origins

I very much like M-Hallikainen's memo, which describes how the problems of biased media, potential "fluff," and many other considerations are not unique to this moment. We have built on previous media and are engaged in the same struggle with misinformation that we always have been.

Now, it feels as though we've moved from a steady creek of information to white rapids. It's hard to get a grasp on how our society processes information for us, and it is also uncomfortable, at least for me. This firehose is turned on for a multitude of reasons. One aim is to drown out coherent sources of information we might be able to hear online. The example that comes to mind isn't a well-evidenced claim about an agent trying to distract from some important story, perhaps about a grand project like Keystone XL or the Russia-Ukraine war, but rather how Prigozhin, the leader of Russia's Wagner Group, hid his movements before his demise. I remember a video, released about three days after Prigozhin's death, of a man training Wagner troops in Africa who had the same voice and figure as Prigozhin. Looking more into how media about him was released, it was often reported that he would be flying to Belarus while video showed him at the same time in Africa, where he would then appear on a flight manifest to Moscow followed by a simultaneous appearance in St. Petersburg. Information showed him to be in multiple places at once, and ultimately these decisions fed a small conspiracy theory that Prigozhin did not really die, despite most media reporting that he did. A general characteristic of misinformation, then, is that it provides excess information, not just wrong information. This volume is characteristic of our time.

Bergstrom borrowed partially from Marshall McLuhan and Neil Postman, communications scholars who followed the thought that "the medium is the message." Neil Postman in particular gave several public lectures about the horrors of TV at that time. In a video I link below, Neil Postman describes how we might organize how we perceive information with a personal narrative in mind. Perhaps we already are experiencing these personal narratives, which we are using to selectively tune into this information overflow. I, after all, do not watch a lot of TV shows, but I engage with other media instead that I value more highly in my own scheme. I think, instead of trying to stymie the distribution of knowledge on the internet, we should cultivate how we construct these narratives to ensure that the information does not lead us away from ourselves. For a more concrete example, perhaps we have local town governments subsidize local media outlets. This action would likely be contested, because it sounds like propaganda. But if all the town governments' outlets are out there on the internet, there wouldn't be information that the propaganda is hiding. And the local government story allows people to construct meaning out of the information without being incredibly overwhelmed. This policy would effectively scale our education structure into adult life.

https://www.youtube.com/watch?v=8ApPkTvQ4QM&t=1s A video of Neil Postman describing how we are Informing Ourselves to Death.

miansimmons commented 4 months ago

#cyber #misinfo #solutions

I argue that technology companies, especially social media giants like Facebook and Twitter, are in the best position to limit internet misinformation. Broad awareness and education campaigns would take a large amount of time and resources, and the rapid development of new technology requires action in the short term. Additionally, strict government regulation of content may give rise to allegations that the government is restricting freedom of speech or investigative journalism. It would also put the government in a position to actually do so if it desired.

Technology companies to date have mainly put the onus on users to manage misinformation and discern truth from deceit. However, research has found that humans have a difficult time distinguishing between AI-generated news and real news, believing AI text to be written by humans the majority of the time. In light of AI's new ability to generate fake videos and speech, it is unfair to place the regulation of content on users, who are often overwhelmed by the "firehose" of information coming their way.

These companies should first use algorithms for good and invest in those that are capable of identifying false information. As Bak-Coleman mentioned in his article "Stewardship of Global Collective Behavior," data-driven models of how information spreads may inform strategies to reduce misinformation without requiring censorship. Algorithms can be modified to protect consumers by detecting and removing misleading content from their feeds; once algorithms are set up properly, company employees can push monetary incentives away from clicks and toward quality content. Further, in Carl Bergstrom's book, he mentions that many influential internet presences are not actually real people. Social media platforms should therefore require users to use their real names and take steps to verify identities. For example, dating apps like Hinge and Tinder are currently doing this to ensure that users are safe when dating and are interacting with people, not bots. People will also be more careful about what they post if they have to stand behind their claims.

Though there is a risk that technology company content management tools may be influenced by political interests, I maintain that they are in the best position to take quick action against misinformation on the internet. Minimizing propaganda and false information in the future depends on technology companies, which can either choose to encourage misinformation through their current algorithms and poor incentives or adopt a consumer-centric approach.


Example of dating app identity verification

agupta818 commented 4 months ago

#risk #policy

The paper "Create an IPCC-like body to harness benefits and combat harms of digital tech" has some strong ideas on how to deal with issues of misinformation and safety on online platforms. However, I fail to understand how the proposed committee, a UN Intergovernmental Panel on Information Technology, would be able to enforce its regulations on the large corporations that are at fault for, or at risk of, committing harm to their consumers. Can such a panel actually force major social media companies to change their ad targeting, data sharing, or algorithms? Can it ban people from accessing sites online or downloading certain applications? We have seen such bans at a more localized level, with countries banning TikTok, a social media application, but not at an international level. While I am not too familiar with the current laws and regulations surrounding such online developments and how an international system could be enforced, I feel as though there are always loopholes for such targeting.

For example, many social media applications have algorithms that tailor posts or videos to an individual's interests based on their likes and engagement time. Thinking about the panel's goal of protecting rights such as "rights to meaningful privacy and consent, a healthier information ecology and better safety online," do we as users consent to this targeting or use of algorithms? As a user of such apps, I know I have personally never sifted through the terms and conditions when creating accounts, and the use of such an algorithm could be mentioned there. Yet if it is there and I am agreeing to those terms, then it seems a panel would be unable to step in and block the use of such an algorithm, because we might be consenting to it...without even realizing it.

I looked into Instagram's privacy policy and discovered they can in fact collect data on what I like or comment on, how long I look at an ad and whether I interact with it, and then use this data to personalize my explore feed, my ads, and the order of posts and stories that appear when I open the application. Among the privacy settings you can change, this information cannot be altered; only minor settings can, like who can comment on your posts or whether you want to hide your like counts. Thus, when I signed up and agreed to a policy I did not even read, I consented to this personalization based on my information, and these algorithms can then perpetuate the spread of misinformation based on my liking a false post in my feed. Again, in such situations, would it not be difficult for such a panel to intervene? I had the option when signing up to read this, but I know that many young people like me are not taking the time to sift through these policies, or at least I assume the majority are not.

Thus, from a policy standpoint, how do we protect people's privacy and right to consent if they do not even know what they are consenting to? For people like me, a start is to encourage individuals to sift through the terms and agreements before blindly signing away their data and information. For companies, instead of requiring people to click external links to read their terms, they should have users consent to them step by step, so that people are forced to engage with the guidelines. While this may make sign-up take longer, a downside for corporations who might worry it hurts their numbers, it will better inform users and could protect companies from future interventions by their governments or international panels.

Pictures of the Instagram terms and privacy policy I agreed to without reading, and how they use my data:

oliviaegross commented 4 months ago

#cyber #framing #origins

Throughout the quarter we have been assessing what appears to be a pivotal point in human history, one in which civilization is facing unprecedented threats. These readings reminded me a lot of the movie The Circle, directed by James Ponsoldt, which emphasizes the potential for, and existence of, self-inflicted human catastrophe. The Circle portrays a fictional Silicon Valley with timely themes related to technology, social media companies, and the existential challenges they present through the speed and scale of information exchange. Similarly, these three texts attempt to understand technology's impact on social dynamics and open insightful lines of inquiry. Bergstrom opens lines of thought for considering the role of apathy and meaninglessness in our information environment. Separately, Bak-Coleman emphasizes the possibility that humans can socially engineer better informational environments.

Bak-Coleman's focus on the evolution of the scale of our social networks and its impact on collective behavior fascinated me, as did the existential crises it simultaneously presents. I appreciated the article's framing for understanding how the actions and properties of groups emerge from the way individuals generate and share information. Beyond actions and properties, I wonder how values and culture are additionally impacted by such change. While information flows were initially shaped by natural selection, the article breaks down how they are increasingly structured by emerging communication technologies (as we are now seeing). Additionally, the speed at which content can be generated and shared creates new possibilities for amplifying hate speech, misinformation, and disinformation. We have seen the threat this poses for our democracy, as articulated in these pieces.

More specifically, I am most concerned with how generative artificial intelligence (AI) systems that create visual and written content at scale can be used in ways for which the world is not prepared, culturally or legally. We have already seen the consequential impact that the scale of technology has on culture, so how will a completely new technology (AI) impact our culture and speech habits? I am very worried about the consequences of AI, not necessarily because of the technology itself, but because of our poor ability to adapt and respond to the aspects of tech that present threats, like scale.

maevemcguire commented 4 months ago

#risk #solutions #salience

While I thoroughly enjoyed the readings for class this week, I do not think, based on the readings, that misinformation and the like will lead to existential doom. The readings describe how changes in collective behavior (Bak-Coleman et al., 2021), the difficulty of attaining “the unvarnished truth” (Bergstrom, 2021), and “emerging information technologies” (Bak-Coleman et al., 2023) present imminent threats to our democracy. These emerging ways of interacting technologically may therefore lead to the end of democratic humanity as we know it, but not to the end of all humanity.

The discussion of bullshit, misinformation, disinformation, and fake news in the article “Calling Bullshit” reminded me a lot of a class I took a while ago called ‘Truth.’ In this class we explored the linguistic differences between bullshitting and different types of lying. For example, bullshitting is different from lying in that while bullshitting, the speaker doesn’t actually have an opinion, whereas when lying, the speaker intends for the listener to believe the opposite of what the speaker knows is true. An important perspective I took away from these conversations is that sometimes bullshit can be useful for suggesting things we think to be (or might be) true or correct but which we can’t fully pin down, and that bullshit isn’t inherently bad, just as truth isn’t inherently good. We also explored how high-frequency persuasive bullshitters are more susceptible to bullshit.

We also raised the question: To what extent are we as humans doomed to a world of bullshit? While I think that the development of the internet, social media, and especially AI is a threat, I had never before heard its unpredictability compared to the threat that climate change poses to society. In the 2023 article, the authors emphasize the inability to even gather data on, let alone understand or regulate, the threats of new artificial intelligence technologies. I also thought that analyzing the progression from Bak-Coleman’s 2021 article to his 2023 article highlighted the rapid progression information technology is currently making: the authors went from highlighting the need to collect data on these technologies to the need to create an inter-governmental regulatory body for their development. Finally, the way changes in collective behavior have become instantaneous online speaks to the feasibility and speed of cancel culture.

cbgravitt commented 4 months ago

#cyber #misinfo #origins #solutions #policy

As many of our readings alluded to, online news sources, real or otherwise, earn money from what are called click-through ads. In print media, the business model is that a popular publication is viewed by more people, and thus an ad in that publication is more valuable. However, in the age of online media, advertising companies realized they could do one better: they can explicitly track how many people interacted with an advertisement and monetize based on that. This is the notion of a click-through. It's not enough for a lot of people to see your page: a decent portion need to click on an ad for you to make some money. But the ads are already designed to be catchy, so all you really have to do is make people see them. And just like that, we've arrived at the motivation, covered in the readings, for flashy headlines and mass-produced articles about nothing. It's already somewhat clear that many of these news sites, especially smaller and more extreme or fringe ones, aren't about to change their strategy. So what if the advertisers changed instead?
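To make that incentive concrete, here is a toy back-of-the-envelope sketch in Python. Every number (CPM, cost per click, click-through rates) is hypothetical and chosen only to illustrate the structure: under click-based pricing, a flashier headline that doubles the click-through rate doubles revenue, regardless of how empty the article is.

```python
# Toy comparison of impression-based vs. click-based ad revenue.
# Every figure below is hypothetical; only the incentive structure matters.

def cpm_revenue(pageviews: int, cpm: float) -> float:
    """Print-style / impression pricing: paid per 1,000 views, clicks irrelevant."""
    return pageviews / 1000 * cpm

def cpc_revenue(pageviews: int, click_through_rate: float, cost_per_click: float) -> float:
    """Click-through pricing: paid only when a reader actually clicks the ad."""
    return pageviews * click_through_rate * cost_per_click

views = 100_000
print(cpm_revenue(views, cpm=2.00))                                      # 200.0, however dull the page
print(cpc_revenue(views, click_through_rate=0.01, cost_per_click=0.50))  # 500.0 with a sober headline
print(cpc_revenue(views, click_through_rate=0.02, cost_per_click=0.50))  # 1000.0 with a flashier one
```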

Online advertising is highly concentrated: Google and Meta combined control about half of all the ads seen online, and other massive companies like Amazon and Microsoft have large stakes too. That, worryingly, means that the same platforms hosting the bulk of the misinformation problem are also its root cause, and they're profiting doubly from it. So convincing them to do just about anything is going to be tough. The solution I propose, fortunately, does not require their cooperation. The key idea revolves around a way we already have of authenticating websites: certificates. Pretty much every website you visit will have a certificate signed by a certificate authority that basically says "yes, this website is legit". Now, there are many problems with this system that I won't go into here, such as rogue certificate authorities and the fact that relatively few people know that a signed certificate doesn't guarantee security. I instead propose a new type of certificate authority exclusively for websites claiming to be news websites.

News websites will have to apply to receive a signed certificate authenticating themselves as relatively well fact-checked news organizations. News websites without this will have to carry a substantial label on their front page and articles declaring that they don't have one and what this means. More importantly, though, advertising companies will receive tax penalties based on how much ad space they buy on news organizations without such certificates, discouraging them from providing revenue to untrustworthy platforms. I view this as a way of preventing websites reliant on misinformation and disinformation from growing substantially without actually restricting their free speech. They can still post, they just can't make much money. One glaring issue is how to come up with a new certificate authority that can objectively and apolitically determine which news sites are trustworthy. I leave this issue up for discussion, as there is far too much to be covered in the remainder of this post.
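As a rough illustration of the mechanics (not a real PKI design), here is a minimal Python sketch of the check an ad network might run before buying space on a site. It assumes the hypothetical "news CA" issues ordinary RSA-signed X.509 certificates and uses the third-party `cryptography` package; the penalty function and its 20% rate are invented for illustration.

```python
# Minimal sketch: verify that a site's certificate was signed by a hypothetical
# "news certificate authority," then tax ad spend on uncertified sites.
# Assumes RSA-signed PEM certificates; a real deployment would also check
# validity periods, revocation, and the full chain of trust.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding


def is_certified_news_site(site_cert_pem: bytes, news_ca_cert_pem: bytes) -> bool:
    site_cert = x509.load_pem_x509_certificate(site_cert_pem)
    news_ca = x509.load_pem_x509_certificate(news_ca_cert_pem)
    try:
        # Did the news CA's key produce the signature over the site cert's contents?
        news_ca.public_key().verify(
            site_cert.signature,
            site_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            site_cert.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False


def uncertified_ad_penalty(spend_by_site: dict[str, float],
                           certified: dict[str, bool],
                           rate: float = 0.20) -> float:
    """Hypothetical tax penalty proportional to ad spend on uncertified news sites."""
    return rate * sum(spend for site, spend in spend_by_site.items()
                      if not certified.get(site, False))
```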


WPDolan commented 4 months ago

#solutions #emerging #misinfo

One issue that I would like to elaborate upon is the ability of generative AI to create disinformation, and what tools society has to reasonably curtail its use.

I don't have much faith in human or AI-based detection efforts in the long term. Currently, most AI-generated images are identified by observers when they find differences between the AI-generated image and what they would expect a "real" image with those qualities to have. For example, AI-generated images of human hands are often identified as illegitimate because they contain more than five fingers or are otherwise bending in physically impossible ways; this isn't something we would usually expect to be present in an image sourced from a physical camera.

However, there is no evidence that future generative tools won't eventually learn how to remove these artifacts. It is very feasible that a maximally efficient generative AI tool will learn valid distributions for output images that are indistinguishable from images of real events with similar qualities taken by digital cameras. Digital images are just large grids of editable pixels, and without metadata we have no way to determine their source. If I went and copied values pixel-by-pixel from a real image onto a similarly sized blank canvas, my resulting "fabricated" image would be identical to the source image generated by the camera. There is nothing we could really do to determine the veracity of images purely based on how they are displayed on our devices; Photoshop and Midjourney don't place giant watermarks over their images that state "this is not real".

One potential solution would be to force generative tools to embed information signifying fabrication in their outputs, either in their metadata or through steganographic techniques. We could then develop tools that search for these messages in an output image. This solution is contingent upon the willingness of AI toolmakers to include these features in their products and their ability to somehow prevent the public from manually removing them. This would also require a closed-source AI ecosystem (which was advocated by Geoffrey Hinton in week 2), where only trusted individuals have access to the underlying features of groundbreaking AI tools. This effort would also be defeated by the release of any advanced generation tool without safety features.
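To make the steganographic option concrete, here is a minimal sketch using Pillow that hides a short provenance marker in the least-significant bits of an image's red channel. This is not how any real generator marks its output, the MARKER string is invented, and re-encoding, cropping, or resaving as JPEG destroys the marker, which is exactly the removal problem described above.

```python
# Toy LSB steganography: embed and recover a short ASCII provenance marker.
# Illustrative only; trivially stripped, unlike a robust watermark.
from PIL import Image

MARKER = "AI-GENERATED"  # hypothetical provenance tag


def embed_marker(in_path: str, out_path: str, marker: str = MARKER) -> None:
    img = Image.open(in_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in marker.encode("ascii"))
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the marker")
    stamped = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite the red channel's lowest bit
        stamped.append((r, g, b))
    img.putdata(stamped)
    img.save(out_path, format="PNG")  # lossless format so the hidden bits survive


def read_marker(path: str, length: int = len(MARKER)) -> str:
    img = Image.open(path).convert("RGB")
    bits = [str(r & 1) for (r, _, _) in list(img.getdata())[: length * 8]]
    data = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
    return data.decode("ascii", errors="replace")
```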

On the other hand, we could ensure the veracity of online content through cryptographic signatures and trust in the original publisher. For example, I would know that figure X really did say the words in their speech because the attached file contains data that could only feasibly be generated by a trusted news organization. This would, however, restrict trust to a small circle of pre-vetted organizations, significantly limiting the ability of individuals and small-scale journalists to have their voices heard on the internet.
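The publisher-signature approach is standard public-key cryptography. Here is a minimal sketch with Ed25519 from the `cryptography` package, assuming readers already hold the news organization's public key via some trusted channel (which is the hard part noted above):

```python
# Minimal sketch: a publisher signs the bytes of a published file; anyone with
# the publisher's public key can check the file hasn't been altered or forged.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Publisher side: generate a keypair once, sign each published file.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

speech_video = b"...raw bytes of the published speech video..."  # placeholder content
signature = private_key.sign(speech_video)


def is_authentic(data: bytes, sig: bytes, publisher_key: Ed25519PublicKey) -> bool:
    """Reader side: verify the bytes against the publisher's known public key."""
    try:
        publisher_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False


print(is_authentic(speech_video, signature, public_key))                 # True
print(is_authentic(speech_video + b" tampered", signature, public_key))  # False
```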

Both approaches are suboptimal, but I nevertheless believe that society will need to develop alternatives to “eyeballing” digital images and AI-detection tools to remain resilient to future generative models.

[Image: AI-generated hands from Craiyon]

mibr4601 commented 4 months ago

#cyber #solutions

I am very interested in the battle between creating and countering fake news. The CS department at the university recently came out with Nightshade. Nightshade’s purpose is to deal with AI models copying and using artists’ work to create their own. The idea is to distort the feature representation of the artists’ works so that the image doesn’t visually change much to the human eye. However, when the AI views the image, it sees something completely different. What I found most interesting is that the creators consider themselves to be the ones on offense, and thus they can more easily adapt their program to any countermeasures taken against them.

In chapter 2 of Calling Bullshit, Bergstrom and West think that the opposite is true when it comes to fake news. They argue that the same artificial intelligence techniques used to create these countermeasures can also be used to get around the detectors. They believe that in this arms race, the detectors are unlikely to win. The detectors will have to understand the small changes in AI that are used to get around their defenses; the adapting AI would only need these small changes in its algorithms, while the defense would have to build completely new defenses. They are not very optimistic about regulations or countermeasures; instead, they believe that education is the best way to counter the spread of fake news. Nightshade is similar in many ways to these detectors, as it is a method for disrupting AI generation with ill intentions. The main difference is that Nightshade is not taking defensive measures but working to attack the algorithm. In this case, the people working on generative AI will have to find ways to identify Nightshade, but then the roles reverse: Nightshade will only need small changes, and the AI side will have to find a new way to identify poisoned images.

This primarily works with images, since the image doesn’t alter much to the human eye. However, how would this affect the spread of misinformation? To start, I think that this could work well to make it harder for AI to create believable images, videos, and maps. One of the scariest parts of emerging technologies is the ability to create believable falsified videos. If AI can accurately make videos, then it will be much harder to distinguish between the real and the fake. Fake videos could be used for a plethora of dangerous operations. They could be used to scam people by depicting fake situations. Even worse, they could be used on a national scale to create false impressions of an ongoing war or situation. It is easy to accept a video as reality, which makes believable deepfakes a scary topic. Maps are in a similar vein. Most people are willing to look at a map, not fully understand it, and yet still believe it. If AI can produce maps that look accurate at first glance, that could be very dangerous. These maps could easily circulate on social media and change people’s perspectives before they are recognized as falsified. These poisons are essential in trying to stop AI from being used to spread misinformation. I do think that it is much harder to apply this approach to text, though. Thus, I agree with Bergstrom and West that it is crucial to also work on the education front. If people are well educated, then they are less likely to believe catchy headlines that are not true. Awareness of social media should be taught more in school so that people don’t just believe everything they read and can form well-informed opinions.

image

mibr4601 commented 4 months ago

movie

I watched Zero Days, a 2016 documentary about the Stuxnet worm discovered in 2010. We now know that it was created by the NSA and Israel's Unit 8200 and was designed to slow Iran's progress toward nuclear weapons. The Stuxnet case raised important questions about cybersecurity and warfare. Its discovery raised the question of how governments should, and whether they really could, regulate emerging technologies. While Stuxnet was not designed to spread misinformation, it had effects that misinformation could have: for example, it led to the firing of many Iranian nuclear scientists, because people believed the scientists were making mistakes rather than that there was a worm. The documentary highlighted how easy it is for one nation to sabotage another through technology and how difficult it is to create global regulations on the matter. While a worm that blows up centrifuges in a nuclear plant is an extreme example of cyber warfare, cyber warfare can take many forms, such as the spread of misinformation in the 2016 election.
The documentary also highlighted the vulnerability of United States infrastructure. Because we are so technologically interconnected, we are at very high risk of these attacks. If another nation wanted to, it could likely turn off power and shut down the internet for many hours, with huge economic repercussions. Furthermore, it would be very difficult to figure out which nation was behind the attack. With Stuxnet, there were some clues, but it still took years to figure out that it was the NSA and Unit 8200, and even after the leaks, neither country owns up to Stuxnet. In response, Iran created its own cyber army, which wiped code from Saudi Aramco computers and attacked online banking in the US. While cybersecurity teams will always try to make sure these attacks can't happen, we are always at risk of some getting through.
Gibney does a very good job depicting the danger created by the interaction between cyberspace and global politics. What stood out to me was that quite a few people I talked to had never heard of Stuxnet before. These viruses and worms live in an almost invisible space, yet they are so dangerous. Stuxnet could easily have been considered an act of war and sparked a global conflict, yet most people have not even heard of it. Furthermore, the government wants to keep using zero-day exploits, so it avoids getting them patched and leaves many computers vulnerable. After reading reviews, many viewers were left disturbed by how dangerous cyberspace can be and how security institutions keep everything secret. The government holds so much information back and is unwilling to talk about past or present operations, so most people know almost nothing. This documentary gave a very good portrayal of the hidden and terrifying world of cyber warfare.

image

madsnewton commented 4 months ago

misinfo #risk

Following the widespread Black Lives Matter protests in 2020, a new kind of social activism took over social media: Instagram slideshows and infographics. The trend takes relevant social issues and turns them into easily digestible slideshows and infographics guaranteed to match the aesthetics of Instagram users' feeds. Only on Instagram can you read a post about an active humanitarian crisis accompanied by a sparkly pink slideshow that probably took longer to make than it did to research the topic. These posts often go viral and are shared thousands of times. How much misinformation is inadvertently spread when people repost them without a second thought?

Users are so desperate to share their beliefs with their followers in a trendy, aesthetic way that independent fact-checking often goes out the window. In 2020, University of Chicago Professor Eve Ewing shared her own infographic warning about misinformation associated with these kinds of posts. In the caption, Ewing writes, "Really excited that people are using social media and graphic design in exciting ways to educate and inform. But let's keep our critical lenses with us as we would with any other media. Who is the source? Where are they getting their information? What are they about? What's their track record? Why are they a credible party to listen to on this issue?" This type of information should be accessible, and social media is a good vehicle for getting it out; however, as with any media, the threat of misinformation is still there. Much of this comes down to the responsibility of the consumers of these posts. The slideshows and infographics are a great starting point for social justice issues, but they shouldn't be anyone's primary source of information, nor should they be blindly reposted without verifying the information. In Calling Bullshit, Carl Bergstrom and Jevin West say, "Social media posts are unconstrained by most borders. And they are shared organically. When social media users share propaganda they have encountered, they are using their own social capital to back someone else's information." And while someone you went to high school with sharing one of these posts that gets some facts wrong isn't going to perpetuate much misinformation, an influential celebrity or content creator sharing it will.

image Eve Ewing's post warning social media users to be conscious consumers.

emersonlubke commented 4 months ago

solutions

https://www.tiktok.com/@misterhigh5/video/7330011900660501790

Instead of an image, I've linked a TikTok that is a deepfake of Kansas City Chiefs head coach Andy Reid talking about one of their wide receivers, Kadarius Toney. In the video, Reid seems to say that Toney has "brick ass hands," among other disparaging remarks. When I first saw the video, I thought it was real for the first 10 seconds, but once Reid started saying things I knew he wouldn't say, I realized it was fake. The scary part is that I try to be pretty keen on this stuff and not fall for fakes, and anyone who didn't know anything about Andy Reid would be much more likely to think this was a real video of him speaking. In the past, stuff like this happened all the time in a much less convincing way: people would just make up quotes and pretend someone said them, which could be dismissed as false pretty immediately. Now, with video "evidence" of these quotes, it's becoming much harder to immediately discern fact from fiction. In Calling Bullshit, Bergstrom says that malicious AI in this regard is going to evolve and develop faster than any fact-detecting and fact-verifying AI, so AI can't be the solution to this problem. I wonder if that's the case. Not only could you train an AI to look at the video and try to detect its fakeness, but you could also train an AI on Andy Reid's speeches and teach it everything about him. Would it be able to tell that the statement is out of character enough that the video must be fake? That's how I could tell the video was fake: not because it looked fake (I wasn't looking very closely, I suppose), but because I knew Andy Reid would never say something like that. Could we teach an AI to have common sense in that regard and use critical thought to understand that Reid would never say that?
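
As a very rough illustration of that last idea, here is a toy "out of character" check that compares a suspect quote against a small corpus of a speaker's known remarks. The quotes, the threshold, and the TF-IDF approach are all hypothetical stand-ins; real stylometric or deepfake detection would need far more data and far more sophisticated models.

```python
# Toy "out of character" check: compare a suspect quote to a speaker's known
# remarks and flag it when it looks nothing like how they usually talk.
# The quotes below are invented examples, and the 0.2 threshold is arbitrary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_quotes = [
    "We just have to keep working, keep grinding, and trust the preparation.",
    "The guys battled out there today and I'm proud of the effort.",
    "We'll look at the film, clean a few things up, and get better from it.",
]
suspect_quote = "He has brick hands and I never want him on the field again."

vectorizer = TfidfVectorizer().fit(known_quotes)
known_vecs = vectorizer.transform(known_quotes)
suspect_vec = vectorizer.transform([suspect_quote])

# Highest similarity between the suspect quote and anything the speaker
# is actually on record saying.
similarity = cosine_similarity(suspect_vec, known_vecs).max()
print(f"max similarity to known speech: {similarity:.2f}")
if similarity < 0.2:
    print("flag: this quote looks out of character for the speaker")
```

Of course, what made the real video convincing was the audio and footage, not just the words, so any practical detector would have to combine this kind of style signal with forensic analysis of the video itself.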

AnikSingh1 commented 4 months ago

risk #salience

I believe that something not many sources are pointing to is the risk created when an authority starts regulating or limiting online and internet media. The internet has been built up over time into a subculture entirely of its own. The creation of a space that preserves anonymity and allows people to share information has produced a system used by the whole world today. While misinformation does pervade our "normal" day-to-day lives by giving us biased views and layers, I feel that the solutions to the problem can actually create more problems than we realize.

Yes, social media and the internet as a whole have drastic problems, as highlighted throughout Calling Bullshit. But I am interested in the after-effects of Bergstrom's proposed solutions. More specifically, Bergstrom notes that government regulation of fake news raises First Amendment concerns and the question of who gets to decide what is and isn't fake news. These points are strong and worth grappling with if we were ever put in the position of making that call. I am interested in the impact this would have on how we view the Internet; more specifically, how would those who use it feel about such laws? The internet subculture is full of memes, inside jokes, subcommunities, and more; you could call it its own country if you wanted to. Part of the beauty of this system is that there isn't really a position of authority: you put a bunch of users across multiple websites and pages, and so long as you "abide" by what the page says, you can't really do something on the level of criminal charges (unless you directly harm another person's real life, though with anonymity that becomes much more difficult).

The introduction of a position of power into the internet space would, I feel, invoke a counterculture-like response. The system would be looked down upon, people would band together, and we would replay the real world all over again, just online instead of offline. It is this dynamic that makes me question whether you can even regulate what the internet is; it itself doesn't even know what it is. Aside from being a means of passing information, it can take many forms, sites, and backgrounds. Misinformation is definitely an issue that should be targeted, but part of the information IS the misinformation. A more pressing issue, I feel, is the amount of time and trust we put in these sources: rather than treating the internet as one resource among many, as we once did, it has become the be-all and end-all for many people's opinions, facts, and perspectives. That value has been taken offline and put online for all to see and trust in. THAT issue, which is cited as the third reason to fight misinformation, feels like the endgame for fighting misinformation from the ground up: changing the culture by introducing a newer, sharper one.

image

Here's a fun comic I found that is funny at first glance... and kind of depressing when you think about what authority could do to a space like this if it were tapped into.

Hai1218 commented 4 months ago

policy #framing #misinformation

I believe the rise of social media platforms and the widespread use of AI tools have significantly impacted our behavior, creating an urgent need to resolve the disconnect between the profit goals of AI companies and the societal values upheld by academia and government. The creation of an Intergovernmental Panel on Information Technology marks progress in understanding and molding the information landscape, yet its impact depends on harmonizing these conflicting interests. It's essential to introduce mechanisms that enforce transparency and data sharing and also motivate AI companies to ensure their business models promote societal progress along with profitability.

However, aligning the profit-focused objectives of AI firms with the societal goals of academia and governments is a formidable challenge, potentially undermining the effectiveness of a centralized information governance authority. This divide raises issues of credibility and enforcement. The real task is to build an ecosystem where financial success and social responsibility are interwoven into the DNA of tech companies. This requires inventive policy frameworks that encourage cooperation among the private sector, academic institutions, and governments, driving initiatives that benefit both the marketplace and the wider community. Through this alignment, the Intergovernmental Panel could lead to more accountable management of digital technologies, ensuring that the tools and platforms we rely on enhance societal communication and decision-making.

In wrapping up, I hold reservations about the prospects of the Intergovernmental Panel on Information Technology. Its success rests on successfully merging the market-centric goals of AI companies with the collective societal interests of academia and government. However, unlike climate change, which affects everyone and can be collectively addressed without direct losses, the realm of information is often politicized and profit-oriented. The real question is: if technologies like deepfakes can be politically exploited, would decision-makers actively work against them, especially during critical times like elections? The stakes in information technology are inherently high, and zero-sum, turning it into a battleground of competing interests.

image TikTok chief Shou Zi Chew testified in front of the Senate Judiciary Committee alongside the CEOs of Meta, X (formerly Twitter), Snap, and Discord about the risk of child sexual abuse material on their platforms. Sen. Cotton repeatedly demanded to know whether Chew had been a member of or affiliated with the Chinese Communist Party. Chew, growing visibly frustrated, replied every time that he is Singaporean. A hearing on potential actions against abusive material on online platforms can be spun into an awkward exercise in making a political point.

kallotey commented 4 months ago

framing

The “Stewardship of global collective behavior” article by Bak-Coleman et al. makes the point that human collective behavior must be studied as carefully as other crisis disciplines such as medicine, conservation biology, and climate science. By understanding human collective behavior, we can better address challenges such as the lack of urgency to avert crises (one example we've been discussing a lot lately being climate change). Within this crisis discipline, four factors the authors want us to pay particular attention to are the increased size of human social networks, the changes in those networks (by virtue of connections across many borders), the types of information that tend to spread, and the use of algorithms in circulating information. The authors explain that failing to understand human behavior amid the advancement of technology is ultimately going to lead to our decline (in this case, I would imagine, our demise with regard to crises), because amid all this confusion, mis- and disinformation continue to circulate.

I agree with the authors, especially with regard to our conversations about the oversaturation of news on certain issues. So many differences in opinion, loud opinions, and polarized opinions greatly influence human agency. I wonder, at an ethical level, how we could desaturate people's feeds. It would be hard to police how information is spread and who sends what to whom, so what could genuinely be a plausible solution to this issue? The more we study how humans interact with communication technology, the better we can grasp the causal mechanisms behind our actions, but that doesn't necessarily get us to a solution. Can we hope that algorithms are adjusted to mitigate this issue? I highly doubt it when the way they function now is extremely profitable. Should we bring policy into the matter? Won't that just upset the general public? I'm unsure how to go about a solution.

Source: https://images.app.goo.gl/9SEgs6L9EXdawVYz9 image

summerliu1027 commented 4 months ago

Solutions

When reading Carl Bergstrom's piece, I was surprised by how removed I personally felt from the threat of misinformation before taking a closer look at it. I am under the impression that my peers and I, who have benefited from higher levels of education, rarely fall for clickbait and Internet scams. We were taught critical thinking and fact-checking skills at some point during schooling, particularly in high school and college, and this educational background has instilled in us a habit of skepticism that serves as a defense against misinformation.

Reflecting on this, it becomes clear that education could address the misinformation challenge much more effectively than some of the other methods Bergstrom mentions. By incorporating media literacy and critical thinking into the education system, we can equip everyone with the skill set to navigate the digital world with discernment. The skill set could be as simple as: (1) always be in doubt; (2) when in doubt, fact-check. As Bergstrom notes, smartphones give us both sides of the same coin: although we often receive false information, we can also actively gather facts using the Internet. This also promotes a societal norm in which questioning information is encouraged, extending these critical skills beyond academic settings.

Technical solutions, such as developing smarter algorithms to flag misinformation and controlling media biases, are also vital. However, these must be complemented by educational and cultural approaches to see change happen as soon as possible. Proper algorithms or regulating the media can reduce the total amount of false information, but these methods might not always get down to the last piece of false information hidden somewhere in the vastness of the Internet; people who aim to spread false information, especially those that make a handsome profit from it, will always attempt to outsmart the existing regulatory structure. Therefore, it becomes important that each individual still possesses a level of discernment. While it's not always possible to verify every piece of information, maintaining a mindset that is open to questioning and reevaluating facts is crucial. This ensures that even if we are misled, we remain receptive to the truth when truth is presented to us.

image The modern information era is an unprecedented challenge to our critical thinking skills.

jamaib commented 4 months ago

salience #risk #solutions

Like Maeve, although I am very cognizant of the dangers of misinformation, I could not help but feel that we will be OK when reading Bergstrom's Calling Bullshit. I would be the first to admit that the sheer amount of information (a large portion of which is misinformation) is oftentimes overwhelming and disheartening. Not only is it mentally taxing to sift through millions of contradicting viewpoints, but it also seems that the quality of opinion has suffered with the inception and advancement of the Internet. However, barring the older generations, most of humanity has seamlessly adjusted to life on the Internet. Although media literacy needs improvement across the board, the average millennial and Gen Zer can discern between biased and unbiased posts, and between factual blogs and "clickbait." In fact, it seems that for every Internet user spreading misinformation, there is another telling us that we are reading misinformation (Bergstrom's article is proof!). Although there is no doubt that the advancement of technology has introduced a large number of people to disinformation, it has simultaneously made people more aware of disinformation. I believe the problem often lies more with whether individuals actually want to consume trustworthy information while actively avoiding disinformation and misinformation.

I do, however, agree with Bergstrom's assertion that government policies need to be implemented to regulate "essential information." For example (though I am no policy maker), companies that cover the news, advancements in science, medicine, etc. should all have to undergo some sort of vetting system in which the information they release is checked for authenticity and bias. Furthermore, each company or entity could be given some sort of grade known to the public, giving people an easy way to know whether the information they receive can be trusted. Of course, a more thorough and thought-out policy would be necessary. It should also be noted that if this process were solely the responsibility of the government, there is a possibility the information could be corrupted.
image

ghagle commented 4 months ago

#origin #solutions #framing #policy

At the very end of "Calling Bullshit," Bergstrom and West call for the "educating [of] people in media literacy and critical thinking" in order to ideally "solve[] from the bottom up" the problem of disinformation. "Stewardship of global collective behavior" discusses how our current communication and information age is surely warping our global collective behavior in ways we don't understand; its detrimental shaping of our collective health is, at the very least, well researched and known. Last, we read about new theories of disinformation regulation in "Create an IPCC-like body...". All three perspectives are interesting and useful, but none seems to target the real source of all these issues: social media itself. Social media is highly addictive. It harms our focus, health, relationships, and, perhaps through its extremism-inducing quality and disinformation, the very existence of democracy. From population decline to the loss of data autonomy, the downsides of the new internet media universe are bountiful. Therefore, like recreational drugs, which also at one time had poorly researched effects on people and their relationships, no regulatory limitation, and certainly no adequate educational or restriction regime governing their use, social media should come under heavy scrutiny not just as a disinformation carrier but as an unhealthy 'drug' outright. We should be thinking much deeper than just a couple of its features.

First, our articles left out the ways many people are encountering doom through social media use today. Social media use is known to be correlated with increased rates of suicidal thoughts and actual suicide, and recent studies show a strong relationship between time online and youth depression as well as poor school performance. Yes, these things aren't quite the same kind of doomed as 5 billion dead from nukes or cities across the world being engulfed by rising oceans, but they are certainly bad nonetheless. Compound them with the threats detailed in the readings, from disinformation to the infinitude of content, and the future looks bleak: a more addictive and more harmful online experience that only threatens to encompass more of daily life. The risks are more expansive than just democratic strife (although the anecdote of the Israel-Pakistan nuclear hiccup is perhaps indicative of a different level of risk than any other).

Consequently, the response to the social/internet media phenomenon should be more expansive. For one, Bergstrom and West's idea of an education system training young people to discern fact from fiction is essential, as is a regulatory commission and more research into what is happening to people themselves. But, like a drug, we should also consider regulating social media's availability and dosage in ways that protect rather than infringe; there is always a risk of media restriction being used as a political silencing tool. Increasing (and actually enforcing) age limits on use in order to protect the health of the young, and allowing individuals to "pick their doses" and regulate the extremity of the algorithms that feed them content, would certainly be approaching a more sustainable path forward. It is, of course, a tricky balancing act between protecting rights and protecting safety and non-doomedness, but it is a balance that is important to find.

image

Should we be treating social media like a drug?

briannaliu commented 4 months ago

#misinformation #risks #solutions

Misinformation has existed since the internet was invented, but recent developments in AI have exacerbated its prevalence.

In November 2022, the chatbot ChatGPT was launched to the public, taking the world by storm. At the time, ChatGPT was text-only and could answer questions based on its training data only up to September 2021. People would turn to the chatbot to answer questions of all types, whether it be math problems, science questions, or issues on current events, and the chatbot’s lagging knowledge made the technology highly prone to making up facts when it didn’t know the answer – these phenomena were dubbed “hallucinations.” Fast forward to today, in January 2024, and ChatGPT is trained on data only up to January 2022, a whole two years ago. Yet, there is only a tiny cautionary message written in fine print: “ChatGPT can make mistakes. Consider checking important information.”

Aside from the advent of ChatGPT, the ability to generate AI-developed images of whatever you wish has opened a whole new can of worms. Just this week, explicit AI-generated photos of Taylor Swift circulated across X (formerly Twitter). One image shared by an X user was viewed 47 million times before the account was suspended on Thursday. It was only after tens of thousands of reports of abuse by users that the photos were taken down, a whole 17 hours later (article). This incident demonstrates the growing tendency of tech companies to shift responsibility for problems onto individual users. If it took 17 hours and tens of thousands of reports to get these explicit AI photos of one of the most famous people in the world taken down, AI image surveillance is clearly not a priority for X. In fact, X has seen an increase in problematic content including misinformation, harassment, and hate speech since 2022, when Elon Musk bought X and loosened its content rules, restored several banned accounts, and laid off many employees who were in charge of removing misleading content. X's efforts toward curbing misinformation are now limited mostly to warning labels for misleading content, but research suggests that these labels, when used in isolation, are unlikely to make a serious impact on the spread of misinformation overall.

Even more, the fact that Taylor Swift, a world-famous pop icon named "Person of the Year" by Time, was a victim of nonconsensual pornography and had so little control in fighting it speaks volumes. Taylor's incident is not novel, however, as deepfakes have become an increasingly powerful force of misinformation since AI image generation tools were first made public. Deepfakes enable ordinary internet users to create and proliferate nonconsensual explicit images or false portrayals of important figures, from celebrities to political candidates and leaders.

When thinking about solutions, it is perhaps more helpful to target the source first: that is, the AI image generators that produce the images in the first place. One way to prevent the spread of unsafe AI imagery is to ensure that AI models are not trained on unsafe images, so that they cannot "replicate" this type of imagery and generate explicit photos when prompted. In addition, blocking unsafe words from text prompts could help stop the production of these images.
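
To show how thin that second line of defense can be, here is a minimal sketch of a prompt-side blocklist. The term list and the generate_image function are invented placeholders, not any real provider's API; simple word filters like this are trivially evaded through misspellings and euphemisms, which is part of why curating the training data matters more.

```python
# Minimal sketch of a prompt-side safety filter. BLOCKED_TERMS and
# generate_image() are hypothetical placeholders, not a real API.
BLOCKED_TERMS = {"nude", "explicit", "undressed"}

def generate_image(prompt: str) -> str:
    # Stand-in for a real text-to-image model call.
    return f"<image generated for: {prompt}>"

def is_prompt_allowed(prompt: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in prompt.split()}
    return BLOCKED_TERMS.isdisjoint(words)

def safe_generate(prompt: str) -> str:
    if not is_prompt_allowed(prompt):
        raise ValueError("prompt rejected by safety filter")
    return generate_image(prompt)

print(safe_generate("a watercolor painting of a lighthouse"))
# safe_generate("an explicit photo of a celebrity") raises ValueError,
# but a misspelling like "expl1cit" sails right through; the filter is easy to evade.
```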

All in all, misinformation is an increasingly critical issue in the age of generative AI that has outsized risks for society. It is a problem that is hard to contain, so sufficient efforts must be made to curb it.

image
tosinOO commented 4 months ago

policy #solutions #risk

In addressing the challenge of harnessing the benefits and combating the harms of emerging digital technologies, the article from Nature proposes the establishment of an intergovernmental panel, akin to the IPCC, for digital tech. This proposal mirrors historical contexts like quarantining during plagues, where proactive and collaborative measures were vital. The creation of such a body would involve a comprehensive approach, evaluating the implications of technologies like ChatGPT and implementing policies that maximize their advantages while minimizing risks. Historically, it seems collective responses to existential threats, such as the widespread use of quarantine during plagues, have shown the effectiveness of coordinated action. I think the proposed panel for digital tech would function similarly, providing a platform for synthesizing global research and insights. This body might be able to develop standard protocols and ethical guidelines, ensuring that advancements like large language models benefit society without compromising privacy or security.

Unlike unilateral actions by individual nations or companies, an intergovernmental panel ensures a balanced, global perspective, reducing the likelihood of biased or limited viewpoints. This body could conduct cost-benefit analyses of new technologies, recommend best practices, and propose regulations that foster innovation while safeguarding against misuse. I personally believe the panel might end up turning into a UN type situation where the most powerful voices are from the biggest nations and cater to their interests. In theory though, the panel would serve as a guardian of digital rights, addressing concerns about data privacy, misinformation, and the digital divide. A best case scenario is the policies could encourage responsible AI development, emphasizing transparency, accountability, and public engagement. This approach aligns with principles of ethical AI, promoting technologies that respect human dignity and diversity.

I think public awareness and education about the implications of digital technologies are essential. Individuals can advocate for responsible tech use, participate in discussions about digital ethics, and support regulations that protect digital rights. They could use their votes to try to influence change in the industry, although this hasn't held true for many other political causes (climate change or even gun activism). Fostering a culture of digital literacy and responsibility is key. Artistic endeavors can also raise awareness about the impacts of digital technology, sparking public discourse and influencing policy.

image

AudreyPScott commented 4 months ago

misinfo #risk

Firstly: my apologies as to how this will look in Github. Apparently trying to write in Word to check your word count and then copy-and-pasting is a terrible crime. It clocks in at under 500, promise.

image

image

Image: Had to whip out the old Meme Generator for this one. All roads lead to Rome.

aaron-wineberg02 commented 4 months ago

I propose that our entire system for controlling misinformation is wrong. In my observation, there have been three half-baked methods: 1) educating people to detect false information online, 2) filters that automatically remove content, and 3) removing or flagging incorrect information through users or moderators. None of these methods has been successful in controlling fake news, disinformation, or incitement. At least anecdotally, deliberately harmful content has amplified social tensions globally. We observe this regularly, with political unrest and massive social change being propagated through just a few platforms.

The question ought to be who is responsible, not who should be policing misinformation. In the Stewardship of global collective behavior, the authors argue that online speech is overwhelming and rapid. It is also cheap. People can be quickly overwhelmed by many new sources of information. Rather than focusing on the outcomes, can there be societal changes outside digital platforms that encourage users to be thoughtful?

Consider the testimony earlier today in Congress from the CEOs of various social media platforms. No platform is held accountable, nor is any anonymous user, despite massive public outcry. Despite this massive change in human communication and social order, as the authors argued, no legal framework has followed suit.

One alternative, I propose, is creating a brand-new court system in which users of digital platforms can seek restitution for the harms of misinformation. Given the social and economic harms involved, malicious acts would lead to legal settlements to remedy the specific harms. One could claim this raises First Amendment and free speech concerns. However, consider how misinformation can lead to incitement based on that information. Laziness would be replaced with accountability. Speech would not be cheap.

The Stewardship text discusses how people reacted in harmful ways to the Covid-19 pandemic because of misinformation. This was a situation of survival for many. Would people embrace a reality where accountability rested on the results of one's speech? This would create a real economic incentive for social media platforms to build trust among users. Perhaps it would shut down many major forums; however, I ask: are any of these forums so valuable that they are worth protecting at the expense of clarity?

This is possibly an authoritarian proposal but I suspect it may become a reality in some countries.

Policy #Risk #Solutions

image

GreatPraxis commented 4 months ago

solutions #misinformation

In today's society, many pressing issues, from the COVID-19 pandemic to climate change, which rely on widely accepted scientific facts, have become hotly debated, with some even denying their existence altogether. A crucial driver behind such false beliefs is the rampant spread of misinformation, which proliferates at an unprecedented rate, largely facilitated by the wide use of social media platforms.

In "Calling bullshit: The Art of Skepticism in a data-driven world" by Bergstrom et al. (2021), the authors shed light on the significant role played by social media in the dissemination of misinformation. They argue that as more individuals gain platforms to voice their opinions, the prevalence of false information increases, making more people prone to believe false information. This phenomenon mirrors the democratization of information seen with the typewriter, where access to information expanded but was accompanied by a surge in misinformation. Thus, while social media offers advantages, it also amplifies the drawbacks.

Methods to address the spread of misinformation largely fall into two broad categories. On the one hand, proposed solutions involve the complete prohibition of misinformation. However, for this approach to succeed, there must be trust in the entity responsible for enforcing such bans. Without trust, people may perceive the suppression of certain information as a manipulation of truth, potentially reinforcing belief in the misinformation and even elevating those spreading it to martyr status.

On the other hand, other solutions involve directly confronting misinformation by providing accurate information alongside a misleading post. Platforms like Twitter have Community Notes, like the ones shown in the picture below, that exemplify this strategy by allowing users to highlight misleading content and attach easily accessible fact-checking resources. This method leverages crowdsourcing, fostering trust in the correction process because it involves peer contributions. In a sense, it uses the same anonymous voices discussed in "Calling Bullshit" to make accurate information more prevalent than misinformation. However, this method is not without drawbacks. The process can be slow, since it requires enough people to gather, write, and then vote on a note, limiting its effectiveness to widely circulated posts; posts that are not popular enough will never be fact-checked. Moreover, the system is susceptible to manipulation by malicious actors who may exploit the voting mechanism to perpetuate misinformation or to spite people with opposing views.
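
For intuition about how such a voting mechanism might resist simple brigading, here is a toy sketch that only surfaces a note when raters from at least two different clusters of users support it. It captures the spirit of "bridging"-style aggregation, but it is not X's actual Community Notes algorithm, and the notes, clusters, and thresholds below are invented.

```python
# Toy vote aggregation: show a note only when raters from multiple
# (hypothetical) viewpoint clusters independently find it helpful.
from collections import defaultdict

# Each entry is (note_id, rater_cluster, found_helpful); all values are invented.
ratings = [
    ("note_1", "cluster_a", True),
    ("note_1", "cluster_b", True),
    ("note_1", "cluster_b", False),
    ("note_1", "cluster_b", True),
    ("note_2", "cluster_a", True),
    ("note_2", "cluster_a", True),
]

votes_by_note = defaultdict(lambda: defaultdict(list))
for note_id, cluster, helpful in ratings:
    votes_by_note[note_id][cluster].append(helpful)

for note_id, clusters in votes_by_note.items():
    # Require majority support in every cluster that rated the note,
    # and ratings from at least two distinct clusters.
    broad_support = len(clusters) >= 2 and all(
        sum(votes) / len(votes) > 0.5 for votes in clusters.values()
    )
    print(note_id, "show publicly" if broad_support else "keep gathering ratings")
```

X's published approach reportedly goes further, weighting raters by their past rating patterns; a hybrid model like the one proposed below could layer expert-drafted notes on top of this kind of aggregation.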

A more comprehensive solution may lie in a hybrid approach that combines elements of both strategies. By combining the authority of independent moderating entities, which would draft a list of candidate community notes, with the transparency and inclusivity of public voting, this model could potentially maximize the benefits of each method while mitigating their respective limitations. Such an approach could foster trust in the correction process while maintaining the integrity of the information ecosystem. Ultimately, combating misinformation requires a concerted effort that acknowledges the complexities of the digital age and seeks innovative solutions to safeguard the truth.

image

Daniela-miaut commented 4 months ago

risk

Though I am not sure whether everyone agrees with the assumption that democratic politics takes the public sphere as its basis, modern media, with its high-speed and personalized circulation of information (and misinformation) and its fundamentally entertaining character, is killing the public sphere in an unprecedentedly radical way, thus undermining the basis for democracy, if you believe the statement above as I do. This happens not only in the sense that information and media are used to manipulate people's opinions, as has long been discussed since the rise of mass media, but also in that personalized recommendation algorithms confine people in private spaces of their own, cutting off their access to the real lives and opinions of others.

Many philosophers have written about this phenomenon in the modern age, mostly in connection with mass media or totalitarianism. One work I have read that builds a connection between totalitarianism and contemporary forms of media is Today's Portugal: The Fear of Existence, written in Portuguese by the Portuguese philosopher José Gil (there seems to be no English version). Its main idea is that the people of Portugal have never left the shadow of totalitarianism. Before 1974 they lived under political dictatorship, and once the dictatorship was overturned, what came next was the totalitarianism of the post-truth era. What made this transition so awkwardly smooth was people's remaining in the comfort zone of living only their private lives, apathetic toward outside events, which they regard as uncontrollable and irrelevant. People give up on learning about the world, for they are encouraged to escape into a state of ignorant satisfaction with whatever the outside world feeds them. They seem busy living their everyday lives and seeking fun in their personal or family lives, but they are paralyzed when it comes to facing the public world.

I feel that this is what today's internet and algorithms are feeding us. It is as if we are each trapped in our own Truman Show. When there is a large enough amount of fake information, it will end up knitting an entire fake world for us (and every individual will live in their own fake world).

I am attaching the cover of the book I mentioned. Btw, I have never been to Portugal, so I would really like to hear your comments if you are from the country or are familiar with it.

image

gabrielmoos commented 4 months ago

risk #origin #framing #networksarefast

GameStop was the first thing that came to mind when I heard the phrase “collective behavior.” The real fear behind collective behavior is that someone will be able to harness its power and use it for malicious purposes. I am not here to claim whether the GME revolution was malicious; rather, I want to argue that the powerful are afraid of collective behavior because it provides a route for those with less money, power, and social capital to have a larger voice and impact. Further, because collective behavior has no true leader, this mode of interaction also results in a more equitable exchange of power after the interaction. I also argue that collective behavior and “irrationality” are effective tools against small groups of powerful individuals.

A brief reminder of the GameStop debacle: Melvin Capital and other hedge funds had shorted more than 100% of GME's float. Individuals on WallStreetBets (WSB), many of them fans of GME, noticed the potential for a short squeeze on the stock, and collective behavior via the social networks Reddit (mostly) and TikTok led to mass acquisition of the stock. In fact, a Harris Poll from 2021 showed that 28% of Americans had purchased a “meme stock.” We saw similar stories with AMC and BBBY; however, neither was as powerful as GME because they lacked the same degree of collective behavior. The entire financial and political worlds were shaken to their core because this was one of the first instances in American financial markets where the collective was more powerful than the oligarchs (used for hyperbole).

Whether it's posting about a stock on social media or the spread of fake news on X, these massive, fast-moving networks provide avenues for the collective to disrupt the status quo. With a society full of individuals who feel wronged by the system, collective behavior provides an avenue for the system to change. Bad actors can ultimately catalyze non-virtuous situations with misinformation and disinformation; however, the collective will only choose to follow these falsehoods if there is a bias to do so. The systems we live in today are ultimately more democratic than those of 40 years ago, yet our institutions are weaker than ever before, which begs the question: “Can we repair our institutions by empowering them with the validation of collective behavior?” JC