It's amazing to think about how prevalent issues of cybersecurity and social media have become in our day-to-day lives. Something that did not even exist a few generations ago is now a legitimate topic in a university class about whether humanity is doomed.
I remember genuinely finding humor in the fact that Donald Trump's Twitter account being deactivated was BIG news. But after a while it began to make me question the power of those running social media. In a country that is supposed to have free speech, the words of all kinds of people are frequently censored by those who simply disagree with what is being said. There is a real gray area when it comes to determining whether the censorship of people online is dangerous or beneficial.
Sometimes I find myself thinking that social media may have been one of the worst things ever introduced to mankind, and other days I don't think it's that bad. For all the bad it has done, it has also done a lot of good. But where is the line? When do the cons outweigh the pros? Social media is still fairly uncharted territory when it comes to the distribution of information. Sometimes when you open an app or webpage, the only thing protecting you from the spread of false information is your own common sense. I remember a day in elementary school when my class was taught the difference between reliable and unreliable sources on the internet. For some reason this is the one lesson that stands out most vividly from my elementary school career. Perhaps new generations of youth would greatly benefit from more in-depth sessions on how to interpret and use information online, and how to protect themselves against false information, especially considering how much of it now comes from social media. And if people could be taught not to abuse social media, then maybe there wouldn't be a need for online censorship in the future.
As a person who grew up in a world run by technology, I found this week's readings both fascinating and worrisome. In a digital era where information can spread around the world in seconds, it is more important than ever to emphasize reliability and truth, as well as the risks associated with cybersecurity threats and information warfare.
What struck me most from these readings were the accounts of Russian interference through the false information spread prior to the 2016 US election. One group responsible, the Internet Research Agency (IRA), has been linked to ‘trolls’ and fake accounts spreading lies not only in the US, but also in Ukraine and Russia (Chen). I did some research, and what I found about this agency was frightening. Essentially, the IRA (supposedly shut down in 2018 after a US grand jury indictment) was a Russian organization located in St. Petersburg that hired employees to create fake accounts and profiles on social media platforms and post about various topics, mostly political (Cipriani). Not only did they create fake pages for both the radical right and left, they also created fake news organizations, shared fake news stories, and even planned extremist protests. The employees had daily quotas for how many comments and political posts they had to create, and the ultimate goal of the IRA was to create chaos, distrust, polarization, and lies leading up to the election (Chen).
This research on the IRA and the 2016 election, combined with the readings for this week, made me think very differently about information warfare’s threat to our democracy, and also to the rest of the world. Addressing this issue is even more difficult: because the country is so divided, mobilization against the corruption of information is hampered by deep mistrust on both sides. Also, unlike climate change or nuclear war, whose effects can be felt physically (e.g., droughts from climate change or the obliteration following a nuclear bomb), the consequences of many cyber threats are difficult to feel, especially with political actors who have great interests in downplaying events like election interference. Further, this all happens behind a screen. In a world where everything feels so real online, it can be difficult to even imagine that certain profiles, political groups, or organizations can be secretly run by Russians who are paid to meddle in US politics. A possible solution lies in a complete re-framing of this issue from a non-partisan, unified standpoint in government. If this doesn't happen, distrust will continue in a vicious cycle: the more lies and distrust are perpetuated, the harder it will be to ever find the unity in the US needed to confront cybersecurity threats and take immediate actions, policy or otherwise, to combat information warfare.
Chen, Adrian. “The Agency.” The New York Times Magazine, https://www.nytimes.com/2015/06/07/magazine/the-agency.html?smid=url-share. Accessed 14 April 2021.
Cipriani, Casey. “Agents Of Chaos Charts Exactly What Went Wrong With The 2016 Election.” Bustle, https://www.bustle.com/entertainment/what-is-the-internet-research-agency-agents-of-chaos-explores-the-russian-trolls. Accessed 14 April 2021.
Hands, Phil. “Hands on Wisconsin: Political extremists are pawns of Putin.” Wisconsin State Journal, 21 May 2018, https://madison.com/wsj/opinion/cartoon/hands-on-wisconsin-political-extremists-are-pawns-of-putin/article_f26dffc8-a1fe-5504-bffd-17b22c3b8a6d.html.
For me, one of the more interesting attempted solutions to information warfare has been the introduction of fact-checkers and dispute banners on tweets and Facebook posts that contain misinformation. Though well-intentioned, I believe they have backfired: rather than coming off as neutral, nonpartisan referees, they tend to reinforce conspiracy theorists' belief that a cabal is dictating what can be said on the internet. In fact, following the 2020 election, having a warning label on a tweet almost appeared to be a badge of honor within some disinformation circles.
In his article, Lin mentions a hypothetical scenario in which the Cuban Missile Crisis occurs in the social media era, writing, “The shooting down of a U-2 spy plane over Cuba might […] soon [be] accompanied by numerous tweets and relentless commentary on Facebook and other social media platforms” (187). Now, imagine JFK posted a tweet containing misinformation and got fact-checked by Twitter. How would we have functioned as a nation?
I’m not entirely certain what the solution to this is, though. It seems as if confirmation bias will mislead individuals no matter what the truth is. That is, even if the fact-checkers were fact-checked (which is not to suggest that they need to be), I don’t believe it would have any tangible effect on the viewers who most need to see the truth. In 2019, Singapore passed the Protection from Online Falsehoods and Manipulation Act, which criminalizes fake news. While in practice this may seem like a much more thorough way of combating misinformation than simple fact checks and warning labels, it does allow for the possibility of manipulation, as information is deemed ‘true’ or ‘false’ by the government. While this could work in theory with a ‘good’ government, if the wrong people get into power, such a law could easily be exploited for political and social gain.
One of the things that concerns me about our own cyberspace is less the foreign threat and more the domestic, homegrown one. Sure, foreign governments hacking into our infrastructure can be damaging, but what does that matter when another enemy is at our doorstep, literally? A lot of people have started to take misinformation more seriously, especially when it comes to platforms like Facebook and the misinformation spread there, but I think we also need to be concerned with the younger generation. Platforms like iFunny and 4chan have created alt-right breeding grounds, where concerning messaging is slipped in between funny memes. As memes become a larger and larger part of our culture, the risk only grows as well. I worry that while all eyes are on Facebook, an increasing amount of alt-right messaging is being slipped into unsuspecting minds. Starting with dog whistles and graduating to "edgy humor," these spaces promise young people a community where they believe themselves to be liberated, something many of them don't have elsewhere. I think this goes hand in hand with combating other forms of media disinformation, and it's really insidious because memes are so simple yet can convey much deeper meaning than what is visually present. Bea mentioned the Bad News game, which I think helps connect with this specific demographic by gamifying a critical lens through which to view media.
Consider the following quote from Lin's At The Nexus of Cybersecurity and Public Policy:
"In an environment of many competing priorities, reactive policy making is often the outcome. Support for efforts to prevent a disaster that has not yet occurred is typically less than support for efforts to respond to a disaster that has already occurred" (4)
The extensions of this axioms beyond matters of Cybersecurity are clear and significant. I'll proceed by raising a point / case study concerning each of the concepts highlighted in bold above.
First, consider the tendency of governments/institutions to pursue reactive (as opposed to calculated) courses of action in times of crisis or attack. It's no secret that concerns about cybersecurity produce reactive behavior: see the chaos surrounding the integrity of this year's presidential election and the almost instantaneous lawsuits filed by the sitting president against his own states. But the misinformation and cybersecurity crisis upon us, at least with regard to data privacy, can perhaps be traced back to this reactive phenomenon. In particular, the passage of the Patriot Act shortly after 9/11 was extremely bipartisan by today's standards (source), and it greatly expanded the surveillance powers of agencies like the NSA. This spearheaded the massive public and private data collection campaigns that helped create the medium for today's cyberspace.
The second point: the tendency of institutions or states to respond after crises occur rather than beefing up their preventative measures beforehand. The obvious candidate example here is COVID-19 in the U.S. and the world. You may have your own opinions about how different countries have handled the pandemic, but the pre-pandemic consensus on countries' preparedness was... misinformed (see image below). Regardless of the world's response to the pandemic, there is one objective truth: the U.S. pandemic response team was trimmed/disbanded in 2018 (source). This is in line with the idea that we react rather than prepare. Of course, the misinformation campaigns surrounding COVID-19, which may have been exacerbated by the world's lack of preparedness, added another layer of complexity to the problem. We'll see if we ever learn!
Many above have commented on the threat of cybersecurity in the context of the spread of mis- or disinformation. I want to take that a step further and explore the massive potential risk that exists at the intersection of cybersecurity, politics, the normalization of disinformation, and big-tech carelessness.
In July of 2020, Elon Musk's verified account posted a tweet promising that any Bitcoin sent to a linked address would be sent back doubled. Though the world later found out that the tweet was not posted by Musk himself, it was not out of character for the eccentric billionaire, who is known to be a very vocal supporter of the adoption and usage of Bitcoin. As a result, Twitter users began to send Bitcoin to an untraceable wallet, hoping their money would double. This was the work of some very talented hackers, who, over the course of the next few minutes, posted similar messages from the official, verified accounts of Apple, Barack Obama, Joe Biden, Mike Bloomberg, Bill Gates, Warren Buffett, and more. In the short window before Twitter identified the problem and removed the tweets in question from the platform, more than a hundred thousand dollars in Bitcoin poured into the hackers' wallets.
Major US-based news outlets that reported on the hack tended to examine it through the lens of humor or entertainment. After a few hours in the spotlight, the turbulent events of 2020 caught our focus again; and after a week or two, we collectively forgot about the Twitter hack and moved on. In doing so, we failed to sit down as a nation and address the fact that, in the hands of hackers with more malicious intentions, a hack like this poses major risks to global politics (and potentially, the future of humanity). At this point, we know well that tens of millions of Americans watch the Twitter accounts of certain political figures like hawks, waiting for the next instruction or statement as if it were coming from a divine, all-knowing being. Ask yourself: what would have happened if Donald Trump's account suddenly tweeted "Bye bye Rocket Man! WE WIN- WE ARE LAUNCHING FIRST"? It takes minutes to identify and resolve these hacks, and in those minutes, I think there is a strong argument to be made that Americans could be wiped off the face of the planet.
This week's readings do an excellent job of demonstrating the growing threat of cyber attacks, a threat that seems to be constantly worsening in ways we cannot tame. As technology improves, the cost to firms of defending against cyber attacks increases massively, causing many firms, like Twitter (and famously, Facebook), to ignore warning signs in the hope that the unlikely won't happen. Case in point: very recently, it was discovered that Facebook had "accidentally" compromised the phone numbers of more than 500 million users. What's worse, the company was aware of the problem as early as 2017 after complaints from users, and chose not to take action.
Ignoring an issue that isn't bad yet may be the economically sensible thing to do if you're Jack Dorsey or Mark Zuckerberg... but a few million dollars spent fixing it now might literally save humanity. Can we trust that that will happen? Maybe not.
Sources:
https://www.vice.com/en/article/88awzp/facebook-says-its-your-fault-that-hackers-got-half-a-billion-user-phone-numbers?utm_source=reddit.com
https://www.forbes.com/sites/daveywinder/2019/11/19/google-confirms-android-camera-security-threat-hundreds-of-millions-of-users-affected/?sh=6b92f50f4f4e
https://www.theguardian.com/technology/2020/jul/15/twitter-elon-musk-joe-biden-hacked-bitcoin#:~:text=Twitter%20suffered%20a%20major%20security,Gates%2C%20Jeff%20Bezos%20and%20Apple.
https://economictimes.indiatimes.com/news/international/world-news/twitters-bitcoin-hack-signals-political-danger-too/articleshow/77000084.cms?from=mdr
Image Sources: https://www.theverge.com/2020/7/15/21326200/elon-musk-bill-gates-twitter-hack-bitcoin-scam-compromised
Despite the fact that the proliferation of misinformation and disinformation in cyberspace has challenged the legacies of the Enlightenment, namely reason and reality (Lin 2019), it cannot be denied that the rise of the Internet – especially the prevalence of social media – is the most significant phenomenon since the Gutenberg Bible (as I write this, the Google homepage is commemorating Gutenberg). The Gutenberg printing press and the later development of print-capitalism democratized the spread and acquisition of knowledge and information. New forms of social media on the Internet, and the consequent digitally networked public sphere, have democratized the production of information. I think this kind of democratization should also be regarded as a part, or a consequence, of Enlightenment values.
Lin and Kerr (2018) focus mainly on information warfare and influence operations (IWIO). Their arguments rest on the assumption that IWIO is wielded intentionally by adversaries. This is often the case, considering Russian interference in the 2016 and 2020 elections and the influence operations of the Chinese authorities since the pandemic. However, the manufacture of misinformation and disinformation is not monopolized by “a few centralized choke points,” in Tufekci’s words (2017) – that hypothesis contradicts the very nature of the digitally networked public sphere: openness and freedom. In fact, as the new Netflix documentary Q: Into the Storm implies, the manufacturing process can be conducted collectively by citizens. It thus remains in the societal domain, not in the domain of national security in the conventional sense. Therefore, I argue that the majority of the responsibility for restraining and removing disinformation and misinformation falls upon the private sector, including internet providers such as AT&T and social media companies. It is imperative to establish regulations and gatekeepers for new forms of social media and to compel the giant social media companies to cope with the problem. These companies are becoming quasi-sovereigns, not merely private corporations, with almost unlimited power within their own realms – Facebook, for example, can unilaterally and arbitrarily change its terms of service and market-making algorithms, just like an absolutist monarchy without democratic participation, civic consensus, or constitutional limitation (please refer to the video of Sacha Baron Cohen’s speech). The potential for conflict between democratic states and social media is salient in this regard.
In the Bytes, Bombs, and Spies reading, Herbert Lin details the high degree of classification surrounding nearly every aspect of U.S. cybersecurity capabilities, and how this veil hampers the development of cyber policy by scholars and politicians alike. For this reason, and given the limited specific discussion of the United States’ cyber efforts throughout our four assigned readings, I will focus on the role and relationship of the private sector to international cyber operations. Specifically, as posited in the Nexus of Cybersecurity reading, I agree with the assertion that too many decision makers focus on the short-term costs of improving their own organizational cybersecurity at the expense of their long-run security.
Though many industry analysts are of the opinion that the 8.4% compound annual growth rate (CAGR) projected for the U.S. cybersecurity industry from 2020 through 2023 is aggressive, growing the industry to $76 billion by 2023 (see chart below), I am of a different opinion. I believe the U.S. cybersecurity industry should grow at a mid-teens CAGR in order to reconcile the fact that cybercrime costs businesses $400 billion per year globally, per the British insurance company Lloyd’s. If we make the modest assumption that a third of this cybercrime occurs in the United States annually, U.S. corporate cybercrime stands at ~$133 billion per annum, about twice the current size of the U.S. cybersecurity industry! This is the intellectual analog of saying that more money is robbed from banks each year than is cumulatively paid to the security guards at those banks: a ludicrous proposition. As an aside, $40 million is stolen from U.S. banks annually, while there are 83,000 banks employing on average one security guard at $30,000 per year; 83,000 x $30,000 = ~$2.5 billion, which means the market for physical bank security is roughly 65x the expected damages per annum.
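As a sanity check, here is a quick back-of-the-envelope version of the arithmetic above in Python; every input is one of the memo's stated assumptions (the Lloyd's figure, the "a third in the US" guess, the 8.4% CAGR projection), not independently verified market data:

```python
# Back-of-the-envelope check of the figures cited above.
global_cybercrime = 400e9                 # Lloyd's estimate, $/yr worldwide
us_cybercrime = global_cybercrime / 3     # "modest assumption": a third in the US
print(f"US corporate cybercrime: ${us_cybercrime / 1e9:.0f}B/yr")        # ~$133B

industry_2023 = 76e9                      # projected US cybersecurity industry
cagr = 0.084
industry_2020 = industry_2023 / (1 + cagr) ** 3                          # ~$60B
print(f"Crime vs. industry size: {us_cybercrime / industry_2020:.1f}x")  # ~2.2x

# Bank-guard analogy: physical-security spend vs. expected losses.
guard_spend = 83_000 * 30_000             # one $30k guard per bank
annual_theft = 40e6
print(f"Guard spend vs. theft: {guard_spend / annual_theft:.0f}x")       # ~62x, i.e., the ~65x above
```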
By no means am I attempting to suggest that 65x more money should be spent on cybersecurity than is siphoned away from corporations in attacks. That type of ratio is not feasible given the minimal barriers to hacking (it can be done from anywhere, with reasonable chances of success conditional on hacking ability and decent hardware). Instead, I am merely suggesting that corporations are behind in their technological adoption: executives need to look themselves in the mirror and ask whether their IT budgets are adequately sized and whether enough is allocated to enterprise contracts with reputable cybersecurity companies like Palo Alto Networks in network security, Cloudflare in web security, CrowdStrike in endpoint security, IBM in analytics, Symantec in messaging security, etc. The big losers going forward will continue to be the mid-sized companies that cannot afford these types of cybersecurity solutions. Those attacks will never make the headlines because they aren't as big as the occasional large-cap company hack, though they are probably both an order of magnitude more frequent and an order of magnitude less sophisticated.
In Lin’s article in the Bulletin, he explains that it does not take much misinformation before we no longer trust the very platforms that deliver us information. Unfortunately, it seems we have already reached that point. Unsurprisingly, in the world of pessimistic and untrusting cryptographers, this problem is quite familiar. We use certificates to verify that a webpage we are visiting is in fact the place we want to go: Google obtains a certificate certifying that google.com is indeed Google. This used to involve a physical interaction, where representatives of the certificate authority would visit Google; now the process can be more automated, through ideas similar to two-factor authentication. Once the certificate authority is sure Google is who they say they are, it signs Google’s certificate and attaches its own certificate to Google’s. But why should I believe the certificate authority? Who has signed their certificate? And so starts a chain of certificates that get signed over and over again. Making a system resilient to misinformation is difficult for that exact reason: if I don’t trust anyone, then I can only trust what I want to believe.

As Lin points out, modern social media platforms are designed to exploit System 1 thinking and thrive on delivering information that conforms to our biases, obviously exacerbating the spread of misinformation. But before we dump all the blame on the "algorithms," perhaps it is worth examining why we pay attention to misinformation in the first place. When did we lose trust in each other as a society, and why? Perhaps even more pressing: why do so many people want to believe in an alternate reality? The task of fact-checking the entire internet, or building a chain of verification, seems almost impossible. Maybe the key to building more resistant information spheres is addressing people’s needs so that they don’t seek alternate realities.
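To make the chain-of-trust idea above concrete, here is a toy sketch of how chain verification bottoms out in a pre-trusted root. All of the names are hypothetical, and real TLS verification additionally involves asymmetric signatures, expiry dates, and revocation checks:

```python
# Toy model of certificate-chain verification (greatly simplified).
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str        # who this certificate identifies
    issuer: str         # who signed it
    signature_ok: bool  # stand-in for actually verifying the signature

TRUSTED_ROOTS = {"ExampleRootCA"}  # trust anchors we accept *in advance*

def verify_chain(chain):
    """Walk leaf -> intermediates -> root; trust the leaf only if every
    link checks out and the chain ends at a pre-trusted root."""
    for cert, parent in zip(chain, chain[1:]):
        if not cert.signature_ok or cert.issuer != parent.subject:
            return False
    root = chain[-1]
    # Without this prior trust decision, signatures alone prove nothing:
    # someone has to be believed without a signature from anyone else.
    return root.signature_ok and root.subject in TRUSTED_ROOTS

chain = [
    Cert("google.com", "ExampleIntermediateCA", True),
    Cert("ExampleIntermediateCA", "ExampleRootCA", True),
    Cert("ExampleRootCA", "ExampleRootCA", True),  # self-signed root
]
print(verify_chain(chain))  # True only because we chose to trust the root
```

The sketch is Lin's point in miniature: verification has to terminate in something taken on trust, and when that shared anchor is gone, no amount of signing convinces anyone.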
The reading by Herbert Lin and Jaclyn Kerr discusses information warfare and influence operations (IWIO) as weapons (in the broad sense) that are used by one country against another, and to which liberal democracies are particularly vulnerable. While many of the issues raised in the piece are having destabilizing effects right now in countries such as the United States and Ukraine, one of the most terrifying current examples of the use of IWIO comes from Myanmar, where the military has been using IWIO against its own people, with devastating consequences.
For some context: in the early 2000s, citizens of Myanmar lived under a largely restrictive government, and internet access was very low compared to other countries. When democratic reforms occurred in the early 2010s, Facebook entered into partnerships with the telecom companies in Myanmar. Facebook was installed on every cellphone sold in Myanmar, and it could be accessed without incurring any data charges. Because it was so much easier and cheaper to access Facebook than any other website, Facebook became the main source of information in the country.
Additionally, in the mid-2010s, violence began breaking out in certain states between the Buddhist majority and minority Rohingya Muslims. One of the main drivers of increased division and violence was inflammatory propaganda on social media. In 2017 and 2018, it was discovered that the military had created thousands of fake accounts on Facebook dedicated to things like sports, beauty, and pop culture. Once these accounts had amassed lots of followers, they started posting anti-Rohingya propaganda, and additional fake accounts with fewer followers would promote those posts. These posts contained stories about fake terrorist attacks committed by Muslims, as well as misinformation about the country’s democratic leadership. Over the past decade, amid the violence this propaganda helped encourage, over 900,000 Rohingya Muslims have had to flee Myanmar, and thousands of people were killed. Most recently, a military coup in February 2021 gave that same military complete control over the government.
In a connection to our reading, the New York Times reported that the Myanmar military copied misinformation techniques from past Russian uses of IWIO. What happened in Myanmar is an example of how dangerous it can be when governments, or certain parts of governments, use IWIO against their own citizens. Due to the omnipresence of Facebook, Myanmar was particularly vulnerable to misinformation. This highlights the importance of having multiple independent ways to receive information. Additionally, while government interventions to prevent foreign influence in things like elections can be important to maintaining democracy, there must be solutions that do not rely entirely on the home government regulating the media. In any country, there is always the possibility of a bad actor coming to power, and the power of a local government, coupled with the force of social media, can be a very powerful vehicle for misinformation.
Source 1: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
Source 2: https://www.bbc.com/news/world-asia-55929654
The image included is an example of some of the propaganda posted on Facebook. The photos claim to show evidence of conflict in Myanmar's Rakhine State in the 1940s, but the images are from Bangladesh's war for independence from Pakistan in 1971. (NYT)
Information on a mass scale is often misunderstood. The scope of the issue goes unnoticed by civilians and lawmakers alike. Policy is not designed to encourage preventative measures, and with other global catastrophes on the horizon, cybersecurity often flies under the radar even as it infiltrates the basis of our daily lives. In the ways we are trained to perceive threat and risk, we recognize immediate, short-term issues while ignoring what lies beyond, or pushing it away so we don't have to think about it. Policy leans on relieving what has already happened, and cyberspace, given how much crucial information is conveyed online, is a front practically begging to be exploited.

What I fail to comprehend is the extent to which we can assure any random civilian of the security of the information they consume and relay to others. If we depend on the safety and factual nature of the information that guides the decisions governing our lives, then I have to wonder how liable its integrity is to falter or be abused by hackers and the dark corners of cyberspace. Would we even know in time to properly salvage the data we've unknowingly provided to the darkest traders on the internet? Is society equipped to handle information leaks, misappropriation, or distrust on such a mass scale?

If the people are ever to trust the government and their fellow neighbors, there has to be some basis of trust and reliability to ground a civilized argument. Without that, there are no grounds to call anything truth rather than the biased opinion or uncertain statistic conjured by an individual. Collectivism in our society is drawn to the premise of a societally accepted response, a shared foundation of knowledge, and an understanding of who we are and where we come from. If information at its core is susceptible to tampering, and knowledge can be mishandled to deter that collectivist approach to accepting reality's truths, then hatred, extremism, prejudice, and ignorance will reign to an unprecedented extent and ferment for generations to come. For an issue so desperate to pull society apart group by group, we need to acknowledge its framing from a group perspective and cement the integrity of the information that defines us and the decisions that govern our lives.
I was initially somewhat surprised at the conflation of cybersecurity and information chaos. However, as the possibility of information chaos has slowly become more of a reality (the Capitol riot, widespread misinformation campaigns, etc.), it became clear to me how cybersecurity plays a pivotal role in alleviating these concerns. In recent history, I cannot think of any national cybersecurity threats that led to widespread, lasting chaos. Some students last week touched on the mistaken emergency warnings about the possibility of nuclear war. However, I feel as though information chaos has been shown to cause more "civil action."
Unlike the topics of previous weeks, "cybersecurity is a never-ending battle, and a permanently decisive solution to the problem will not be found in the foreseeable future." There is no end to the battle against information chaos; the goal may just be to reduce it as much as possible. How does this change the way we think about the existential issue of cyber? For global warming and nuclear war, there are imaginable futures in which nuclear disarmament and green energy take over. But a world without "cyber," or a connected internet? That already feels dystopian in some ways.
Herbert Lin writes about the coming information dystopia, and I believe we can already see it today in some respects. When major sources of information are deemed untrustworthy, individuals will seek out answers from anywhere else (distrust of NASA feeding flat-earth beliefs, for example). Even without deepfakes and other high-tech tools, misinformation has already proven to be very influential. The amount of information online increases far faster than the amount of information truly being learned each day. How do this never-ending dynamic and the reality of misinformation today change how we think about future solutions to information chaos?
I recently stumbled upon a game that I think provides a simple microcosm of information chaos, similar to some of the other "simulations"/games others have mentioned. In "We Become What We Behold," you work as a news photographer and can see how your photos impact the characters in the area.
Citizens originally get along well, but depending on what events you photograph, slowly become more and more aggravated with each other.
Until finally the citizens begin attacking each other.
From a more qualitative perspective, I think one of the biggest risks posed by the issue of cybersecurity globally is that it is tied to no nation or specific location. With the right technology, any group or individual can commit acts of cyberterrorism and shift the blame elsewhere, especially in the current global political climate. The real issue remains that anyone with a very strong knowledge of coding and security systems can, in theory, eventually break through the strongest of walls. I grappled with this issue in my question this week: how can we shorten the window for knowing the location or origin of an attack? If a country launches a missile or mobilizes its military, it is clearly evident that this was a move by its government, and that a decision was made to "attack." However, if it hires an independent third party, or simply places the blame elsewhere, there is no immediate proof to the contrary. We currently need a system, and oversight, that not only attributes cyberattacks to the proper party, but ensures that retaliation/punishment is justified and falls upon the correct party. Otherwise, we can end up in a situation with a lot of finger-pointing and conflicting storylines, and in the worst case, a cyberwar or global war over whatever situation arises. Additionally, it is far too hard to tell when a government has been antagonized to the point of retaliation through its computer systems, and this would never be very evident to the public. There is a deeper secrecy to hacking and cyberwarfare that leaves the issue largely hidden from the public eye, especially in situations where an attack is attributed to a group that did not commit it. I think if we are able to parse out the "where" and shorten the window for acquiring this information, we will be much better off in the future.
In Herbert Lin’s article, “The existential threat from cyber-enabled information warfare,” he notes that it would be interesting to see how the current state of social media would have affected events of the past. To demonstrate this idea, Lin gives the example of the Cuban Missile Crisis. Lin notes that during the Cuban Missile Crisis, the world was nowhere near as interconnected as it is today. Information that would once have been protected, and might have taken days to reach world leaders, would now be posted on Twitter within an hour for everyone to see. Given the current global information ecosystem, many experts consider it a distinct possibility that something like the Cuban Missile Crisis would have been intensified, possibly to the extent of an all-out nuclear exchange.
Given how impactful experts believe information can be in our digital world, the idea of “fake news,” or misinformation posted online, is extremely important. Since social media’s rise in popularity, false information has become a fixture of these platforms, which poses a large problem for the integrity of information on those platforms as well as for global security. First, it has been shown that false rumors tend to spread faster and wider than true information: researchers from MIT found that falsehoods were 70% more likely to be retweeted than facts, and that they “reach their first 1,500 people six times faster.” Moreover, this effect is exacerbated for political news as opposed to other categories. Another aspect of social media and fake news that poses a threat to global security is the fact that some misinformation is spread by politicians. One of the most interesting findings here is that researchers at MIT Sloan have found that some people appreciate candidates who tell lies (as odd as it sounds), even seeing such candidates as more “authentic.” This was seen during Trump’s presidency, and it does not take much imagination to see how a controversial fake news post by a powerful politician could strain global relations. Perhaps the most frightening use of fake news over social media is when foreign countries use these platforms to influence elections through falsehoods: a prime example was the Russian government during the 2016 election, which used Facebook, Instagram, and Twitter to spread false information.
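As a rough illustration of why a per-share advantage compounds so dramatically, consider the toy branching model below. The parameters are invented for illustration; they are not the MIT study's data, which measured real retweet cascades:

```python
# Toy branching model: each newly exposed person reshares to an
# average of r others, so reach grows geometrically in r.
def generations_to_reach(target, r):
    """Sharing 'generations' needed to expose `target` people (r > 1)."""
    reached, frontier, gens = 1.0, 1.0, 0
    while reached < target:
        frontier *= r    # each frontier member recruits r new readers
        reached += frontier
        gens += 1
    return gens

r_true = 1.2              # assumed average reshares for true news
r_false = r_true * 1.7    # "70% more likely to be retweeted"
print(generations_to_reach(1500, r_true))   # ~31 generations
print(generations_to_reach(1500, r_false))  # ~10 generations
```

Even with made-up numbers, a 70% edge per share cuts the number of sharing rounds needed to reach the first 1,500 readers by roughly a factor of three; on a real network, with real feedback loops, the measured speed gap was six times.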
Based on all of the aforementioned information, it is quite clear how fake news truly poses an existential threat: all it takes is one detrimental fake news post from the social media community, a manipulative government, a politician seeking votes, etc., and, due to the speedy dissemination of fake news on social media platforms, it could become extremely popular and strain global relations. This strain could have vast socioeconomic and political consequences, and, on the extreme end, the all-out nuclear exchange mentioned by the experts in Lin’s article could become a reality.
Image Sources: https://www.statista.com/topics/3251/fake-news/
Other Sources:
https://www.nytimes.com/2018/02/16/us/politics/russians-indicted-mueller-election-interference.html
https://mitsloan.mit.edu/ideas-made-to-matter/mit-sloan-research-about-social-media-misinformation-and-elections
This week's readings pointed out many ways in which cyber threats and misinformation could dramatically increase the risk of international conflict and damage our information infrastructure. What was particularly alarming was the way states seemed to view cyber warfare. When compared to a conventional military conflict, a cyber attack poses far less risk of escalating into a nuclear war.
Because of the danger of nuclear escalation associated with conventional military conflicts, nuclear powers around the world have been forced to go to great lengths to avoid direct engagement. The existence of varying forms of mutually assured destruction has severely limited the ways in which nuclear powers can compete with each other, forcing many to resort to proxy wars as the most extreme way to engage in any military conflict. This danger inherent in any conventional military engagement has greatly reduced the overall number of military conflicts between great powers over the last century.
However, many states seem to view cyber conflict as a new avenue for engaging their international adversaries, one with a significantly lower risk of escalating into a nuclear conflict. Because cyberwarfare is viewed as safer, nuclear powers now have a way to engage in the direct conflict they previously avoided without immediately triggering some version of mutually assured destruction. Although a cyber attack presents a much lower risk of provoking a nuclear response than a conventional attack, because nuclear powers simply weren’t engaging in risky conventional attacks against each other before, cyberwarfare could still significantly increase the risk of nuclear conflict.
While cyberwarfare is less risky for nuclear powers than conventional warfare, it is far more risky than no direct conflict. Cyber attacks also still have a significant possibility of triggering a nuclear response, either purposefully or accidentally. As pointed out in our readings this week, a state could easily do far more damage than originally intended with a cyber attack. A state’s adversary in a cyber conflict could also easily misinterpret the actions or intentions of their opponent. Both of these situations present a real chance of provoking nuclear war between nations. The use of cyberwarfare allows nuclear powers to take one step closer towards a full nuclear conflict without being the one to purposefully trigger the conflict.
I think the most prominent example of information chaos comes from the Internet Research Agency’s misinformation campaign during the 2016 US presidential election. Not only is our cybersecurity at risk, but so is the security of our informational reality. By that I mean the Internet Research Agency, or IRA for short, created fake websites and profiles to produce fake information that would confuse and radicalize American voters. This group of black-hat hackers, supported by the Russian government, would pose as Americans giving their opinions on elections at the national, state, and local levels. You might think they were only trying to get people to support Trump and let him destroy the country with his own stupidity, but that was only half the battle. These fake profiles created informational chaos that undermined citizens’ trust in each other. The Kremlin realized that if it could sow deep-seated mistrust between sides of the political aisle, it could tear the country apart without ever firing a bullet. By inflaming arguments, interjecting nonsense, and polarizing the American body politic, the Russian hackers at the IRA insidiously damaged our country.

Not only did the IRA exploit the minds of voters, it exploited the algorithms themselves. They legally purchased data from Facebook, Twitter, and Reddit and used people’s personal information to polarize their viewpoints. Through radicalized fake pages on both the left and the right, voters interacting with those pages would see more similar content in their news feeds. If the IRA could make enough of these pages, the entire online ecosystem would become a mirage: all the content we see shaped into a bubble, molded by our own preferences and interactions. Through the manipulation of human psychology and social algorithms, the IRA was able to use information chaos to dilute the truth and erode the fabric of American democracy. The image below lists some of the fake pages the IRA used to manipulate people.
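As a minimal sketch of the feedback loop described above, here is a toy engagement-driven feed ranker. It is entirely hypothetical, not any platform's actual algorithm, but it shows how a handful of interactions can tilt what a user sees next:

```python
# Toy engagement-driven feed: topics the user engages with get boosted,
# so similar content rises to the top and the bubble reinforces itself.
from collections import defaultdict

affinity = defaultdict(float)  # the user's learned topic affinities

def record_engagement(topic, weight=1.0):
    affinity[topic] += weight

def rank_feed(candidates):
    """Order (topic, post) pairs by accumulated affinity for the topic."""
    return [post for topic, post in
            sorted(candidates, key=lambda c: affinity[c[0]], reverse=True)]

record_engagement("partisan-outrage")  # two clicks on an IRA-style page...
record_engagement("partisan-outrage")
feed = rank_feed([("sports", "game recap"),
                  ("partisan-outrage", "inflammatory meme"),
                  ("local-news", "city council vote")])
print(feed[0])  # ...and the inflammatory content now sits on top
```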
Leave below as comments your memos that grapple with the topic of cyber inspired by the readings, movies & novels (at least one per quarter), your research, experiences, and imagination! Also add a thumbs up to the 5 memos you find most awesome, challenging, and discussion-worthy!
Recall the following instructions: Memos: Every week students will post one memo in response to the readings and associated topic. The memo should be 300–500 words + 1 visual element (e.g., figure, image, hand-drawn picture, art, etc. that complements or is suggestive of your argument). The memo should be tagged with one or more of the following:
#origin: How did we get here? Reflection on the historical, technological, political and other origins of this existential crisis that help us better understand and place it in context.
#risk: Qualitative and quantitative analysis of the risk associated with this challenge. This risk analysis could be locally in a particular place and time, or globally over a much longer period, in isolation or in relation to other existential challenges (e.g., the environmental devastation that follows nuclear fallout).
#policy: What individual and collective actions or policies could be (or have been) undertaken to avert the existential risk associated with this challenge? These could include a brief examination and evaluation of a historical context and policy (e.g., quarantining and plague), a comparison of existing policy options (e.g., cost-benefit analysis, ethical contrast), or design of a novel policy solution.
#solutions: Suggestions of what (else) might be done. These could be personal, technical, social, artistic, or anything that might reduce existential risk.
#framing: What are competing framings of this existential challenge? Are there any novel framings that could allow us to think about the challenge differently; that would make it more salient? How do different ethical, religious, political and other positions frame this challenge and its consequences (e.g., “End of the Times”).
#salience: Why is it hard to think and talk about or ultimately mobilize around this existential challenge? Are there agencies in society with an interest in downplaying the risks associated with this challenge? Are there ideologies that are inconsistent with this risk that make it hard to recognize or feel responsible for?
#nuclear/#climate/#bio/#cyber/#emerging: Partial list of topics of focus.
Movie/novel memo: Each week there will be a selection of films and novels. For one session over the course of the quarter, at their discretion, students will post a memo that reflects on a film or fictional rendering of an existential challenge. This should be tagged with:
#movie / #novel: How did the film/novel represent the existential challenge? What did this highlight; what did it ignore? How realistic was the risk? How salient (or insignificant) did it make the challenge for you? For others (e.g., from reviews, box office/retail receipts, or contemporary commentary)?