deholz opened 3 years ago
The nice thing about cyber warfare is that unlike climate change... cyber can be funny.
Tumblr user "lagonegirl" was at the head of a Russian psyop. This is the sort of thing they were posting. What agenda could this possibly advance?
(The honest answer might be "well, they made bog-standard tumblr posts to build a following for doing Russian psyop stuff, like pretending to be a BLM activist who believes the world needs a race war". That's the less funny side of what lagonegirl was up to. Even still... thanks, Russia, for making the worst social media site just slightly worse.)
In this week’s readings, we examined the possibility that information chaos and cyber threats could result in a doomsday, civilization-ending event. In this response, I will first underscore the risk of attacks by examining the impact of previous ones. Next, I will reframe cyber as more likely a secondary agitator, compounding the threats of nuclear war, climate change, and political misinformation, rather than a world-ending catastrophe on its own. Finally, with this new framing in mind, I will show how cyber has already had major impacts on the United States.
As mentioned in Herbert Lin and Amy Zegart’s Bytes, Bombs, and Spies, there has already been a series of attacks in the last few years. In 2012, Iran destroyed 30,000 Saudi computers. In 2015, China stole millions of records from the Office of Personnel Management. And in 2017, North Korea launched a global ransomware attack and Russia attacked Ukraine’s infrastructure. Collectively, these attacks showed the potential for cyber attacks to bring about millions of dollars in damages, steal personal information, and threaten critical infrastructure that sustains life in foreign countries. These aggressive acts and the impending cyber threats led to the “elevation of U.S. Cyber Command from unified subcommand under U.S. Strategic Command to a full unified combatant command” (Lin and Zegart). This response by the United States military underscores how seriously it treats the threat of a cyber attack. Overall, there is a significant risk to our world that could potentially end civilization.
However, upon reading the articles, I would argue that cyber is far more likely a threat in that it compounds the risks and severity of several other major doomsday scenarios, most prominently by increasing the threat of nuclear war, weakening the response to climate change, and propagating political misinformation. Concerning cyber’s impact on nuclear war and climate change, Herbert Lin, in “The Existential Threat from Cyber-Enabled Information Warfare,” states that “Nuclear war and climate change are arguably the most important existential challenges today that are compounded by the corruption of the information ecosystem” (Lin). Lin explicitly mentions how the Cuban missile crisis might have resulted in a nuclear war had it happened during the information age (Lin). Governments can also spearhead efforts to deny climate change through information warfare inflicted on the people, which energizes extremist views and exacerbates harmful debates.
One impact of cyber that Lin did not discuss in as great detail is the potential for a foreign power to spread misinformation to change the outcomes of political elections. For us in 2021, the most infamous recent case was Russia’s attempt to tilt the 2016 election in favor of Donald Trump over Hillary Clinton. Russian bots took to social media, blogs, and internet forums to stoke conspiracies against Clinton. While it is impossible to know what would have happened without Russia’s involvement, this action by Russia helps reframe the way we see cybersecurity. To deeply impact the United States government at its highest office, Russia did not need to attack our infrastructure or organize cyber attacks against polling machines; cyber is now more subtle. In the picture linked below, we see an exchange between Putin and Trump at the G-20 conference, a reminder of the very real impacts of cyber that we have already felt. Cyberwarfare may well amplify a different threat, in this case political, rather than be a doomsday event on its own.
Source: NBC News
Unlike with nuclear Armageddon and climate change, patrons and actors in the cybersecurity sector focus not only on solutions to short-term problems but, more importantly, on unforeseeable problems in the future. This emphasis on long-term problem solving is something I wish we would see in the other topics of this class, especially when it comes to climate change and the attitudes of United States politicians. I believe that this emphasis on long-run problem solving comes from a certain greed that major corporations and companies have. Many wealthy people wish to have their assets safe and secure from hackers, terrorists, and criminals, and this is the root of cybersecurity problem solving (protection of property and self). People want to invest in something that is going to protect their assets and themselves, both now and in the future. I think that this is most apparent when looking at the AI and machine learning technologies being used to develop new security protocols and counterattack software.
AI-based defenses use machine learning to learn from their mistakes and actions. When someone tries to breach a network protected by AI cybersecurity, the AI runs continuously, observing the system’s defenses and the attacker’s actions, and creates new countermeasures from its findings. This yields faster detection and amounts to a form of behavioral analytics. The counterargument I see to using AI in cybersecurity is that hackers and criminals can use it in attacks too. AI can be used to evade system and network detection, hide in programs where system checks can’t find it, and automatically adapt to a system’s defenses. This can lead to huge breaches of confidential material, military arms, and people’s privacy, which could be catastrophic for our nation and the world. If a program can automatically learn from its mistakes, it will try to break into a system until it eventually succeeds or is shut out. Imagine if a terrorist group used AI to hack United States military weapon systems. It would be world changing.
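The behavioral-analytics idea mentioned above can be illustrated with a toy sketch: learn a statistical baseline of normal activity, then flag deviations from it. The readings don't specify any algorithm, so the event counts, the z-score test, and the threshold here are purely illustrative assumptions; real systems use far richer features and models.

```python
from statistics import mean, stdev

def build_baseline(event_counts):
    """Learn a crude behavioral baseline: mean and spread of
    per-hour event counts observed during normal operation."""
    return mean(event_counts), stdev(event_counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag an observation deviating from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Hypothetical normal traffic: roughly 100 login attempts per hour.
normal = [97, 103, 99, 101, 105, 95, 100, 102]
baseline = build_baseline(normal)

print(is_anomalous(101, baseline))   # a typical hour -> False
print(is_anomalous(450, baseline))   # a burst consistent with brute-forcing -> True
```

The "learning" here is trivial, but the loop is the same one the paragraph describes: observe normal behavior, update the model, and flag what the model cannot explain.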
I think the biggest problem with cybersecurity is the way it will evolve with AI technology. But the best part is that, just like the AI it uses, it is constantly evolving and creating countermeasures for foreseeable and unforeseeable problems alike. If we could apply this kind of thinking to the other topics of this course, we might find ourselves in a more promising future with less worry about a human-made end of the world.
Sources:
https://www.techrepublic.com/article/3-ways-criminals-use-artificial-intelligence-in-cybersecurity-attacks/
https://www.balbix.com/insights/artificial-intelligence-in-cybersecurity/
GameStop, AMC, Bitcoin: since March of 2020, the US markets have been in an unprecedented state of flux. The boom/bust cycle of the global economy proved ever persistent, as enormous losses and enormous gains occurred within very tight timeframes, dependent on natural economic factors and social forces such as vaccine development, stimulus, racial strife, and the presidential transition.
However, a different type of growth has been observed often as of late: speculative investing. Stocks such as AMC and GME experienced unheard-of gains and volatility thanks to short-squeeze positions enabled by democratized forums of investors. Cryptocurrencies such as bitcoin and dogecoin experienced similar boosts, with bitcoin currently, as this is being written, at an all-time high of $63,407. Many of these trends have been fueled by average people with little to no investing experience or knowledge. Enabled by an all-time high in informational access (Wall Street Bets, reddit, Robinhood, twitter) and accompanied by trillions of dollars in stimulus payments, the past year has shown us, over and over again, that many Americans are not afraid of risk when betting heavily on volatile trades.
Interestingly, some financial institutions have been using informational access and speculative craze to their advantage. "Astroturfing" refers to the "deceptive practice of presenting an orchestrated marketing or public relations campaign in the guise of unsolicited comments from members of the public." Big banks have recently come under fire for using platforms such as Wall Street Bets and other forums to their advantage, building computer bots that generate thousands of spam messages, under the guise of real people taking personal stances, to flood these forums and encourage individuals to take positions that would end up being profitable to the bank itself. Such practices raise questions that are not as internationally and politically pressing as those covered in this week's readings, but important nonetheless; in the digital realm beneath cyber-warfare itself, there clearly exists tremendous capacity for institutions to manipulate communication networks to their own benefit. Though such networks may not be "weaponized" in the sense of cutting off an electrical grid or attacking a secure government server, perhaps the question of regulating our cyber-resources to maintain "security" should begin with a fresh look at our very own private-sector domestic interests.
You just got your COVID-19 vaccine. Next up, inoculation against misinformation.
IWIO conflict and cyber warfare appear to be existential threats to democracy and humanity as a whole. Of particular concern is our innate tendency to fall victim to ‘fake news’ and our failure to discern between fiction, half-truths, and the truth. Cognitive biases such as the availability heuristic, confirmation bias, fluency bias, loss-aversion bias, optimism bias, illusory truth bias, and recency bias can lead otherwise rational people to resist any information that contradicts their prior beliefs. Additionally, viral misinformation is more likely to stick with individuals and be considered true, even after being debunked. In an information environment dominated by social media (which democratizes publishing capabilities and increases the circulation of misinformation) and ‘bad actors’ (both foreign and domestic), these biases are particularly dangerous.

However, there may be methods to protect against the proliferation of misinformation online and the influence of IWIO. According to On Cyber-Enabled Information Warfare and Influence Operations, “research on the psychology of communications suggests that people can be ‘inoculated’ against fake news.” This approach, called prebunking (i.e., preemptive debunking), is based in the theory of biological immunization. By preemptively exposing individuals to examples of misinformation and common manipulation techniques, researchers hope to build resistance against future fake news and improve ‘immunity’ against IWIO. These proactive interventions or ‘psychological vaccines’ are intended to encourage individuals to become more critical of information and to carefully assess manipulative claims. The hope is that rather than use their System 1 thinking (“an intuitive, reflexive, and emotionally driven mode of thought” prone to error and bias), inoculated individuals will use their System 2 thinking (“a slower, more deliberate, analytical mode of thought”).
The Cambridge Social Decision-Making Lab, in collaboration with DROG, Gusmanson Design, the Department of Homeland Security, and the U.K. Cabinet Office, has already developed three prebunking games. Each game (Bad News, Harmony Square, and Go Viral!) teaches players common misinformation and manipulation techniques. In Bad News, for example, you become editor-in-chief of a fake news site and use twitter bots and conspiracy theories to gain followers and ‘credibility’. Throughout the game, players earn a badge for each fake news technique mastered: emotion, discredit, polarization, conspiracy, and trolling. Bad News has been translated into 19 languages and played more than a million times around the globe. Research suggests that playing Bad News significantly improves individuals’ ability to spot and resist manipulative social media content. However, like many biological vaccines, the benefits of psychological inoculation decay over time. Importantly, little headway has been made with respect to reaching the subpopulations who would benefit most from playing. Nonetheless, prebunking doesn’t need to be restricted to games. During the most recent U.S. election, Twitter applied prebunking to warn users against election misinformation. One such message read “Election experts confirm that voting by mail is safe and secure, even with an increase in mail-in ballots.” Regardless of the tactic or technique used, proactive prebunking seems valuable if we seek to avoid a total “Informational Dark Age”.
Links to play the games:
Bad News - https://www.getbadnews.com/#intro
Harmony Square - https://www.harmonysquare.game/en
Go Viral! - https://www.goviralgame.com/en
Works Cited:
Arguedas Ortiz, Diego. “Could This Be the Cure for Fake News?” BBC Future, BBC, 14 Nov. 2018, www.bbc.com/future/article/20181114-could-this-game-be-a-vaccine-against-fake-news.
Ingram, David. “Twitter Launches 'Pre-Bunks' to Get Ahead of Voting Misinformation.” NBCNews.com, NBCUniversal News Group, 26 Oct. 2020, www.nbcnews.com/tech/tech-news/twitter-launches-pre-bunks-get-ahead-voting-misinformation-n1244777.
Lin, Herbert, and Jaclyn Kerr. “On Cyber-Enabled Information Warfare and Influence Operations.” Oxford Handbook of Cybersecurity, May 2019.
Lin, Herbert. “The Existential Threat from Cyber-Enabled Information Warfare.” Bulletin of the Atomic Scientists, vol. 75, no. 4, 2019, pp. 187–196., doi:10.1080/00963402.2019.1629574.
Roozenbeek, Jon, et al. “A New Way to Inoculate People Against Misinformation.” Behavioral Scientist, 22 Feb. 2021, behavioralscientist.org/a-new-way-to-inoculate-people-against-misinformation/.
"Fighting war on a battlefield is the most stupid and primitive way of fighting a war. The highest art of warfare is not to fight at all but to subvert anything of value in your enemy's country. Put white against black. Old against young. Wealthy against poor and so on---doesn't matter. As long as it disturbs society, as long as it cuts the moral fiber of a nation it's good. And then you just take this country, when everything is subverted, when the country is disoriented and confused, when it is demoralized and destabilized. Then the crisis will come..." (Yuri Bezmenov, former KGB officer)
The world is facing a crisis of information security. The availability and relative ease of social media in spreading falsehoods has brought many of us intimate familiarity with the core of Brandolini's Law, that "The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it."
Yet, it's important to remember that the frightening convulsions some might attribute to social media or others to specific political groups are not actually new. Many of the tactics, as well as the "classic" conspiracies (i.e. AIDS is an American bio-weapon, JFK was assassinated by the CIA, etc.) originated with the Soviet Union's "active measures" system, used during the Cold War to foment dissent and disorganization in opponents' backyards.
How does a deliberate active measures campaign operate? It can be broken into several distinct steps, as the New York Times reported based on interviews with former KGB active measures operatives:
1. Identify Vulnerabilities
Look for any exploitable differences and tensions in the target society. Even the mere presence of a difference can be tilled into fertile ground for disinformation. Find groups with an agenda that lines up with potential disinformation.
2. Craft a Big Lie
Concoct a lie so preposterous and unexpected that the average person could not believe someone fabricated it.
For example, Mein Kampf includes this passage: "it would never come into their heads to fabricate colossal untruths, and they would not believe that others could have the impudence to distort the truth so infamously." The Big Lie in this case being the Stab in the Back myth that led directly to the wholesale slaughter of millions at the hands of otherwise decent and rational people.
3. Mix and Match
Mix in elements of truth among the lies to confuse and deceive recipients.
4. Hide the Origins
Find intermediaries to propagate or publicly originate the story, convoluting the chain of distribution and hindering efforts to identify the active measure.
5. Useful Idiots
Find willing subjects to spread propaganda. In today's social media landscape, this could be a great many people.
6. Deny. Deny. Deny.
Take advantage of a fast news cycle. Never give any ground in public until the news cycle turns over and the lie lives on.
7. Play the Long Game
Accumulate the reality-distorting effects of active measures. Success is measured in decades, not years.
These operations are not new. What is worrying, however, is the sheer willingness of average Americans to generate, propagate, and believe effective dis- or mis-information. Many of the worst conspiracy theories, notably QAnon, are "organic"---made right at home. Coupled with visible and well-publicized fault lines in American society---wealth inequality, racial tension, dysfunctional governance, hardball politics---these techniques produce outsized results in places we associate with the heart of American democracy. It is important to remember, however, that these are not uniquely American problems, nor are they uniquely perpetrated by America's enemies. For example, Ethiopia is embroiled in an ethnic war between the central government and the Tigray, with some using misinformation to justify their objectives.
While it may be an old idea, the dangers of dis- and mis-information are closer to home than ever.
One of the most fascinating yet alarming aspects of current information technology is its ability to take advantage of the limitations of human psychology. In Lin’s discussion of cyber-enabled information warfare, he highlights the dual-process theory of cognitive functioning, with System 1 characterized by quick judgements that require few cognitive resources, and System 2 being slower, more effortful thought that uses more of them (1). Information warfare and influence operations (IWIO) take advantage of the heuristics used in System 1 information processing to cause various types of cognitive biases, which in turn influence the audience’s interpretation of that information (2): the availability, representativeness, and affect heuristics are exploited to cause confirmation bias, loss aversion bias, optimism bias, and many more (1, 2). Audiences absorb this information without slow, deliberate analysis, causing them to form biased and false beliefs about important issues.
This is especially present in social media, which “rapidly transmit content among like-minded individuals, creating the ideal conditions for public polarization and divisiveness to occur” (1). People often discuss the “algorithms” of different social media platforms, and how sites analyze a user’s interests and views on politics and global issues to create a ‘feed’ of posts and information specifically catered to that individual’s preferences. As a social media consumer, I have already seen this happen on my personal feeds, especially on Instagram and TikTok. TikTok knows exactly what kinds of videos I like, so it shows me popular videos well-liked by individuals who share my preferences, succeeding in keeping me glued to my phone for hours. Instagram is a little different in that most of the posts I see are from the people I am following—people in my community that I know in real life, or people I have chosen to follow because I like the kinds of things they post about. Because I have only lived in communities with mostly shared opinions (growing up in Chicago and attending UChicago, which are both primarily liberal), my feed on Instagram is representative of these communities. As a result, the posts I see on Instagram are mostly in line with my world views. And the same occurs for individuals of all geographic, political, and cultural groups, creating highly polarized Internet communities.
^Social media is catered toward one's preferences, especially political ones, thus polarizing communities on the Internet.
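The feedback loop described above, where engagement shapes the feed and the feed shapes future engagement, can be sketched in a few lines. This is a hypothetical toy ranker, not any platform's actual algorithm; the tags and scoring scheme are invented for illustration.

```python
from collections import Counter

def recommend(posts, liked_tags, k=3):
    """Rank candidate posts by overlap with tags the user has
    already engaged with -- the core loop behind a catered feed."""
    profile = Counter(liked_tags)  # how often each tag was liked
    def score(post):
        return sum(profile[t] for t in post["tags"])
    return sorted(posts, key=score, reverse=True)[:k]

posts = [
    {"id": 1, "tags": ["progressive", "climate"]},
    {"id": 2, "tags": ["conservative", "economy"]},
    {"id": 3, "tags": ["climate", "science"]},
    {"id": 4, "tags": ["sports"]},
]

# A user who has mostly liked climate content sees more climate
# content, which generates more climate likes, narrowing the feed.
feed = recommend(posts, ["climate", "climate", "progressive"])
print([p["id"] for p in feed])  # climate-heavy posts rank first
```

Run it once and the climate posts dominate; feed that output back in as new likes and the profile only grows more lopsided, which is exactly the polarization dynamic the paragraph describes.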
The aforementioned IWIO are then spread among these polarized communities, and with all these like-minded people seeing information that confirms and strengthens their beliefs, it only polarizes them further. This poses a significant problem when spreading false or deceptive information about urgent topics such as climate change and nuclear conflict (1). Opinions on such crucial topics are also polarized, dividing communities, and delaying action and progress on the issues we currently face. I really wonder what these existential threats would look like today if social media hadn’t developed, or even if the ‘algorithms’ hadn’t developed. Would society be as divided on these issues as we are now, or would we perhaps be able to work together to help save humanity?
Sources:
1. Herbert Lin (2019) “The existential threat from cyber-enabled information warfare,” Bulletin of the Atomic Scientists, 75:4, 187-196.
2. Herbert Lin and Jaclyn Kerr (2019) “On Cyber-Enabled Information Warfare and Influence Operations,” Oxford Handbook of Cybersecurity.
Image: https://depauliaonline.com/24974/nation/facebook-reinforces-political-polarization/
The articles we read this week framed cyber security as a cognitive bias problem driven by propaganda. However, most of the discussion I have witnessed around cybersecurity has centered on hacking confidential information and using it maliciously. For many people, when the topic of cyber security comes up, the first thing that comes to mind is hacking; this week's articles were an important step toward thinking about cyber security as a cognitive bias and propaganda problem as well. In The existential threat from cyber-enabled information warfare, Herbert Lin discusses the two types of cognitive processing systems. System 1 is an “emotionally driven mode of thought” that is less analytical and more willing to confirm biases, while System 2 is an “analytical mode of thought” that is less likely to confirm biases without proper evidence. Essentially, System 1 is an error-prone yet common way that people under stress, who have strong desires to be right, rapidly process information.

The rapid framing of cyber security can itself be reduced to System 1 processing: it has been repeatedly framed as a hacking problem when that is not entirely the case, a kind of “truthful hyperbole.” As a society, we frame cyber security as a hacking problem that solely concerns private companies or the government, which is itself a prime example of our cyber security problem. I think the first step to solving it is changing the discussion on a global level from a hacking problem to both a hacking problem and a cognitive bias problem driven by propaganda. Until we, as a society, acknowledge cyber security as more than hacking confidential files, we cannot solve this problem. Only once we see ourselves as part of the problem can we see ourselves as part of the solution.
I think the biggest barrier for most people will be that, relying on System 1 processing, they may automatically assume they are too intelligent to be affected by propaganda or manipulated into thinking or doing a specific thing. Essentially, they might be quick to dismiss the problem as they attempt to confirm their own biases about themselves and their level of intelligence. Nevertheless, the first step to finding a solution to cyber security is to reframe it as more than a hacking problem: it is also a cognitive processing problem that affects all of us.
P.S. Please google cyber security, look at the first results, and then go to images. You will see my point.
https://images.techhive.com/images/article/2015/09/thinkstockphotos-479801072-100611728-large.jpg
On November 3rd, 2020, the largest domestic cyberattack in history took place as millions of Americans voted in the U.S. presidential election – according to Donald Trump, that is. Since Trump made these unsubstantiated claims on the night of the election, and throughout the following months, the American public has become all too familiar with Dominion Voting Systems and the nationwide voting infrastructure set in place to protect the integrity of the election. As a result, millions of Americans, blinded by the words of Donald Trump, soon lost faith in the election process, which many would argue is the single most important institution in this country. This is a clear example of the danger not only of cyberattacks themselves, but of how little is publicly known about them.
Throughout the various readings for this week, the high classification and lack of public knowledge regarding cyberattacks were discussed as prominent concerns, and this, I believe, is what enabled Donald Trump to lend “legitimacy” to his claims from the perspective of his closest supporters. Because of the nature of the alleged attacks, substantive proof that such an attack occurred was never necessary to Trump’s supporters, and of course, it was never provided. According to Trump and his supporters, the Dominion voting machines were breached and manipulated to delete millions of votes in order to ensure the election of Joe Biden. Such claims ignored the fact that these systems had been widely and vigorously tested under the Trump Administration well before election night to ensure the safety of the election.
In his 60 Minutes interview, Christopher Krebs, the recently fired Director of the Cybersecurity and Infrastructure Security Agency (CISA), called the 2020 election the “most secure in American history”. Like many of the papers this week, Krebs stressed that countless defensive cyber operations were put in place to ensure the election was secure. This is a perfect example of how important defensive cybersecurity measures are, especially as attacks continue to become more complicated and common.
With all of this taken into consideration, I believe that the largest problem that cyber warfare currently presents humanity with is its ambiguity amongst the public. Because so little is known about these attacks, it is incredibly easy to spread misinformation about the origin of an attack, the severity of an attack, and in this case, if an attack even occurred. All of this becomes increasingly dangerous when those in powerful political roles, especially the President, use this to their advantage. As a result, the only real solution I see to this problem is more transparency from the government, a push for public knowledge on what cyberattacks are, and holding those in power responsible for their use or misuse of this evolving technology.
Sources:
The concept of an information dystopia is both intriguing and terrifying to me. On one hand, it seems unlikely that there is enough malintent actually out there for false information to systemically cause significant, measurable harm. I find it difficult to imagine a world in which falsified videos of American soldiers performing atrocities become believable enough to incite chaos, panic, or widespread national distrust, rather than resulting in people looking at the video, being surprised by it, and then doing a little research to find out that it is fake. Deep faking is often touted as one of the next great existential concerns and obstacles of the future, but I wonder to what extent this is true. I do not disagree that the technology will improve, almost to the point of indistinguishability, but as this occurs, will not our collective ability to distinguish truth from fabrication also improve? For example, will there not be technologies to analyze a video or sound bite and determine whether it has been doctored, thereby proving the authenticity or inauthenticity of the medium? It may be true that a fraction of the population will believe deep-faked content, but will that actually have a measurable negative impact when put up against scrutiny? If a fake video of President Biden kicking a dog came out, the media might go into an uproar, but this attention would also compound the efforts to prove authenticity, which would likely not be very difficult to do. It seems to me that the biggest danger of deep fakes is the time and effort consumed in dealing with them, rather than any harm they actually cause. I find it hard to imagine a world in which we can no longer trust any of the information presented to us, because as falsifying technology improves, so will cybersecurity and the efforts to maintain sufficiently trustworthy news.
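One concrete form the authenticity technology speculated about above could take is cryptographic provenance: the publisher of a piece of media releases a digest (or a signature) of the original file, and any doctored copy fails verification. A minimal sketch, assuming the published digest reaches viewers through a trusted channel; the byte strings below are placeholders standing in for real video files:

```python
import hashlib

def fingerprint(media_bytes):
    """SHA-256 digest of the raw media file; altering even one
    byte produces a completely different digest."""
    return hashlib.sha256(media_bytes).hexdigest()

# The publisher posts this digest alongside the original release.
original = b"placeholder: raw bytes of the released video"
published_digest = fingerprint(original)

# A deep-faked copy circulating later fails the check.
tampered = b"placeholder: raw bytes of the deep-faked video"

print(fingerprint(original) == published_digest)   # True: authentic copy
print(fingerprint(tampered) == published_digest)   # False: altered media
```

Note the limitation: a digest only proves a copy matches a particular release; it cannot expose a fake that is itself presented as "the original," which is why real provenance proposals pair hashing with signing at the point of capture.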
We have sources that the majority of people go to for reliable news, and these will not change, because society relies on them and so will put enough effort into preserving them. Deep faking may hurt specific candidates or corporations in the short term, but I do not believe any lasting damage will result from a single forged video or image. I actually see this having the biggest effect on children, who are growing up in the age of misinformation and so do not have the skillset to pick apart what is real and what is fake. This can be potentially damaging, especially to mental health and self-esteem, as instagram influencers significantly edit their photos, their bodies, and their lives, showing only what they want to show. When large majorities of kids start believing that this airbrushed view of reality is what real life is like, well, we don’t really know yet what will happen, but it’s not looking good.
The image below is from the US Government Accountability Office (GAO)...but maybe these tactics are just what the government WANTS us to think, so we think it’s all under control… What deep faker messes up the earrings?!?!?
Cyber threat security is a hot-button issue: relatively new, technologically unprecedented, and far-ranging in its ramifications. Cyber data has become commoditized and even weaponized, and thus there is immense responsibility for protecting data from the personal to the international level; that is cybersecurity. The cybersecurity industry is burgeoning and, like cyber technology itself, ever evolving and imperfect. Take for example the recent attacks on the FireEye and SolarWinds companies, which revealed stark vulnerabilities in their systems, systems designed to protect clients from just such security breaches. These were extensively damaging. One was a sophisticated contravention, the theft of “red” tools used in cyber defense, making their functionality now “a potential detriment for every FireEye customer” [1]. The other, against SolarWinds, attributed to deficient security in the company’s own systems, significantly affected many government agencies as well as private organizations, leaving vital networks unprotected for a time as they were immediately disconnected for damage control. While there is still debate over the nature of these breaches, espionage or something more malicious, and over who perpetrated them, one thing is clear: our cyber intelligence is exposed because even our cybersecurity protection systems are. The range of clients affected also evidences the borderlessness of cyber conflict and underscores the need to boost cyber ramparts across the plethora of industries dependent on information technology.
As a result, augmenting cybersecurity seems best approached collaboratively. By sharing their knowledge of actionable threats openly, entities aid their peers in assessing and eliminating such threats. In this way, well-intentioned actors can collaborate to eliminate malign actors from the shared cyber ecosystem that is so essential to societal function. For example, the IBM X-Force Exchange [2][3] is a publicly available record of IBM's actionable threat intelligence database, "including information on real-time attacks that can be used to stop cybercrime in its tracks" [4]. The approach is "open and collaborative", premised to be part of the solution so as not to be part of the problem [5]. This database has been established in the hope that organizations taking advantage of the data will follow IBM's precedent and likewise share their own.
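The sharing model described here can be sketched in a few lines. This is a minimal toy illustration with hypothetical organization and indicator names, not IBM's actual API: each organization publishes the indicators of compromise it has observed, and every peer can screen its own observations against the combined feed.

```python
# Toy sketch of collaborative threat intelligence: organizations publish
# indicators of compromise (IOCs, e.g. malicious file hashes) to a shared
# feed, and peers check their own observations against everyone's
# contributions. All names and hashes below are hypothetical.

shared_feed = set()  # the communal IOC database

def publish(indicators):
    """An organization contributes its observed indicators to the feed."""
    shared_feed.update(indicators)

def is_known_threat(indicator):
    """Any peer can screen an observation against the combined feed."""
    return indicator in shared_feed

# Org A observed two malicious hashes; Org B observed a third.
publish({"hash_malware_a", "hash_malware_b"})
publish({"hash_malware_c"})

# Org C, which has never seen these attacks itself, still benefits:
print(is_known_threat("hash_malware_b"))   # True
print(is_known_threat("hash_benign_doc"))  # False
```

The point of the sketch is the asymmetry it captures: each organization pays the cost of one observation, but every participant gains protection against all of them.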
A more active actionable threat intelligence network might incorporate real-time alerts and warnings that could save many others from dangerous backdoors and trapdoors and aid significantly in zero-day mitigations, eliminating potential risks while stopping cybercrime and attacks on all targets more directly and decisively. This is reinforced by the Department of Homeland Security's statement that "cybersecurity is once again a shared responsibility" [6], and, in protecting infrastructure across vital sectors, "Evolving threats will continue to inspire a collective effort among both private and public sector partners" [7]. Further, the DHS has committed to action with its own recently instituted "vulnerability disclosure program" [8].
All of this rhetoric is promising, as are the actions behind the words. Malicious cyber activity calls for cooperation, utilizing consolidated resources to synergistic effect. Perhaps, perhaps, perhaps we are on the brink of a social transformation in the perception and practice of self-protection in solidarity.
Recent cyberattacks by sector. 2021 Nonprofit Cybersecurity Incident Report. Community IT, March 2021. Retrieved from https://communityit.com
IBM Cybersecurity Solutions. Retrieved from https://ibm.com/security/solutions. No date.
Lewis, Elliot. (2021, April 6). What's the failsafe alternative to FireEye and SolarWinds? Retrieved from https://www.securitymagazine.com
Barlow, Caleb. Where is cybercrime really coming from? TED@IBM, January 2017.
IBM X-Force Exchange. 2021. Retrieved from https://exchange.xforce.ibmcloud.com
The 16 Sectors of Critical Infrastructure. Cipher Blog. 2021. Retrieved from https://cipher.com
Moussouris, Katie. For US cyber defense, helpful hackers are only half the battle. The Hill, March 17, 2021. Retrieved from https://thehill.com
Increasingly, the scope of our lives has migrated to information technology processing systems, and that includes our political infrastructure. Health and medicine, transportation, financial services, and our military all rely on information technology networks, which are subject to threats.
As such, security in cyberspace requires more than locks and keys around information technology; it demands technologies, processes, and policies that can prevent or reduce potential attacks from malevolent hackers. According to the National Academies Press, we are vulnerable to cybersecurity threats because a) more people rely on information technology, b) such technology is not perfectly secure, and c) there are bad actors trying to game the system (Clark et al., 2). Unless information technology is to be abolished, the issue can never be solved, just managed.
Most salient to the U.S. context is not just whether information is hacked, but also whether information that is published is true, and what its intentions are. Much of managing our information technology spaces means keeping information truthful, honest, and matching the belief systems reflected in the politics of any given nation-state. That is, promoting values becomes of paramount importance, as the intent of information has serious consequences. What's worse, promoting false information tied to bad values is much easier than the inverse. One study by three MIT researchers found that bad information (fake news stories and the like) is 70 percent more likely to spread than accurate news stories.
That is, false information isn't just dangerous because of what it's promoting, but because of who is promoting it and how it is being promoted. Dangerous and violent information magnifies hateful, violent, and reactionary feelings that sow divisions between us, our neighbors, co-citizens, and international community members. The most recent example is the insurrection at the Capitol on January 6, which had been facilitated by false information spread by then-President Trump and networks of conspiratorial thinking. This event nearly upended our political process and could have catalyzed a war to defend the democracy. If something isn't done to better regulate our media and information spaces, we could see more pulls toward extremism, authoritarianism, and hate.
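The MIT finding can be illustrated with a toy branching-process simulation. Everything here is an illustrative assumption (the base share probability, the fan-out, the number of trials) except the 70 percent edge for false stories taken from the study; the point is just that a modest per-reader advantage compounds into much larger cascades.

```python
import random

# Toy cascade model: each sharer exposes `fanout` new readers, and each
# new reader shares with probability `share_prob`. A false story's
# share_prob is 70% higher than a true story's (the MIT figure); all
# other parameters are illustrative assumptions.

def cascade_size(share_prob, fanout=3, max_steps=10, rng=None):
    """Return the total number of people exposed in one simulated cascade."""
    rng = rng or random.Random(0)
    current, total = 1, 1
    for _ in range(max_steps):
        exposed = current * fanout
        total += exposed
        current = sum(rng.random() < share_prob for _ in range(exposed))
        if current == 0:
            break
    return total

true_prob = 0.20
false_prob = true_prob * 1.7  # the 70 percent edge for false stories

trials = 500
true_reach = sum(cascade_size(true_prob, rng=random.Random(i))
                 for i in range(trials)) / trials
false_reach = sum(cascade_size(false_prob, rng=random.Random(i))
                  for i in range(trials)) / trials

print(false_reach > true_reach)  # false stories reach more people on average
```

With these numbers the true story's cascade is subcritical (each reader generates fewer than one new sharer on average) while the false story's is roughly critical, so the average reach differs by several times, not by 70 percent.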
Reversible warfare would save resources and lives.
In Bytes, Bombs, and Spies, the suggestion of “the use of offensive cyber operations... [as] the instrument of first military use if nonmilitary measures...fail” prompts us to consider cyber offense as a more ethical alternative to kinetic attacks [1].
I argue that cyber offense can be a more ethical alternative to kinetic conflict by both duty-based standards (opposition to violence on ethical grounds) and pragmatics-based standards (opposition on grounds of inefficacy), if and only if it is reversible, precisely targeted, and subject to escalation control.
Cyber offense can preserve life and infrastructure better than kinetic attack when it is reversible, i.e. "[able] to be reversed by an attacker...better than by restoring from backup," and targeted, i.e. minimizing collateral and civilian damage, in accordance with Geneva Article 51 [2]. Ways to achieve reversibility that warrant further development include cryptographic attacks in which the attacker knows the decryption key; obfuscating attacks in which software and data in a system are disorganized; and withholding information by jamming a system [2]. Such an attack can be retracted by the attacker with minimal permanent damage to the target. The benefit is twofold: 1) duty-bound speaking, lasting damage to human life and society is limited, and 2) pragmatically speaking, the ability to step back the attack and restore the damage rewards and motivates target compliance, thus increasing efficiency and saving military resources. Rather than threatening a second, similar attack to gain compliance (i.e. using resources for two attacks), reversal of the first attack is offered as a reward.
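The first of these routes, a cryptographic attack whose key the attacker retains, can be sketched conceptually. This is a toy illustration of the reversibility property only (a hash-derived XOR keystream, with hypothetical key and data), not a real attack tool: because XOR is its own inverse, releasing the key undoes the attack completely, which is exactly the compliance reward described above.

```python
import hashlib

# Conceptual sketch of a reversible "cryptographic attack": the target's
# data is scrambled with a key only the attacker holds, so the attack can
# later be fully undone by applying the same transform again. The key and
# data below are hypothetical; the keystream is a toy construction.

def keystream(key: bytes):
    """Derive an endless pseudo-random byte stream from the key."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def xor_transform(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse: applying it twice with the same key restores data."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

secret_key = b"held only by the attacker"        # hypothetical
original = b"industrial control configuration"   # hypothetical target data

scrambled = xor_transform(original, secret_key)  # the "attack"
restored = xor_transform(scrambled, secret_key)  # reversal upon compliance

print(scrambled != original)  # True: data is unusable without the key
print(restored == original)   # True: the attack is fully reversible
```

Unlike kinetic destruction, nothing here is permanently lost: the attacker can restore the target to its exact prior state at negligible cost.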
However, at present, cyber warfare is easily misinterpreted or misattributed and may escalate into life- or infrastructure-threatening attacks; to realize ethical cyber warfare, cyber interaction must be precise and attributable [3]. Precision requires technical advancement to identify well-cloaked targets; attribution requires technical advancement to identify the malign source, plus improved strategic understanding of intent.
Because well-executed and controlled cyber offense might allow us to skirmish effectively while preserving human life and resources, I argue that nations are ethically obligated, from both duty-bound and pragmatic viewpoints, to develop and regulate cyber warfare. Internal actions such as the 2018 Command Vision for US Cyber Command and the initiation of the UK's National Cyber Force must escalate to an external dialogue in IGOs and between countries [4]. This "airing out" of cyber capabilities and intent echoes individual companies taking an open-source approach to actionable threat intelligence, as Janet discusses. An open-source model would neutralize warfare advantage, but policies adapted from nuclear warfare (e.g. No First Use and weapons disclosure) and protections as in the Geneva Convention are feasible and would support the evolution of ethical cyber warfare, e.g. protections for the computer networks of medical systems and civilians, and disclosure of nations' cyber intents [5]. This would also reduce the odds of escalation by improving nations' strategic understanding of one another and thus attribution [3]. By making cyber warfare transparent and better regulated, we might innovate life- and resource-preserving conflict.
Sources: [1] Lin, Herbert, and Amy Zegart, eds. 2019. Bytes, Bombs, and Spies: The Strategic Dimensions of Offensive Cyber Operations. Brookings Institution Press. [2] "Towards Reversible Cyberattacks." n.d. Nps.Edu. Accessed April 14, 2021. https://faculty.nps.edu/ncrowe/rowe_eciw10.htm. [3] "Ethics of Cyberwar Attacks." n.d. Nps.Edu. Accessed April 14, 2021. https://faculty.nps.edu/ncrowe/attackethics.htm. [4] Foreign Policy Centre. 2020. "The Ethics of Offensive Cyber Operations." Org.Uk. December 2, 2020. https://fpc.org.uk/the-ethics-of-offensive-cyber-operations/. [5] Schneider, Jacquelyn. 2020. "A Strategic Cyber No-First-Use Policy? Addressing the US Cyber Strategy Problem." The Washington Quarterly 43 (2): 159–75. [6] "World War II Soldier." History.com. October 29, 2009. https://www.history.com/topics/world-war-ii/world-war-ii-history.
In the readings for this week, Herbert Lin covers a variety of cyber attacks: the 2015 theft of records from the Office of Personnel Management, the 2017 WannaCry ransomware attack, the 2017 NotPetya ransomware attacks, and the 2012 attacks against Saudi Aramco. While these do pose serious existential threats, I would like to shift my attention to more personal cyber attacks. The reason I diverge from these attacks on organizations and governments isn't because I don't deem them important, but rather because I believe that raising awareness of personal attacks may lead to a societal realization that cyber threats are real and, by extension, that these attacks pose an existential threat to our society.
Cyber extortion, while mostly focused on small businesses, also targets individuals. Everyone has personal problems, and while this may seem dark, pause for a moment to think about how you would feel if your deepest secrets were exposed. This is what these extortionists target: that weak spot in your psyche. These perpetrators demand money (usually cryptocurrency sent to a foreign server) in exchange for not releasing your personal information (e-mails, texts, pictures). Although most of their threats are empty (they won't actually post these embarrassing texts or pictures, or they won't even have them to begin with), they still extort money from many scared individuals. In a world already all the more devoid of privacy, these extortionists are attacking the core of our society: freedom and privacy.
By framing this issue in a more personal light, I believe people will be more inclined to think about, and even talk about, cybersecurity issues. While a societal realization won't completely resolve this existential crisis, it is definitely a start, especially since most people are not currently aware of cyber attacks.
What struck me in this week's readings came at the end of the "cyber2" PDF, in which the authors discuss the immense difficulty of attempting to solve issues in cybersecurity. They remark on the vast number of disciplines that need to be brought into the fold when constructing policy and considering how to move forward in a world of cyber attacks, one where the United States' cybersecurity is paramount. However, I wonder if the sheer number of disciplines that need to be considered might hinder possible solutions to the problem rather than help create nuanced ones that address all the issues present.
As the chapter states, engaging combinations of fields like computer science and information technology, psychology and economics, political science and engineering is necessary to work toward solving the cybersecurity problem. I simply struggle to see how all of these disciplines are supposed to work together. It seems to me that there would be far more dissonance than agreement between the fields, which might hinder the creation of solutions to the problem. While I do not dispute that all of these fields are necessary, I simply wonder how they might all come together seamlessly.
The other piece I was thinking about while writing this memo is the relationship between the public and private sectors, and how this might affect any possible solutions in the future. As I shared in my question for the presenter, I wonder how these two sectors will be able to work together, if at all. Would the government be willing to work with the private sector to come up with solutions, or would the private sector be willing to subject itself to the will of the government to achieve these ends? I have simply been trying to think through how this might play out. I think there needs to be a combined approach, where both private and public sectors in these various fields work together to solve the cybersecurity problem. As the political cartoon below demonstrates, if the private sector isn't on top of things and actively working to at least address the cybersecurity problem, then the public sector is also going to suffer. The two need to work in tandem rather than against each other, despite the fact that we often see the latter and not the former.
Dr. Lin's warning of cyber-enabled information warfare as a potential existential threat rings worryingly true for me. However, I am not entirely convinced by Lin's framing of this threat as "the end of the enlightenment." Lin's consistent warnings about cyber warfare as degrading enlightenment values (e.g. "pillars of logic, truth, and reality"), as well as Lin's mentions of emotional biases as an exacerbating factor, imply that an Enlightenment-esque valorization of logic might be integral in combatting this threat. While Lin does not claim that—indeed, Lin notes that the "development of new tactics and responses" is necessary—and I agree that the current media environment degrades our collective reasoning abilities, I would like to reframe emotion as not only part of the problem but also the solution (at least in American society).
First, it is important to situate my response in America's current context. Regardless of whether or not we should seek to 'return to Enlightenment,' it seems like those values are no longer terribly valued: the Media Insight Project's latest survey notes that the values of transparency and spotlighting wrongdoing—which seem related to the pillars of logic, truth, and reality—are not shared by a majority of Americans. Thus, devising ways to ensure transparent factual communication is unlikely to fully resolve this democratic threat, if only because Americans do not seem to care all that much about that (e.g. it might not increase their trust in institutions, which is a fundamental part of democracy). Of course, that is not to say that improving information transparency and factual communication is not important—and indeed, as America's media environment improves, these values might become more widely held. However, given that the public does not necessarily share these Enlightenment values, we cannot assume that relying on those values will suffice in crafting solutions.
This seems problematic if we share Lin's concern that "without shared, fact-based understandings [...] what hope is there for national leaders to reach agreements?" Fortunately, the counterpart to logic, emotion, can step in to help. Lin repeatedly notes that emotions can impair the cognitive ability to reason fairly, such as feeling the need to maintain one's social identity. Emotion is often tied to our values: for example, anger at a group leader being criticized might be motivated both by a desire to maintain one's identity and by a high value on respecting group leaders. This means that leveraging commonly-held values in service to combatting cyber warfare and disinformation could be an important step towards protecting democratic institutions. How could we tailor our solutions to appeal to different audiences with values ranging from loyalty to fairness? Developing fact-based understandings might not mean trying to amplify one universal message or eschewing emotion: instead, it could mean investigating what frames most appealingly convey facts for different audiences.
As I read through this week's readings, I couldn't help but feel a little disheartened. The issues of cybersecurity, cyber warfare, fake news, etc. are so complex and seemingly unsolvable that I found myself overwhelmed. So, what to do? Well, I decided to dive into one of the proposed solutions to these problems that I've heard a lot about recently: ethical hacking.
Ethical hacking, also known as penetration testing, is the act of legally breaking into an organisation's hardware and software in order to test said organisation's defenses. Hackers are then paid based on what vulnerabilities they discover. The format of this hacking varies - some companies use in-house penetration testers, some use consulting firms made up of teams of ethical hackers, and some simply ask anyone who finds issues to reach out to them (with the promise of payment in return).
Ethical hacking is now becoming big business. More than $44.75 million in bounties was awarded to hackers around the world over 2019, an 86% year-over-year increase. HackerOne, which runs bug bounty programmes for organisations including the US Department of Defense and Google, said the average bounty paid for critical vulnerabilities in 2019 increased to $3,650, up eight percent year-over-year, while the average amount paid per vulnerability was $979. At the top end of the scale, elite freelance ethical hackers can make over $500,000 a year searching for, and reporting, security flaws.
What I find so exciting about all of this is who these ethical hackers actually are. At BugCrowd, another penetration testing consultancy, 94% of these so-called hunters are aged between 18 and 44, and several are still in high school or even middle school. The cost of entry is low and based largely on skills; in fact, about a quarter of the hackers on the platform do not even have a college degree. Many of these individuals, were it not for this legal route, might be pursuing illegal hacking instead. Thus ethical hacking provides not only a chance to decrease the threat of data breaches and cyber warfare, but also to lower the number of malicious hackers out there and to provide an exciting and equitable opportunity for young tech aficionados. It gave me at least a glimmer of hope for what's to come!
(a black to white hat transformation playing on the names given to malicious hackers (black hats) and their more positive peers (white hats) and mirroring the ethical shifts that some in the hacking community are beginning to undergo)
Sources
This week's discussion of cybersecurity strategy is strikingly similar to the first week's discussion of nuclear armament, and shockingly more portending of a doomsday scenario. Like nuclear deterrence, the M.O. of cybersecurity seems to be that the best defense is a good offense. After all, strictly defensive measures, such as only using domestically produced tech and implementing more stringent security controls, would completely uproot the system of globalization and rapid tech innovation that foreign trade and limited liability facilitate. Therefore, a good offense seems to be the only viable route. What's more, the very nature of cybersecurity demands as much: "ISR [intelligence, surveillance, reconnaissance] capabilities for cyberspace must be ubiquitous, real-time, and persistent" (Lin & Zegart, 8). Ubiquitous because cyberattacks are not geographically bounded, real-time because the opportunity for attack could strike at any moment, and persistent because retaliation requires up-to-date knowledge of the target's hardware and software to succeed. However, unlike the nuclear domain, in which the effects and consequences of action are well understood on both sides, the implications of a suspected cyberattack, whether intended to compromise the integrity or availability of information and services or merely to snoop, are far murkier and more liable to lead to escalation. Jason Healey, in the Lin & Zegart chapter, corroborates as much when examining historical case studies in the U.S., finding "cyber conflict is more often escalatory than not" (Lin & Zegart, 12). Considering that perhaps the most notorious historical example of cyber warfare, Stuxnet, was deployed to disrupt Iranian centrifuges, the potential for cyber warfare to escalate to nuclear warfare seems likely.
Speaking of Stuxnet, perhaps what jumped out at me even more this week than the nuclear similarities were the fundamental differences in scale between cyber and tactical warfare. As discussed in the Clark et al. article from The National Academies Press, corporations can now become legitimate players in matters of defense even if they don't manufacture military products, and sometimes inadvertently. Stuxnet was an incredibly promiscuous virus, developed by the U.S. and Israel, that, in order to bypass the air gap of Iranian nuclear facilities, needed to infect and lie dormant on an immense number of machines until it recognized one belonging to the Iranians. Consequently, uninvolved countries like Indonesia and India had 18% and 8%, respectively, of their domestic computers infected, albeit not necessarily affected. Thus a novel element of cyber warfare seems to be the implicit participation of the public without their awareness. Subterfuge of social media companies poses as great a threat, if not greater, than direct military action, so how do we prepare the public to defend themselves against manipulative attacks that might already be occurring?
The image below is a basic timeline of the five companies initially infected by Stuxnet before becoming a global epidemic, albeit unknown to most of the world. The movie Zero Days really does this whole incident justice and although at times a bit dry, it in the end leaves your jaw hanging at the sheer scale of this operation.
In Herbert Lin's article for the Bulletin of the Atomic Scientists, he makes a point of differentiating the existential threats of nuclear war and climate change from that of information dystopia. While climate change and nuclear war are both events that are very physical and tangible in the ways they will affect us and our cities, the existential threat of information chaos attacks in a less tangible way: it will affect our current culture, society, and even our own psychology. Before reading these articles, I usually pictured government hacking and the release of secrets (like the Edward Snowden situation) when thinking of potential cyber threats. I knew that there were problems with social media, fake news, etc., but I never pictured how much of an existential threat these things could turn into. These readings reminded me of a documentary I watched on Netflix called "The Social Dilemma," in which the dangers of social media are put on display. While its content is not 'new', it does imply that even if one believes they are too smart or too strong-willed to be manipulated by these sites, they are mistaken. It's known that popular social media apps like Instagram or TikTok use your own viewing information to customize your feed, so that you get shown the kinds of videos and posts that you like. Over time, that gives you the false sense that everyone agrees with you, because everyone in your feed sounds like you, which then makes it very easy to be manipulated. This becomes another factor polarizing various groups, as one person sees a lot of posts that align with their views while someone else sees posts that align with theirs. Although this seems like a minor problem at the scale of individual people, if it grows larger, or the topic at hand puts much more at stake, then it certainly can lead to an information crisis.
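This feedback loop, in which engagement shapes the feed and the feed shapes further engagement, can be sketched with a toy model. The stances, scores, and update rule below are illustrative assumptions, not any real platform's algorithm; the point is only how quickly a nearly neutral user drifts to one pole when the feed ranks by agreement.

```python
# Toy sketch of an engagement-driven feed: posts are ranked by similarity
# to the user's inferred stance, and each engagement nudges that inferred
# stance toward the post just seen. All values are illustrative.

posts = [
    {"id": 1, "stance": +1.0}, {"id": 2, "stance": +0.6},
    {"id": 3, "stance": -0.2}, {"id": 4, "stance": -0.8},
    {"id": 5, "stance": +0.9}, {"id": 6, "stance": -0.9},
]

def rank_feed(posts, user_profile):
    """Score each post by agreement with the user's inferred stance."""
    return sorted(posts, key=lambda p: -(p["stance"] * user_profile))

def engage(user_profile, post, rate=0.5):
    """Engagement nudges the inferred profile toward the post's stance."""
    return user_profile + rate * (post["stance"] - user_profile)

profile = 0.1  # the user starts out nearly neutral
for _ in range(5):
    top = rank_feed(posts, profile)[0]  # the user sees and engages with the top post
    profile = engage(profile, top)

print(round(profile, 2))  # 0.97: after five rounds, strongly aligned with one pole
```

A slight initial lean of +0.1 is enough: the feed serves only agreeable posts, each engagement reinforces the lean, and dissenting posts never surface again.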
I believe a big issue that must be reconciled before we can arrive at a potential solution is the way this problem is thought about. Carl Sagan puts it best with this quote from his book, The Demon-Haunted World: Science as a Candle in the Dark: "One of the saddest lessons of history is this: if we've been bamboozled long enough we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It's simply too painful to acknowledge, even to ourselves, that we've been taken." People are not acknowledging the great impact that disinformation, from social media or fake news sites, is having on their lives and within society. This is the influence that social media and information have on our psychology: people want to think they're right and to be in groups of like-minded people. In order to change for the better, I believe we must strive for truth rather than staying comfortable in our section of social media in which all the posts and shared news articles agree with our beliefs. An even bigger step could be a total restructuring of social media and its role in society. I'm not exactly sure how to accomplish this, however, since I grew up raised on technology, with social media a big part of middle school, high school, and college. It's important to ask whether social media in its current state is doing more harm than good. Yes, it connects people in amazing ways, but its spread of fake news and its ability to manipulate people's psychology make it a dangerous tool in a society that is already dealing with many other existential problems.
Cybersecurity is becoming an increasingly significant concern as digital penetration around the globe increases. One indicator of this is the rapid adoption of 5G technology, which will enable previously unseen levels of connectivity through superfast broadband, reliable low-latency communication, high reliability and availability, and energy efficiency. The graph below shows forecasted global penetration of 5G, indicating that the most rapid growth will occur primarily in countries with the least developed cybersecurity infrastructure.
Nevertheless, cybersecurity is different from climate change because it is only a threat to humanity because of other threats to humanity. It is indirect; this raises the question of whether cybersecurity is the problem that needs to be solved, or whether it is the underlying problems that make cybersecurity so relevant.
Thinking about the amount of time and sheer quantity of resources dedicated to cybersecurity never fails to amaze me. These resources include highly educated human capital, technology, and energy. Can cybersecurity be targeted more effectively by addressing the underlying geopolitical and socio-economic problems that exist in the world today? I believe the answer is yes, but cybersecurity reinforcement is a shorter-term solution needed to ensure those underlying problems do not prematurely get out of hand. Thus, I disagree with the notion that "cybersecurity is a never-ending battle, and a permanently decisive solution to the problem will not be found in the foreseeable future." It makes sense that as cyber infiltration techniques evolve, cybersecurity must evolve with them, starting a never-ending cycle. However, this cycle is broken when the need for that security is diminished: that is, when the payoffs from infiltration have been diminished, either because the time invested in infiltrating is better invested elsewhere or because some infrastructures make the infiltration lose its power. Offensive cybersecurity strategies seem to counteract the sentiment required to make cybersecurity a non-issue. I almost see this as another form of warfare, where many solutions delay the problem, but the only permanent, effective solution is the cyber equivalent of disarmament. This is similar to the discussion we had about nuclear warfare, where de-alerting, "no first use," and "no sole authority" act as stepping stones toward eventual disarmament.
Works cited: National Research Council 2014. At the Nexus of Cybersecurity and Public Policy: Some Basic Concepts and Issues. Washington, DC: The National Academies Press. https://doi.org/10.17226/18749.
https://www.statista.com/chart/9604/5g-subscription-forecast/
Through the readings, I was struck by an essentially unique feature of cyber-operations: their dual nature. On the one hand, cyber-operations pose severe, even perhaps existential, risks to humanity. On the other hand, they may be key in our ability to mitigate existential challenges across the board.
The reason for this is that at the very bottom of almost every major anthropogenic existential risk is the same problem: imperfect information. In nuclear risk, we have imperfect information regarding the intention, capacity and resolve of state and non-state actors. In AI risk, we have imperfect information regarding key actors, their progress, and their intent. Imperfect information also lies at the core of risks posed by engineered pandemics, nanotechnology, unforeseen technologies, and a host of other existential issues.
This problem suggests a broad solution to existential risk: increased surveillance capabilities. For example, when Bostrom hypothesises the risk posed by 'black ball' technologies (technologies which, once invented, almost certainly lead to world destruction), he speculates that the only viable solution is "ubiquitous, real-time, world-wide surveillance". In other words, safety from existential risk and our capacity to survey and extract information from unwilling actors are inextricably intertwined.
Offensive cyber exploitation operations, while dangerous, are unique in their ability to enhance our surveillance and extract information from unwilling actors. Through this framing, it may be that cyber exploitation should be a centrepiece of our existential risk strategy, contrary to framing it primarily as a threat.
This should bring some consolation, given that we are already well down the road of cyber-technologies, and there is no turning back.
It seems that a new age has dawned upon us. The intangible, invisible sphere of the internet has crystallized as the battleground of our future. Two days ago, President Biden nominated Chris Inglis to be the U.S.'s first national cyber director, a role that would straddle the military and national security. The very creation of this position underscores how our national threats are transforming. The central question I have about this role, especially when thinking about it in conjunction with and in comparison to the military, is how offensive and defensive strategies will ultimately take shape. On the defensive side, I believe the private sector will have to play an unparalleled role, specifically with regard to building robust technology to protect nationally sensitive information. In the most recent attacks against the United States, it was private companies that detected these strikes before the government was able to. It seems that the sluggish nature of bureaucracy and government will have to be streamlined in this new sphere. When building offensive strategies on a national level, I believe the United States is going to have to seek ways to breach other countries' barriers to capture misinformation before it is disseminated. However, this type of trespassing on enemy ground seems inherently complicated, as its detection could lead to an expedited attack. Thus, a robust defensive structure is intrinsically necessary before engaging in offensive attacks.

Another element of this form of war that seems particularly threatening is the vulnerability of the civilian. Anyone with their information online, with a phone in their pocket or a laptop on their desk, is a civilian, yet rests on the very frontlines of the battlefield. How will policy emerge to both protect and educate civilians? Is the government's full encroachment upon our private lives necessary to achieve this protection?
The issue of being able to trust one's sources has evolved rapidly over history. Originally, only directly observed phenomena and the words of trusted, known, nearby friends could possibly be trusted. The advent of written language slowly carried this intimate nature of "trusted" information farther away. However, the difficulty of distributing written works meant that only authoritative sources, or sources powerful enough not to disagree with, were available to be read. Hence the decrease in intimacy did not immediately create a low-trust source of information. Only recently, meaning within the last few hundred years, did a particularly large volume of untrustworthy material arrive in the market available to the common rabble. Only much more recently has the quantity of accessible but untrustworthy information become so great that it seemingly outmatches that of accessible and trustworthy information. In short, though humanity has always been acclimated to low-knowledge situations, it has never, even and especially now, been allowed to acclimate to situations where human-originating knowledge needs to be screened. When a source one listens to seems to be right quite often, we almost treat it as a reputable friend. The Righteous Mind by Jonathan Haidt discusses the ways in which a person's mind changes most easily on political matters. This does not tend to happen in isolation, Haidt argues, but instead almost exclusively happens when a friendly but external speaker persuades us. This might now sound familiar. The incredibly regular and familiar statements found on the news, on Facebook, and on other social media and entertainment platforms frequently come across as friendly, or at least allied toward a common goal. As we are preprogrammed to be more willing than usual to trust any news or opinion conveyed in this manner, the social and friend-making aspect of human nature itself works against the desire for actual truth.
In short, humans at most points in history paired a social nature badly tuned for truth-finding with a low necessity for fact-checking incoming information. We have a high necessity now, and so we face an old aspect of ourselves which we never needed to fear before.
The image, by the way, is from the book The Righteous Mind and is a basic model of idea exchange in a friendly conversation. Replace person A with a disembodied but friendly news source and see how person B might still simulate the experience.
Nearly two thirds of Americans have experienced some form of data theft: a headline finding from a 2017 Pew Research survey [1]. The most common subcategory was fraudulent credit card charges, which 41% of Americans have experienced, while 35% have received notification of a personal information compromise.
Really, far more than 35% of Americans have probably had their personal information breached, especially passwords. There are now lists of leaked email-password pairs, collected from many separate breaches, containing billions of unique accounts [2]. Malicious actors are taking advantage of these readily available lists to threaten millions. One common technique is the "sextortion" scam [3], where a "hacker" sends victims their own email password, uses the shock of that knowledge to convince them that compromising footage has also been recorded from their webcam, and demands a ransom of hundreds or thousands of dollars for the nonexistent footage. My father received one of these scam emails, complete with his email password at the time.
The impact of unsafe cyber environments on our psychologies is hard to measure, but we can see that they are already damaging Americans' confidence in governing institutions. A figure from the same Pew survey as above [1]:
Since one of the main pillars of Lin's "Information Dystopia" is loss of trust in key institutions, there is reason to worry about breached information being so ubiquitous. Nevertheless, there is a simple way to stay safe from many of these threats: check and strengthen your password integrity often. haveibeenpwned.com lets you check your accounts and passwords against known leak lists, so you know which passwords you need to change. If you are willing to expend more energy and memory, you can change all of your passwords regularly.
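Such leak checks can be done without ever sending the password itself. Below is a minimal Python sketch of the k-anonymity scheme the Pwned Passwords service uses: only the first five characters of the password's SHA-1 hash would leave your machine, and the remaining suffix is matched locally against the returned candidates. The response body here is fabricated for offline demonstration; a real check would query `https://api.pwnedpasswords.com/range/<prefix>`.

```python
import hashlib

def hash_split(password):
    """SHA-1 the password and split it into the 5-char prefix that is
    sent to the API and the suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range(suffix, range_body):
    """Parse a 'SUFFIX:COUNT' response body from the range endpoint and
    return how many known breaches contained this password (0 if absent)."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Offline demonstration with a fabricated response body.
prefix, suffix = hash_split("password")
fake_body = "003D68EB55068C33ACE09247EE4C639306B:3\n" + suffix + ":3730471"
print(prefix, count_in_range(suffix, fake_body))
```

The privacy point is the design choice: the server only ever learns a 5-character hash prefix shared by hundreds of unrelated passwords, so it cannot tell which one you were checking.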
Just as governments and businesses will have to adapt to new cybersecurity threats constantly for the foreseeable future to avoid a collapse of usable cyber and information space, individual users will also have to play a role.
Works Cited: [1] https://www.pewresearch.org/internet/2017/01/26/1-americans-experiences-with-data-security/ [2] https://www.tomsguide.com/news/3-2-billion-passwords-leaked [3] https://www.youtube.com/watch?v=pHW1p6QNTtI&ab_channel=VICE
Is cyberwarfare a solution to nuclear Armageddon? It seems quite a perplexing prospect, but it may be a band-aid solution that is desperately needed in the coming years. Over the course of human events, war has been optimized to the point where the major deterrent is that everyone is so violent and warlike that the wrong move could drive the human species extinct. This is the worst-case scenario: nuclear Armageddon. While there are partial solutions through disarmament and anti-missile treaties, outright peace seems a bit naïve to simply grasp for right now. This is where the reality of cyberspace comes into play. Actors in this field can commit many malevolent acts, which the National Academy of Sciences lists: they "can steal money, intellectual property, or classified information; snoop on private conversations; impersonate law-abiding parties for their own purposes; harass or bully innocent people anonymously; damage important data; destroy or disrupt the operation of physical machinery controlled by computers; or deny the availability of normally accessible services." All of these can be devastating, and can accomplish the ambitions of malevolent powers and governments (ours included) all over the world. However, this is a new battlefield, a war zone that has not been optimized toward assured nuclear destruction. It is our worldwide buffer zone: when one power wishes to bully another, it has this option instead of proxy wars or nuclear warheads. It is a new battlefield of information, one where we have learned from the past. As Bytes, Bombs, and Spies discusses, the DoD has already openly signaled that any movement toward warheads will be judged against the MAD doctrine, using that shield to protect humanity from the worst outcomes. These other attacks are ways for powers to exert influence against each other in ambitious and warlike ways that do not cause massive, widespread devastation.
There is now an arena for the terrible acts that will be committed, one less dangerous than a world in which nuclear weapons are the only clear way to fight. It buys a certain level of safety in peace, where powers can drop "cyber bombs" instead of kinetic ones. In addition, many of these attacks against large-scale governance concern information, which can be added to the Great Project of human knowledge. This type of attack is devastating against tyrants, but not terrible for the average civilian. Knowledge and information, while they can manipulate the public and endanger democratic institutions, can, when the threats are understood, be used to build a larger picture of our reality. Cyber warfare is a threat, but it does not have the same overarching existential nature as nuclear Armageddon, and many civilians can be somewhat safe from it with internet education and knowledge of the threats found there, aided by reports like these. This threat may be the distraction that buys humanity time on the nuclear front, where superpowers can bully and act without literal extinction on the line.
The first introduction by Lin and Zegart initiates conversation around cyberwarfare by emphasizing its pervasiveness despite public ignorance on the topic. Cyber warfare is such a salient topic because it bears striking resemblance in scale to the nuclear weapons and space races of the twentieth century, but without any of the mass hysteria. One of the observations Lin and Zegart take from Michael Hayden, former NSA and CIA Director, is that the secrecy of cyber operations is clearly purposeful, but runs into the problem that policymakers are unable to allocate funds in such a manner as to ensure our supremacy in the field. For the Space Race, public appeal was broad, and the willingness to publicly display the government's spectacular feats in science seems obvious. The cyber operations of the twenty-first century are quite the opposite. In this vein, one strong policy, which from the readings seems already to be in place, is that Congress does not get complete, uncensored views into the projects of the Department of Defense. If it did, the national security risks would be immense. The National Research Council summary provides strong support for this policy going forward. Given that cybersecurity is a never-ending battle, a set percentage of the Pentagon's resources should be allocated to that arena in perpetuity. This, like nuclear weapons, comes back to game theory: if other nations will pursue it in every circumstance, so must we. Of course, if there were a way to broker a treaty that banned these methods of war, that could save the planet resources. On the other hand, the most salient issue with cyber operations in general is that the methods are highly covert and easily disguisable, so threats and attacks do not have clear and obvious perpetrators. This simple fact undermines the ability to make peace treaties over cyberwarfare.
Luckily, we are fortunate enough to live within a system that values and promotes cybersecurity in the private sector. Unlike climate change, which arises as an externality of civilization's use of fossil fuels, cybersecurity has practical value in private industry. Therefore, we have significant infrastructure and capital that has been and can be deployed to ensure the U.S. is safe and secure amid an evolution in warfare that could result in the complete disarmament of nuclear weapons, should that day ever come. For this reason, I believe cyber warfare to be a lesser threat than the imminent effects of climate change as they relate to the continued existence of human civilization.
From the readings this week, it is clear that in the United States, we must not only focus on cyber offense and cyber defense, but we also must make sure to educate the public on what is "fake news" and increase news literacy.
One of the more recent examples of a cyber-attack is the Russian interference in the 2016 United States Presidential Election. As the authors mention in the excerpt from "Bytes, Bombs, and Spies," cyber-enabled information operations have raised the profile of the relationship between cyberspace and national security. What we currently know about Russian interference in the 2016 election is that a group of individuals was able to hack the Hillary Clinton campaign and the Democratic National Committee, as well as spread various kinds of propaganda across multiple social media platforms. The goal of the group was to damage Clinton's image while boosting the chances of Trump winning.
In order to hack into the campaign, the agents sent fake Google security notifications to individuals working for Clinton's campaign. The linked page told staffers to change their password, and entering credentials there gave the Russian agents access to their accounts, which included sensitive information. These malicious e-mails also opened access to the Democratic National Committee's computer network, where the hackers installed malware and stole a large amount of sensitive information.
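Phishing pages like the fake Google notification typically live on lookalike domains. As a toy illustration (not any real mail filter's actual logic), a crude first-pass check might compare a link's hostname against an allowlist of trusted domains; the domain names below are purely illustrative:

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real filter would also consult the Public
# Suffix List, sender reputation, and visual-similarity heuristics.
TRUSTED_DOMAINS = {"google.com"}

def looks_like_phish(url, trusted=frozenset(TRUSTED_DOMAINS)):
    """Flag a link whose host is not a trusted domain or a subdomain
    of one. Crude heuristic: 'accounts.google.com.evil.tld' passes a
    naive substring check but fails this suffix check."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in trusted)

print(looks_like_phish("https://accounts.google.com/reset"))       # genuine subdomain
print(looks_like_phish("https://accounts.google.com.evil.tld/x"))  # lookalike host
```

The point of the suffix check is that attackers often embed the trusted brand at the *front* of a hostname they control; only the rightmost labels determine who actually owns the domain.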
The bigger question now is, what is the solution to interference within elections? How do we both defend ourselves from attackers, but also make an offensive move in order to prevent future attacks? As we learned from the readings, cyber offense is just as important as cyber defense when it comes to approaching hackers. A big issue that is addressed in “Bytes, Bombs, and Spies” is the escalation of these conflicts that can lead to bigger issues, such as nuclear war.
The first step is to convince the public that cybersecurity is important, and that interference in elections is a real threat. According to an NPR poll, only 1 in 6 Americans believe that foreign interference is a large threat to U.S. elections. Public opinion and pressure can impact the way our government functions and reacts. Therefore, we as a society need to push for more research and understanding within this realm.
Countries such as France, Finland, and Sweden have already taken steps to educate the public on what "fake news" is, as well as to promote critical reading and understanding of the news. On social media, platforms such as Facebook and Twitter use machine learning algorithms to identify fake news and detect fake accounts. By promoting news literacy, we can put pressure on the government to make concrete changes to tighten cybersecurity. In turn, the strengthening of an informed community can act as a buffer and a defense against cyber-attacks.
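To give a feel for how such detection works under the hood, here is a minimal sketch, with entirely made-up toy data, of the kind of bag-of-words Naive Bayes classifier that early text filters were built on; the production systems at Facebook and Twitter are of course far more sophisticated:

```python
from collections import Counter, defaultdict
from math import log

def train(docs):
    """docs: iterable of (label, text). Returns per-label word counts,
    label priors, and the shared vocabulary."""
    counts = defaultdict(Counter)
    priors = Counter()
    for label, text in docs:
        priors[label] += 1
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    """Pick the label maximizing the log posterior under a multinomial
    Naive Bayes model with Laplace (add-one) smoothing."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        n = sum(counts[label].values())
        score = log(priors[label] / total)
        for w in text.split():
            score += log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy, fabricated training data for illustration only.
docs = [("fake", "shocking secret they hide"),
        ("fake", "miracle cure doctors hate"),
        ("real", "senate passes budget bill"),
        ("real", "court rules on appeal")]
model = train(docs)
print(predict("shocking miracle secret", *model))  # classified as "fake"
```

Even this toy version captures the core idea: words statistically associated with one class shift the posterior toward that class, which is why sensationalist vocabulary is such a strong (if gameable) signal.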
Below is a picture from fbi.gov listing those wanted for Russian interference in the 2016 election.
Works Cited:
Abigail Abrams, Here’s What We Know So Far About Russia’s 2016 Meddling. https://time.com/5565991/russia-influence-2016-election/
The Effects of Public Opinion. https://courses.lumenlearning.com/atd-baycollege-americangovernment/chapter/the-effects-of-public-opinion/
Andrew Hirschfeld, How to Stop Russian Election Meddling. https://www.ozy.com/news-and-politics/the-invaluable-case-studies-in-fighting-foreign-election-interference/287338/
FBI.gov, Most Wanted: Russian Interference in 2016. https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections
In our exploration of humanity's bleak future, we have turned toward finding solutions to the threats we face, but with cybersecurity threats this approach is not as feasible. Threats such as nuclear annihilation and climate disaster have clear, however improbable, methods for mitigation or reversal, such as nuclear disarmament or enforcing climate regulations. Cybersecurity threats, on the other hand, offer no such distinct path toward reduction. As our readings for this week state, governments such as the United States are proliferating their offensive cyber capabilities in response to improving cyber defenses across the globe, in turn creating the need for still better defensive capabilities. As this cycle continues to turn, cyber capacities will only ever improve on both the offensive and defensive ends. Other threats to humanity call for reversing current action, or at least changing the way we do things in the status quo, but threats to our cyber infrastructure can only be diminished by moving forward with innovation. This could have interesting implications for how future policymakers deal with cybersecurity. Since there is no way to reverse cyber innovation, because upgrades in technology would not be readily abandoned, regulation of the technology and of those who have access to it will need to be well enforced to ensure the capabilities are not misused. The real difficulty regarding cyber threats is the current inability to regulate much of cyber technology. With cyber threats acting as a multiplier of nuclear and climate threats, the rapid pace of innovation and the coinciding difficulty of regulation make finding a way to minimize the danger a pressing priority.
With the cyber sphere's ability to weaponize information, the very foundations of our civilization are at risk through its indirect interactions with nuclear arsenals and the climate.
In Chapter 1 of Lin and Zegart's "Bytes, Bombs, and Spies," offensive cyber operations are defined as "the use of cyber capabilities for national security purposes intended to compromise the confidentiality, integrity, or availability of an adversary's information technology systems or networks; devices controlled by these systems or networks; or information resident in or passing through these systems or networks" (6). In other words, cyber attacks work to harm the confidentiality, integrity, and availability of information.
One such kind of "cyberattack" that has recently been in the news is deep fakes -- machine learning technology that manipulates and fabricates video and audio recordings, depicting people doing or saying things they never did in reality. These clips can appear incredibly authentic despite portraying complete falsehoods. Earlier this year, several TikToks depicting Tom Cruise went viral, garnering over 11 million views. The clips showed Cruise golfing, performing magic tricks, and even laughing off tripping and falling on his face. These TikToks were all deep fakes created by Chris Ume, a video effects specialist, and Miles Fisher, a Tom Cruise impersonator. The clips were lighthearted in nature and ultimately harmless. However, they drew attention to just how advanced this technology is becoming, and just how easy it is to create incredibly realistic but completely fake video footage of someone.
Although the previous example was relatively innocuous, this technology falling into the wrong hands can have incredibly negative effects, and its growing availability only increases this risk. Back in 2018, a Reddit post provided users with a tool that allowed them to realistically insert anyone's face into porn, which resulted in the circulation of nonconsensual deep fakes depicting ordinary people performing sexual acts. Visceral reactions to shocking video, paired with our present ability to instantly and widely share content online, create a dangerous context in which such technology can be abused.
Lin and Zegart emphasize the unique potential of offensive cyber operations to have disproportionately large effects relative to the "size" of the offensive operation: "small actions can create large consequences" (9). On the one hand, we could view this as efficient, in that "adversaries may perceive different forms of retaliation that do equal damage as differently punishing and differently escalatory. In particular, kinetic damage may be perceived as 'more serious' than comparable damage caused by a cyberattack, thus reducing the likelihood and value of kinetic retaliation for deterring and responding to cyberattacks" (9). However, this stance worries me: I believe the more likely outcome is a consequence (and thus retaliation) much larger than expected, because executing a powerful but contained, targeted cyber attack on a foreign society requires a level of cultural understanding the attacker rarely has. Unlike the physical damage one can predict and design a bomb to have, the manipulation of a foreign society's perception of truth can provoke unintended violence in sensitive and emotionally heated contexts that may be culturally specific.
According to the Q3 report from a cybersecurity firm called RiskBased Security, data breaches exposed 36 billion records in the first three quarters of 2020 (RiskBased). Of reported breaches, 86% were financially motivated, and 45% of break-ins resulted from hacking (Varonis). This makes me wonder: how secure are the documents we keep online? Are all of the documents on my computer open to being breached and stolen? When you click that "Allow Cookies" button, what or whom are you really allowing to access your computer?
Like many people, I have had some run-ins with cybersecurity issues. For example, I once called VISA to check the balance on a VISA gift card I had received a few months prior. Upon picking up the phone, the respondent asked if I would participate in a study and said I would be compensated with an extra $25 on my gift card for doing so. A few moments after being questioned about the gift card number and other information, I was left on hold for a long while before the person on the phone alerted me that the card had just been used online and the remaining balance had been drained. Although this was a minor issue, as there was not much money on the card, it made me think... did I just get scammed? Although this is such a small example of fraud, I wanted to highlight how common cybersecurity issues are today.
Reading the first article of this week's readings, a line struck me among the threats of the past few years: "According to the Guardian's reporting, offensive cyber capabilities can be used broadly to advance 'U.S. national objectives around the world'" (Bytes, Bombs, and Spies). I find this statement striking: what does it mean to advance "U.S. national objectives around the world"? What kind of objectives are we talking about? This is interesting to discuss because, as many of you know, Edward Snowden leaked these classified materials and as a result is now on the FBI's most wanted list. So what exact information is the US constantly collecting to advance its national objectives around the world? That is something many people most likely do not want to find out.
I feel that with the advancement of cybersecurity and cyber warfare, this will be an engaging subject for years to come, and I am interested to see what policy is implemented in the United States. Many people and firms will need to advance their cyber offense and defense with the development of new technology to make sure that their clients' files and information, as well as their own, are secure.
The readings for this week explained what cybersecurity is and outlined some cyber threats and risks. I realized that many types of cyber risk exist today. The three main levels of threat are: 1. cybercrime, meaning crimes committed by individuals or gangs targeting certain systems for economic gain or to cause damage; 2. cyberattacks, which are often politically motivated and may involve information collection; and 3. cyber terrorist attacks, which are intended to destroy electronic systems and create panic or fear. People may wonder how a bad actor gains control of a computer system. According to what I have learned from the readings and the internet, there are many ways. For example, one of the most common cyber threats is malicious software: software created by cybercriminals or hackers to interrupt or damage the computers of legitimate users, including viruses, Trojan horses, spyware, and ransomware.
The rapid development of the internet raises the question of how people and companies can prevent potential cyber threats. I found several practices that may reduce the risk of being attacked: update software and operating systems, use anti-virus software, use strong passwords, do not open email attachments from unknown senders, and avoid using unsecured Wi-Fi networks in public places.
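On the "strong passwords" point, modern languages ship cryptographically secure random generators that make this easy. A minimal Python sketch (the alphabet and length choices here are illustrative, not a standard):

```python
import secrets
import string

def random_password(length=16):
    """Build a password from a cryptographically secure RNG. The
    `secrets` module is designed for security-sensitive randomness,
    unlike the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

A generator like this pairs naturally with a password manager, since the whole point of random passwords is that no human is expected to memorize them.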
In order to maintain a secure national cyber environment, governments around the world have expanded their cybersecurity expenditures and intensively introduced relevant policies to help the cybersecurity industry develop. These policies mainly focus on ensuring that their countries maintain a leading position in fields of technological innovation such as 5G and artificial intelligence. Since I am from China, I am aware of some of the policies made by the Chinese government. At the start of 2016, cybersecurity was officially included in the key construction goals of the "13th Five-Year Plan," ranking sixth among the government's 100 major construction projects for the following five years. The plan aims to improve network security laws and regulations; promote the promulgation of the cybersecurity law, the encryption law, and the personal information protection law; and study and formulate regulations on the protection of minors on the internet. The "Network Security Law" was passed by the Standing Committee of the National People's Congress on November 7; it is the "basic law" of China's cybersecurity and the core of its cybersecurity legal system.
In Herbert Lin's article, "The existential threat from cyber-enabled information warfare," he notes that it would be interesting to see how the current state of social media would have affected events of the past. To demonstrate this idea, Lin gives the example of the Cuban Missile Crisis. During the crisis, the world was nowhere near as interconnected as it is today: information that was then protected, and might have taken days to reach world leaders, would now be posted on Twitter within an hour for everyone to see. Given the current global information ecosystem, many experts note that it is a distinct possibility that something like the Cuban Missile Crisis would have been intensified, possibly to the extent of an all-out nuclear exchange.
Given how impactful experts believe information can be in our digital world, the idea of "fake news" or misinformation being posted online is extremely important. Since social media's rise in popularity, false information has become a fixture of these platforms, which poses a large problem for the integrity of information on them as well as for global security. Firstly, it has been shown that false rumors tend to spread faster and wider than true information: researchers from MIT found that falsehoods were 70% more likely to be retweeted than facts and "reach their first 1,500 people six times faster." Moreover, this effect is exacerbated with political news as opposed to other categories. Another aspect of social media and fake news that poses a threat to global security is the fact that some misinformation is spread by politicians. One of the most interesting findings here is from researchers at MIT Sloan: some people appreciate candidates who tell lies (as odd as it sounds), even seeing such a candidate as more "authentic." This was seen during Trump's presidency, and it does not take much imagination to see how a controversial, fake news post by a powerful politician could strain global relations. Perhaps the most frightening use of fake news over social media is when foreign countries use these platforms to influence elections through falsehoods. A prime example of this was the Russian government during the 2016 election, when it used Facebook, Instagram, and Twitter to spread false information.
Based on all the aforementioned information, it is quite clear to see how fake news truly does pose an existential threat: all it takes is one detrimental fake news post from the social media community, manipulative governments, politicians seeking votes etc., and, due to the speedy dissemination of fake news on social media platforms, it could become extremely popular and strain global relations. This strain could have vast socioeconomic and political consequences, and, on the extreme end, the all-out nuclear exchange mentioned by the experts in Lin’s article could become a reality.
Image Sources: https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ https://www.statista.com/statistics/620130/online-news-sources-trustworthiness/
Other Sources: https://mitsloan.mit.edu/ideas-made-to-matter/mit-sloan-research-about-social-media-misinformation-and-elections https://www.nytimes.com/2018/02/16/us/politics/russians-indicted-mueller-election-interference.html
In The existential threat from cyber-enabled information warfare, Herbert Lin discusses the corruption of the information ecosystem. This part of the paper was interesting because it brought to light how big a problem cyber-enabled information warfare can become. Without an information infrastructure carrying trustworthy information, chaos would likely ensue. We are now more vulnerable than ever to cyber warfare, as the sheer volume of accessible information has created avenues for false information to take hold in our society. The changing environment in which warfare operates has created incredible uncertainty about how future conflicts may take place. Building off of my question for this week regarding cyber warfare versus conventional warfare, I don't think cyberwarfare will replace conventional warfare; rather, the two will work side by side to inflict complementary damage.
So why is cyber warfare becoming more common? Strategic information warfare is appealing because it has a low entry cost: developing these "weapons" is cheap in comparison to conventional weapons. Additionally, information warfare blurs boundaries and increases vulnerability, since where you are in the world has virtually no bearing on whether you can be attacked. This is arguably one of the most terrifying aspects of cyber warfare. As it grows more prevalent in our society, everyone becomes more susceptible to attacks. Geography is irrelevant, which makes attributing attacks that much harder. Fear of attack is, in my opinion, the main driver of continued advancement, as nations want information superiority. This should only exacerbate the existential threat, as these "weapons" will only increase in usage, advancement, and deployment. Returning to the blurred boundaries, what is unsettling is that in most cases we won't know where an attack is coming from. It could be from domestic or foreign sources, which blurs the line between crime and warfare. How can we differentiate between the two and respond accordingly? Right now, our uncertain response to information warfare is unsettling and needs to improve. As we move forward, there needs to be consensus on information infrastructure, strong leadership, security strategies, and military strategies so that we are adequately prepared for attacks and the existential part of this threat is eliminated. To differentiate between crime and warfare, cyber defense, and our society's involvement in it, needs to continue to grow. If more resources are allocated to cyber defense, we will be better equipped to first determine whether a cyber-attack is a domestic or foreign threat and then act accordingly.
References:
Herbert Lin (2019) "The existential threat from cyber-enabled information warfare," Bulletin of the Atomic Scientists, 75:4, 187-196. Herbert Lin and Jaclyn Kerr (2019) "On Cyber-Enabled Information Warfare and Influence Operations," Oxford Handbook of Cybersecurity. https://www.rand.org/pubs/monograph_reports/MR661.html
Efficiency is not the answer, we must give in to the metastable security of our monopolistic overlords!
The United States retains near-monopolistic control over the general access of individuals and states to the global economy, through its navy and its influence over international bodies and allies. This gives the US the ability to isolate so-called "rogue states" from the global economy. This "monopoly" is meant to ensure security to society, and thus long-term prosperity; however, actors in an economy naturally tend toward the maximally efficient opportunity, whereas the monopoly exists only as a metastability with regard to economic efficiency.
Now it is time to put the terms "metastability" and "maximal efficiency" in the context of our world. Maximal efficiency is anonymity, and the internet offers anonymity. Metastability is enforced by a monopoly, i.e. the US government, which retains a large amount of control over the system that enables "anonymity," i.e. the internet, and can thus attempt to prevent anonymity in order to ensure stability.
However, when a near-monopoly does not have complete control over its intended sector, this metastability is not absolutely pervasive throughout that sector, and isolated pockets can emerge which seek the maximally efficient alternative. In the aforementioned description of our modern society, this alternative allows these isolated pockets to penetrate the sector despite the guise of monopolistic control. For example, “rogue states” (as defined by the US) such as Iran and North Korea are largely isolated from most aspects of global trade due either to direct US sanctions or to sanctions from bodies influenced by the US. As a result, Iran and North Korea are incentivized to invest in cryptocurrencies such as bitcoin, hacking software, and other internet-based assets that allow for anonymity. The US seeks to prevent anonymity because anonymity IS the destruction of its control. If people can freely conduct “illicit activities” without the knowledge of the US government, then its control over society is lost, and with it society’s intended stability in the hands of the US government. The advent of the internet has thus offered these “rogue states” a great path to delve into the wonders of maximal efficiency, at the cost of those within the control of the monopoly.
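To make concrete why cryptocurrency offers this kind of anonymity (more precisely, pseudonymity), here is a minimal Python sketch. The derivation is simplified for illustration: real Bitcoin addresses use SHA-256 followed by RIPEMD-160 and a checksum encoding, and `toy_address` is my own invented name. The core point still holds: an address is just a hash of a key, with no registry linking it to a real-world identity.

```python
import hashlib
import secrets

def toy_address(public_key: bytes) -> str:
    """Derive a short identifier from a public key by hashing it.

    Simplified sketch: real Bitcoin addresses apply SHA-256 then
    RIPEMD-160 plus a checksum encoding; plain SHA-256 keeps this
    dependency-free while preserving the idea.
    """
    return hashlib.sha256(public_key).hexdigest()[:40]

# A fresh keypair can be generated for every transaction; nothing in the
# resulting address links back to a real-world identity.
key = secrets.token_bytes(33)   # stand-in for a compressed public key
addr = toy_address(key)

print(addr)                     # 40 hex characters, no name attached
print(toy_address(key) == addr) # deterministic: same key, same address
```

Anyone can verify that a payment went to `addr`, but connecting `addr` to a person requires outside information (exchange records, network surveillance), which is exactly the visibility a sanctioned state avoids.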
The investments of Iran and North Korea in bitcoin, as well as bitcoin-hacking software, have proved incredibly lucrative. Not only is the incentive to invest in these areas weaker for those inside the metastable society offered by the monopoly; a second burden arises specifically in the context of the US. An ideal, maximally efficient US system of research on hacking offense and defense (as described in Lin and Zegart’s “Bytes, Bombs, and Spies”) would have researchers collaborate fluidly with the federal government. The authors describe the state of nuclear research information access during the Cold War, when university researchers and think tanks were incredibly valuable to the US nuclear effort. However, a monopolistic metastability is formed when the US government severely restricts access to information on US cyber projects. The monopolistic control here is access to information on these projects; its purpose is to ensure leaked information doesn’t reach state enemies. Maximal efficiency of this system would have researchers freely interacting with the US government on all relevant information, but the metastability is created to serve the aforementioned purpose. As a result, the US cyber effort is further hampered by the same phenomenon described throughout this memo.
The internet created an incentive system which has pushed “rogue states” to invest in cyber infrastructure, particularly offense, as well as in cryptocurrencies, due to the anonymity they provide; these investments have proved incredibly lucrative. The US government’s self-declared position as overseer of control and security within many aspects of global society and economy has created barriers that incentivize anonymity, and thus cyber defense and pushback against cryptocurrencies pervade US investments and future potential in these sectors.
Doxxing
This week’s readings made me contemplate doxing/doxxing, and I’m curious what our guest speaker Herbert Lin thinks about it.
For those who don’t know, doxing/doxxing is the act of publishing or revealing private information about a person or organization. It’s carried out for varying reasons, including hacktivism, online shaming, online vigilantism, extortion, aiding law enforcement, etc. While the phrase is thrown around by internet users both as a threat (“I will dox you”) and as a call to action (“Dox them!”, which I frequently saw people commenting on videos of US Capitol protestors), the act of doxxing someone, or the state of being doxxed oneself, is no small thing. And as I was reflecting on cyberspace, cyber security, and the threats of the internet age, doxxing assuredly came to mind. In my post I’m going to go through some brief history of it and some examples.
Originally, doxxing comes from the idea that an opposing group has “docs” (documents) on another that are revealing. In the early ’90s, “dropping dox” on someone was a form of personal revenge. It’s important to note that doxxing isn’t merely revealing someone’s public profiles, such as linking a Neo-Nazi to their Facebook or Twitter. Doxxing can involve hacking into private files to retrieve physical addresses, email addresses, family members’ profiles, schools, phone numbers, etc. As we can imagine, the consequences of misidentification by a doxxer can be devastating for the doxxed. After the Boston Marathon bombing, Reddit users incorrectly identified several suspects, one of whom was Sunil Tripathi, a student who had gone missing weeks earlier and was later found dead; Reddit’s general manager criticized the “online witch hunts” that had taken place. This is clearly a case where most people will conclude that doxxing was incredibly detrimental to the investigative process and extremely harmful to those who were misidentified. However, there are also forms of doxxing that are more widely accepted, praised, or supported, such as the doxxing of Neo-Nazis. At a 2017 counter-protest in San Francisco, protestors sang “Dox a Nazi all day, every day.” Anonymous, the group whose actions popularized the term “dox” itself, released thousands of identities of Ku Klux Klan members in 2014. When I interned in Hong Kong in 2019, there was a massive tech war between protestors and police during the HK protests. Doxxing of both protestors and police was rampant; the doxxing of police escalated when officers removed the badge numbers from their uniforms. There are proponents and opponents of doxxing, whose allegiances are largely circumstantial and depend on who exactly is being “doxxed”: a Neo-Nazi garners significantly less sympathy than a misidentified teenager, for example.
There are also staunch opponents of doxxing regardless of circumstance, who emphasize the criminality of the act, the unfair consequences, and the jeopardization of privacy, information, and security. Others see in doxxing something more democratic than the actual justice system, a method through which justice can be enacted where it hasn’t been (this makes me think of things like the MeToo movement). An SF counter-protestor against Neo-Nazis in 2017 said, “Are you really doxxing them if they are marching on a public street, face revealed, and apparently proud? It’s not as though they are hiding their identities.” These are some of the arguments at play regarding the issue.
Last year I took a signature course called Ethics of the Digital Age, where we addressed online shaming. There’s a lot to discuss with doxxing and cyber security. It’s an interesting modern phenomenon with huge consequences.
Sources: https://www.nytimes.com/2017/08/30/technology/doxxing-protests.html https://www.theguardian.com/world/2019/sep/20/hong-kong-protests-tech-war-opens-up-with-doxxing-of-protesters-and-police https://www.bloomberg.com/news/articles/2020-07-30/where-doxxing-came-from-and-why-it-keeps-popping-up-quicktake
Information chaos is an apt descriptor of the situation regarding information today. We’ve seen some steps to regulate the spread of false information following the 2020 election and Trump’s banning from Twitter. In past weeks, we’ve talked about how partially and blatantly false information is very common, despite the danger it poses. From climate denialism to refusing to admit Covid is real, it is easy to see how false information is easily grasped and defended.
It’s harder to grasp how cyber threats are a possible existential crisis on the scale of climate change and nuclear annihilation, as both of those have much more physical repercussions. However, the massive amount of information in cyberspace makes it a target. Herbert Lin provides examples of past cyber-attacks to convey the reality of the threat, before going into how the US Department of Defense has taken action to be prepared in case of a cyber-attack, as well as being prepared to take the offensive.
But how is one to know what is real and what is fake in today’s world? With social media as a fast lane into people’s opinions, and a Google algorithm that shows the searcher not the most reliable hits but the hits that correlate with previous searches, distinguishing real news from fake news is difficult. Beyond the massive amount of information available, there is also bias to account for, especially in the news. In an ideal world, bias might only influence the wording used to communicate information, but alas, this world is not perfect: biased writers can communicate outright false information. In the US at least, the First Amendment protects freedom of speech and freedom of the press. While this is very important, one must wonder how to protect these while also protecting the integrity of information.
The National Academies Press article describes threats to cybersecurity to consist of three main items: people with evil intentions in cyberspace, the overwhelming reliance on IT for society to function, and unavoidable vulnerabilities in IT systems that can be exploited. Solving these issues is already complicated enough, but the Covid-19 pandemic brought to light yet another vulnerability related to an overwhelming reliance on IT that could affect cybersecurity and the policies enacted by countries worldwide to confront the technology revolution.
2020 saw a global shortage of the semiconductors used in nearly every technology we have become accustomed to today. More concerning to policymakers, these chips are vital components of equipment like the American F-35 fighter jet and crucial elements of any network that supports societal function, from healthcare to finance. The shortage arose because semiconductor foundries (highly concentrated in Asia) and their corresponding supply chains were majorly disrupted by Covid-19. What became shockingly clear to the world was how reliant western countries are on China, Malaysia, Japan, Taiwan, South Korea, and Singapore for the semiconductors crucial to protecting national interests. In fact, in 2019, these countries accounted for nearly 70% of global semiconductor exports, with China alone accounting for 35%. This had largely flown under the radar for western policymakers given the prominence of US technology companies like Apple, Google, Facebook, and Amazon, but over the past decades these companies had saved billions by outsourcing semiconductor production to Asia while investing near zero in domestic semiconductor production. Further complicating the issue, in 2020 China began coordinating with private companies to stockpile semiconductors. That year, Chinese imports of semiconductors reached $380 billion, representing 20% of the country’s total imports. This has left numerous US companies, from car manufacturers like GM and Ford to tech companies like Apple, struggling to match output to consumer demand. Some have described these actions as a semiconductor arms race. With so much societal reliance on semiconductors, countries face a bad tradeoff between self-sufficiency and globalization.
This issue has lately been recognized by the US government, especially given ongoing tensions with China and its attempt to become the global technology leader. The Department of Defense has taken steps to combat this national security risk. For one, it provided incentives and reached an agreement with Taiwan Semiconductor Manufacturing Company (TSMC), one of the world’s largest semiconductor foundries, to build a $12 billion US chip plant in Arizona. Still, this only raises the question of how the world can move forward with an ever-increasing reliance on semiconductors in almost every area of life. Is the solution for each country to be self-sufficient in semiconductors? If so, are we willing to sacrifice progress by stifling global cooperation and knowledge exchange? One thing is clear: the solution cannot involve reducing reliance on IT and semiconductors. Policymakers will be challenged to find solutions that prevent a 2020-like shortage while allowing continued global progress in semiconductors and cyberspace.
https://www.marketwatch.com/story/shortage-in-chips-puts-u-s-national-security-at-risk-11612373012
This week’s readings made me think about the limits of encryption and the physical infrastructure countries need to be cybersecure. The National Academies Press reading states that the problems of cybersecurity will not have a decisive solution within the foreseeable future. The picture this paints for me is an ever-increasing arsenal of defensive and offensive cybersecurity tools being built up, with countries making differential progress. The question this leads me to is: what are the limitations of encryption and other major cyber defense strategies? Will there always be an even more impressive cyber threat that can overcome the latest encryption? This is a highly technical question, one far beyond my personal understanding of cybersecurity, but to me the general notion of an adversary’s cyber-attack competencies outpacing my own country’s defensive capabilities is quite troublesome. This leads me to my solution, which is to ensure that certain critical internet infrastructure, including technology that controls power grids, nuclear arsenals, and other essential state information, is physically inaccessible to foreign adversaries. My basic assertion is that physical infrastructure allows cyberspace to exist and physical interactions occur to facilitate cyber interactions, so it is necessary to build systems that intentionally seclude the cyberinfrastructure for essential government functions and ensure that the physical mechanisms by which someone could gain control of them are completely inaccessible to foreign entities. There have already been stories of countries like [Russia hacking the U.S. power grid, including nuclear power plants](https://www.nytimes.com/2020/10/23/us/politics/energetic-bear-russian-hackers.html) ([although the U.S. has certainly responded in kind](https://www.nytimes.com/2019/06/15/us/politics/trump-cyber-russia-grid.html)).
There was even a story recently about [Russia successfully testing unplugging its internet](https://www.npr.org/2018/03/23/596044821/russia-hacked-u-s-power-grid-so-what-will-the-trump-administration-do-about-it). The reason this is such an effective solution is that it protects against further advancements in cyber-attack technologies, because it isn’t dependent on potentially vulnerable cybersecurity systems. To put it in simpler terms, a hacker in Russia should not even have the option of hacking into United States nuclear power plant facilities, and America needs to invest in making that impossible. These resources are far too valuable, and too potentially calamity-inducing, to leave physically accessible to adversaries at home and abroad. [Cut the cord](https://images.idgesg.net/images/article/2018/09/gettyimages-138072901-100771855-large.jpg) and build a secure one.
In this memo, I will argue that privacy is no longer a reality people can expect or protect in our day and age, because we are engulfed by data collection and surveillance. Companies and countries alike hold significant amounts of data about individuals thanks to the information technology and resources they have developed in recent decades. Businesses are highly dependent on information about their customers: they collect it to the best of their capabilities through algorithms and other data analysis to create advertisements for targeted markets, improve products and services, and ultimately increase profits. The moment a user searches something on the Internet or uses a social media app, their usage and information are tracked, stored, and put to use. Thus, I contend that it is not possible to maintain digital privacy in this era of cyberspace, as all information will be tracked regardless of the user’s consent for it to be saved and used for company growth strategy. While privacy may be at stake, it is worth noting that there may be some positive outcomes from the tradeoff: companies are able to improve their products and services and narrow their target markets, and it benefits customers to have products that work better and run more efficiently. At the same time, improving technology through information collection about users has another side effect: it increases the sheer volume and speed at which information can spread. One good example is that search engines rank results based on the popularity of the result and the “inferred desire of the user for specific information” over the actual importance of the results.
The consequence of this is that it becomes significantly easier for people to research things that are reinforced by the search platform itself, which engulfs the user in confirmation bias and enables them to selectively find information they already believe: an echo chamber of sorts. Ultimately, in the era of cyberspace it is nearly impossible to maintain digital privacy for the aforementioned reasons, and while there are some benefits such as product improvement, there is also the consequence of confirmation bias.
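The echo-chamber dynamic can be sketched in a few lines of Python. This is a toy model, not any real engine’s formula: the result names, topics, and weights are all invented, and the only point is that adding a “past engagement” term to the score makes two users with different histories see different top results for the same query.

```python
# Toy "personalized" ranker: score = global popularity + bonus for overlap
# with topics the user has already engaged with (the reinforcement signal).

def rank(results, user_history, history_weight=2.0):
    def score(result):
        title, popularity, topics = result
        overlap = len(topics & user_history)   # how much this confirms prior clicks
        return popularity + history_weight * overlap
    return sorted(results, key=score, reverse=True)

results = [
    ("Peer-reviewed climate data", 5, {"science"}),
    ("Climate hoax exposed!",      4, {"denial", "conspiracy"}),
]

skeptic = {"denial", "conspiracy"}   # invented user histories
student = {"science"}

print(rank(results, skeptic)[0][0])  # the hoax piece wins for the skeptic
print(rank(results, student)[0][0])  # the data piece wins for the student
```

Even though the data article is more “popular” globally, the skeptic’s history bonus (4 + 2.0 × 2 = 8 vs. 5) pushes the denial piece to the top, which is the confirmation-bias loop described above in miniature.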
The threats to international security posed by offensive cyber capabilities, cyber-enabled information warfare, and related risks are abundant, given the “newness” of the technologies and their capacity to disrupt traditional warfighting between state actors. But one could argue this risk isn’t necessarily “existential” to humanity unless taken one step further.
As described by Herbert Lin in the Bulletin piece, cyber-enabled information warfare is an existential threat alongside such risks as nuclear conflict and climate change as its use poses “the realistic possibility of a global information dystopia, in which the pillars of modern democratic self-government - logic, truth, and reality - are shattered, and anti-Enlightenment values undermine civilization around the world.” Such a threat appears to fulfill the latter half of Nick Bostrom’s oft-cited definition of existential risk (“one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”) as a “global information dystopia” could theoretically permanently curtail human flourishing under certain circumstances. This threat of a “global information dystopia” could presumably intersect with a similar threat proposed by Bostrom of a “repressive totalitarian global regime” (ibid) that is common in dystopian science fiction where a group uses a technology-enabled, repressive state apparatus to control humanity or otherwise limit its ultimate potential.
In addition to this threat of a “global information dystopia” is the threat of cyber-enabled information warfare contributing to pre-existing existential threats including nuclear conflict. For example, Herbert Lin and Amy Zegart point out in the introduction to "Bytes, Bombs, and Spies" that “in cyberspace, instruments used to gather intelligence and inflict damage are difficult to distinguish. Because the same techniques are usually used to gain access to an adversary’s systems and networks for intelligence gathering and for causing harm, an adversary that detects a penetration cannot be certain of the penetrator’s intent and therefore may misperceive an attempted intelligence operation as an attack.” This threat has serious implications for crises and conflict between nuclear-armed states. For example, a growing body of literature explores the threat non-nuclear weapons (including cyber capabilities) pose to nuclear weapons and their associated command, control, communications, and information (C3I) systems (for more on this risk, see "Entanglement: Chinese and Russian Perspectives on Non-nuclear Weapons and Nuclear Risks" here). Efforts to surveil adversary nuclear C3I systems using cyber capabilities could be perceived as efforts to undermine their effectiveness, a risk that could quickly escalate a crisis from sub-conventional all the way to the nuclear threshold.
Taken together, these two risks (permanent global information dystopia and entanglement) are both plausible existential risks related to cyber capabilities and policymakers and scholars alike should take steps to better understand and mitigate such risks moving forward.
Below: a graphic from a report I co-authored showing risks associated with entanglement and situational awareness technologies (such as cyber capabilities). (For more, see On the Radar)
The internet, as we all know, can be a wild place. For however many good things it is capable of achieving, there are equally many ways for its power to be abused by trolls and hackers. This idea extends to the national level where, as we read in the articles, governments across the world have been taking an increasing interest in everything cyber-related. While I believe it is a good thing that America strives to be at the forefront of the cyber arms race, I find the discussion of this technology interesting because cyber-enabled misinformation was originally a concern for its potential to magnify the two long-acknowledged existential threats to humanity, climate change and nuclear warfare, but with the rapid development of technology, cyber-enabled warfare can now be considered an existential threat of its own.
It’s difficult to articulate what kind of existential threats this new technology poses, as it is still developing. With so much uncertainty about the future of AI and other cyber-technologies, it is hard to internalize and understand the risks at hand. Because of this, I would argue that the potential effect of cyber-enabled misinformation on other, more present existential threats is far greater and more realistic. It is already too easy to spread misinformation on the internet, and carefully placed misinformation can be extremely detrimental to society, especially when informing the public is one of our main strategies for combating existential matters such as global climate change and nuclear weapons. At the same time, I agree that America should develop technology in anticipation of the future of cyber-warfare. As mentioned in Herbert Lin’s and Amy Zegart’s article, cyber-warfare is an increasing area of concern, with a series of attacks already having occurred in the last few years. I only expect the frequency and intensity of these attacks to increase as hackers continue to improve their technology as well.
#novel #salience #framing
For this week I reread Ray Bradbury’s novel “Fahrenheit 451”. In this book we follow protagonist Guy Montag, a fireman working in some future dystopian American city. Only in this reality, firemen start fires, specifically tasked with burning any and all books. As we follow Guy throughout the book we see him falter, secretly stashing books away that he was meant to burn, dissatisfied with his life. Eventually this leads him to a breaking point, where he starts reading these books, after which he has a confrontation with his boss, and is eventually forced to flee society altogether, ultimately finding refuge with a group of “intellectual” runaways like him.
I think it’s very telling of what a boomer old man Ray Bradbury is that he frames the advancement of technology and the coming of the information age as some unadulterated evil force that will supplant knowledge (books) in favor of easy-to-consume, meaningless content (in the book, entertainment takes the form of walls of televisions playing 24/7 and headphones people wear all day long, through which they listen to radio broadcasts). While I think there is some merit to the idea that the sheer volume and speed at which information flows today can be difficult to parse, particularly with so much false information spreading across the internet, I think the core ideas of this book are overblown and dangerous. In particular, I think the idea that one can shake off the shackles of a literal lifetime of propaganda in a matter of days just by picking up and reading some random book is super dangerous. As some of the articles we read for this week discuss, the reason disinformation and fake news campaigns are so successful is that, when done “well,” they are unnoticeable and hard to debunk.
I also want to mention that Ray Bradbury’s in-universe explanation of why the government requires all books to be burnt essentially boils down to “minorities were too sensitive” and didn’t want to be “criticized,” and that in interviews since the book’s release he has doubled down on these ideas: “[Fahrenheit 451] works even better because we have political correctness now. Political correctness is the real enemy these days. The black groups want to control our thinking and you can't say certain things. The homosexual groups don’t want you to criticize them. It's thought control and freedom of speech control” (Source, page 104). I think this framing of censorship in terms of “PC culture” is a deliberate smokescreen to cover up the harm that ideas like this can breed, especially today with the prevalence of conservative and alt-right grifters crying out constantly about “cancel culture.” It is not lost on me how stories like this are so often used to criticize minorities and marginalized groups, à la “PC culture,” rather than the power structures which actually enable censorship and the dissemination of false and misleading information. As always, we have to ask who benefits from the reframing of these issues. In this case (and let’s be honest, in most cases) it is those who hold power: mainly old white conservative men.
-- Ray Bradbury, I assume (The original image is a still from the 1966 film adaptation of "Fahrenheit 451", with text added by me)
Beware of how many people you share your information with. It's probably more than you think.
In reading the cyber security readings, it became clear that many governments and companies have a long way to go in finding methods of truly keeping themselves and the information they store safe. There is a lot of controversy and conflict surrounding the topic because it is such a new threat, one whose exact depth is difficult to gauge. Right now there is a significant lack of established ways to deal with these threats, so governments and companies are taking matters into their own hands, using their resources to find solutions that save themselves. In many cases these have not been successful, partly because of the difficulty of evaluating the threat, but also because many of them rely on third-party software or companies.
Now you may be asking yourself: what is a third party, and what does it mean for my information? A third party is a company or piece of software that a company uses to supplement something it is missing or to reduce costs. It can serve many purposes and oftentimes has deeper access to a company's important information than many are aware of. Hackers have realized that although companies and governments work very hard to protect themselves with numerous resources, many of these third parties do not. Cyber security services can be very expensive and still be ineffective, putting them out of reach for many. This has often given hackers a direct access point to the private information of a company and its clients, or a government and its people.
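A minimal Python sketch (all names invented) of why third-party access widens the attack surface: a vendor credential scoped to everything a retailer holds leaks far more, if the vendor is breached, than one scoped to the single task the vendor actually performs. This is the "least privilege" idea, not any specific company's real setup.

```python
# Hypothetical retailer data and a tiny access check. An attacker who
# compromises the vendor inherits whatever the vendor's token can read.

RECORDS = {
    "payment_cards": ["4111-****-0001", "4111-****-0002"],
    "inventory":     ["sku-100", "sku-200"],
}

def read(token_scopes, table):
    """Return a table's rows only if the token is scoped for that table."""
    if table not in token_scopes:
        raise PermissionError(f"token not scoped for {table!r}")
    return RECORDS[table]

broad_vendor_token  = {"payment_cards", "inventory"}   # convenient, dangerous
narrow_vendor_token = {"inventory"}                    # least privilege

print(read(broad_vendor_token, "payment_cards"))       # card numbers exposed
try:
    read(narrow_vendor_token, "payment_cards")
except PermissionError as e:
    print(e)                                           # breach is contained
```

The design point: the retailer's own defenses never come into play here; what matters is how much the vendor's credential was allowed to see in the first place.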
Recently, the well-known retailer Saks Fifth Avenue was hacked via one of the third-party companies that had access to its clients' information. This was a massive scandal, as over five million credit and debit card numbers were obtained by the hackers. The problem was that the company first had to be made aware of the breach and then gauge its depth; oftentimes it is not initially clear how much information the hackers have. The problem with information is that once it is in the hands of the hackers, there is little that can really be done. They can say they will give it back for a ransom, but how will you ever know if they truly delete all the files? In this case, to apologize to clients, Saks gave free personal-information security services to those impacted. This is only a bandaid on a much larger problem. The hacker group that attacked Saks, known as Fin7 or JokerStash, is known to have attacked other retailers in the past. This is a recurring problem with no easy solution. It will continue to happen, and at greater levels, which reinforces the importance of better cyber security for everyone in the future.
Source [image]: https://www.idagent.com/blog/how-dangerous-is-a-third-party-data-breach-in-2020/
Source: https://www.cybergrx.com/resources/research-and-insights/blog/top-11-third-party-breaches-of-2018-so-far-data-breach-report
Source: https://www.nytimes.com/2018/04/01/technology/saks-lord-taylor-credit-cards.html
One of the most salient uses of cyber warfare recently has been the war over the information disseminated to citizens of democratic nations. This issue was brought to the forefront of the American consciousness by the 2016 election, when it was found that Russia had, to some extent, meddled with the information available to the American public by posting misinformation on various social media platforms. The growth of the anti-vaxxer stance is another example of the growth of misinformation. How can democratic countries combat misinformation on platforms? The immediate answer would be to place more stringent barriers on what can and can't be posted; in other words, to place infrastructure around what the American public can and can't see. Obviously, this is a system that can be abused: if a sitting president controlled this body, state-sponsored propaganda could essentially win any vote or election. So this solution seems extremely tricky to carry out effectively. Furthermore, the fewer people you put in charge of decisions, the more influence each source of media THEY have access to has on their decisions. In other words, the “scope” problem of democracies discussed here still has the advantage over authoritarian governments, in that misinformation fed to a single person in an authoritarian government could take down a regime; the scale need not be large. This could be solved by appointing an independent body, completely immune to political pressure, that sifts through misinformation on social media. Hypothetically, this could be the social media platforms themselves. This produces a couple of contradictions, however. If it were a completely independent body, it wouldn't be able to receive government funding, so how would it get funding?
Surely donations wouldn't cover the costs completely, and if a social media platform itself took it upon itself to delete misinformation posts, it would run the risk of losing a large portion of its users, who would surely find another platform in the constantly growing market of social media. Of these options, a private watchdog funded by donors, or a somewhat altruistic policing stance taken by the social media platforms themselves, seem to be the two best. From an individual's perspective, it seems that in the modern sphere any information found online cannot be fully trusted. This is an ironic twist on the information age: the more information available online, the trickier it is to find objective facts. On platforms where information is democratized, misinformation may be just as prevalent as information. On platforms where information is centralized, the interests of the central party will be prioritized over truth (Tiananmen Square). Either way, a degree of mistrust toward ANY information in cyberspace seems healthy.
Gaining Allies in Cyber Warfare
The popular movie "Catch Me If You Can" was based on the real-life exploits of the con artist Frank Abagnale, who was later hired by the FBI to investigate crimes committed by frauds and scam artists. Why not similarly enlist the help of hackers to aid in the defense against cyber attackers?
There is no doubt that the United States needs to be better prepared to protect itself against cyber-attacks in a world that is becoming increasingly proficient in information warfare. We have already seen, in the 2016 presidential election, Russia's involvement in releasing private documents and data from notable organizations like the Democratic National Committee. Beyond America's disruptive relationship with Russia, its "aggressive relationship with Iran…and aggressive trade with China" positions the country as a prime target of cyber-attacks [1]. The U.S. is currently ranked only the 17th most "cyber-secure" country, trailing nations like Denmark, Ireland, and Sweden that have "showed greater improvements" in protecting against cyber-attacks. How, then, can the U.S. be better prepared to face these attacks?
Although not the first place one may look for help, allying with international activist hacker groups like "Anonymous" may be one way the U.S. can better prepare itself for cyber-attacks. Anonymous first gained national attention in 2008 when its members "hack[ed] the Church of Scientology Web site with a distributed denial-of-service attack, in which multiple computers bombarded the victim's server…and shut it down" [2]. Other targets have included the Ku Klux Klan, the Minneapolis Police Department, Saudi Arabia, and the United Nations, to name a few. In a CNN interview, one member of the group claimed, "We hack because we can. The government needs to know it is not in total control. And we need to know that too" [3].
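The denial-of-service mechanism quoted above (many machines bombarding one server until it can no longer respond to anyone) can be illustrated with a toy model. This is my own simplification, not anything from the readings: the `serve` function, the capacity, and the traffic numbers are all invented for illustration. The point is simply that a server with fixed capacity, once flooded, serves attack traffic in place of legitimate users.

```python
import random

def serve(requests, capacity):
    """Toy server: handles at most `capacity` requests per tick, drops the rest."""
    random.shuffle(requests)            # arrival order is effectively arbitrary
    handled = requests[:capacity]
    return sum(1 for r in handled if r == "legit")

random.seed(0)
CAPACITY = 100
legit = ["legit"] * 50                  # normal load: well under capacity

# Normal operation: every legitimate request is served.
print(serve(list(legit), CAPACITY))     # 50

# Flood: 10,000 bot requests from many machines swamp the queue, so
# legitimate requests are crowded out of the limited service window.
served = serve(legit + ["attack"] * 10_000, CAPACITY)
print(served)                           # only a handful of legit requests survive
```

Nothing about the server is "broken" in this model; it is simply saturated, which is why DDoS attacks require so little sophistication relative to their impact.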
In the past, the government has denounced and arrested hackers associated with "Anonymous." However, I believe an alliance between the U.S. and the group may prove useful in fighting cyber-attacks from abroad. Although "Anonymous" "does not have the capability to do the kind of things that a nation-state could do," its "less-sophisticated" hackings still have the power to devastate organizations [4]. Given its ability to lead cyber-attacks both domestically and internationally, there is no doubt "Anonymous" has the resources and talent to carry out devastating operations. Creating such an alliance does not necessarily mean depending on the group to carry out America's cyber warfare. Instead, a close alliance may insulate America from future cyber-attacks by these very groups. For instance, cybersecurity experts have warned that if the group's members gained greater cyber-attack capability, "an attack on the power grid would become far more likely" [5]. With the group having such widespread influence, a successful alliance may protect the U.S. from powerful attacks. Additionally, "Anonymous" can aid in international operations, considering the group consists of members located around the world.
Of course, alliances with such a large-scale international group are certainly risky. But we should not rush to declare these hacker groups cyber terrorists without carefully considering the extent of the damage their attacks have caused, especially relative to other countries. These hackers may be the very individuals we want to keep at our side to fight cyber wars.
Sources
[1] Lin, Herbert, and Jaclyn Kerr. "On Cyber-Enabled Information Warfare and Information Operations." Oxford Handbook of Cyber Security, May 2019.
[2] Sands, Geneva. "What to Know About the Worldwide Hacker Group 'Anonymous'." ABC News, March 2016. https://abcnews.go.com/US/worldwide-hacker-group-anonymous/story?id=37761302
[3] Segall, Laurie. "One on One with Anonymous." CNN Business, January 2016. https://money.cnn.com/2015/12/06/technology/one-on-one-with-anonymous/
[4] Carey, Bjorn. "Stanford cybersecurity expert analyzes Anonymous' hacking attacks on ISIS." Stanford News, November 2015. https://news.stanford.edu/2015/11/18/lin-anonymous-isis-111815/
[5] Madrigal, Alexis. "Who Do You Trust Less: The NSA or Anonymous?" The Atlantic, February 2012. https://www.theatlantic.com/technology/archive/2012/02/who-do-you-trust-less-the-nsa-or-anonymous/253399/
In The Existential Threat from Cyber-Enabled Information Warfare, Lin makes a point of bringing up the psychological factors that shaped the kind of information environment we now live in on the internet. He describes the System 1 and System 2 modes of processing information and making decisions, and how System 1 is the primary mode of thinking that has allowed social media to turn the internet into a less trustworthy information ecosystem, despite its being the fastest and most accessible way to gather information for most people in developed countries. While reading this, I couldn't help but think of how the internet played one of the biggest roles in the spread of false information about COVID-19. The spread of fake news, and of denialists undermining scientific research, may be to blame for the uncontrolled spread of the virus in a developed country like the United States. Conspiracy theories manipulated the public's System 1 thinking and had people believing the virus was just a scheme by the government to control its citizens. People protested wearing masks because they felt it was the government's first step toward stripping away their "basic human rights." The fear of being controlled by a possibly corrupt government outweighed the fear of contracting a deadly virus, and System 1 thinking motivated hundreds of thousands of people not to take the virus seriously, ultimately leading to its uncontrolled spread. The United States at one point led the world in COVID cases, and hundreds of thousands of people were hospitalized and died as a consequence. Was the internet to blame for this? I wonder whether, if COVID-19 had happened before widespread access to social media and unchecked independent news sources, the unfounded conspiracy theory that COVID was a hoax would ever have taken off as it did in 2020.
I believe there still would have been plenty of conspiracy theorists who reached that conclusion, but they would not have had a platform from which to preach to the vulnerable masses, already fearful for their lives from the virus. That vulnerability made people more likely to rely on System 1 for decisions during the pandemic, and the added stress of being fed false information may have only clouded their judgment further. Now, as vaccines begin to roll out, conspiracy theorists and social media users spreading false information have begun to plant seeds of doubt about the vaccines' safety and effectiveness. It is worrying that even in situations as grave as a global pandemic, the internet has the power to undermine the seriousness of events and worsen their consequences. More than ever, we must take steps to ensure that, during times of global panic and unprecedented crisis, the information spreading through the public is as close to the truth as possible, ideally guided by experts who may be better trained to engage their System 2 in these situations. (Below is a fake-news infographic that was debunked on NPR.org.)
DO NOT respond with "Yes" when answering the phone! The latest phone scammers can get everything they need simply by recording you saying the word "yes." This very simple trick tries to get the victim to answer a yes-or-no question, such as "Can you hear me?", to which most people will naturally respond with yes. To make matters worse, these scammers can also spoof local area codes to make you more likely to answer the call. Once a scammer has a recording of you saying "yes," they have your verbal confirmation, and since they most likely already hold other information about you, such as your name and phone number, they can use it in a number of ways, including bypassing credit card security or even working their way into your bank account. If this seems like a major flaw in our current security system, that's because it is. People today have traded true security for ease and convenience, but with that ease and convenience come loopholes and back doors ripe for exploitation. Is a convenient method of storing information securely worth the risk of having it leaked? That massive amounts of vital information are protected by a system so fragile it can be infiltrated by one simple word is truly frightening, and it makes me question not only my own sense of security but everyone's. If seemingly secure information can be obtained so easily, how secure is any piece of information? If the most valuable information were to fall into the wrong hands, who knows how catastrophic the outcome could be.
Link to ‘Yes’ scam article: How to Avoid the "Say Yes" Phone Scam - Triada Networks
I want to talk about cybersecurity in China and Western biases about it. Over the past ten years, the Chinese government has been wary of surveillance by foreign tech companies such as Apple and Google, since those companies have been shown more than once to share their user information with the U.S. government. Since the PRISM revelations, China has built a comprehensive system of independent internet technology. In June 2017, China implemented a new cybersecurity law that now serves as the baseline for its present-day guidelines. Initially passed in 2016, the law was created to provide guidelines for maintaining network security, protecting the rights and interests of individuals and organizations, and promoting the secure development of technology. It requires that data be stored within China and that organizations and network operators submit to government-conducted security checks. The law states: "In order to protect cybersecurity, safeguard cyberspace sovereignty and national security, social public interests, protect the legitimate rights and interests of citizens, legal persons, and other organizations, and promote the healthy development of economic and social informatization, this law is formulated." This is quite ordinary activity for a national government. The law protects not only Chinese citizens but also the young Chinese technology companies hoping to compete with the old giants of the electronic tech market. Nevertheless, these ordinary provisions were demonized again by some Western media. Some said that "The CCP's use of surveillance and personal data to discriminate against ethnic minorities demonstrates the extent to which the government will exploit privacy in favor of control over its citizens," and some even used the law to over-criticize China's censorship system. Yet what happened in the U.S. last year reminded some media companies that censorship can be beneficial for social media.
They cheered when MAGA was banned from social media while sneering that people in China had no human rights. Even more ironically, this month the Chinese government fined five companies for using AI and big data to set unfair prices for different user groups. It seems that companies like Expedia or Amazon would never agree with a government that does this.
When juxtaposed with the risks of nuclear armageddon and climate change, those of cyber warfare receive far less attention. This is concerning because technology is essential to the average person's daily life, rendering the dangers posed by its misuse as pressing as those of the aforementioned issues, not because cyber transcends them in severity but because it contributes to them through the propagation of misinformation. The average person is unfortunately inclined to seek out information that validates his or her own beliefs, or to cling to the first bit of information that pops onto the screen without questioning it. Herbert Lin's argument about the risks posed by the corruption of the information ecosystem, made in his article "The Existential Threat From Cyber-Enabled Information Warfare," therefore proves exceedingly alarming. "The misuse of social media," he writes, "has...made rational responses to the threat of climate change more difficult for national governments to reach, as companies and groups with financial and ideological interest in creating the appearance of doubt sow misinformation about consensus scientific view" (page 3). This misuse also exacerbates nuclear-related tensions. Most concerning is the sheer number of information sources, each with the power to publish falsehoods; the public is thus at high risk of being misled and of not knowing whom to trust. This may make individuals even more likely to seek out misinformation that confirms their adopted beliefs, simply because it is the easier choice, the more reassuring alternative to uncertainty or to the destabilizing feeling of being incorrect. In addition, the power of propaganda has multiplied exponentially, as cyber information warfare allows "perpetrators...to exacerbate prejudices, biases, and ideological differences" and to worsen social tensions.
So, even if cyber warfare is unlikely to be a standalone destroyer of civilization, its contributions to other pressing issues (climate change denial, nuclear warfare, social tension) could compound to the point of genuine catastrophe. The question is, how do we avoid this "coming information dystopia"? How can we navigate a corrupted system that has become so essential to daily life?
Leave below as comments your memos that grapple with the topic of cyber inspired by the readings, movies & novels (at least one per quarter), your research, experiences, and imagination! Also add a thumbs up to the 5 memos you find most awesome, challenging, and discussion-worthy!
Recall the following instructions: Memos: Every week students will post one memo in response to the readings and associated topic. The memo should be 300–500 words + 1 visual element (e.g., figure, image, hand-drawn picture, art, etc. that complements or is suggestive of your argument). The memo should be tagged with one or more of the following:
origin: How did we get here? Reflection on the historical, technological, political and other origins of this existential crisis that help us better understand and place it in context.
risk: Qualitative and quantitative analysis of the risk associated with this challenge. This risk analysis could be locally in a particular place and time, or globally over a much longer period, in isolation or in relation to other existential challenges (e.g., the environmental devastation that follows nuclear fallout).
policy: What individual and collective actions or policies could be (or have been) undertaken to avert the existential risk associated with this challenge? These could include a brief examination and evaluation of a historical context and policy (e.g., quarantining and plague), a comparison of existing policy options (e.g., cost-benefit analysis, ethical contrast), or design of a novel policy solution.
solutions: Suggestions of what (else) might be done. These could be personal, technical, social, artistic, or anything that might reduce existential risk.
framing: What are competing framings of this existential challenge? Are there any novel framings that could allow us to think about the challenge differently; that would make it more salient? How do different ethical, religious, political and other positions frame this challenge and its consequences (e.g., “End of the Times”).
salience: Why is it hard to think and talk about or ultimately mobilize around this existential challenge? Are there agencies in society with an interest in downplaying the risks associated with this challenge? Are there ideologies that are inconsistent with this risk that make it hard to recognize or feel responsible for?
#nuclear / #climate / #bio / #cyber / #emerging: Partial list of topics of focus.
Movie/novel memo: Each week there will be a selection of films and novels. For one session over the course of the quarter, at their discretion, students will post a memo that reflects on a film or fictional rendering of an existential challenge. This should be tagged with:
#movie / #novel: How did the film/novel represent the existential challenge? What did this highlight; what did it ignore? How realistic was the risk? How salient (or insignificant) did it make the challenge for you? For others (e.g., from reviews, box office/retail receipts, or contemporary commentary)?