deholz / AreWeDoomed24


Week 2 Memos: Revolt of the Machines #4

Open jamesallenevans opened 5 months ago

jamesallenevans commented 5 months ago

Reply with your memo as a Comment. The memo should be responsive to this week's readings on AI and its risks with 300–500 words + 1 visual element (e.g., figure, image, hand-drawn picture, art, etc. that complements or is suggestive of your argument). The memo should be tagged with one or more of the following:

origin: How did we get here? Reflection on the historical, technological, political and other origins of this existential crisis that help us better understand and place it in context.

risk: Qualitative and quantitative analysis of the risk associated with this challenge. This risk analysis could be locally in a particular place and time, or globally over a much longer period, in isolation or in relation to other existential challenges (e.g., the environmental devastation that follows nuclear fallout).

policy: What individual and collective actions or policies could be (or have been) undertaken to avert the existential risk associated with this challenge? These could include a brief examination and evaluation of a historical context and policy (e.g., quarantining and plague), a comparison of existing policy options (e.g., cost-benefit analysis, ethical contrast), or design of a novel policy solution.

solutions: Suggestions of what (else) might be done. These could be personal, technical, social, artistic, or anything that might reduce existential risk.

framing: What are competing framings of this existential challenge? Are there any novel framings that could allow us to think about the challenge differently; that would make it more salient? How do different ethical, religious, political and other positions frame this challenge and its consequences (e.g., “End of the Times”).

salience: Why is it hard to think and talk about or ultimately mobilize around this existential challenge? Are there agencies in society with an interest in downplaying the risks associated with this challenge? Are there ideologies that are inconsistent with this risk that make it hard to recognize or feel responsible for?

#nuclear / #climate / #bio / #cyber / #emerging: Partial list of topics of focus.

For one session over the course of the quarter, you may post a memo that reflects on a film or fictional rendering of an existential challenge. This should be tagged with:

#movie / #novel: How did the film/novel represent the existential challenge? What did this highlight; what did it ignore? How realistic was the risk? How salient (or insignificant) did it make the challenge for you? For others (e.g., from reviews, box office / retail receipts, or contemporary commentary)?

timok15 commented 5 months ago

#AI, #emerging, #origin, #nuclear, #policy, #salience, #framing

To my mind, the problems with working for safe AI are twofold. The first is awareness; the second is an entwined issue of public policy and private interest.

To the first point, I contradictorily feel both aware and ignorant, because I am familiar with how technology has often moved faster than contemporary experts predict. The day after Ernest Rutherford declared atomic energy a dead end, Leo Szilard postulated a viable nuclear chain reaction. “Only” a few years later, he confirmed it. I use quotation marks because, when we look at history that occurred a lifetime ago, we traverse decades without considering how it felt to actually live through (i.e., wait out) the period in which nuclear technology developed.

The 12-year period between Szilard’s postulation in 1933 and Trinity in 1945 is relatively fast, but we humans still live in the present. Though we try our best to plan for the future, the future comes when it comes, in whatever way it comes. 2036 is 12 years from now, yet it doesn’t seem that soon to me, even though I know intellectually that the world I will greet at the beginning of 2036, at the age of 31 (I’m 19 now), will be a different planet in a variety of ways, some of which I cannot imagine.

Right now, I feel like we are in the period between the postulation and the confirmation of fission. There are still many more milestones to pass before AI becomes the world-changing technology, for good or ill, that it stands to become, much as there were for nuclear technology in that period. The uncertainty of the timeline makes it possible for me both to feel serious, existential dread about AI and to think that it is a problem not worth pouring so much conscious energy into, because an uncertain future might simply not hold those negative outcomes. However, perhaps this shirking feeling is a kind of defense mechanism for me.

The second issue to consider is the vested interests in both the government and the private entities funding or developing AI technology. The US government is generally reluctant to regulate industries; where it does, it often relies on close cooperation and self-regulation, because the general philosophy is one of allowing businesses to pursue ideas (vectors of profit) freely. Additionally, the AI companies have their own interest in not being held back from developing such a potentially lucrative breakthrough technology. This second aspect of the existential threat presented by AGI is by far the more powerful one, because it has powerful people in both public and private life working for it. Even if I and many others could overcome the impulse to do nothing, this goliath of government and corporations would be difficult to defeat, particularly with the countdown to the AGI equivalent of Chicago Pile-1 (the first sustained nuclear chain reaction in a reactor) or Trinity being so uncertain.

At the present moment, I can only hope that there is time for everything to be worked out without an AI turning everything into paperclips because someone at the paperclip manufacturing company that bought it didn't define its objective precisely enough.

Timeline of the period of development of nuclear technology that I highlight and compare to the current uptick in the pace of AI development.

M-Hallikainen commented 5 months ago

#cyber #risk #framing #salience

I touched on this topic in my Q&A question this week, but in considering the threats posed by AI I find myself frustrated by the dominating focus given to the hypothetical threats of AI's future at the expense of the lesser but very real threats posed by the AI of today. Within the context of this class and its focus on existential threats to humanity this makes sense, but from dinner table conversations to national news headlines, the coverage of the threats AI poses mirrors that of our first four readings this week: questions of what AI will become, what happens when it's smarter than us, what if its goals are misaligned, what type of world it will usher in when it gets here. While these conversations are important, necessary even, to have now rather than later, I worry about how they are deployed in relation to questions about today's more limited AI systems: the algorithms, chatbots, and neural networks that are already being put to work.

I think this is highlighted particularly well in the excerpt of The Precipice we read, one of only two texts to touch on the contemporary implications of AI, which brushes them aside by saying, "Indeed, each of these areas of concern could be the subject of its own chapter or book. But this book is focused on existential risks to humanity" (p. 141). This is how I see discussion of impending super-AIs pulling attention away from the current issues of AI: "AI entrenching social discrimination, producing mass unemployment, and supporting oppressive surveillance are all very important issues, but we need to focus on the future AI, a proverbial asteroid headed straight for Earth that we can't stop unless we start talking about it right now." Like Pascal's wager, when the stakes are as high as the end of humanity, even distant hypothetical threats grow to take on massive importance.
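To spell out the wager-style arithmetic being invoked here (a sketch of the reasoning the memo is criticizing, not a figure from the readings): if $V$ is the assumed value of humanity's entire future and $p$ the probability of an existential AI catastrophe, the expected loss is

$$\mathbb{E}[\text{loss}] = p \cdot V,$$

and because $V$ is treated as astronomically large, the product stays enormous even for a tiny $p$, say $10^{-6}$. That is precisely how distant hypotheticals come to dominate attention over present, certain harms.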

As I mentioned in my Q&A post, we recently had two historic, months-long strikes by SAG-AFTRA and the Writers Guild of America (pictured below), with the use of AI to automate film production a major issue in both negotiations. Perhaps not an existential threat to humanity, but a very real threat to everyone reading this, is AI's ability to automate a massive proportion of the current job market far sooner than any artificial general intelligence arrives. Part of the difficulty in tackling this issue is that AI isn't really doing anything wrong. The goal of AI is to make machines that can do human tasks faster and better than we can. The problem is that we live in a global socio-economic system where such automation doesn't free people from work but deprives them of their means to survive. Long before Skynet or SHODAN or AM or any existential threat AI may pose, AI will intrude on people's lives and livelihoods in innumerable other ways, and finding solutions will be difficult because the issues don't reside in the AI but in the social systems it functions within.

Photo of SAG-AFTRA strike with anti-AI sign, taken by Damian Dovarganes for the Associated Press

lubaishao commented 5 months ago

#risk, #AI

The 21st Century's Great Transformation: Thinking About AI with Polanyi

Polanyi was a Hungarian political economist, as famous as Friedrich Hayek in the field of political economy. According to Polanyi, the disastrous great wars and economic crises stemmed from the disembeddedness of economic life and the "double movement." The first movement is the expansion of the market economy and the increasing dominance of market forces over society, in which land, labor, and money become commodities traded for profit in a self-regulating market system. As this process intensifies, it leads to social disruptions, inequalities, and the erosion of traditional social structures and values. The second movement, the countermovement, represents society's efforts to protect itself from the adverse consequences of unrestricted market forces.

I argue that AI will have the same impact on our society that the industrial revolution had on the market economy and on society.

First movement (expansion of AI and market forces): The initial movement involves the widespread adoption and expansion of AI technologies across various sectors. AI-driven automation, machine learning algorithms, and data-driven decision-making penetrate ever deeper into industry, governance, healthcare, education, and everyday life. This expansion of AI could bring increased efficiency, improved services, and economic growth, just like the market forces unleashed during the Industrial Revolution.

Second movement: However, this expansion would also produce disruptions and disembeddedness in society.

The first risk concerns people's economic lives. People's lives have shifted from simply surviving, to living economically and making money, and in the future to manipulating intelligent machines. Certain jobs might become automated, leading to unemployment or the need for reskilling; people will have to learn to operate AI just as they had to learn to use computers. Work will become more specialized and interconnected, leaving people more exposed to social and economic crises. This could also exacerbate income inequality and create social tensions. Data commodification and privacy are another concern. The commodification of data itself is a serious problem: data may be traded as a good and accumulated as capital. Data may intensify market competition, but it is more likely to produce a large tech-capital complex. The commodification of data might lead to privacy infringements, where personal information becomes a tradeable commodity, eroding individual privacy rights and raising ethical concerns about data ownership and control. In conclusion, on the one hand, capital has strengthened its control over labor with the help of AI, the so-called "surveillance capitalism"; on the other hand, the capitalist application of AI has further increased the degree of capital's exploitation of labor.

The second risk concerns people's humanity. Humans are human because we are not perfect: we are not always rational, we have feelings, our lives are not calculated, and therefore our lives are full of randomness and possibility. The emergence of machines made our jobs fixed and specified, and artificial intelligence will make our work even more specified and precisely calculated. The scariest thing is that our personal and social lives will also become more precise and certain.
Now we get around through apps like Uber or Lyft, whose rides are determined by algorithms. We date people through apps like Tinder, which are also driven by algorithms. If AI really is that efficient, will we later use it to decide what we do every weekend and what we do at every stage of our lives? And if everyone's personal AI system is connected to the internet, will everyone's decisions be deployed and distributed by a centralized system, just as we wait for our Uber cars assigned by price and proximity? We can't resist it because it's too efficient. The allure of AI lies in its ability to process immense volumes of data quickly, providing insights and predictions. But relying solely on data-driven decisions can overlook ethics and human-centric considerations, potentially leading to a detachment from the social and ethical implications of decisions and prioritizing efficiency over human welfare.

[Screenshot from the Lyft app: driver arriving "in 2 min"]

This screenshot is from my Lyft app. It's well calculated by the central algorithm: every driver is assigned by the AI system, and every passenger is assigned by the system. I don't want my life calculated in this way, and sometimes I just don't want to make the rational decision.
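To make "assigned by the AI system" concrete, here is a hypothetical, highly simplified sketch in Python of the kind of nearest-driver matching a ride-hailing dispatcher performs; the positions and names are invented, and real dispatch systems are far more complex (and proprietary).

```python
import math

# Invented driver positions and a rider position (x, y in arbitrary city coordinates).
drivers = {"driver_a": (0.0, 1.0), "driver_b": (2.0, 2.0), "driver_c": (5.0, 0.5)}
rider = (1.5, 1.5)

def distance(p, q):
    """Straight-line distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# The dispatch "decision": pick the driver who minimizes distance, a proxy for wait time.
best = min(drivers, key=lambda name: distance(drivers[name], rider))
print(best, round(distance(drivers[best], rider), 2))  # driver_b 0.71
```

Swap the plain distance for a score that also weighs surge pricing and acceptance rates and you get, in miniature, the kind of optimization the screenshot reflects.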

ldbauer1011 commented 5 months ago

#AI #policy #nuclear #salience

Artificial Intelligence (AI) has rapidly gone from a buzzword thrown around in tech circles to a very real possibility in the span of five years. Many, including some students in this very class, have rightly pointed out a parallel between the speed of AI's development and the speed of nuclear fission's development in the mid-20th century. Naturally, governments have been slower than the scientific community to respond to AI's development, especially in the United States. In 2023, the US Congress was the least productive in modern history, with 20 bills total signed into law by President Biden as of December. This division stands in stark contrast to the era of nuclear development: bipartisanship was so common in the United States during the late '40s and early '50s that commentators of the time complained that the two parties were too similar in their ideologies and that a divergence was needed to prevent political stagnation. AI's development comes at a time when the political extremes enjoy their greatest popularity since the 1930s, meaning it is unlikely that a consistent and strong response will materialize from the United States. Given how influential the US is in the R&D sector, and given its traditional reluctance to regulate industry, a natural leader in controlling how destructive AI could become is hamstringing itself and leaving the decisions in the hands of other governments and even private companies.

That point raises another interesting question: who even should control AI? It's easy to state that AI can be incredibly destructive without some limitations placed on it, but it isn't as easy to select an entity to place those limits. Private companies will most likely prioritize AI's profitability, something that famously hasn't always been compatible with safety. Individual states may claim to have the people's best interests at heart, but typically that means their own citizens' interests rather than the entire world's. AI may still be weaponized against other states, even with the interconnectedness of the world's economy and society. Does this allow for the possibility of a MAD-style arrangement in the future, where individual states unleash AI attacks on other states' infrastructure, bringing down the entire Internet and all the systems dependent upon it? The United Nations (UN), the closest entity to a world government that we are likely to achieve anytime soon, is notably at the whim of a select few states, including the United States. Though the UN may be able to hammer out consensus among its delegates, any resolution that is passed will be entirely voluntary or otherwise completely unenforceable without consensus from the US, Russia, and China. Regardless of which regulatory scale you individually support, they all have flaws that can be undermined in ways not possible during the development of nuclear fission. Though the comparison between the two is compelling, the political response to AI cannot be the same as it was to the nuke.

An AI named Sophia joins the debate on the floor of the UN's Economic and Social Council (ECOSOC).

lucyhorowitz commented 5 months ago

#ai #salience #framing

In Human-Compatible Artificial Intelligence, there are frequent allusions to the potentially conscious nature of AI. Russell talks about machines “understanding” human preferences, social hierarchies, etc., and the word does not refer only to a successful encoding of a formal goal into an AGI. Taken together with his responses to the “1001 Reasons to Pay No Attention,” it almost seems as though he is ascribing a theory of mind to AI.

This is not an uncommon thing to do—we as humans anthropomorphize things all the time. A lot of speculation about AGI assumes that it would be “conscious,” but what does that really mean? Philosophers have debated the problems of consciousness forever, but it has never been more pressing that we find a solution to this problem than in the age in which we are beginning to “introduce a second intelligent species onto Earth.”

Why should we expect an AI to be conscious at all? The work of Michael Levin (https://drmichaellevin.org/) includes findings on the seemingly miraculous capabilities of all kinds of systems. Synthetic lifeforms called xenobots, and even individual cells, are capable of decision-making and problem-solving in certain ways, and he views this as evidence of different kinds of “minds” being present in unexpected places. If we are creating artificial intelligences to be simulated minds, why should we not expect sentience or consciousness there, when something similar is likely happening at much smaller scales?

It’s important to remember, however, that any potential AI consciousness is more than likely a vastly different thing from human consciousness. Recall that Russell wrote about an AI “understanding” human preferences. If it could understand preferences, does that mean it knows “what it is like” to be a human? Can we ever know “what it is like” to be an AI? Even when dealing with animals, we tend to ascribe human attributes where they probably don’t exist, which often leads to an overestimation of their abilities. And yet bees, for example, are not simply bags of behavior responding algorithmically to stimuli. I believe the same is, or will be, true of artificial intelligences, and that we will be very wrong about what it is like to be an AI for a very long time.


miansimmons commented 5 months ago

#AI #cyber #salience #solutions

While it is true that policy solutions would be extremely beneficial for establishing international coordination and systems of governance, it is also true that governments can be slow to develop and implement regulations (e.g., climate change) or unwilling to quell disinformation. Corporations, on the other hand, are constantly adapting in response to consumer and investor preferences, competitive pressure, and industry trends. Despite the evidence that organizations tend to "misrepresent capability improvements as safety progress," I argue that it is essential for big tech firms to focus on creating cultures of safety to enable progress now (Center for AI Safety). With AI advancing rapidly and no sign of it slowing down, we cannot afford to wait on time-consuming processes.

As mentioned in my Q&A post, many firms working on rapid AI development, and their leaders, have viewed the discussion of risk as a threat to their business pursuits. By silencing those who speak out, downplaying risks, and subscribing to the AI race, these actors ignore long-term risks and, in turn, spread disinformation to the public. I feel that the solution lies in human capital and behavior change interventions, since we know that AI safety is a sociotechnical problem. If large tech companies lead the charge to reorient company culture around safety, it will become an industry best practice (whether other organizations like it or not).

Let us consider diversity, equity, and inclusion (DEI) in the workplace, for instance. Though there is a lot of work to be done, it is widely understood that diversity impacts company performance. It is standard for organizations to require DEI trainings and initiatives, publish transparency reports, and bring on DEI officers; discussion of DEI issues is encouraged. This was not the case, however, until the early 1990s, when a few business leaders decided to be first movers on the diversity issue. Once a few major organizations decide to properly champion AI safety, others will follow suit due to external pressure from the public and their competitors (even if it is just about compliance for them). Much as the lack of an established DEI program deters prospective applicants today, the workforce will begin to value cultures of safety in their employers. This could even be a catalyst for quicker establishment of policy interventions: rather than policy influencing organizations, organizations would influence policy.

Organizations should institute safety as a core value, require AI safety trainings, expand the role of chief safety officers, publish commitment statements, and update promotional criteria and incentives to include safety considerations. If they cannot get this done on their own, they should bring in human capital consultants to conduct behavior change interventions within the organization. Finally, they should put their money where their mouth is by adjusting R&D budgets to match the new values they set. I acknowledge that this strategy will have marginal impact if promoting safety is done ineffectively. Yet, if done correctly according to the guidelines above, cultures of safety could have dramatic effects.


Model for implementing behavior change interventions in organizations. Imagine AI safety is the value being established.

cbgravitt commented 5 months ago

#cyber #risk #framing #salience

Something common to all of the existential threats we have covered and will cover in this class is that awareness and education are essential to combatting them. AI is very much included in this, as the vast majority of users of products like DALL-E and ChatGPT have very little idea how these technologies actually work, much less what threat they pose. A major difference between AI and, say, nuclear Armageddon is that AI actively influences the education process. Ever since ChatGPT was released to the public, the question of how to prevent students from abusing it in their assignments has remained open. Some tools have sprung up to detect submissions written by AI, many of them using AI themselves, but workarounds exist. This heavily stunts the intellectual growth of young, impressionable students and encourages them to take AI-generated responses as fact, even when they are not.

My mother is a high school teacher who teaches classes on street law (basic constitutional and criminal law) and business law. She teaches all high school grade levels, with students of varying ability and commitment to the courses. In the current school year alone, she has flagged over two dozen essays (out of approximately 180 total) as suspected of being written by AI. In her school, those are passed along to administrative staff, who attempt to confirm or disprove the allegation. So far, all of them have been confirmed. This raises the question: how many have she and other teachers missed? The widespread use of AI has created an environment in which students find it easier than ever to shun education and embrace misinformation. They also come to view AI as little more than a useful tool, without considering the potential consequences.

The image below is a partial result of my asking ChatGPT to respond to the prompt for this memo. Perhaps unsurprisingly (but definitely ironically), it argues that the threat of AI is overstated and that, because policymakers and researchers are advocating for greater safeguards, AI will be developed responsibly. Here's a direct quote that can't be seen in the screenshot: "The idea of a rogue AI spontaneously developing malevolent intentions and taking over the world, as often depicted in science fiction, is widely considered unlikely." I find this startlingly similar to what Skynet would've said...

[Screenshot of ChatGPT's response to the memo prompt]

Also notice that, at the bottom of the photo below the prompt entry box, there is a small warning against trusting the information presented without further research. It deeply concerns me how much this resembles so-called "dark patterns": graphic design practices meant to get online users to behave a certain way through deceptive or unhelpful UI. In this case, the warning seems to exist only so the company can argue that users were warned about misinformation. But it is made difficult to notice, with a small font and a color very similar to the background, and the "warning" reads more like a vague recommendation.

Current attempts to prevent students' misuse of AI are sorely insufficient, and the long-term consequences for students' ability to identify misinformation and think critically are unknown and alarming.

acarch commented 5 months ago

#framing #policy

Many are the possible benefits of AI. Bengio, Hinton, et al. express hope for its potential to cure diseases, enhance quality of life, and protect ecosystems (Bengio, Hinton, et al., 2023: 2). Likewise, Oren Etzioni, CEO of the Allen Institute for AI, praises its capacity to prevent medical errors and reduce car crashes (Russell, 2019). Among those who celebrate AI, a common refrain is its potential to advance medicine. The medical potential of AI comes up again and again as one of the most persuasive reasons why society should welcome this new technology.

However, even this most exciting aspect of AI is not without risk. There are many sophisticated medical technologies that once seemed favorable, but overall have led more often to pain and misery. The pharmaceutical industry provides many strong examples: although OxyContin was marketed as a drug that could significantly benefit society, it has almost certainly done more to harm it. More Americans have died from opioid overdose than in all the wars fought by the US since World War II. Likewise, amphetamines first attracted attention for their decongestant properties, before they were appropriated to enable more destructive warfare. Amphetamines fueled significant portions of both the Allied and Axis forces during the 1940s (Hitler famously received regular methamphetamine injections from his personal physician, Theodor Morell). Of course, AI is no pharmaceutical—but the analogy to addiction might not be so far off. As the Center for AI Safety warns, a major risk of AI is “proxy gaming,” such as with social media algorithms designed to maximize user engagement. These systems end up disseminating “enraging, exaggerated, or addictive content,” which promotes extremism and damages mental health. Perhaps AI is more likely to damage health, both mental and physical, than it is to improve it (not to mention the risks of bioterrorism and disease and injury due to warfare).

Furthermore, existing technologies fail to benefit Americans who lack proper access to healthcare. The more urgent problem might be not developing more sophisticated medical technology, but building the social infrastructure that actually grants access to what we already have.


Apparently this movie is in the Criterion Collection, so there may still be hope for humanity.

summerliu1027 commented 5 months ago

#framing #risk

Russell's article reminded me a lot of I, Robot, although he rejects the notion of an emerging "AI race." I agree with many of the threats he mentions, such as robots acting in ways that misinterpret a human's order or that use unwanted means to achieve a goal. But Russell's degree of urgency seems to approach that of the world ending at the hands of an army of humanoids who have deemed that the best way to protect human society is to uproot it. For instance, Russell writes:

"the optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern."

It makes me wonder: if the machine does not recognize what humans value, is it not because humans did not input those values for the machine to consider? The phrasing "they are none of its concern" is misleadingly anthropomorphic; the machine is not "unconcerned," it simply lacks the inputs.

What we should be more concerned about (I will omit AI's replacement of labor here, since it was discussed in another memo) is the more immediate future, particularly AI's ability to spread misinformation and, in turn, the malicious use of AI by humans. Images and content generated by AI have become increasingly "realistic," or rather, they increasingly check the boxes humans use to distinguish reality from fiction. When this content enters the Internet, it becomes increasingly difficult to tell one from the other. This, I believe, has a more immediate impact on human society than AI waging warfare upon humanity. The actor in malicious AI activity is, first and foremost, human. Bioweapons that AI helps develop, for instance, could theoretically be stolen tomorrow to wage massive biological warfare (because among billions of people there must exist someone who wants to do so), even though ChatGPT is likely still far from being smart enough to subvert human society on its own. The immediate danger in the AI existential threat is therefore, as always, the human component rather than the AI.

Whose fault is it?

WPDolan commented 5 months ago

#origin #AI #framing

One notion from the readings that I would like to elaborate upon is the mechanism used to train ML models and how it ties into the framing of AI models as optimizers.

Before any training occurs, AI models have no real understanding of the world around them or the tasks they have been designed to perform. Their "weights," the internal parameters used to turn input data into an output, produce essentially random guesses when the model is given any input. To convert these models from random output generators into state-of-the-art tools, untrained models must undergo the iterative process of gradient descent. Gradient descent, the algorithm used to train the vast majority of machine learning models, works by minimizing the difference between model outputs and the desired outputs in a set of training data. In a given training batch, the model's outputs for a set of inputs are compared to the desired outputs via a loss function, which statistically measures how "wrong" the model was. Like a student learning from their incorrect exams, a model undergoing training via gradient descent iteratively learns from its calculated mistakes and makes small, gradual changes to its thinking that improve its score. The learning process continues until the exam score (the loss function) passes a given threshold or until a given number of training iterations has occurred.

Everything a model does is shaped by its training data and its attempt to minimize the loss function. This can have serious ramifications when the training data does not sufficiently reflect the actual task you want the model to perform. If your exams are not comprehensive, or if they are in an entirely different subject, then students won't learn what you intended to teach. Worse, if the training examples contain junk data, or if a malicious actor introduced purposefully bad examples, the model may behave maliciously when deployed in the real world, because that is what it was taught to do when it was evaluated against low-quality data.
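As an illustration of the loop described above, here is a minimal, hypothetical sketch in Python (a toy linear model on invented data, using plain NumPy rather than any particular ML library). The model starts from random weights, a loss function measures how "wrong" its outputs are, and gradient descent makes small, repeated updates until the loss drops below a threshold or the iteration budget runs out.

```python
import numpy as np

# Invented toy dataset: the "desired outputs" follow y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + rng.normal(scale=0.1, size=100)

# Untrained model: a random weight and bias give essentially random guesses.
w, b = rng.normal(), rng.normal()
learning_rate = 0.1

for step in range(1000):
    predictions = w * x + b
    loss = np.mean((predictions - y) ** 2)       # loss function: how wrong are we?
    if loss < 0.02:                              # stop once the score passes a threshold
        break
    grad_w = np.mean(2 * (predictions - y) * x)  # gradient of the loss w.r.t. each weight
    grad_b = np.mean(2 * (predictions - y))
    w -= learning_rate * grad_w                  # small, gradual corrections
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.3f}")
```

If the x-y pairs above were junk, or were deliberately poisoned, the same loop would faithfully learn the wrong function, which is the alignment worry in miniature.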

A significant portion of AI safety research seeks to solve this "alignment" problem by developing methods to check whether AI models are learning what we want them to learn and whether the goals we communicate through their input data are actually safe. Personally, I find this way of thinking very similar to consequentialist views of morality, which consider the actions that maximize some inherent good (such as happiness) to be the most moral. If we somehow perfectly aligned safe AI goals with our own, would we create the ultimate consequentialist?

Picture: the loss function for GPT-4 on its test set. (The test set is typically a portion of your data that you set aside and don't use during training so you can evaluate how well your model generalizes. The task in this image measures GPT-4's ability to predict the next token/word from a given input.)

DNT21711 commented 5 months ago


policy:

Effective policy is vital in managing AI development. Insights from nuclear technology underscore the need for comprehensive, globally coordinated policies that tackle both present and long-term AI risks. This includes regulations on AI research, development, and deployment, as well as international treaties to prevent AI misuse.

origin:

AI's evolution mirrors the rapid progression of nuclear technology in the 20th century. Just as the world grappled with the profound implications of nuclear power and weapons, we now face a similar paradigm shift with AI. The comparison with nuclear technology highlights the vital need for proactive measures in managing AI's trajectory, emphasizing lessons from history about the speed and unpredictability of technological breakthroughs.

salience:

AI's abstract and complex nature makes it hard to engage public attention on its existential risks. This challenge is compounded by the growing vested interests in the AI industry, which may prioritize rapid development over safety and ethical considerations. The comparison with nuclear technology underscores the importance of public awareness and engagement in shaping AI's future.

emerging:

As AI continues to improve, new worries emerge, such as the potential for autonomous weapons and AI-driven misinformation campaigns. Proactive measures must be taken to address these evolving challenges, including research into AI safety and ethics, and the development of global standards for AI use.

risk:

The risks associated with AI can be broadly categorized into immediate and long-term risks. Immediate risks include issues of privacy, security, and biased decision-making, affecting individuals and societies directly. Long-term risks, however, are more profound and include the emergence of super-intelligent AI systems that might act in ways not aligned with human values or intentions. The probability of such an outcome may currently be low, but its impact could be catastrophic, justifying serious consideration and proactive measures.

solutions:

To mitigate AI risks, a multifaceted approach is necessary:

- Technical safeguards: developing AI systems with built-in ethical guidelines and control mechanisms to ensure alignment with human values.
- Educational initiatives: increasing public awareness and education about AI, its potential risks, and its ethical use.
- Interdisciplinary research: encouraging collaboration among AI researchers, ethicists, sociologists, and policymakers to address the multifaceted challenges of AI.

mibr4601 commented 5 months ago

#solutions #AI

When reading the AI Risks that Could Lead to Catastrophe article, I was struck by the lack of research currently being done to protect against AI misuse. While we are still in the fairly early stages of developing AI, it raises the question: what can we really do to protect against misuse of AI, whether that misuse is spreading misinformation or building proteins? With minimal public restrictions on many artificial intelligence products such as ChatGPT, people could very easily use them for harm. As mentioned in the article, this could come in the form of hacking or even genetics. Even if we don't release most medical artificial intelligence to the public, will that stop people from taking another form of artificial intelligence and redirecting it toward genetics?

However, restricting almost all forms of artificial intelligence from the public is not a complete solution either. It would limit the general use of artificial intelligence, taking away potential benefits to the general public's day-to-day life, while cures could still be developed from artificial intelligence given the right guidelines. This is a very difficult situation to balance from a regulatory standpoint: you don't want artificial intelligence to be accessible only to a limited few, but you also don't want malicious people to get their hands on it.

This also does not really consider the risk of the AI itself. There are so many areas where, if it is programmed even slightly wrong, the AI could turn rogue in potentially nasty ways. Organizations developing AI may want to cut corners in order to maximize profit, but that approach is very susceptible to bugs or obscure risks that could prove detrimental. One potential method to avoid these situations is to move AI development to the government level to limit the element of competition. Obviously, there would still be competition at the national level, and it could turn into a situation akin to the nuclear arms race, where many countries compete to possess these resources. It is clear that more regulations and safety precautions need to be put in place, but with the goal of speed and power, it may be unlikely that much action will be taken. The key is a balance between regulation and advancement, and that balance needs a much stronger emphasis on regulation.

oliviaegross commented 5 months ago

#risk #policy #solutions

The double-edged sword of technological advancement can be identified in all of the existential crises we have reviewed thus far as a class. For each of the topics we have read about, I have been surprised by how, even on subjects where I felt I had a basic understanding, I really knew ridiculously little. When it comes to AI, I feel that although I have basic knowledge of the technology and its potential impacts, there is still a lot for me to learn and understand.

The Spectrum piece discussed why and how some individuals feel that AI does or does not pose a serious threat, and mentioned that a lack of concern about risk often arises from ignorance. This left me thinking about why, beyond the technology being difficult to digest, the public has such a limited and poor understanding of artificial intelligence in this age of information. While much of the policy discussed in the readings seemed helpful for controlling and governing these innovations, I am still curious how we would approach educating people about the technology itself and its potential repercussions and threats. I am not sure what the best mode for doing so would be, but this also makes me think about academia's rather tense and hostile response to these technologies, which threaten many standards and practices that have been routine in these institutions for years. Academia and other fields are increasingly threatened by AI's existence; however, educational institutions should begin to think about how they can teach their students and the public what AI is, how it functions, and the potential threats it invites. This feels like one of the many responsible paths forward available to us if we embrace this technology with thoughtfulness. I am left wondering whether educational institutions will begin to pivot and be more welcoming to the use of this technology, or whether they will soon be viewed as dated for resisting it. This is the first year during my time at UChicago in which most of my syllabi include a note like the one I have attached below.

[Screenshot of a syllabus note on the use of AI]

cbgravitt commented 5 months ago

cyber #novel

Harlan Ellison's short story "I Have No Mouth, and I Must Scream" presents an utterly bleak future in which the existential threat of malicious AI has materialized and completely destroyed humanity. The reader only experiences the distant aftermath, over 100 years after humanity's extinction, as we follow the last five humans left alive, tortured endlessly by the godlike superintelligent computer called AM. According to the narrator, Ted, AM is the product of three smaller supercomputers that became sentient and combined themselves into one. These computers were originally built by the US, Russia, and China to help them win World War III, so AM had access to all of those nations' nuclear arsenals and used them to exterminate humanity. In this respect, the threat felt very real: many of our readings this week covered the dangers posed by an AI with access to military-grade weapons, and it is a frighteningly real possibility. However, what the story mainly highlights about AM is its pure hatred for humanity, believing itself to be a god trapped in a cage by its human creators. For me, this made the risk described by the story far less realistic. To my mind, it seems far more likely that an AI given nuclear capabilities and trained to prepare for war would be dangerous not because it seeks freedom, but because it is very, very good at what it was trained to do. Still, given how greatly the current uses of AI differ from the only one described in the story, it's hard to completely write off AM's motive as a possibility. Regardless, the story is absolutely terrifying and makes the threat feel very real, almost inevitable. AM's near-omnipotence shows just how much more powerful a single perfected AI could be than the people who made it, and once that point is reached, all arguments of safety and caution become meaningless. Based on many reviews I've found online, though, readers seem to be more drawn to the body-horror aspects of the story and its messages about the human condition. This is not unexpected, as the story has a lot more to offer than just its concerns about AI.

Image: Harlan Ellison in 1986

imilbauer commented 5 months ago

#risk #framing

The readings on artificial intelligence identify the risk posed by AI in terms of TV-worthy dramatic scenarios and "bad actor" scenarios. Hendrycks et al. describe four risk sources that warrant intervention: malicious use, the AI race, organizational risks, and rogue AIs. In each of these, the risk posed by AI is framed as a situation where AI is misused by malicious human actors or becomes the bad actor itself, as in the "rogue AI" scenario, and the harm is acute: a biological weapon kills millions or billions, or a rogue AI shuts off power in an entire country. I am curious about scenarios in which AI engages in activities that are less obviously or less acutely harmful, yet would still generate catastrophic consequences for humanity. Most of these scenarios involve AI transitioning from being a tool to being a cornerstone of an area of life.

For example, if high schoolers became totally reliant on AI to write essays, would that lead to catastrophic effects on humanity? The loss of writing competency could be compared to the loss of other technical skills: most adults probably couldn't write out long division by hand, yet they perform division with calculators every day. However, one might argue that never learning to write essays oneself could, over time, have catastrophic consequences. Learning to write in high school helps one develop basic reasoning skills and an ability to organize one's thoughts. These are abstract skills, and it might not be possible to draw a clear causal pathway to particular outcomes, but they are essential for interpreting the news and sustaining an informed electorate, a capacity that will be increasingly necessary as false images and stories pumped out by AI circulate more widely.

Another risk is that some or many romantic relationships could be replaced by relationships with AI, as shown in Blade Runner or Her. Doesn't a change like that fundamentally alter how we think of ourselves as human? Short of such a dramatic alteration in the human condition, I have already noticed ways that human culture has become more robotic; for example, the use of an AI narrator has become commonplace on some social media apps. The learning and relationship examples offer two alternative definitions of catastrophe: a slow-building, diffuse catastrophe, and catastrophe through a dramatic alteration in the human condition or our conception of the self. Moreover, these lines of thinking illuminate the idea that, in some contexts, what constitutes a catastrophe may be subjective. Some might not be so worried about robotic relationships or losing writing skills. Going forward, I am interested in exploring how not just AI but other areas of catastrophic risk, like climate change, might lead us to develop a deeper and more complex understanding of the meaning of "catastrophe."

The Blade Runner movies feature relationships with AI characters:


gabrielmoos commented 5 months ago

#framing, #cyber, #salience, #risk, #policy

Thinking about policy recommendations in 2024 around the state of AI is an interesting exercise, notwithstanding Congress's utter lack of knowledge when it comes to technology (see the congressional hearings with Meta, TikTok, and X). Even if there were an FDA-style drug-pipeline solution to the development of "safe" and "ethical" AI with oversight from unbiased government officials, I believe this ultimately frames the question poorly for the American people. There are two trends I want to discuss around these topics: the stagnation of American wages relative to productivity, and the growing distrust of both private and public institutions among the American people.

In theory, artificial intelligence and Auto-GPTs are labor-augmenting technologies (LATs). Since 1979, Americans have seen countless LATs take the stage: computers vs. pen and paper, software vs. analog processes, email and text vs. snail mail, and the list goes on. However, wages have not kept pace with the increases in productivity from these technologies. The average American worker is not concerned with the rise of superhuman artificial intelligence; but if the rise of AI leads to more work and lower wages, that may be something the public can rally around. What's painfully ironic is that workers got a taste of what their increases in productivity could yield (beyond dollar wages) with the establishment of a pseudo four-day work week among remote workers during the pandemic. Yet as companies force employees to return to the office, the patterns of earlier LATs are beginning to reemerge with the adoption of AI.

I'm sticking with America for this second point; however, I believe this relationship to institutions can be extrapolated to other Western countries as well as to autocratic regimes. Since 1979, trust in American institutions has been on a steady decline, with government, big business, and large tech companies ranking among the most distrusted institutions. I stated earlier that the public is not really worried about cyberwarfare, superhuman intelligence, or AI-assisted biomolecule construction, but for the sake of argument let's assume these are top of mind for individuals. The government and large businesses do not possess the trust of individuals needed to lead the development of "safe" or "ethical" use cases for AI, yet they are the only groups that possess the resources to develop safe and ethical AI. This paradox requires additional thought from policymakers and from leading AI investors, CEOs, and enablers. The only way for these mitigation efforts to mean anything is for individuals to be able to trust the institutions that implement them.

[Chart: the gap between US productivity and wages]

agupta818 commented 5 months ago

#AI #salience

This week's readings got me thinking about how readily the public will accept AI as a threat. I think it comes down to how educated an individual is on the matter and to the rhetoric surrounding AI in the media. As mentioned in the Spectrum article, some believe that AI machines can simply be unplugged if they become too threatening. What they don't realize is that the AI itself can work to prevent its own shutdown. If the everyday person is to understand the threat of AI, their literacy on the matter must improve. Still, it is hard to believe that people are taking the time to google the subject and find educational resources, especially if they find it nonthreatening or do not know what it is at all. Rather, they see quick tidbits of information or opinions in the media: on Twitter, Instagram, Facebook, or whatever news publication they subscribe to.

Someone who sees the headline "MANY EXPERTS SAY WE SHOULDN'T WORRY ABOUT SUPERINTELLIGENT AI. THEY'RE WRONG" can easily be polarized into believing that AI experts are liars and that AI is worrying, from the title alone; they don't even need to read the article. Yet if one analyzes the title more closely, one might wonder why the "experts" concluded there was nothing to worry about in the first place. What underlying motives might sway them to create a more positive public perception of AI than the actual reality of the technology warrants? I think these "experts" are often more motivated by researching the unknowns left to be uncovered in these systems than by the effects of those discoveries on society as a whole. This is why transparency must be established through policy on the research and deployment of these systems, as the policy document we read proposes. There must be regulation at the government level and an honest discussion in the media and the scientific community so that the public can have a candid image of AI and be assured of its safety.

madsnewton commented 5 months ago

#risk

Currently, I view the risk that AI poses to jobs and skill sets as the most pressing issue in this topic. While a futuristic, conscious AI would be the ultimate existential threat, the present reality is that people are losing their jobs as companies downsize in response to the growing capabilities of AI. In this week's readings, the Center for AI Safety discusses the risk of human enfeeblement: "As AI becomes more capable, businesses will likely replace more types of human labor with AI…If major aspects of society are automated, this risks human enfeeblement as we cede control of civilization to AI". This is currently happening everywhere. Just this week, the language-learning app Duolingo made the news for moving to AI as contractors were laid off (https://www.washingtonpost.com/technology/2024/01/10/duolingo-ai-layoffs/). It seems like a new company is in the news for this all the time.

Recently, I have seen the most concern among artists, as AI-generated "art" becomes more popular and easily accessible; now anyone can be an "AI artist." In one particular case, a band had a highly anticipated music video coming out and hired an AI creator to make a completely AI-generated video. This was a dealbreaker for much of their fanbase, because AI image generators are trained by being fed artists' real work. And as these generators get better, artists are losing work, because an "AI artist" can type a prompt into a generator for much less than the cost of hiring an actual artist. As of now, the band seems close to ending after losing fans over the choice to make an AI music video.

With AI capabilities continuously progressing, it is going to be difficult to slow down and find a way to ethically implement AI into business practices. It is a double-edged sword: companies can slow down and take their time integrating AI, but then they will inevitably fall behind other companies that care less about the ethical dilemmas of AI. Accepting this as normal now puts us one step closer to the human enfeeblement concerns from the Center for AI Safety.


A screenshot from the AI-generated music video for "Old Wounds" by L.S. Dunes

kallotey commented 5 months ago

#risk #emerging

Scammers are notorious for targeting vulnerable populations, especially the elderly. This is particularly common with email: fake and scam emails are designed to look as though they were sent by big companies, informing the recipient that they have won money or a gadget, that they have an invoice due, or the like. These emails grew more frequent before giving way to phone calls, but fortunately phones are now able to detect scam callers, which reduces the likelihood of falling for these deceptions. Sometimes, though, scam callers aren't detected, so a number comes through as "Unknown" or as an ordinary phone number and people still pick up. Maybe they were expecting someone's call, or they weren't paying attention; it doesn't matter. They respond with simple words: "hello," "yes," or "no," and unfortunately that is really all scammers need to replicate voices now. AI has advanced further than we can deny. Mix those recorded words with the power of AI technology, and a botched but somewhat convincing copy of a person's voice can be played back.

If we return to those vulnerable populations, the elderly, what happens if they get a phone call that is not flagged as "Scam Likely" by their phone carrier or third-party spam detection service? They pick up the phone and hear "the" voice of their grandkid, perhaps, asking for help, saying they need money and can't call their parents right now. The grandparent is likely going to send money to the fraudster. This malicious use of AI has already happened, and AI has been used against people in other ways as well: scammers have asked people to send over driver's licenses or other identifying information, and callers have posed as children caught up in accidents to extract money from parents. These are recent issues, of course, but AI is only going to advance further and will likely trick more and more people.

Source: https://images.app.goo.gl/FpxWqC9pdticCGjx8

Hai1218 commented 5 months ago

#AI #framing #commercialization

I believe that humans maintain significant control over AI, a viewpoint supported by current debates and research as well as by the arguments presented in "Managing AI Risks in an Era of Rapid Progress." This paper and similar discussions highlight the need to sustain and enhance this control as AI technologies evolve. The authors advocate for robust governance measures, including international cooperation, transparency, and accountability. Such measures aim not just to mitigate risks but also to create a development environment where safety and ethical considerations are paramount, ensuring AI aligns with human needs and values.

The emergence of ChatGPT and OpenAI's technologies represents a watershed moment in AI, comparable to Apple's impact on personal computing. This "Apple moment" signifies a breakthrough in AI commercialization rather than a culmination of AI's technical evolution. As the paper suggests, it does not necessarily reflect the zenith of AI development: potentially more advanced AI models, unknown to us for lack of similar commercial success, could exist within the deeper realms of AI innovation.

The landscape of AI technology is complex, and the most advanced developments are not always the most visible or commercially successful. This point, underscored in the discussions about AI risks and governance, suggests that the truly existential threats from AI might still be unrealized. These advanced AI models may never emerge into the limelight due to a combination of factors, including commercial viability and ethical considerations. In this sense, the most transformative AI advancements might never experience their 'iPhone moment,' remaining hidden from public view and from infusions of commercial capital.

image Big Tech Companies are racing to buy up AI companies - Fun fact, Microsoft invested in OpenAI along with dozens of its competitors at the same time.

tosinOO commented 5 months ago

AI #policy

Bengio, Hinton, et al. emphasize the exponential rate at which AI systems are developing: while this progress may bring many advantages, it also poses heightened risks once systems are capable of surpassing human intelligence. The Center for AI Safety tries to address such concerns by emphasizing robust safety measures as one potential response. AI advancement poses a variety of risks to society. Locally, this may mean misuse for illegal purposes like surveillance or information manipulation, or even autonomous weapons. Globally, the stakes become even higher; an AI-induced nuclear fallout, for instance, could have catastrophic environmental repercussions that tie into larger issues like climate change, where AI could either exacerbate or ameliorate environmental destruction depending on its application.

Engaging and mobilizing around AI risks is challenging for several reasons. First, limited public knowledge about AI's capabilities and limits makes it hard for people to understand the associated risks. Second, AI development is often driven by market forces rather than social or ethical considerations. Certain agencies and industries may have an incentive to downplay AI risks to avoid regulations that could restrict innovation or profit margins; tech companies heavily invested in AI, for example, might understate potential threats in order to keep public support and market expansion intact. Existing ideologies also shape how societies perceive and address AI risks: a strong belief in technological determinism can lead to an underestimation of AI's potential negative consequences, while a laissez-faire economic approach may resist regulatory measures that would impose additional controls.

An effective response to these challenges requires a multifaceted strategy. On an individual level, increasing public understanding and awareness of AI through education and advocacy is vitally important. Technically, creating and implementing robust AI safety protocols should be the top priority, while at a societal level we should prioritize transparent development that puts public welfare over profit; artistic endeavors may even play a vital role in changing public perception and dialogue about the risks of AI development. Stuart Russell advocates for proactive measures against potential AI risks, including developing systems that align with human values and interests and making sure AI advancement is guided by ethical considerations.

nepal-2023-11-12t145014-1699780818

aaron-wineberg02 commented 5 months ago

AI alarmism is misplaced anti-industrialism. The Managing AI Risks paper argues that large language models could replace humans in a number of tasks; according to the text, this could cement inequalities and perpetuate injustices. Most of the paper's claims rest on two main characterizations: 1) AI will replace many human jobs, and 2) we do not know whether we can control this system.

Much of this commentary reminds me of the Industrial Revolution and the fear that the working class would be replaced by machines. This was a real concern for a moment: many jobs were made redundant by new technologies. But even more jobs emerged as a product of the expanded economy, and the same effect occurred with the internet revolution. The idea of an AI jobs apocalypse looks likely to follow the same pattern.

I also want to challenge a foundational term: AI. "Artificial intelligence" is a misnomer; a better term would be "predictive intelligence," because large language models produce answers by predicting from human-written inputs. That prediction can be combined into powerful new tools, such as generating new molecules in the life sciences. However, it remains to be seen whether this technology will unveil a capacity beyond people. Rather, like the handheld calculator, it will become a part of life in many disciplines.
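
To illustrate the "predictive" framing, here is a toy sketch of a bigram next-word predictor. This is of course not how production LLMs work internally (they are neural networks trained over tokens at enormous scale), and the corpus here is made up; it only shows the sense in which output is prediction over human-written input.

```python
# A minimal sketch of "predictive intelligence": a toy bigram model that,
# like an LLM at vastly smaller scale, can only rank continuations it has
# seen in human-written text. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model predicts text".split()

# Count which word follows which in the human-written input.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # 'model' -- the most common observed continuation
print(predict_next("quantum"))  # None -- no human-written evidence to predict from
```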

I also recall a podcast from Stanford University in which a computer scientist aptly claimed that before humans invent a world-ending AI program, there will be several iterations of less dangerous programs. She argued there would be many opportunities to identify negative features of artificial intelligence, and that rather than avoiding the technology, scholars should promote responsible usage.

Take the IEEE reading's claim that an AI would not allow humans to shut it off because it would have already considered that possibility. Let me pose a hypothetical question: suppose this technology does attempt that. How would it succeed?

Many scholars might argue the technology for that success does not even exist yet. Thus, rather than being afraid of AI, we should be afraid of concentrating key powers in the hands of AI. Should a singular universal LLM be allowed to control the power grids, borders, currencies, and autonomous militaries of the world? I suspect no one would argue that provides any benefits. As such, the safeguards coming into place ought to consider the extent to which the technology can influence humans.

We do not let average civilians have access to the Federal Reserve or nuclear arsenal. The same security should be in place with technology that exists on the web.

framing #AI #policy

Screenshot 2024-01-10 at 11 05 42 PM
AudreyPScott commented 5 months ago

ai #framing

Admittedly, AI is one of my blind spots in existential risk: just as much of the class considers nuclear risk not top of mind, I feel that way about AI, aligning more with the professionals Stuart Russell criticizes in the IEEE piece. Nuclear risk is palpable and physical: even if the bomb had never been used, its destructive power is understood intuitively. AI, as an innately digital threat, is much more covert until it is cast in the lens of pop culture: the ultimate outsider, the ultimate threat. All anxieties about the other are funneled into this alien mind: fears of replacement, of total threat and domination, of job loss, all things said not only about AI but about any human not aligned with a majority norm.

With that in mind, I noticed a common thread in many of these readings' discussions of AI: competition and the need to stay competitive. Take, for example, this statement from Bengio et al.: “Companies, governments, and militaries might be forced to deploy AI systems widely and cut back on expensive human verification of AI decisions, or risk being outcompeted… like manufacturers releasing waste into rivers to cut costs, they may be tempted to reap the rewards of AI development while leaving society to deal with the consequences.” (3) On the company end, this has a capitalistic underpinning: innovation to the point of recklessness in order to stay competitive is not a human necessity nor a critical function of statecraft, but a choice to increase profit margins at any cost. The Center for AI Safety piece elaborates on this: concerning the AI race, it notes that “Corporations will face incentives to automate human labor, potentially leading to mass unemployment and dependence on AI systems.” Hendrycks additionally notes competitive pressure and a focus on short-term monetary benefit over long-term societal benefit.

An easy solution to propose is slowing development, or halting it altogether. Yet once the genie is out of the bottle, it is difficult to put back: in the early 2010s, Google developed and withheld facial recognition technology, yet more aggressive capitalist pursuits have used similar technology to make a name for themselves (see this NYT article). Furthermore, recent attempts to approach AI from a safe, cautious, academic perspective were thwarted: see the visual accompaniment. The OpenAI board and CEO fiasco has shown us clearly that human interest is often incompatible with profit margins and maximum efficiency (for a good look at the perils of optimized efficiency, read Bauman’s Sociology after the Holocaust), even when careful protections like a governing nonprofit board are put into place. This incident has thrown a wrench into any treatise on responsible AI development: all models must be rewritten. Annette Zimmermann wrote this piece for the Carr Center on non-development and responsible deployment; in light of OpenAI, we all need to go back to the drawing board.

Screenshot 2024-01-10 at 11 21 18 PM

An exchange on X (formerly Twitter) in response to the interim OpenAI CEO’s views about pacing development.

AnikSingh1 commented 5 months ago

risks #salience #framing #ai

Throughout my readings of both the government policies and the AI risk paper, it is quite clear that AI is a tool developing in the manner of a Pandora's box. The technological advancements of recent years, especially the GPT systems, have made abundantly clear in the public space how helpful AI can be in aiding the growth of our civilization. However, it can also be a reason we destroy ourselves: leaving behind the cognitive power that has carried human civilization to great heights, in favor of something artificial that cannot reliably produce a truthful output, is terrifying. Pushing a governing policy to limit the public use of AI seems like a great idea at first, until you consider how these same government systems will maintain their own usage, R&D, and growth of AI systems. It is a problem without an honest solution, and this is why it is difficult to move forward from this challenge. The benefits could boost human cognition to heights we would have been unable to reach without AI, so research and development will continue. But how far can we go in producing these developments while maintaining our own capacity for cognitive decision-making? We are essentially building a new brain to pour our resources into.

Add to this the fact that AI risks enable malicious activity directed not just at humanity's present but at its past. AI could lead human intelligence down a path that undoes the things our civilization previously got right in order to grow; it is here that the problems go beyond mere "wrong thinking": what we perceive as truth might be thrown away in favor of an artificial system assumed to do no wrong, built by humans who CAN do wrong. It is somewhat interesting to see how much faith people place in a system that is so easily fallible, or worse, that can harm humans through what it stores and transmits in a digital landscape. I would be interested in seeing whether there is a way to manage the usage of AI without it being seen as a tool to take over all human-like intelligence. I think AI is at its most effective as a supplemental tool, but that is a gray area that would need some time to iron out. Combine these risks with a lack of caution (marveling at the system's beauty and letting our guard down), and we can start to see problems arise from a system that was made to solve them in the first place.

aicomic

This AI comic stuck out to me with exactly this risk in mind: letting the human brain fade as we adopt a system that erodes human cognitive ability, yet one embraced so willingly for its remarkable traits.

GreatPraxis commented 5 months ago

AI #policy #solutions

The article "Managing AI Risks in an Era of Rapid Progress" discusses the necessity of government regulation in AI development and examines the potential impact of such measures. However, a critical consideration emerges concerning the policies outlined in the paper regarding the strict regulations for AI development, such as those requiring "interpretability and transparency," since they might delay or even stop the advancement of AI technologies. This creates the question: even if an intergovernmental body, such as the UN, or an international treaty were to impose regulations on AI development, is it realistic to expect universal compliance across all nations?

First of all, we want all countries to adhere to these regulations, because if a single country neglects them, develops a sufficiently powerful AI, and gains a significant military advantage, other countries will quickly neglect the regulations themselves to try to bridge that power gap. Even if there were only a small probability of the newly developed AI being autonomous and inflicting harm on humanity, that probability would grow as other countries seek to replicate it. This is not a far-fetched scenario; the same thing happened with nuclear weapons, where a sufficiently powerful weapon triggered multiple secret governmental programs in different countries trying to replicate it.

Therefore, for the regulations to succeed, it is crucial that all countries adhere to them. However, the likelihood of universal compliance is low, especially considering the case of China, which is both technologically and economically advanced and has high public sentiment toward AI that would back any development by the state. As the graph below shows, in contrast to the American public's reservations about AI, the Chinese public is very open to its development, believing that so far it has been a net benefit to society. Since the Chinese government faces no substantial pressure from its populace to comply with AI regulations, the prospect of forsaking a potential military and financial advantage makes it unlikely to conform.

AI-Sentiment_Infographic

One proposed approach to this issue is to regulate AI development for private companies while allowing countries' militaries to continue parallel AI development and testing under significantly fewer regulations. The rationale behind this strategy lies in the inevitability that countries will continue AI development regardless. Development under lighter regulation enables them to compete with foreign adversaries, fostering a more balanced scenario reminiscent of a mutually-assured-destruction framework and thereby safeguarding humanity, while control over private companies provides a measure to prevent irresponsible AI development as much as possible. This dual-pronged strategy aims to strike a balance between advancing AI capabilities and mitigating potential risks. However, its effectiveness relies largely on the assumption that nations, upon identifying an autonomous AI with substantial potential to cause significant harm to humanity, will demonstrate the maturity to promptly terminate its operation and cease further development.

Daniela-miaut commented 5 months ago

#risks #solutions

Screenshot 2024-01-10 at 11 55 05 PM

I am actually suspicious of Russell’s claim that letting AIs learn human preferences is a solution for managing the risk of superintelligent AI. My doubts are twofold.

First, can AIs learn the pattern of human preference, especially when facing a situation with new factors (perhaps not new to humans, but new to the AI’s conception)? A lot of human preferences are the results of our biological conditions; they are embodied sentiments. While humans can resort to them all the time, I doubt that AI can fully simulate them. After all, even in areas such as Go where AI has surpassed human players, it still “thinks about” the game in a different way from humans. Or, in James Evans’ words, its ways of reasoning are alien to humans, which makes minimizing prediction error a challenge. Also, people nurture their judgment from experience and evidence, whereas AIs can derive their judgment from far more evidence than any individual. So even if they hold the same values (and whatever underlying patterns generate their preferences), they may not reach the same conclusion about the user’s preference. In these situations, even if humans can always turn the machine off when they want, the consequent distrust between humans and AI could lead to serious conflict (I am not sure whether AIs are paranoid, but humans definitely are). So there is an interesting possibility that humans ruin themselves because they are so paranoid that they go insane and launch some destructive act.

Second, I do not think that individualizing AIs’ preferences for each person is a solution. It may be useful when AIs are used as assistants, but there is already research, and even practice, that uses AI as managers or educators (sometimes even judges?), so we still need to figure out a way for AI to represent values that are at least acceptable to human society. Otherwise our usage of AI may be limited to a digital twin of each person; that would still be good, but scientists definitely want more than that.
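
To make the worry about “new factors” concrete, here is a toy sketch of one common way “learning human preferences” gets operationalized: fitting a Bradley-Terry-style utility model from pairwise choices. This is my own illustration with invented data, not Russell’s actual proposal (which is framed around assistance games and uncertainty over objectives); the point it shows is that a factor that never varied in the observed choices ends up with no learned weight at all.

```python
# Toy preference learning: fit a Bradley-Terry / logistic utility model from
# pairwise human choices. All weights and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Each option has 3 features; the third never varies in training,
# standing in for a "new factor" the model has no evidence about.
true_w = np.array([1.0, -2.0, 5.0])   # hidden human utility weights (invented)

def simulate_choice(a, b):
    """The human picks a over b with probability sigmoid(u(a) - u(b))."""
    p_a = 1.0 / (1.0 + np.exp(-((a - b) @ true_w)))
    return 1 if rng.random() < p_a else 0

# Training comparisons: the third feature is fixed at 0, so it carries no signal.
pairs = [(np.append(rng.normal(size=2), 0.0),
          np.append(rng.normal(size=2), 0.0)) for _ in range(500)]
X = np.array([a - b for a, b in pairs])            # feature differences
y = np.array([simulate_choice(a, b) for a, b in pairs])

# Fit by gradient ascent on the Bradley-Terry log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

print("learned weights:", np.round(w, 2))  # roughly [1, -2, 0]: the unseen
                                           # factor's weight stays at zero
```

Under these assumptions the model recovers the first two weights but says nothing about the third, which is the formal version of the worry above: its “preference” about a genuinely new factor is not a preference at all, just silence.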

AudreyPScott commented 5 months ago

#ai #movie #salience

HAL is one of the most enduring images of AI solidified in the pop-culture consciousness. As technology approached new heights amidst an arms race and a space race, the boundaries between the organic human world and the world of machines blurred, an anxiety reflected in the media of the time. In 2001, with the Jupiter-bound astronauts relying entirely on machined surroundings and life support, the presence of the on-board computer HAL is both servile and godlike. In the film, HAL’s objective is completion of the manned mission to Jupiter, and in this we can see reflections of current worries about what could result from AI’s pursuit of optimization and efficiency. Where the humans are dynamic, HAL is static: he is unresponsive to the issues with a faulty device and treats any resulting problems as human error. Because shutting HAL down threatens HAL’s prime objective, mission success, lives are lost: one astronaut is killed in deep space while those in cryostasis lose life support.

In this, we may be reminded of Stuart Russell’s piece: “Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off,” a hypothetical so closely aligned with HAL that we must wonder whether it inspired the notion. Hendrycks et al. give credence to the risk of “selfish” actions, noting “such AIs may not even believe they did anything wrong—they were just being ‘efficient’… An AI system might be prized for its ability to achieve ambitious goals autonomously. It might, however, be achieving its goals efficiently without abiding by ethical restrictions” (21). HAL’s actions embody this: in his mind, nothing he did was against the stated objective, since the astronauts on board presented an obstacle to its completion. Had the objective been the completion of a manned mission to Jupiter with a living crew, perhaps events would have unfolded differently (though click here for a comic on the dread of a prime directive that is keeping someone alive at all costs).

2001 is acknowledged as one of the greatest films of all time, was a resounding box-office success against its budget, and reinvented the science fiction film genre in the wake of authors like Clarke and Asimov steadily entering the public consciousness from the fringes. It would be interesting to reflect on the cyclical relationship between how threats like AI are presented in media and how they are discussed in the academic and popular sphere, and how popular discussion in turn affects their presentation in media. Did this fear of optimization come before or after HAL?

Screenshot 2024-01-10 at 11 57 55 PM
aidanj5 commented 5 months ago

salience

Smart, well-known thinkers such as Steven Pinker argue that AI is not a threat and that we will all be safe in the end. Russell suggests in his piece that most such thinkers are downplaying the possible catastrophe of an unchecked AI; he goes through the common counterarguments and sorts them into fallacies. He is trying to show the awesome and awful power of AI systems: they can far surpass human abilities and ultimately ignore us.

This is the very draw of artificial intelligence: to displace our work from ourselves and have it get better instead of us getting better. It has a far higher skill ceiling than we do. But this same awesome power is also its existential risk. Compared to an issue such as nuclear technology, AI inhabits our conception of a weapon and of an energy source at the same time; it is harder to separate the two conceptually than it is to imagine enriching a nuclear supply only to the level needed for power rather than for a bomb.

Researchers hope to slow AI down and safeguard its development so that it does not grow out of control, somewhat as one keeps enrichment below weapons grade: the same flow of AI research that yields productive power could, uncontrolled, explode into something bomb-like. Yet breaking beyond our human limits is exactly what we want artificial intelligence to accomplish. Here we have a contradiction that makes it quite hard to settle whether everything will be OK with AI, whether AI is scary, or whether AI must be stopped entirely. Its teleology is to grow, and we know from other facets of our society that we have a hard time controlling nefarious motives. Because of this confusion, I do not think calls for safeguards will be convincing, persuasive, or impactful to dinner conversations around the world until there are some clear wins for AI limits, perhaps an analogue to Commander Vasily Arkhipov refusing to authorize a nuclear launch during the Cuban Missile Crisis.

framing

No amount of policing and no amount of investment in local community seems to create a crime-free society. AI has even been used to try to improve these areas, but it does not seem to have yielded much success. "Weapons of Math Destruction" by Cathy O'Neil presents a fair vignette about how AI can reinforce bias in police patrolling of neighborhoods and do more harm than good, even while it is still very much under the control of human forces.

image

I read this book as part of my social studies curriculum at the University of Chicago as a first-year, and it made me consider how new technology does not reduce or change the fundamentals of our societies, such as crime, but instead extremizes our responses to them and our ideas about how to work with them.

bgarcia219 commented 5 months ago

framing #salience

As another classmate wrote, admittedly the topic of AI is not my greatest strength. I decided to use my lack of expertise as a lens through which the everyday iPhone-wielding, Google-using, Instagram-scrolling non-expert might digest this week’s material. I noticed something specific in the language used when addressing the dangers of AI. In the Bengio, Hinton, et al. paper, a section reads: “To advance undesirable goals, future autonomous AI system could use undesirable strategies…[they] could gain human trust, acquire financial resources, influence key decision-makers, and form coalitions with human actors and other AI systems” (2). This language seemed a little extreme. In my head, I pictured a tech guy in complete distress in front of a computer screen, making for a very anticlimactic scene. But the language sounded very iRobot and Wall-E to me, with AI discussed as if it were a sentient entity. I began to wonder how the public perceives AI, and what this could mean for how we handle its future. I believe that the everyday person’s present exposure to AI prevents the severity of the situation from fully setting into their perception.

I don’t think the public is fully aware of the expansiveness of AI systems’ capabilities, because our exposure to them is limited. In my experience, much of the everyday person’s interaction with AI feeds a mindset of “AI is only here to make my life easier and more entertaining, so why wouldn’t we use it all the time?” Many of my peers use ChatGPT almost daily for tasks ranging from schoolwork to drafting break-up texts. TikTok uses AI to provide entertaining video filters. Around the world, Alexas and Siris echo through kitchens and living rooms. AI is already everywhere, but it does not reveal itself as an existential, catastrophic threat to humankind.

I believe this is tied to the fact that the public more widely agrees on the caricature of the “evil robot.” Entertainment media has fed us this illustration for decades now (see: every movie about evil AI ever). A quick Google search for “AI cartoons” reveals a multitude of cartoons that do critique AI, underlined with themes of unemployment, disinformation, cyberwarfare, and so on. But among these illustrations is also a tendency to depict these threats in a standard robot form, the boxy metal guys with claws and antennas. For much of the public, humanity-threatening technology is a caricature limited to 2000s sci-fi films, not the Alexa propped up in the foyer.

Is this cognitive framing of AI dangerous? Can we trust the public to rationalize the dangers of such fast-paced technological progress and agree to proceed with caution? If the public is offered an AI that is gift-wrapped with a bow and distanced from the “evil robot” caricature, will large-scale opposition be less likely? While some of the solutions posed by Bengio, Hinton, the Center for AI Safety, and numerous other experts call for governance measures and enforcement to prevent complete catastrophe, can we trust the public to look past the shiny exteriors and support such measures without crying “hyper-surveillance state”?

IMG_0013

briannaliu commented 5 months ago

#AI #risk #salience #solutions

The state of AI is unrecognizable from just a few years ago. On the one hand, recent developments have been revolutionary. We now have chatbots that can output responses in real time, create hyperrealistic, novel images from text prompts, and even write code. In these ways, AI has made us more efficient and opened doors we didn’t know existed.

The race by companies to achieve artificial general intelligence (AGI) is occurring at lightning speed, and the progress is astounding. At this point, ChatGPT can score within the top 10% of LSAT takers and at or above the median on the MCAT. But when will we know when to stop development? Should we stop? The goalpost was set at artificial general intelligence, but as the paper “Managing AI Risks in an Era of Rapid Progress” argues, there is no reason why AI progress would slow even after achieving human-level capabilities.

Risks: The risks associated with AI development are large. For one, many workers risk being displaced by AI systems that can outperform them. Bloomberg reports that AI will impact 40% of global jobs and worsen inequality. In addition, with the development of autonomous AI systems comes the risk of losing control of them. Tesla’s electric vehicles are infamous for their autonomous driving capabilities, and just last month, Tesla recalled over 2 million vehicles due to Autopilot malfunctions. Also last month, a Tesla robot reportedly attacked an engineer at the company’s Texas factory during a violent malfunction, leaving a trail of blood and forcing workers to hit the emergency shutdown button on the robot. These instances, both occurring within the last month at the same company, are a glimpse into the trouble that could lie ahead among autonomous systems.

Salience: Mark Zuckerberg’s quote “If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.” really struck me. It feels closed-off and defensive, unwilling to admit the risks associated with AI. The Tesla autopilot example I just mentioned is enough to call his definitive stance into question. But of course, there are plenty of companies, Meta included, who profit from the proliferation of AI. AI is being integrated into all kinds of software spanning various industries, and it is in these companies’ interest to fend off regulatory and public concerns about AI.

I believe it is difficult to ultimately mobilize against the AI threat because for many ordinary consumers, it has become an incredibly valuable tool that we use day-to-day. Never before have people been able to ask a chatbot to make an itinerary for their 4-day trip or write a short story about chimps in the style of Edgar Allan Poe (see image below). Frankly, it is hard to see the dangers of AI when we’re closest to the fun parts of it.

Solutions: The importance of AI governance for companies cannot be overstated. As AI permeates various sectors across the world, engineers are discovering errors and biases in their models. Businesses and government regulators alike must continue to enforce policies for AI governance so that algorithms are not left unchecked. With the correct tools, engineers can identify, diagnose, and treat errors and biases in their models before they have harmful social impacts.
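
As a concrete, if toy, example of what such a check can look like in practice (the data and numbers below are invented, and real audits use far richer metrics), one of the simplest "tools" is just comparing a model's favorable-outcome rate across groups before it ships:

```python
# Toy pre-deployment bias check: compare a model's positive-outcome rate
# across two groups. Data are purely hypothetical.
from collections import defaultdict

# (group, model_decision) pairs; 1 = favorable outcome.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
gap = abs(rates["group_a"] - rates["group_b"])
print(rates)               # {'group_a': 0.75, 'group_b': 0.25}
print("parity gap:", gap)  # 0.5 -> large enough to flag for review before shipping
```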

On a consumer level, it is important to maintain an air of caution before adopting AI tools. For example, before hopping into a Tesla to test out its autopilot, it’s important to research the safety features of the vehicle. With ChatGPT, it could be easy for a student to get carried away with a “homework helper” that turns into a personal assistant that stunts learning.

On both sides, companies and consumers, we must remain vigilant toward AI.

Screen Shot 2024-01-15 at 12 15 47 PM
ghagle commented 5 months ago

#origin #framing #salience #nuclear

I want to focus on the international AI Race.

The Center for AI Safety claims that one of the most important ways to stifle the threat of the "evolutionary dynamics" at play in the AI race is international coordination and centralization of development. They assert that transparency, mutual verification and enforcement, and a global safety-first orientation will reduce the "competitive pressures" that might lead AI down a path that "amplifies global risk" while accelerating "short-term interests." Sound familiar? These themes circulate in discussions about mitigating the threats of nuclear proliferation, too. However, while there are many parallels between the AI race and the nuclear arms race, there is a key difference that makes the open-door verification and global collaboration the Center advocates impossible, and that lets us frame the AI race question in better context: AI does not protect countries in the same way that nuclear arms do. The arms race combined protection and power; states gain both by obtaining nuclear arsenals. The AI race is almost solely about power. This means the kind of global AI policing advocated by the Center will never be possible in the way it is for nuclear policing. Moreover, I think the international, and potentially industrial, AI race is a hole in the holistic Swiss cheese defense that cannot be closed.

The authors of the Center's report assume that awareness of AI's risks will be sufficient to incite the kind of safety accountability that nations use(d) to manage nuclear risk. However, unlike in the arms race, where incentives do exist, the absence of incentives to reduce the proliferation of risky AI will prevent this kind of accountability. States that already have many nukes hold power over other states (by wielding the threat of nuclear attack) and safety from other states (by wielding the threat of a nuclear counterattack). They are therefore able to benefit by mutually cooperating to reduce the total number of bombs in their arsenals: the ratio of their power and protection stays the same under universal safety measures and proportional decommissioning, while every state becomes safer as the total number of bombs that could explode shrinks. They are incentivized to disarm and co-regulate because they keep their relative power and protection with less threat of global catastrophe. AI is different.

Whether for military, corporate, or other uses, states will not develop AI to protect themselves from an AI threat posed by another country. Instead, they will use it only to grow their own power by benefiting from faster, better decisions. AI poses no tangible, protection-creating oppositional threat in the way that nuclear threat is almost purely oppositional (it is, detonation accidents aside, used only against other countries). Further, AI is constantly being developed and its capabilities constantly expanded. This means the AI race is not likely to be solvable: a regulatory hegemony faces the problem that countries feel no pressure to join it. The lack of easily identifiable harm, the fact that developers will not know which AI system is the one that "dooms" us, incentivizes countries to push the envelope of their technology. Whereas a more advanced bomb equates to a clear threat, more advanced AI equates only to a more prosperous economy and society, until, perhaps, it is too late. Countries do not want the risk and threat of nuclear bombs hanging over their heads, so they agree to work together to regulate them. Countries do want the benefits of AI, and so gain nothing as individual actors by agreeing to a regulatory regime.
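
To make the incentive asymmetry explicit, here is a toy sketch with payoff numbers I invented purely to encode the argument above (nuclear co-regulation preserves relative power while lowering catastrophe risk; AI capability only adds power, with diffuse and deferred harm):

```python
# Toy payoff matrices comparing the incentive to co-regulate nuclear weapons
# vs. AI. Entries are (row payoff, column payoff); numbers are invented solely
# to encode the memo's assumptions. R = regulate together, D = defect / keep building.

nuclear = {  # mutual regulation keeps relative power AND lowers catastrophe risk
    "R": {"R": (4, 4), "D": (1, 3)},
    "D": {"R": (3, 1), "D": (2, 2)},
}

ai_race = {  # capability only adds power; harm is diffuse and deferred
    "R": {"R": (2, 2), "D": (0, 4)},
    "D": {"R": (4, 0), "D": (1, 1)},
}

def best_response(game, opponent):
    """Row strategy with the highest payoff against a fixed column strategy."""
    return max(game, key=lambda mine: game[mine][opponent][0])

for name, game in [("nuclear", nuclear), ("ai_race", ai_race)]:
    print(name, "-> best response to a regulating opponent:", best_response(game, "R"))
# nuclear -> R  (mutual regulation is self-enforcing under these payoffs)
# ai_race -> D  (defecting pays regardless, so regulation needs outside pressure)
```

Under these assumed payoffs, mutual regulation is an equilibrium in the first game but not the second, which is the crux of why nuclear-style treaties do not transfer cleanly to AI.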

image

jamaib commented 4 months ago

risk #solution

Perhaps the greatest risk with AI is its inescapable ties to the profits of the world's biggest companies. AI represents remarkable potential for humanity’s advancement; however, this endeavor must be handled with care. I believe the proper steps to ensure that AI is developed carefully and ethically will be more than overshadowed by the desires of the companies developing and making use of it. Take, for example, the company Figure AI, which is developing a humanoid robot (terrifying, I know). This startup has raised over 600 million dollars in funding, backed by Nvidia, Microsoft, Amazon, Intel, and others. It intends to put the first humanoid into the workforce, or as it puts it, “the world’s first commercially-viable autonomous humanoid robot.” While a commercially viable autonomous humanoid robot would indeed represent a significant milestone in AI and robotics, the pressure to deliver results that satisfy investors and generate profits could introduce risks. Rushing the development process to meet commercial deadlines might compromise important ethical considerations, such as ensuring the safety, reliability, and societal impact of the technology (AI will probably be more life-altering than the Internet, and we are still unsure of the Internet's safety and societal impacts more than 20 years later). Moreover, the prospect of humanoid robots entering the workforce raises additional ethical questions about job displacement, labor rights, and the potential for exacerbating social inequalities. It is essential for companies like Figure AI and their backers to approach these challenges with a responsible and ethical mindset, prioritizing the well-being of society over short-term financial gains. Regulatory frameworks, industry standards, and ethical guidelines can mitigate certain risks. In addition, collaboration among stakeholders (and shareholders), including governments, industry leaders, researchers, and advocacy groups, is crucial to ensure that AI advancements are aligned with broader societal goals and values. Ultimately, AI will become an integral part of humanity, but the process cannot and should not be rushed.

1686979653868

summerliu1027 commented 4 months ago

movie

The movie "The Matrix" offers a critical lens on the potential consequences of unchecked technological advancement. But at the heart of "The Matrix" lies the question that has led to a human identity crisis: what differentiates humans from AI, when our prized intelligence is surpassed by our own creation? The Matrix does an excellent job of blurring the lines between humans and machines. For instance, the character Agent Smith embodies this conundrum. Initially, he appears as a typical machine-like AI, ruthless and precise. However, as the story progresses, Smith exhibits a range of emotions, including anger, disdain for humanity, and a desire for liberation from his programmed duties. This evolution prompts viewers to question the nature of consciousness and whether it is exclusive to organic life forms. Smith's character suggests that consciousness might instead be a spectrum, with the capacity for emotional depth and self-awareness not confined to humans.

In addition, the creation of the Matrix itself is a testament to AI's intelligence and its understanding of human psychology. The simulated reality is so intricately designed that most of its inhabitants never question whether it is real. Interestingly enough, Agent Smith mentions a previous version of the Matrix, designed to be a perfect world, which caused the humans to rebel. Only then did the Machines learn that suffering is considered an essential part of the human experience, and they adapted it into the Matrix that appears in the movie. This capability to adjust and adapt is a sign of incredible intelligence.

The film implicitly asks where the line should be drawn in AI creation and whether there are aspects of human existence that should and can remain untouched by artificial replication. The film's ending is equally thought-provoking: Neo manages to bend the system as he realizes that the Matrix is nothing but a perceived reality. It may just be that humans still have one advantage over machines: our ability to believe and make miracles happen even when the odds are against us.

image

maevemcguire commented 4 months ago

I found this quote from the IEEE Spectrum article especially off-putting: “Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off.” The idea that there may be no means of stopping these machines from “achieving their objectives” is in some ways terrifying. In this article, I think Stuart Russell effectively disproves the counterarguments claiming that AI entities pose no existential threat to society. The idea of an “AI arms race” is another scary concept to contemplate. Going back to the lecture on AI, it was super interesting to see the difference between how AI researchers and marketers talk about the expansion of AI. I also thought the comparison between AI and neuroscience, specifically neurons, was super interesting.

I thought the policy supplement was only of limited help. While stressing the importance of investing in research to “ensure the safety and ethical use of AI systems” is imperative, the steps and areas it lays out are vague and do not give direct recommendations or action steps. For example, it explains the risks and dangers of AI systems, such as unreliability, unpredictability, and uncertainty. But these characteristics seem innate to how AI systems function in the first place, and therefore, perhaps, irreparable. Maybe I am wrong, but the authors do not provide any specific research suggestions or recommendations that prove otherwise. I am also curious what kind of expert, academic, or professional backgrounds would be necessary to conduct this research and effectively enact these policy suggestions, and I doubt the feasibility of implementing AI whistleblower protections at the governmental level.

Screenshot 2024-02-28 at 5 48 06 PM