jamesallenevans / AreWeDoomed

GitHub Repo for the UChicago, Spring 2021 course *Are We Doomed? Confronting the End of the World*

April 22 - AI - Questions #13

Open deholz opened 3 years ago

deholz commented 3 years ago

Questions for Stuart Russell, inspired by the week's readings:

Questions: Every week students will post one question here of less than 150 words, addressed to our speaker by Wednesday @ midnight, the day immediately prior to our class session. These questions may take up the same angle as developed further in your weekly memo. By 2pm Thursday, each student will up-vote (“thumbs up”) what they think are the five most interesting questions for that session. Some of the top voted questions will be asked by students to the speakers during class.

AlexandraN1 commented 3 years ago

Given the speed of technological innovation, the capability of governments to understand and manage these technologies is sometimes limited. How do you think we can increase expertise in government on AI issues systematically and rapidly?

starmz123 commented 3 years ago

If it is too early to regulate superintelligent AI, what kind of policy would be beneficial for protecting against catastrophic AGI? Could we make our governing institutions generally more resilient, such as by improving global coordination or the valuation of future human lives? Or should we consider regulating the development of AI, perhaps even setting up advance treaties (à la nuclear treaties)?

jane-uc21 commented 3 years ago

You express concern that superintelligent AI will lead to a society of "lotus eaters." In this vein, there is also concern that dependence on "computational creativity," the use of algorithms to combine known ideas in novel and valuable ways [1], and specifically on co-creative human-AI interfaces, will reduce our own creative capacity [2]. I am inclined to push back with the interaction-theory perspective that our biological drive for human empathy, connectivity, authenticity, and trust (e-CAT) will allow humans to find purpose in roles requiring e-CAT [3], e.g. artists whose work embodies their emotional and physical creative process, or doctors who deliver prognoses with empathy.

How can policy/society hold space and purpose for humans when superintelligent AI so drastically outskills us? Could the capabilities of superintelligent AI and human comfort with AI converge to a place where interactions with AI meet our e-CAT needs in sensitive situations?

References:
[1] Russell, S. (2020). *Human compatible: Artificial intelligence and the problem of control*. Penguin Group.
[2] Llano, M. T., & Mc Cormack, J. (n.d.). Existential risks of co-creative systems. Retrieved April 20, 2021, from https://computationalcreativity.net/workshops/cocreative-iccc20/papers/Future_of_co-creative_systems_184.pdf
[3] Wiley, C. (n.d.). Empathy, connectivity, authenticity, and trust: A rhetorical framework for creating and evaluating interaction design. Retrieved April 20, 2021, from https://lib.dr.iastate.edu/cgi/viewcontent.cgi?article=3715&context=etd

janet-clare commented 3 years ago

It’s understandable that a binary view of AI is more sensible than a unary one: machines, as entities, should not be “entitled” to pursue “their own” objectives; they are machines, and any objectives at their core would be the result of human generation. A binary view engages check steps along the way, which is always a good idea. However, a binary view depends on human interaction at another level, beyond objective formation and starting-point input. How are we to address the existential question of who is responsible for not only designing, but perhaps more importantly redesigning, these objectives? How can we trust that the original objectives are not misconstrued and remain consistent, safe, beneficial, and in alignment? How would it be possible to regulate this input? How would we prevent interference or intervention, nefarious or otherwise?

bdelnegro commented 3 years ago

How do fictional and non-fictional narratives alike shape the public perception and policies around existential threats? For instance, how has the dramatization of AI in science fiction and Hollywood film franchises affected the research, reception, and regulation of AI? What role do individuals like yourself play in either reinforcing these narratives or reshaping them?

seankoons commented 3 years ago

In the last century, we as a society have seen the mechanized takeover of jobs. Many industrial jobs, such as assembly-line work, are now mostly computer- and robot-based. For physically demanding jobs such as construction and landscaping, as well as dangerous and risky jobs such as mining and lumberjacking, do you think AI has the potential to incorporate programming that will allow it to do unpredictable physical work?

dramlochun commented 3 years ago

We have seen so much discussion of the consequences of AI if humans are unable to control it. But what if we are able to control it, and, because of its inevitable price, the wealthy in particular control and possess AI capabilities? It seems that the consequences of this could be as immense as those of any other risk of AI. Inequality would become an issue greater than it has ever been. Moreover, how will society be able to adjust so that equal opportunity as a concept still persists? Setting the wealthy aside for now, what if a single country possesses the technology first? What would that mean for other countries, their economies, cyber warfare, and many other issues that we cannot even begin to comprehend without a full understanding of AI's future capabilities?

dillanprasad commented 3 years ago

This course has a film and literature element, with a selection of suggested movies and novels accompanying each week's existential concern. At the same time, very few people in the country are even qualified to talk about AI from a technical, computer-science background. How do you believe the (often) dramatized portrayal of super artificial intelligence in film and literature has contributed to a "collective consciousness" on the topic? Does industry have a bullish stance, while the public is bearish? How do you see this fictional/actual balance affecting the way that we approach regulation in the long term?

Junker24 commented 3 years ago

How do movie interpretations of AI technology taking over the world give AI technology a bad reputation in our world today? As discussed in a few of the articles I read for this week, it seems as though many movies have a "robot takeover" theme built on the advancement of AI technology. I think this is interesting and very relevant, and it gives the advancement of AI a bad persona.

vtnightingale commented 3 years ago

I am very convinced by your argument that the danger in AI is not its malevolence, but rather that there are too many variables that cannot all be taken into account, making the threat of AI less a Terminator situation and more a genie who is a stickler for clarity. However, you also present at least one way in which an AI's goals can be designed so that it either asks for permission or ultimately allows itself to be shut off in cases of uncertainty. Given that, to borrow your analogy, the containment for the nuclear chain reaction is currently being thought up by individuals like yourself, can we really say that AI still holds an existential risk to humanity?
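For readers who want the intuition behind "allow itself to be shut off," here is a minimal sketch, purely my own illustration and not code from the readings: a machine that is uncertain about the utility of its action, and that models a human who will block the action when it is harmful, never loses expected value by deferring. The belief distribution and the approve-only-when-beneficial human are simplifying assumptions.

```python
# Toy illustration (hypothetical numbers) of why uncertainty about the
# objective makes deferring to the human worthwhile for the machine.
import random

def expected_value(samples):
    return sum(samples) / len(samples)

def act_now(belief_samples):
    # Acting without asking collects whatever utility u turns out to be.
    return expected_value(belief_samples)

def ask_first(belief_samples):
    # Deferring: assume the human blocks the action whenever u < 0,
    # so only the non-negative outcomes are realized.
    return expected_value([max(u, 0.0) for u in belief_samples])

random.seed(0)
# The machine's belief about the human's utility u for its proposed action.
belief = [random.gauss(0.2, 1.0) for _ in range(100_000)]

print(f"act without asking   : {act_now(belief):.3f}")
print(f"ask / permit shutdown: {ask_first(belief):.3f}")
# Deferring is never worse, and is strictly better whenever the belief
# puts probability on u < 0 -- i.e., exactly when the machine is uncertain.
```

The gap between the two numbers is the expected harm the human would have vetoed, which is why objective uncertainty, rather than certainty, is what makes the off-switch acceptable to the machine in this toy model.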

madisonchoi commented 3 years ago

Your argument for a game-theoretic approach to AI design in which machines maximize human future-life preferences—such that human objectives and those of robots do not oppose one another—is very compelling. You mention that in order for this to be possible, “all of the human choices ever made” must be the basis of evidence for the machine. However, human preferences on an individual level inevitably change over time as one lives and experiences more things. So, how can we expect a machine to have a good grasp on what our preferences really are if they are ever-changing for ourselves, and to what extent could a machine know which preferences are actually good for us? In other words, how might a machine reconcile human opinion and that which is actually good for a human?
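As a concrete toy picture of what treating "all of the human choices ever made" as evidence could look like, here is a minimal sketch, my own illustration under strong simplifying assumptions rather than Russell's actual proposal: the machine holds a probability distribution over candidate preference weights and updates it from noisily rational choices. The Boltzmann-rationality model, the candidate values, and the numbers are all hypothetical.

```python
# Toy Bayesian preference inference from observed choices (hypothetical setup).
import math
import random

def choice_prob(theta, option_a, option_b, beta=2.0):
    # P(human picks A | preference weight theta), noisily rational (Boltzmann) model.
    ua, ub = theta * option_a, theta * option_b
    return math.exp(beta * ua) / (math.exp(beta * ua) + math.exp(beta * ub))

def posterior(observed_choices, prior):
    # Bayesian update over candidate preference weights.
    post = dict(prior)
    for option_a, option_b, picked_a in observed_choices:
        for theta in post:
            p = choice_prob(theta, option_a, option_b)
            post[theta] *= p if picked_a else (1.0 - p)
    z = sum(post.values())
    return {theta: w / z for theta, w in post.items()}

candidates = [-1.0, 0.0, 1.0]                 # candidate "what the human values"
prior = {t: 1.0 / len(candidates) for t in candidates}
true_theta = 1.0                              # unknown to the machine

random.seed(1)
choices = []
for _ in range(20):
    a, b = random.random(), random.random()
    picked_a = random.random() < choice_prob(true_theta, a, b)
    choices.append((a, b, picked_a))

print(posterior(choices, prior))              # mass concentrates near theta = 1.0
```

The concern raised above, that preferences drift as people live and experience more, would correspond to making theta time-dependent, which this static sketch deliberately does not handle.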

brettriegler commented 3 years ago

Is there a possibility of creating AI with the intention or purpose of keeping other AI from harming humans? This idea could be the fail-safe that many scientists are looking for when pushing the boundaries of AI development. One problem that I could see with this is that the AI could change its objective. That leads me to my second question: can an AI change its objective on its own? If anyone understands AI and superintelligence better and in more detail than the readings, I would love to learn more.

smichel11 commented 3 years ago

How would we go about getting global legislation regulating AI technology? Do you foresee these efforts as being successful? Where might we fall short?

TimGranzow7 commented 3 years ago

The reading this week made it quite clear that superintelligent AI stands to provide seemingly limitless benefits, but also poses significant existential threats, especially over the long term. Naturally, AI serves to benefit us through quality-of-life improvements (otherwise there would not be active research into its development). It has been shown, however, that those working on AI also tend to be the least worried about existential threats arising from it. This should be reassuring, but it appears that this is often an oversight, one that arises from competition and an oversimplification of how the risks can be mitigated. Where do you see the line between AI that benefits us and misalignment? Is there a visible event or series of events at which this shift happens? Or is the shift gradual and ultimately invisible until it is too late, as in the Life 3.0 "Omega Team" thought experiment?

ydeng117 commented 3 years ago

It seems one of the major problems in the AI crisis is that super-intelligent machines cannot fully understand human orders, because we cannot fully describe our intentions. As a result, the maximization logic of the AI could eventually lead to existential threats to human beings. Nonetheless, with the development of neuroscience, brain science, and cognitive psychology, could we develop an algorithm for AIs to directly read our minds and fully understand what we do and do not desire? And what might be the potential issues with letting a super-intelligent AI read our minds?

shanekim23 commented 3 years ago

The readings this week put a huge emphasis on mankind's inability to articulate specific objectives to superintelligent machines. In your 2017 video "Slaughterbots," you hint at the possibility of catastrophe when we use autonomous weapons, and by doing so, you alert the public to this issue by spreading fear. And for AI, more than any other issue, societal awareness is much lower than for other existential crises (nuclear warheads, climate change, etc.). Do you think that fear-mongering is the most effective way to combat an issue with such little awareness? And if not, how do you think we should frame this issue to convince the people who disagree?

vitosmolyak commented 3 years ago

Given that any supercomputer or super-intelligent AI machine still has to abide by the laws of nature and physics, and that every single AI machine has been thoughtfully created by a human, why is there any risk of threat or danger behind AI? It is not as though these super-intelligent machines can simply learn how to fly or how to wipe out the world while preserving their own survival; only the human being who designs a specific form of AI can input commands or objectives for it, and it would seem impossible for the machine to go against its commands or against physical impossibilities.

Samcorey1234 commented 3 years ago

How, as a society, should we decide to constrain, regulate, or align our desires with that of AI? That is, what is the role of government, nonprofits, corporations, and concerned citizens in preventing AI from causing mass extinction or mass suffering?

blakekushner commented 3 years ago

Do you believe that the dramatizations of AI in movies such as I, Robot or The Terminator, or even in novels or short stories like "The Last Question," are beneficial in the way they connect the general public to ideas about technology and artificial intelligence? Or do these fictional stories detract from the subject and instill a false idea of AI in the minds of the people? If the directors and writers of these AI stories were to change their storytelling, would that be better for the public, or worse because it results in less interest?

louisjlevin commented 3 years ago

How do we push past the bickering over exactly what the pros and cons of AI are (which in my mind amounts to a large guessing game) and reach the point of recognising that some of those cons, however unlikely to come to pass, are terrifying enough that they ought to warrant a little more pause and thought?

fdioum commented 3 years ago

Do you truly believe that it is possible to program AI in a way that it understands, in all respects, the desires and priorities of a human being, especially when people don’t always understand themselves and are oftentimes unsure what to decide due to changing desires and even impulsivity?

ZeyangPan commented 3 years ago

Watching the film I, Robot, I saw some of the bad consequences that could follow if we do not deal with AI wisely, and the three rules that AI must obey in order to keep humans safe. My question is: what is the likely trajectory of AI development in the future? The AI "brain" becomes smarter in stages, evolving from machine learning to deep learning, and then to autonomous learning. Is it therefore possible that AI becomes smarter than humans and dominates the world someday? How can we prevent this from happening?

laszler commented 3 years ago

You mention that machines will not learn to copy human actions, but rather human preferences (one example being the passport official taking bribes in order to send his children to school). Additionally, in order to avoid replicating these bad actions, you mention that an aggregation scheme could help reduce the potential harm desired by bad actors.

In the past, we have seen aggregations of human preferences that do not necessarily entail doing the most good for society, ranging from the benign (such as people voting to name a ship 'Boaty McBoatface'), to more malicious actions, such as persecution of religious groups.

Do you believe it would be possible for an aggregation scheme to take into account the fact that collective human preferences may not always be objectively 'good' or what's best for society? If an aggregation scheme is intended as a safeguard, what protects this scheme from manipulation?
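To make the worry concrete, here is a minimal sketch, my own toy illustration rather than any scheme from the readings, of how a naive aggregation of preferences can endorse an outcome that badly harms one person, and how one crude design choice changes the verdict. The policies, utilities, and the cap are all hypothetical.

```python
# Toy preference aggregation (hypothetical numbers throughout).

# Each person's utility for two candidate policies.
utilities = {
    "policy_A": [3, 3, 3, -5],   # modest benefit to most, large harm to one person
    "policy_B": [0, 0, 0, 0],    # status quo
}

def naive_sum(us):
    # Add everyone's utility and pick the biggest total.
    return sum(us)

def harm_capped_sum(us, floor=-1):
    # Crude safeguard: reject any option that costs one person more than
    # `floor` utility (a stand-in for a rights-style constraint).
    return float("-inf") if min(us) < floor else sum(us)

for rule in (naive_sum, harm_capped_sum):
    best = max(utilities, key=lambda p: rule(utilities[p]))
    print(f"{rule.__name__:16s} -> {best}")
# naive_sum picks policy_A (total 4 > 0) despite the -5 inflicted on one person;
# harm_capped_sum rejects it. The scheme's own design choices, not just its
# inputs, determine what counts as "good" -- and those choices are also where
# a manipulator would aim.
```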

chakrabortya commented 3 years ago

Regulations have, for the most part, been retrospective. Here, though, we have been discussing threats where we cannot afford to experience the harm before regulating to prevent it. Who should be responsible for regulating the development of super-intelligent machines? Who should the regulations be directed towards (i.e., who is responsible for these developments)?

omarh4 commented 3 years ago

This week's final reading by Max Tegmark offers an interesting story of the Omegas and how a superintelligent AI named Prometheus, under the control of well-intentioned people, can create a global utopia. If an AI outside of human control also arrived at this stage and humanity were no longer in control of its own destiny, should we still consider turning the AI off, or should we continue to reap the benefits of an altruistic superintelligent program like Prometheus?

aj-wu commented 3 years ago

It seems like it will take a great deal of effort to make sure any future AGI is appropriately designed to benefit and coexist with humans, and yet it would take only one bad actor to ruin it for everybody. What's to stop one entity from figuring out the technology and unleashing it on the rest of the world?

a-bosko commented 3 years ago

In the article “Human-Compatible Artificial Intelligence”, one of the questions raised is “What if machines learn from evil people?” An answer to this question is that “a machine observing humans killing each other will not learn that killing is good.”

In this case, how do we avoid machine learning in the wrong direction? How can we guarantee that a machine will not learn that killing is good?

Is there some sort of safety mechanism or a “moral standard” that a machine can be taught? Also, if a super-intelligent machine does end up learning harmful mechanisms, how do we stop it?

slrothschild commented 3 years ago

I think the discussion of AI has been predominantly about something implied to be entirely out of our control. The argument also concerns the possibility that AI would be taught the wrong way and learn evil. How do we stop improper machine learning in the first place? Additionally, how is AI genuinely uncontrollable in the inevitable future when it is still created by humans (who are fairly fallible and limited)?

nobro011235 commented 3 years ago

An increasingly popular theory is simulation theory: that we are all caught in a simulation made by AI for some purpose or another. We are not "real" people, but rather simulations of real people, caught in whatever conditions are convenient for the simulation. What do you think of this theory, and does such a theory impose ethical restrictions on how we run simulations with AI in the future?

c-krantz commented 3 years ago

What I gathered from this week’s readings is that one of the most central problems regarding artificial intelligence is that we tend to push off the threat it poses merely because it is "too far away." This mentality is one we have seen with other imminent threats, such as climate change. With that said, what do you believe is the most plausible way to change this mentality held by society, especially when there is doubt even within parts of the AI community?

scicerom commented 3 years ago

AI research is advancing in leaps and bounds, and public access to massive amounts of computing power has increased drastically over the past ten years and continues to grow. Do you believe there is significant danger in the possibility of an inadequately safety-focused amateur being the first (and possibly the only) originator of greater-than-human intelligence? If not, do you believe that the public nature of much of the field's research might pose a danger once it comes sufficiently close to producing such an AI? What might that mean for the openness of the research in, say, 50 years?

apolissky commented 3 years ago

How can we balance longer-term concerns about machine learning and AI, such as superintelligent AI, against more imminent ones, such as racist AI and ML algorithms (or algorithms that learn or amplify increasingly horrible human traits)? What role might the latter issue play in the former?

EmaanMohsin commented 3 years ago

One proposed solution to controlling the potential threats of AI, from researcher Ben Goertzel, is to create an AI nanny: "a powerful yet limited AGI (Artificial General Intelligence) system, with the explicit goal of keeping things on the planet under control." Is the creation of a gatekeeper AI a sensible solution? Of course, it would be difficult to develop such an advanced friendly AI system. Yet if the direction of AI research is the production of machinery able to perform tasks better than humans, should we leave the regulation of future threats to machines themselves?

BuffDawg commented 3 years ago

Do the threats that true artificial intelligence poses outweigh the benefits? Wouldn't it be more likely that such an AI solves our clean-energy problems or discovers lifesaving medications and procedures rather than destroying civilization as we know it?

jasonshepp6 commented 3 years ago

As I discussed in my memo for this week, I am curious about the gradual development of AI's capabilities and the increasing control that machine learning will have over our "intellectual" activities, like deploying capital into the economy.

My question for this discussion is: will humans ever become truly obsolete in decision making? While the focus here is deploying capital, the discussion extends far beyond it, to whether AI will end up rendering humans obsolete in our own world.

ishaanpatel22 commented 3 years ago

In your articles, you mention how the world should be worried about a super-intelligent, salient AI. Currently, there are many companies at the forefront of the AI industry, varying from social media platforms to robotics companies to autonomous-driving companies. Do you think one of these types of companies will be at the forefront of advanced AI in the future and become the creator of the super-intelligent AI you mention, or will companies founded solely to create advanced AI lead the race?

cjcampo commented 3 years ago

What metrics do we have to assess the potential danger of new AI-based technologies, or of use cases where AI could be applied? This question was inspired by the discussion of different types of intelligence in Stuart Russell's *Many Experts Say We Shouldn't Worry About Superintelligent AI. They're Wrong*.

From a Deep Learning course at TTIC last quarter, I learned that tech firms and other institutions have research teams that compete to solve open problems in AI/Deep learning. These are often ranked on a yearly basis for each specific problem they attempt to solve, or for their overall research. Do we have any rankings that attempt to dig out a (for lack of a better phrase) pound-for-pound rating of the overall intelligence of a new technology? Are there any objective, quantifiable benchmarks for AI researchers to consider in terms of making their networks safe?

ghost commented 3 years ago

Since you write that the world should be worried about superintelligent AI, must the response to this technology also be global? I am thinking about international law or policy regulating the use of AI versus domestic law or policy aimed at the same thing. Does this problem require a coordinated global response, since anyone could develop this technology?

chasedenholm commented 3 years ago

AI has already proven helpful with climate impact in India and Norway: in India it has helped farmers get higher crop yields, and in Norway it has helped integrate an autonomous electric grid implementing renewable energy. It has also helped researchers identify weather events. With that said, how might we better integrate AI into our cities to improve living conditions? How might AI play a predictive role in ecosystems, so that we can track and predict invasive species, and how much do you think we will rely on it in the future? And finally, how can we better shore up AI security so that it doesn't become a cyber threat?

sosuna22 commented 3 years ago

There are many major problems disrupting humanity right now. Oftentimes humanity waits until the last minute to deal with them, or until it is too late and we are scrambling for a solution. The Russell article talks about needing to find a way to advance AI while also finding checks to ensure security, so that any potentially catastrophic problems could be fixed. What are some potential safety checks that could be implemented for AI?

ChivLiu commented 3 years ago

Many companies, such as Facebook, Alibaba, and Google, are chasing each other on the track of AI algorithm development, and Alibaba was recently criticized by the government for its pricing algorithm targeting frequent customers. If someday social media and online shopping sites were fully controlled by AI, would they be able to develop their own independent thinking to control "free speech" and the markets? Or could they have their own political or economic perspectives and use them against humans?

Aiden-Reynolds commented 3 years ago

While a superintelligent AI may possibly gain the capability to completely subvert all of humanity, what level of responsibility would an AI program have to be given to actually do serious harm to humanity? If that level of responsibility is simply any, then could that issue be countered by only using less intelligent AI to actually execute any task, while superintelligent AI is only assigned an advisory role?

nicholas-rose commented 3 years ago

In his infamous 2018 interview with Joe Rogan, Elon Musk made this fairly pessimistic comment: “I tried to convince people to slow down...slow down AI, regulate AI...this was futile. I tried for years. Nobody listened. Nobody listened.” [1]

Are you equally pessimistic about the possibility of slowing down or otherwise regulating AI? What probability, if you had to assign one, would you give to humanity surviving the singularity?

[1] https://youtu.be/Ra3fv8gl6NE?t=643

jcrary711 commented 3 years ago

What is your largest fear with regard to AI, whether it be AI outcompeting and having a larger skill set than human workers, obtaining sentience, or something else, and how plausible is this fear compared to other, lesser fears you may have? Additionally, how soon do you believe this fear could become a reality?

atzavala commented 3 years ago

After reading The Tale of the Omega Team, I began to wonder if we were already in the midst of a team like that, taking the first steps toward world domination. Could it be possible that an AI would be so efficient and fast that it goes undetected, as Prometheus did, for as long as it did? What current efforts are being made to keep technology like that from getting into the hands of a far less "virtuous" team than the Omegas? Should research on risks and preventative actions, then, be directed at the technology itself or at organizations looking to hold that kind of power on a global scale?

meghanlong commented 3 years ago

The first edition of Human Compatible reportedly has a section titled "What if we succeed?", where you discuss the fact that humanity is pushing towards AI development and improvement goals that may end with major unintended consequences, including the destruction of humanity itself. So... what if we succeed? In your mind, based on what you know at this moment, what is the most likely future outcome if we do succeed in creating something that is more intelligent than we are? Can you describe what this world would be like?

abertodano commented 3 years ago

Reading about the Omega Team and some of the readings' comments on web algorithms made me think about Herb Lin's information dystopia. Are we not already hitting a mini-singularity in the ability of AI to feed humans addictive, polarizing, misleading information?

brettkatz commented 3 years ago

Given the difficulties of centrally regulating AI, given that non-state actors can potentially conduct AI research independently, outside the scope of regulatory oversight, and given our rate of technological advancement and individual access to information, is there any long-term solution to the risk of AI other than a human-AI fusion (similar to the described goal of Neuralink)?

bbroner commented 3 years ago

While it has always seemed to be science fiction, it seems to me that the technology for robot soldiers and police officers is not too far away. Do you think we could ever get to a point where we have AI algorithms, and not individuals, making life-or-death decisions in police or military settings (for example, a fully AI soldier having free control over whom it shoots in combat)?

nikereid commented 3 years ago

What would the creation and implementation of AI throughout the world look like, and how would it impact the world's economy? Millions of jobs would be lost to automation; what would happen at that point?