Week 2 Questions: Revolt of the Machines #3

jamesallenevans opened 10 months ago

jamesallenevans commented 10 months ago

Questions for Geoffrey Hinton, about and inspired by the following readings (especially relevant to the first paper and policy document he co-authored):

- Yoshua Bengio, Geoffrey Hinton, et al., "Managing AI Risks in an Era of Rapid Progress"; both the paper and the policy supplement.
- Stuart Russell, "Human-Compatible Artificial Intelligence." In Stephen Muggleton and Nick Chater (eds.), Human-Like Machine Intelligence, Oxford University Press, 2021. (Summary of HC main argument)
- Stuart Russell, "Many Experts Say We Shouldn't Worry About Superintelligent AI. They're Wrong," IEEE Spectrum, October 2019. (Summary of HC Chapter 6)
- Toby Ord, "Future Risks: Unaligned Artificial Intelligence." The Precipice.
- Max Tegmark, "Prelude." Life 3.0.

timok15 commented 10 months ago

Large Language Models (LLMs) and the various image/sound/music-generating AIs are controversially trained on copyrighted data. Now, these AIs are perhaps not the kind which will usurp humanity; however, these kinds of AI are receiving the most investment right now.

How do you think the controversy and lawsuits around AI will be handled, given that the US government has such an interest in the rapid development of advanced AI systems? Conversely, would the kinds of AI that the US government is interested in be unaffected by crackdowns on copyrighted training data?

If it is true that governmentally oriented AIs would be unaffected, then: what kinds of systems are great-power governments investing in? How do they differ from the publicly available AIs that get the news coverage (to your knowledge)?

lucyhorowitz commented 10 months ago

Assuming the policy recommendations are the best course of action, why should we trust governments to correctly implement them? While it definitely makes sense that "to protect low-risk use and academic research, [national institutions] should avoid undue bureaucratic hurdles for small and predictable AI models," I think it unlikely that a government agency would actually act this way. Moreover, who decides what behavior is "hazardous"? What is the standard of reasonableness for "reasonably foreseen and prevented harms"?

M-Hallikainen commented 10 months ago

This last year we saw two historic strikes from SAG-AFTRA and the Writers Guild of America, with one of the major issues under negotiation being the use of AI by studios to automate the jobs of writers and actors in the film industry. This was not a future hypothetical of superintelligent general AI with misaligned objectives, but contemporary AI systems functioning as intended to reduce studio overheads by reducing the number of employees needed to create films. While this is an isolated example, similar issues in almost every industry that could be described as a "desk job" are knocking at the door, far sooner than the existential threats most often mentioned when we discuss the risks of AI. Why is the imminent danger AI poses to labor and the livelihoods of workers so infrequently discussed compared to the potential hazards of more powerful future AI, and what can we do in the present to address the issues created by AI automation?

lubaishao commented 10 months ago

The question I'm most curious about is exactly what impact AI will have on our society. What are the specific forms of human coexistence with machines and artificial intelligence in the future? I think AI will have a subversive impact on society. Just as the industrial revolution changed the relations of production in every society around the world, AI will change all relations between people and society. AI will not only improve human productivity but also play a subversive role in society. Polanyi once spoke of how the great changes of the 20th century were due to the fact that the economic life of mankind became too detached from social life, and people began to live for money. AI will play a similar role: people will rely more and more on machines, algorithms, and data.

Just as the rules of society, including laws and customs, favor capitalism when the market economy prevails, AI will have a huge impact on our existing social rules. Data will become a factor of production just like capital and labor. Humans will need to coexist with non-human algorithms and data; most humans will be required to master algorithmic tools, yet almost everyone will also become algorithms and data within the system. Big data companies will become as prominent as today's capital-owning companies, and data may be traded like goods and stocks. The most unsettling feeling for (ordinary) humans would stem from this seemingly insurmountable destiny: beginning to coexist with another intelligent species. Indeed, I was shocked by this sentence in the reading: "The Economist magazine's review of Bostrom's book ended with: 'The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.'"

miansimmons commented 10 months ago

Many prominent researchers and firms working on rapid AI development have viewed the discussion of risk as threatening to their business pursuits. As a result, those raising potential AI risks are often condemned as being "against" its growth. Given that this is something you have expressed concerns about, how optimistic are you regarding intra-organizational risk mitigation?

Further, in your paper on managing AI risks, you highlighted the need for international governance to enforce standards and prevent misuse. Do you believe that widespread implementation of effective cultures of safety will only occur when policy is imposed on firms? What would have to change in big-tech organizational culture today for firms to champion safety without government intervention?

cbgravitt commented 10 months ago

Even if world governments are convinced of the AI threat and act accordingly, it will also be essential to educate the general public to prevent massive backlash. But, with the widespread and popular use of LLMs by the general public, ranging from ChatGPT to the myriad AI "personalities" available on social media platforms, how can the public be convinced that AI poses a real threat and must be regulated? Is the public already sufficiently aware? Can we teach the public to use AI responsibly? If so, how? Do you think it possible that AI companies could counter these efforts with AI specifically built to promote AI R&D (including through the use of mis- or disinformation), and how could that be prevented or combatted?

ldbauer1011 commented 10 months ago

Stuart Russell explains in his piece "Human-Compatible Artificial Intelligence" that one of the great obstacles to ensuring AI remains safe is getting AI to understand the natural fallibility and irrationality of human decision-making. How is this accomplished, when an AI's goal is to solve a problem as completely and efficiently as possible? This question tends to be the twist in many sci-fi movies, as an AI determines that humanity's flaws are what is causing the problem it is trying to solve, and so resolves to remove humanity. How can programmers ensure Russell's step is taken given how opaque deep learning models can be, and how does one even test for this during AI development?
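
For concreteness, the core of Russell's argument (summarized in the HC Chapter 6 reading) is that an agent uncertain about human preferences has an incentive to defer to us. Below is a toy numerical sketch of that argument; it is my own simplification, and the numbers and setup are invented for illustration:

```python
# A toy "off-switch" calculation (illustrative only; not from the readings).
# The agent holds beliefs about u, the human's utility for its action:
#   act now -> it gets E[u]
#   defer   -> the human vetoes when u < 0, so it gets E[max(u, 0)]
# Deferring is never worse, and uncertainty is what makes it strictly better.

import random

random.seed(0)
beliefs = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # samples of u

act_now = sum(beliefs) / len(beliefs)                     # E[u], about 0
defer = sum(max(u, 0.0) for u in beliefs) / len(beliefs)  # E[max(u, 0)], about 0.4

print(f"E[utility | act now] = {act_now:+.3f}")
print(f"E[utility | defer]   = {defer:+.3f}")
```

With a standard-normal belief over u, deferring yields roughly +0.4 versus 0 for acting unilaterally; preference uncertainty is exactly what makes the agent willing to leave the off-switch alone.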

summerliu1027 commented 10 months ago

The paper proposes several potential mitigations for the AI threat, such as industry standards, government regulation, and data documentation and reporting. Of all the possible methods, which one(s) should we aim to implement first? Which one(s) are most likely to be realized in the next 5-10 years?

WPDolan commented 10 months ago

The policy supplement advocates for the creation of a closed AI ecosystem where only a select few organizations have access to the internals of frontier models. By restricting access to a small number of licensed organizations with a stated commitment to AI safety, regulators should hopefully be more capable of ensuring that breakthrough models are ethically aligned and that they cannot be exploited for malicious purposes.

However, without access to model weights or the training code/data, how can the machine learning community and the general public verify that these frontier models are actually properly aligned? Further, in a closed ecosystem, how can less established researchers or companies develop their own models that compete with work done by larger organizations that are licensed?

DNT21711 commented 10 months ago

Your policy document, "Managing AI Risks in an Era of Rapid Progress," addresses AI risks. At this stage of AI development, what are the most immediate and significant risks we face, and how can the AI community deal with them proactively? From your perspective, how valid are Stuart Russell's concerns about superintelligent AI in "Many Experts Say We Shouldn't Worry About Superintelligent AI. They're Wrong"? How do we strike a balance between being cautious about the future potential of artificial intelligence and being paranoid about it?

acarch commented 10 months ago

In your paper “Managing AI Risks in an Era of Rapid Progress” (2023), you call for specific measures governments can take to encourage safer and more ethical development of AI. For instance, you and your coauthors recommend registration of frontier systems projects, protections for whistleblowers, and oversight into access controls. You also address companies themselves, calling for careful planning of if-then safety measures to follow if dangerous capabilities start to emerge. From your perspective, which governments or companies are already doing the most to implement these regulations? Are there any organizations that seem like they are actually developing this technology responsibly—or at least more responsibly than the others? If so, can you please say more about what sets them apart?

agupta818 commented 10 months ago

In your paper, you mention that autonomous AI systems are becoming increasingly faster and more cost-effective than human workers. How do you think this is going to impact young adults as they enter college and decide what majors and careers to pursue? What careers do you see being essentially eliminated by the integration of AI into companies and businesses within the next 5-10 years? Can we create policy that restricts companies from replacing certain human jobs if the impact on the job market is too great, or if AI integration causes a major spike in unemployment?

mibr4601 commented 10 months ago

In your paper, one of the recommendations you gave was for more government regulation. Do you think it is likely that there will be government regulation in any form in the near future? And how far do you think governments would be willing to intervene, given the fear of falling behind other nations?

oliviaegross commented 10 months ago

I am really interested in the role that ignorance plays in individuals' beliefs about whether AI poses a serious threat. There is a lot of media that attempts to portray artificial intelligence and to imagine what AI's relationship with humans could evolve into. What do you think about the increasing media attention (both from Hollywood and in The Atlantic) that AI is receiving, and do you think it is being done accurately and well? Do you find any of this media helpful or productive in increasing awareness and inviting conversations around AI, or do you see it as posing problems?

Why do you think the public has a poor understanding of artificial intelligence? How would you propose we begin to educate people about the technology itself and its potential repercussions and threats? What do you think would be the best mode for doing so?

madsnewton commented 10 months ago

The policy recommendation proposes that one-third of a company's AI research and development resources should be put into researching safe and ethical AI. What standards would dictate what safe and ethical AI actually is? With AI already posing a threat to certain jobs and skill sets, this seems to conflict with any ethical AI model. Can AI ever be considered ethical if it is outcompeting the workforce, causing downsizing and layoffs, and potentially even eliminating specific jobs?

tosinOO commented 10 months ago

I think in the coming years, as AI development exponentially increases, it will be fascinating to see the amount of incorrect information and misguided theories that begin to circulate regarding AI. I could clearly see a situation in which AI becomes a politicized issue as a result of growing public concern, in which case AI development begins to be regulated and deregulated election cycle after election cycle. What are Hinton's views on the public perception of AI risks and benefits? How can education and outreach be improved to foster a more informed and nuanced understanding of AI among the general public?

imilbauer commented 10 months ago

Are transformer architectures, and perhaps even more broadly, "stateless" neural architectures, going to be sufficient to match or eventually surpass human cognition? If human cognition is matched or surpassed, is it possible to envision the sorts of problems and opportunities that AI will generate? The policy paper circulated suggests "red lines" companies should create for AI programs; realistically, however, what are the chances these will be created and implemented? Is it possible to put the "genie" back in the bottle, and how much harder might that be once a general artificial intelligence is created?
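
As a gloss on the "stateless" terminology above: a transformer keeps no hidden state between calls, so anything it is to "remember" must be re-supplied in its context window. A minimal sketch of the distinction, with invented class names and a trivial stand-in for attention:

```python
# Stateful vs. stateless sequence models (illustrative; names are invented).

class RecurrentModel:
    """Stateful: a hidden value persists and accumulates across steps."""
    def __init__(self) -> None:
        self.hidden = 0.0  # internal memory carried between calls

    def step(self, token: float) -> float:
        self.hidden = 0.9 * self.hidden + 0.1 * token  # past summarized here
        return self.hidden

class TransformerModel:
    """Stateless: each call re-reads the whole context; nothing persists."""
    def forward(self, context: list[float]) -> float:
        return sum(context) / len(context)  # trivial stand-in for attention

rnn = RecurrentModel()
for t in [1.0, 2.0, 3.0]:
    out = rnn.step(t)                # memory lives inside the object

tf = TransformerModel()
out = tf.forward([1.0, 2.0, 3.0])    # memory must live in the prompt itself
```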

gabrielmoos commented 10 months ago

What kind of government oversight is necessary for AI? Do you think a panel of AI research experts, akin to the FDA’s drug approval process, is necessary to develop AI more safely and ethically?

Moreover, regarding the concerns around cyberwarfare and hacking: isn't there enough endpoint security, predictive analytics, and rapidly developing AI-powered cybersecurity to prevent threats from malicious actors? Not to mention that sensitive financial information and government data are still stored in server rooms, not on the cloud. Do you see threats from malicious actors falling more on consumers and less on governments and enterprises?

GreatPraxis commented 10 months ago

In "Managing AI Risks in an Era of Rapid Progress" you discuss the importance of Interpretability and transparency in the decision-making process of AI. Considering that one of the primary advantages of machine learning models is their capacity to identify patterns imperceptible to humans, is it possible to achieve interpretability and transparency without compromising this intrinsic capability? Moreover, even if attainable, should we proceed with interpretability and transparency measures if they significantly impede the algorithm's performance and handicap the overall progress of AI development ?

kallotey commented 10 months ago

Even with the established risks of the advancement of AI, what guarantee is there that governments will heed concerns about amplified social injustice and instability and act accordingly? As mentioned, many projects, if they go well, will further a government's competitive edge over foreign states. What could be done to prepare for these risks now, especially if one of them could widen global inequalities?

Hai1218 commented 10 months ago

Given that humans, who are often biased and influenced by political and social factors, will be responsible for regulating AI, how can we ensure that this human bias does not negatively impact AI regulation? Which is more ethically concerning: a biased AI system or a biased human regulator? And considering the inherent biases in both, how can we strike a balance that ensures ethical and effective regulation: should we trust a biased human with a biased AI system, or rely solely on human judgment despite its flaws?

aaron-wineberg02 commented 10 months ago

How should we categorize AI in our technology vocabulary? I argue that "artificial intelligence" is a misnomer. Rather, it should be called predictive intelligence. It does not come up with original ideas, but rather estimates what outputs would be best received given pre-constructed inputs. How can we evaluate a technology that is fundamentally built out of inputs we provide? Is it original in any sense?
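
To make the "predictive intelligence" framing concrete, here is a toy bigram predictor (the corpus and names are invented for illustration). It can only ever emit continuations already present in its inputs, which is the sense in which such a system estimates rather than originates:

```python
# A toy bigram "predictive intelligence": it emits whichever word most
# often followed the current word in its training data, and nothing else.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count observed continuations

def predict(word: str) -> str:
    """Return the most frequent observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat': the statistically favored continuation
```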

AudreyPScott commented 10 months ago

The OpenAI board and succession crisis showed us very clearly that institutions, even when overseen by a body ostensibly not motivated by profit, will chase development for development's sake and reject calls for transparency, ethical oversight, or slowed growth. In light of the capital, organizational, and social response to Sam Altman's firing and reinstatement (including the interim CEO being lambasted for suggesting slowed growth), how must we reframe our approach to AI regulation and responsible development? To what extent is our economic system to blame, or am I overreaching?

AnikSingh1 commented 10 months ago

I'm quite curious about governments stepping in to manage AI: what makes the government a better candidate to manage a system of intelligence that is best understood by the people who have created, fixed, and troubleshot it? Letting a body of power dictate something it barely understands sounds like an easy avenue to confusion, risk, and trouble. In a situation where people revolt against eventual political policies regarding AI, is there a possibility we could see a "black market" for AI, gatekeeping its use and reserving it for specific parties?

Daniela-miaut commented 10 months ago

How can we think of AIs in terms of political philosophy? What kind of entities are they? I feel that now we are thinking of the social role of AIs in terms of traditional entities -- individuals, governors, institutions, or machines. But maybe they are so different from all of them that a new category of political (or societal) entity is needed for us to include AIs into our framework of political thinking and planning?

aidanj5 commented 10 months ago

How much investment is currently going into AI? If companies such as OpenAI and Google were to slow down their development and add safeguards, would it have a measurable effect? How long would it take for new startups to be created that would not listen to the calls for safety that bigger companies might?

briannaliu commented 10 months ago

Governments are generally slow to adopt new technologies, but the US government is already leveraging AI to better serve the public across a wide array of use cases, including in healthcare, transportation, the environment, and benefits delivery. Is this fast adoption of AI by the government alarming to you? Do you think that the government’s interest in AI will interfere with its ability to implement policies that “check” AI?

jamaib commented 10 months ago

(Joined class late)

I believe AI poses a unique governmental issue in the sense that creating an air of secrecy around it (as the government does with national defense) would be detrimental instead of beneficial (though I'm not sure if they would agree). This, I think, is largely attributable to the fact that AI is both rapidly developing and highly accessible. Although I agree that the government should (and should have the authority to) enforce restrictions and laws surrounding the usage and development of AI, should the government be the sole regulator/manager of AI?

maevemcguire commented 8 months ago

What specific measures do you believe are necessary for effective government oversight of AI development to ensure safety and ethical practices? Can a panel of AI research experts, similar to the FDA’s drug approval process, strike a balance between innovation and safety in the AI field?