StampyAI / stampy-ui

AI Safety Q&A web frontend
https://aisafety.info
MIT License

redo (shorten) article titles (copy) #503

Closed: melissasamworth closed this issue 3 months ago

melissasamworth commented 3 months ago

Acceptance criteria

If an article title is particularly long, shorten it. I would say if it takes three lines in the left nav, it's probably too long.

Image

LeMurphant commented 3 months ago

This looks like a pretty long task; are we OK having it done after Friday?

LeMurphant commented 3 months ago

Did a first pass. Changes I have made (these can be reverted or updated):

- Is AI safety about systems becoming malevolent or conscious and turning on us? -> Is AI safety about systems becoming malevolent?
- What actions can I take in under five minutes to contribute to the cause of AI safety? -> What actions can I take in under five minutes?
- Might an aligned superintelligence force people to have better lives and change more quickly than they want? -> Might an aligned superintelligence force people to change?
- Is there a danger in anthropomorphizing AIs and trying to understand them in human terms? -> Is there a danger in anthropomorphizing AIs?
- Is it possible to code into an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that? -> Is it possible to code into an AI to avoid all bad things?
- Is it possible to block an AI from doing certain things on the Internet? -> Is it possible to limit an AI's actions on the Internet?
- Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff? -> Is expecting large returns from AI self-improvement realistic?
- I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute? -> What are some simple things I can do to contribute?
- How quickly could an AI go from the first indications of problems to an unrecoverable disaster? -> How quickly could an AI go from harmless to existentially dangerous?
- How much resources did the processes of biological evolution use to evolve intelligent creatures? -> How much computing power did evolution use to create the human brain?
- How might things go wrong with AI even without an agentic superintelligence? -> How might things go wrong even without an agentic AI?
- How can progress in GPT-style non-agentic AI lead to capable AI agents? -> How can progress in LLMs lead to capable AI agents?
- Could we program an AI to automatically shut down if it starts doing things we don’t want it to? -> Could we program an AI to automatically shut down?
- Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence? -> Can we test an AI to make sure that it’s not going to take over?
- At a high level, what is the challenge of alignment that we must meet to secure a good future? -> At a high level, what must we do to secure a good future?
- Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world? -> How can AI cause harm if it has no ability to directly manipulate the physical world?
- Are there any AI alignment projects which governments could usefully put a very large amount of resources into? -> Could governmental investments help with AI alignment?
- Would it improve the safety of quantilizers to cut off the top few percent of the distribution? -> Could we cut off the top few percent of a quantilizer's distribution?
- Would donating small amounts to AI safety organizations make any significant difference? -> Would donating small amounts to AI safety organizations help?
- Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer? -> Why should we prepare for human-level AI now rather than when it’s closer?
- Why might people try to build AGI rather than stronger and stronger narrow AIs? -> Why might people build AGI rather than improve narrow AIs?
- What subjects should I study at university to prepare myself for alignment research? -> What subjects should I study to prepare for alignment research?
- What are the leading theories in moral philosophy and which of them might be technically the easiest to encode into an AI? -> Which leading theories in moral philosophy would be the easiest to encode into an AI?
- Wouldn't a superintelligence be slowed down by the need to do experiments in the physical world? -> Wouldn't a superintelligence be limited by the need to do physical experiments?
- How can I work on helping AI alignment researchers be more effective, e.g. as a coach? -> How can I help AI alignment researchers be more effective?
- I’m interested in providing significant financial support to AI alignment. How should I go about this? -> How can I provide significant financial support to AI alignment?
- I would like to focus on AI alignment, but it might be best to prioritize improving my life situation first. What should I do? -> Should I improve my life situation before I work on AI alignment?
- I want to take big steps to contribute to AI alignment (e.g. making it my career). What should I do? -> How can I orient my career towards AI alignment?
- But is smarter-than-human AI even a realistic prospect? -> Is smarter-than-human AI a realistic prospect?

A lot of these articles are old and need love; in some cases I suggested modifications to the text to bridge the gap with the old title.

LeMurphant commented 3 months ago

A few notes:

  1. Older articles had longer titles; we are getting better at keeping them short!
  2. Articles such as "what is the difference between X, Y and..." are inherently hard to shorten, so I don't plan on doing anything with them.
  3. Some articles have a planned merge; I did not attempt to shorten these since the title is due to change anyway.

LeMurphant commented 3 months ago

Second pass based on this thread https://discord.com/channels/677546901339504640/1216242391057563768/1216615650601078844

- Is AI safety about systems becoming malevolent? -> Is AI safety about systems becoming malevolent or conscious?
- What actions can I take in under five minutes? -> How can I help in under five minutes?
- Is it possible to limit an AI's actions on the Internet? -> Is it possible to limit an AI's interactions with the Internet?
- What are some simple things I can do to contribute? -> What are some simple things I can do to contribute to AI safety?
- How can progress in LLMs lead to capable AI agents? -> How can progress in non-agentic LLMs lead to capable AI agents?
- Can we test an AI to make sure that it’s not going to take over? -> Can we test an AI to make sure it won't misbehave if it becomes superintelligent?
- At a high level, what must we do to secure a good future? -> At a high level, what is the challenge of AI alignment?
- How can AI cause harm if it has no ability to directly manipulate the physical world? -> How can AI cause harm if it can't manipulate the physical world?
- Could we cut off the top few percent of a quantilizer's distribution? -> Would it help to cut off the top few percent of a quantilizer's distribution?
- Why might people build AGI rather than improve narrow AIs? -> Why might people build AGI rather than better narrow AIs?
- Which leading theories in moral philosophy would be the easiest to encode into an AI? -> Which moral theories would be easiest to encode into an AI?
- Wouldn't a superintelligence be limited by the need to do physical experiments? -> Wouldn't a superintelligence be slowed down by the need to do physical experiments?
- Should I improve my life situation before I work on AI alignment? -> How can I improve my life situation before working on AI alignment?
- How can I orient my career towards AI alignment? -> How can I build a career in AI alignment?

Comments on things I did not change:

Might an aligned superintelligence force people to change?: I understand the concern, but I'm worried that "intrude on people's lives" sounds more like snooping than what we actually mention

Is it possible to code into an AI to avoid all bad things?: I think somebody asking this question has not thought very long about the complexity of value, and I'm not sure how to correctly reflect this in the title

Is expecting large returns from AI self-improvement realistic?: Overall I think this article addresses too specific an objection, and since the objection takes three lines to specify, I'm not sure what to do

How quickly could an AI go from harmless to existentially dangerous?: This article looks weak to me overall, and it's not clear what direction it is going in. I'm OK with changing it to "How soon could irreversible disaster follow after the first warning signs?", but IMO we don't really answer either question

Could we program an AI to automatically shut down?: Yes, this article needs love, and I have noted that we wanted to do a merge. I don't think it's worth putting too much effort into the titles of such articles.

Could governmental investments help with AI alignment?: I don't associate "government investing" with financial returns from e.g. building roadways. Regarding the second point, I'm not sure I see what is meant by "the distribution of projects that governments might actually put money toward"

What subjects should I study to prepare for alignment research?: I'm conflicted about the presence of "university" since it's only mentioned twice in the article, and there's plenty of learning that can happen outside of universities. If we add it as an alternate phrasing just to catch somebody typing in "university", would that work?

How can I help AI alignment researchers be more effective?: To me, "work on" does not necessarily suggest employment or a career rather than something one strives for. I agree the mention of coaching was helpful, but it just does not fit in the title length we want, and it does not seem important enough to me to make an exception.

LeMurphant commented 3 months ago

Take 3:

- Is it possible to code into an AI to avoid all bad things? -> Can we list the ways a task could go disastrously wrong and tell an AI to avoid them?
- Is expecting large returns from AI self-improvement realistic? -> Are AI self-improvement projections extrapolating an exponential trend too far?
- What subjects should I study to prepare for alignment research? -> What subjects should I study in university to prepare for alignment research?

Comments:

Could governmental investments help with AI alignment?: How about "Are there AI alignment projects that would benefit from governmental investments?"