utterances-bot opened 4 years ago
In nature, many systems and processes that feature slow scaling also have phase transitions: points at which a dramatic change in the system's behavior occurs. Arguably something in this spirit happened in evolutionary history on Earth: over millions of years, the effects of intelligence scaled slowly through many generations of animals, until a dramatic change occurred in early humans. Something similar could occur in our artificial systems, and I think that is what Bostrom and company argue.
I agree with your point, though: proponents of super-capable AIs take too much of a "free lunch" perspective on what these AIs should be able to do. It's likely that superhuman AIs, if they can be built at all, will still be constrained in what they can accomplish by the laws of physics and similar limits. If they don't have the information needed to solve a problem, and they can't obtain that information, then they won't be able to solve it.
The Case Against the Singularity
Posts and writings by Julian Schrittwieser
http://www.furidamu.org/blog/2020/05/03/the-case-against-the-singularity/