Can We Avoid a Hard Takeoff: Speculations on Issues in AI and IA

email: vinge@cs.sdsu.edu
This page last updated Mon Sep 12 12:18:04 US/Pacific 2005

----------


Talk for Accelerating Change 2005, September 2005

At the moment, I have these HTML "slides" available at: www.rohan.sdsu.edu/faculty/vinge/misc/ac2005


Some common growth patterns

Specializing to the case of recent technological growth, we can see overall hardware improvement as the envelope lying atop the curves of the individual technologies.
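
As a rough illustration of that envelope idea (my own toy sketch, with made-up numbers -- not part of the original slides): each individual technology follows an S-curve that eventually saturates, but if successive technologies arrive with ever-higher ceilings, the best capability available at any moment still traces out a roughly exponential trend.

    import math

    def logistic(t, ceiling, midpoint, steepness=0.5):
        # Capability of a single technology at time t: an S-curve that
        # rises and then saturates at "ceiling".
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    # Five hypothetical technology generations; each arrives ten "years"
    # after the last and tops out ten times higher than its predecessor.
    generations = [(10.0 ** k, 10.0 * k) for k in range(1, 6)]

    for t in range(0, 60, 5):
        best = max(logistic(t, c, m) for c, m in generations)
        print(f"t = {t:2d}   best available capability ~ {best:12.1f}")

The only point of the sketch is that an exponential-looking aggregate trend is compatible with every component technology being self-limiting.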


If this goes on ...

Assume the exponential improvement in computation continues for another few decades. Then what is the killer app?

The exact form is not clear (or rather, there are a number of plausible forms -- many of which I expect will be discussed here at AC2005!), but the essential change/app/technology is:

the development of creativity and intellect that surpass those of present-day humans.
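
For scale, here is a back-of-the-envelope calculation (my own illustration, with assumed numbers -- not a figure from the talk) of what "exponential improvement for another few decades" amounts to:

    # Assumed, purely illustrative numbers:
    doubling_time_years = 1.5      # Moore's-Law-like doubling period
    horizon_years = 30             # "another few decades"

    growth_factor = 2 ** (horizon_years / doubling_time_years)
    print(f"~{growth_factor:,.0f}x today's computation after {horizon_years} years")
    # roughly a millionfold (2**20)

That kind of headroom is what lies behind the expectation above.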


Why call this transition the "Technological Singularity"?


Comparison with other radical changes


What if the Singularity doesn't happen?

My own conclusion: while the Technological Singularity is not at all a sure thing, it is the most likely non-catastrophic scenario on the horizon.

Of course, the Singularity itself could be catastrophic. What can we do to make the bad versions less likely?


Singularity futures

Possible paths to the Singularity

Accelerating Change 2005!


Soft takeoffs versus hard takeoffs

How long will the transition through the Singularity take?


Hard takeoff as a Very Bad Thing

While there is plenty of reason to be nervous about changes as big as the Singularity (consider the closest analogies!), I think there are many reasons to be hopeful about such a thing -- if it happens as a soft takeoff (see the Ray Kurzweil and Hans Moravec references).

On the other hand, it's very difficult to muster optimism about a hard takeoff:


Trying for a soft takeoff

There are very good arguments that banning particular forms of research is an exercise in futility. At the same time, there is an intuitive attractiveness to IA (Intelligence Amplification), both by itself and in conjunction with the other possible paths to the Singularity:


Gotchas

My friend Mike Gannis has made a good case for fearing IA (and I paraphrase): "We humans are naturally evolved creatures. We carry around in the back of our brains millions of years of bloody baggage. That cargo may be unnecessary and suicidal in our present circumstances, but it is there. Machines, designed de novo, could be much less destructively inclined. In fact, if we go ahead with turning ourselves into gods, there is only one person I would trust to be first." [At this point, Mike pats himself on the chest.]

I think this is a valid gotcha (except for the last sentence :-). This danger should figure in any analysis of IA. Two possibilities for ameliorating this danger:


References



----------