Tough Guide to the Singularity
Jamais Cascio, 28 May 05

The Singularity -- the point in the future where machines get smarter than people, after which all bets are off -- is, for some people, a deeply desired goal, and for others, little more than the "rapture of the nerds" (a deliciously pointed phrase coined by Ken Macleod). For a growing handful of science fiction writers, it's their bread and butter. Charlie Stross, WorldChanging ally and science fiction storyteller, tends to fall more in the Macleod-Doctorow school of Singularity Skepticism, but that hasn't stopped him from writing one of the most engaging tales of humankind falling into the Great Technological Unknown I've ever read: the Accelerando series of short stories.

Accelerando is due out soon as a novel (and, as Stross has just revealed, there will be a Creative Commons electronic version). In the run-up to the release, Stross has crafted "Singularity! A Tough Guide to the Rapture of the Nerds," a snark-filled, flippant and altogether terrific mini-wikipedia of Singularitanism. While I'm not quite as dismissive of the Singularity concept as some, I still found myself laughing sufficiently loudly while reading the site that I scared the cat.

A sample definition, one of particular interest to many of our readers:

BruceSterling
Bruce Sterling is one of the former cyberpunk ScienceFictionWriters. He is believed to have become one of the first PostHumans some time around 1996. He now writes historical novels and teaches design. If you believe you are living in a universe created by BruceSterling, you are advised to pursue one of the following strategies:
  • cultivate an overwhelming, dry sense of ironic detachment
  • flee screaming

As some of you will recall, I've done a bit of role-playing game design in the recent past, so I was particularly tickled to find that a number of entries (for BushRobots, GreyGoo, and UtilityFog) are written up as old-style Monster Manual pages. (Which led me in turn to a particularly pleasing discovery: Charlie Stross invented the Githyanki. About 3 of you will know what I'm referring to, but those 3 should be rather amused to learn this.)

The only real disappointment about this Tough Guide is that the funky JavaScript engine used to display the text makes linking to individual entries so difficult that I never found a way to do it. Still, whether the Singularity will leave you Transcendent or PostHumous, Charlie's Tough Guide is a fun way to spend a weekend afternoon.



Comments

How did you find out he invented the Githyanki? That's freaky.


Posted by: Al on 28 May 05

I can't be as concise as Mr. Robinson's one-word advice to Dustin Hoffman in The Graduate, but I have two words for singularitans: "combinatorial explosion".

Combinatorial explosion means that the complexity of any model grows with the number of combinations of its components, generally as the factorial function. Factorial is commonly denoted by ! for good reason. (Technically, factorial grows even faster than an exponential function.)

The closer a model tries to come to emulating the real world, the more difficult it becomes to make progress improving the model.
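A quick back-of-the-envelope illustration of the point (my own sketch, not part of Alton's comment): even ordinary exponential growth is dwarfed by factorial growth as the number of interacting components rises.

```python
import math

# Compare exponential growth (2**n) with factorial growth (n!)
# as the number of interacting components n in a model increases.
for n in (5, 10, 15, 20):
    print(f"n={n:2d}  2^n={2**n:>10,}  n!={math.factorial(n):>22,}")
```

By n = 20 the factorial is already seven orders of magnitude larger than 2^20, which is why each extra component makes a realistic model so much harder to improve.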

Serious students of quantum computation will be aware that even these mysterious (hence wishfully magical) devices are not known to reduce exponentially complex problems even to polynomial complexity, much less the constant-time complexity that singularitans want to believe is just around the corner.
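To put a number on that (an illustration of my own, not from the comment): the best known general quantum speedup for unstructured search, Grover's algorithm, is quadratic -- roughly sqrt(N) queries instead of N. For a search space of N = 2^n possibilities, sqrt(N) = 2^(n/2), which is still exponential in n.

```python
import math

# Grover's quadratic speedup: ~sqrt(N) queries instead of ~N.
# For N = 2**n, that is 2**(n/2) -- a smaller exponent, but still
# exponential, not polynomial or constant.
for n in (20, 40, 60):
    N = 2 ** n
    print(f"n={n}: classical ~{N:,} queries, Grover ~{math.isqrt(N):,}")
```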

For Christians, the apocalypse is a thousand years overdue, but the faithful need to be ready for it to arrive "any day now". Artificial Intelligence has been "only ten years away" for the past forty years. I expect the Singularity to exhibit such an ever-receding arrival, too.


Posted by: Alton Naur on 28 May 05

I'm a total singularity skeptic, but if it comes, I hope we'll still have that most critical of human inventions, the circuit breaker.


Posted by: Jon Lebkowsky on 29 May 05

The rapture of the nerds will never happen. Human augmentation will proceed in lockstep with the development of robotics and AI. The capabilities of both will proceed at the same rate.


Posted by: Graham on 29 May 05

I'm also skeptical of the techno-rapture, but not for the reasons some might think.

I don't think Vinge, and other science fiction writers in recent stories, ever imagined this evolutionary step as the arrival of paradise. Maybe they did in the early days, in the Eighties, but they've since given things more serious thought. In recent interviews Vinge has been very ambivalent about it, thinking it will be a very mixed bag.

Since the concept has slowly filtered out to the general public, a lot of overenthusiastic people have glossed over many questions and painted things in rosy optimism (Kurzweil, Tipler or Moravec) or gothic horror (The Matrix or The Terminator).

But in its most basic form, the concept really doesn't say anything about how this will be good or bad for humanity or the successors, if any, of humanity--or the environment of Earth for that matter.

The singularity in two sentences:

  1. Superhuman intelligence is possible, realizable and will accelerate scientific and technological progress, which will, in turn, feed back into greater levels of intelligence.
  2. Creatures with superhuman intelligence are hard to understand, which implies that any culture in which they participate will begin to get very strange.

To me, it seems hard to pin any historical endgame (techno-heaven or techno-hell) on either of those two statements.

But of course the whole thing hinges on whether superhuman levels of intelligence are possible. There may be undiscovered restrictions in physics or systems theory which place machines like the human brain at the top of a sigmoid curve of complexity. Beyond that point, we only get diminishing returns.
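The sigmoid picture is easy to make concrete (a sketch of my own, using the standard logistic function as the curve): growth per step is fastest near the midpoint, then each further step buys less and less.

```python
import math

# Logistic (sigmoid) growth: steep near the midpoint, flattening after.
# The shrinking per-step gain is the "diminishing returns" in question.
def logistic(t, k=1.0):
    return 1.0 / (1.0 + math.exp(-k * t))

for t in range(-4, 5, 2):
    gain = logistic(t + 1) - logistic(t)  # progress from one more step
    print(f"t={t:+d}  value={logistic(t):.3f}  gain from next step={gain:.3f}")
```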

I, as a vaguely informed layman, think superhuman intelligence is possible. I don't think Alton's citation of computational intractability is relevant to the question of superhuman intelligence, but it may place limits on other areas, so I won't dismiss it entirely.

I agree with Graham that superhuman intelligence will arise first in the fyborg interface (Look towards the bottom for my comment.). This means that people who can afford the wearable computers of the near future won't really notice anything until they compare themselves to people without such devices. Think of eyeglasses that record everything you see and hear throughout the day as an aid to short- and long-term memory, for example. (The tricky bit is how to organize and sort all this data in an intuitive and idiosyncratic way.)

By the way, if I admit to knowing about githyanki what do I win? Is this some sort of masonic handshake of the nerds?


Posted by: Mr. Farlops on 30 May 05


