
Metaverse, Democracy, and Singularity
Jon Lebkowsky, 17 Sep 07

I couldn't attend this year's Singularity Summit, but I had an opportunity to read the advance text for Worldchanging co-founder Jamais Cascio's "metaverse" presentation. I thought it was pretty rich, though (as I commented to Jamais) I think the Singularity is about as likely to appear as the golem, "an animated being created entirely from inanimate matter." I'll say more about that further on.

In his talk, Jamais usefully addressed the concept of the metaverse, or virtual worlds. I have issues with some virtual worlds proponents: people who once spent 3-6 hours a day watching television, and now spend those hours co-creating virtual worlds like Second Life. If you thought it was healthy that people were spending less time watching commercial television in favor of sitting before more interactive, 'net-enabled computers and other devices, you should think twice. Advertisers are on to the amount of time people are spending in places like Second Life. They're looking for ways to create ambient ads in virtual environments, and in hybrid real/virtual environments -- a la the interactive "holographic" ads Tom Cruise's police detective character encountered while chasing a suspect through a mall in "Minority Report."

I'm not saying virtual worlds are inherently horrible: hanging out in Second Life can be pretty cool. However, we need to consider who's pouring money into them, and how they expect to get a return on their investment.

Then again, I know people who want to use virtual worlds to create environmental simulations that will clarify the impacts of various future scenarios. Specifically, they want to show why we need to transform our thinking about resources, and how sustainable systems would look. In some conversations, these folks have mentioned Second Life (or that kind of system) as a potential platform for this kind of modeling, but based on the thinking behind the Metaverse Road Map, I can see that a system like Second Life isn't the right fit. Sustainability modeling would likely be more effective with a "mirror worlds" approach, not in an immersive virtual world. Where Second Life has worked well is as a platform for serious games and simulations, such as the "virtual hallucinations" schizophrenia simulation, which is useful for medical professionals and families dealing with schizophrenics.

I could go on for many paragraphs about the potential issues with virtual worlds, but Jamais brought up the metaverse in the context of thinking about the Singularity -- so I'll follow his lead.

In case you don't know, Singularity in this context refers to an imagined point in our evolution where machines become "smarter than humans," or self-aware, and boost technological progress beyond our puny human ability to keep up.

The concept of singularity is interesting in the way science fiction is interesting: we can use it to build models and parables that are relevant to right now, even if we don't accept that machines will ever have anything like human awareness. To me, this talk about singularity is like suggesting that if we throw light switches in intelligent patterns over time, eventually the electrical grid will become self-aware. Computers are, after all, just complex sets of switches, patterns of ones and zeros, and if there's an awareness associated with computer processes, it's the generative human awareness. I think I understand the thinking behind singularity: if human awareness is a product of evolution that began with a single cell, then complexity beyond our understanding can evolve from simple origins. That doesn't necessarily mean that machines would evolve as humans have evolved, and become self-aware. (For that matter, how many humans are self-aware?)
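
To see how literally true the "just switches" description is, consider a toy sketch (illustrative Python only, not anything from Jamais's talk): every logic gate in a CPU can be built from NAND alone, so even arithmetic is nothing but switch patterns.

    # A one-bit half-adder built from nothing but NAND gates --
    # "intelligent patterns of light switches," in effect.
    def nand(a, b):
        return 0 if (a and b) else 1

    def half_adder(a, b):
        """Add two bits using only NANDs; returns (sum, carry)."""
        n1 = nand(a, b)
        s = nand(nand(a, n1), nand(b, n1))  # XOR from four NANDs
        c = nand(n1, n1)                    # AND from two NANDs
        return s, c

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))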

Smarter thinking about the technological future speculates that machines may seem intelligent and aware, but they won't have intelligence or awareness in a human sense. In fact, Jamais spoke about the degree to which evidently intelligent systems are still products of human-authored computer code, created with human biases that guide choices made within the development process. In the metaverse, you'll meet computer-operated avatars that will be difficult to distinguish from those guided by humans. And the environments themselves will be programmed with simulated intelligences "that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society," says Jamais, "or best possible spam."

"Imagine the risks," he goes on, "associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do."

Jamais says the solution to this quandary is clear: "trust requires transparency." In the manner of open-source code, there needs to be an Open Singularity, which at the very least means opening the conversation to more, and more diverse, participants -- just as, in the Open Source world, we open projects to communities of participants and find ways to deal with the complexity that may result. In the process, all the problems of democracy can arise: amateurs mixed with professionals, many voices, contentiousness and disagreements, increased process overhead, leadership vacuums, and more. Sure, democracy is hard, but we do it because we think it's necessary.

Jamais has posted the text of his talk, as well as some reactions, such as that of Dan Farber of ZDNet: "How a democratic, open process can be applied to a complex idea like Singularity," said Dan, "and the right choices made, remains a mystery."

How an undemocratic, closed process can be applied to a complex idea like Singularity, and the right choices made, is the mystery to me.

Comments

Hi Jon,

I don't think you'll win a lot of points with the light-switch analogy there! I'm not a believer in the imminent rise of non-human sentient AI, for the simple reason that there is still far too large a gap between what computers are capable of and what we humans do (and Ray Kurzweil's super-exponential extrapolations are simply not warranted, as I've written elsewhere), but I have little doubt it will eventually happen, perhaps by the end of this century.

Vernor Vinge is one of the originators of the concept of a "singularity," which is central to several of his novels. I just finished his most recent book, "Rainbows End," and found the near future it depicts (he sets it in 2025) pretty plausible - and very interesting in the way it deals with the future of the "metaverse" and the possible rise of a sentient AI (or is it artificial?).

First of all, there's no "second life" or separate virtual world. Real life and virtual life are completely enmeshed in Vinge's vision: when people play or work together, they do so using a real physical location in real time, even if some or all of the participants are not physically present. Anybody who is physically present in that location can see what's going on by selecting an appropriate view using their own wearable computing devices. The same location (the UCSD library, at one point in the story) can host several wildly different virtual worlds at once. Physical artifacts make the simulated world more real through "haptic" technologies...

Of course, all this is only reasonably workable based on a "secure hardware environment" controlled (and overridable) by government agencies - one of Vinge's themes is the issue of freedom and openness against the rapidly increasing power of the individual to cause massive destruction...

Anyway, the other interesting thing was the nature of the "AI" - it's not clear it (or they, there may have been more than one) was entirely computer-based, although it had some amazing capabilities. With everybody constantly inter-networked and virtual worlds everywhere, Vinge introduces a concept of "affiliances", where humans (or other entities) can join a project to do a certain piece of intellectual work or service of some sort; work for hire almost instantly without necessarily knowing the bigger picture. So a given entity may be composed of both a large number of computer resources and a large number of humans doing analysis of various sorts - more than human, but also more than computer. What do we call an entity like that?

At one point one of the human participants is in a similar position, at the top of a real-time tree of tens of thousands of internetworked analysts trying to understand the situation and decide how to react. Which of these two - the human-headed or the computer(?)-headed - is closer to the "singularity" concept, or are both equivalent routes?


Posted by: Arthur Smith on 17 Sep 07

Arthur, if you think the light switch analogy is bogus, why don't you refute it? I appreciate your description of Vinge's latest work, but science fiction is generally not especially accurate in its predictions for the future. I think the concepts that are most persistent in sci fi (robot intelligences, a universe populated by human variations, widespread space travel, time travel, etc.) are too widely accepted as "real," just because they were so powerfully imagined.

Yes, we have robots today - but we have no Robby and no Gort. We have artificial intelligence, but it's not and never will be any more than a simulation of human intelligence, built and directed by humans.

I can buy into the transhuman, a machine-enhanced human evolution, as suggested by my friend Max More, but I suspect we haven't quite grasped the nature and character of that evolution.

Singularity thinking is probably useful in the way sci fi is generally useful: stimulating the imagination and occasionally driving smaller but significant developments, e.g. cellphone developers influenced by Star Trek's communicator (and probably Dick Tracy's 'wrist radio'), and forward thinkers about reputation systems influenced by Cory's 'whuffie.' There's definitely cool development in the AI category, even if AI isn't what some think it is: an actual pathway to a machine version of human awareness.

Back to my analogy: can you refute that computers are, in effect, just complex switching mechanisms processing instructions encoded as switches on (1) and off (0)? Can you offer evidence that is not science fiction? Is there some clear roadmap to singularity, other than the assumption that it 'must' happen because we can conceive it?


Posted by: Jon Lebkowsky on 18 Sep 07

Just saw your response - it would be nice if the blog here had an easier way to track comments and responses (maybe I'm missing something?). Anyway...

On the light switches question - of course computers are digital and their processing is completely mechanistic - but can you prove humans are not? This is obviously not a new question for AI research. Another book I just finished reading (I'd long planned to, but this was the first time I'd had a chance) is Douglas Hofstadter's ~1980 book "Gödel, Escher, Bach," which delves into the whole business of self-referentiality, what he calls "strange loops," and the vast gulf between pure logic and what seems to us to be common sense.

John Searle's "Chinese Room" argument against AI raises the same question as your light switch statement. The "awareness" of a complex self-referential system lies in the system as a whole, in some holistic fashion, not in one or another of its component parts - such as the man in the room in Searle's case, or the underlying electrical grid in yours. That "awareness" derives from very high levels of complex interrelationships and capabilities for self-introspection, and even self-modification in response to both internal and external stimuli; not only is there layer upon layer of complexity, but the layers are interwoven. You just can't see the self-awareness at the lower layers.
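
A concrete toy example of that kind of strange loop is a quine - a program whose output is exactly its own source code. A minimal Python sketch (illustrative only):

    # The template s describes the whole program, including itself;
    # printing s % s reproduces the source exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

The self-reference isn't located in any single character of s; it emerges from how the parts refer to each other - which is the point about layers above.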

Of course, if we understood human consciousness to any real degree then we might have a real path to AI rather than these best-guess assumptions...

Anyway, if you assume AI of any degree is possible, then the question is - how fast can it think? Does the continued march of hardware improvements mean that next year's model will be able to think twice as fast? Or is there some externally imposed constraint that limits speed of thought no matter the speed of the underlying hardware? If there is no such constraint, then AI smarter than humans seems pretty much inevitable... But obviously there are a couple of (I think very reasonable) assumptions lurking behind this.
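
To put rough numbers on that (a back-of-envelope sketch; the doubling period is purely assumed, and whether any such trend applies to "thought" at all is exactly the open question):

    import math

    # Hypothetically assume effective "thinking speed" doubles every
    # 2 years, Moore's-law style. How long until a 1000x speedup?
    doubling_period_years = 2.0               # assumed, not established
    target_speedup = 1000.0

    doublings = math.log2(target_speedup)     # ~9.97 doublings
    print(doublings * doubling_period_years)  # ~20 years

If no external constraint caps the curve, a couple of decades of doublings swamp any fixed human baseline.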


Posted by: Arthur Smith on 28 Sep 07

By the way, we seem to have conflicting definitions of "AI". There's a lot of stuff that's come out of AI research over the years that is somewhat useful - expert systems, neural networks, etc. But those are still far too low a layer (in my opinion) relative to what's actually needed to create human-scale intelligence and, yes, self-awareness (it's hard to imagine what one would mean without the other) - the "real" AI I was talking about.


Posted by: Arthur Smith on 28 Sep 07


