
Open the Future
Jamais Cascio, 19 Feb 04

(This essay was one of the pieces of the last, unpublished issue of Whole Earth Review. I initially wrote it in November of 2002, revised it in February of 2003, and lightly edited it today. The issue focused on the "singularity" -- a point in the near future where change happens so fast that pre-singularity people simply couldn't comprehend the lives of post-singularity people -- and this essay looked to ways to make such an event as safe and democratic as possible. Even if you set aside its references to a "singularity," however, I think the arguments it makes are still quite relevant to the issues we confront today. Warning: it's over 2600 words, a bit longer than our usual post length. --Jamais)

Open the Future

by Jamais Cascio

Very soon, sooner than we may wish, we will see the onset of a process of social and technological transformation that will utterly reshape our lives -- a process that some have termed a "singularity." While some embrace this possibility, others fear its potential. Aggressive steps to deflect this wave of global transformation will not save us, and would likely make things worse. Powerful interests desire drastic technological change; powerful cultural forces drive it. The seemingly common-sense approach of limiting access to emerging technologies simply further concentrates power in the hands of a few, while leaving us ultimately no safer. If we want a future that benefits us all, we'll have to try something radically different.

Many argue that we are powerless in the face of such massive change. Much of the memetic baggage accompanying the singularity concept emphasizes the inevitability of technological change and our utter impotence in confronting it. Some proponents would have you believe that the universe itself is structured to produce a singularity, so that resistance is not simply futile, it's ridiculous. Fortunately, they say, the post-singularity era will be one of abundance and freedom, an opportunity for all that is possible to become real. Those who try to divert or prevent a singularity aren't just fighting the inevitable, they're trying to deny themselves Heaven.

Others who take the idea of a singularity seriously are terrified. For many deep ecologists, religious fundamentalists, and other "rebels against the future," the technologies underlying a singularity -- artificial intelligence, nanotechnology, and especially biotechnology -- are inherently cataclysmic and should be controlled (at least) or eliminated (at best), as soon as possible. The potential benefits (long healthy lives, the end of material scarcity) do not outweigh the potential drawbacks (the possible destruction of humanity). Although they do not claim that resistance is useless, they too presume that we are merely victims of momentous change.

Proponents and opponents alike forget that technological development isn't a law of the universe, nor are we slaves to its relentless power. The future doesn't just happen to us. We can -- and should -- choose the future we want, and work to make it so.

Direct attempts to prevent a singularity are mistaken, even dangerous. History has shown time and again that well-meaning efforts to control the creation of powerful technologies serve only to drive their development underground into the hands of secretive government agencies, sophisticated political movements, and global-scale corporations, few of whom have demonstrated a consistent willingness to act in the best interests of the planet as a whole. These organizations -- whether the National Security Agency, Aum Shinri Kyo, or Monsanto -- act without concern for popular oversight, discussion, or guidance. When the process is secret and the goal is power, the public is the victim. The world is not a safer place with treaty-defying surreptitious bioweapons labs and hidden nuclear proliferation. Those who would save humanity by restricting transformative technologies to a power elite make a tragic mistake.

But centralized control isn't our only option. In April of 2000, I had an opportunity to debate Bill Joy, the Sun Microsystems engineer and author of the provocative Wired article, "Why the Future Doesn't Need Us." He argued that these transformative technologies could doom civilization, and that we need to do everything possible to prevent such a disaster. He called for a top-down, international regime controlling the development of what he termed "knowledge-enabled" technologies like AI and nanotech, drawing explicit parallels between these emerging systems and nuclear weapon technology. I strongly disagreed. Rather than looking to some rusty models of world order for solutions, I suggested we seek a more dynamic, if counter-intuitive, approach: openness.

This is no longer the Cold War world of massive military-industrial complexes girding for battle; this is a world where even moderately sophisticated groups can and do engage in cutting-edge development. Those who wish to do harm will get their hands on these technologies, no matter how much we try to restrict access. But if the dangerous uses are "knowledge-enabled," so too are the defenses. Opening the books on emerging technologies, making the information about how they work widely available and easily accessible, in turn creates the possibility of a global defense against accidents or the inevitable depredations of a few. Openness speaks to our long traditions of democracy, free expression, and the scientific method, even as it harnesses one of the newest and best forces in our culture: the power of networks and the distributed-collaboration tools they evolve.

Broad access to singularity-related tools and knowledge would help millions of people examine and analyze emerging information, nano- and biotechnologies, looking for errors and flaws that could lead to dangerous or unintended results. This concept has precedent: it already works in the world of software, with the "free software" or "open source" movement. A multitude of developers, each interested in making sure the software is as reliable and secure as possible, do a demonstrably better job at making hard-to-attack software than an office park's worth of programmers whose main concerns are market share, liability, and maintaining trade secrets.

Even non-programmers can help out with distributed efforts. The emerging world of highly-distributed network technologies (so-called "grid" or "swarm" systems) makes it possible for nearly anyone to become a part-time researcher or analyst. The "Folding@Home" project, for example, enlists the aid of tens of thousands of computer users from around the world to investigate intricate models of protein folding, critical for the development of effective treatments for complex diseases. Anyone on the Internet can participate: just download the software, and boom, you're part of the search for a cure. It turns out that thousands of cheap, consumer PCs running the analysis as a background task can process the information far faster than expensive high-end mainframes or even supercomputers.
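The work-unit pattern behind projects like Folding@Home can be sketched in a few lines. This is a hypothetical toy, not Folding@Home's actual client or protocol: a coordinator splits a large job into small, independent units, and each volunteer machine repeatedly fetches one, computes, and reports back (here a hash computation stands in for the folding simulation).

```python
import hashlib
from queue import Queue

def make_work_units(data, chunk_size):
    """Split a large dataset into independent work units."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def volunteer_worker(unit):
    """One volunteer's background task: analyze a single unit.
    A hash stands in here for the real protein-folding computation."""
    return hashlib.sha256(unit.encode()).hexdigest()

def run_swarm(data, chunk_size=4):
    """Coordinator loop: hand out units, collect results.
    In a real grid these would go to thousands of machines in parallel."""
    pending = Queue()
    for unit in make_work_units(data, chunk_size):
        pending.put(unit)
    results = []
    while not pending.empty():
        unit = pending.get()                     # a volunteer fetches a unit
        results.append(volunteer_worker(unit))   # ...and reports its result
    return results
```

The essential property is that no single machine ever needs to hold the whole problem; adding volunteers simply increases how many units get processed per hour.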

The more people participate, even in small ways, the better we get at building up our knowledge and defenses. And this openness has another, not insubstantial, benefit: transparency. It is far more difficult to obscure the implications of new technologies (or, conversely, to oversell their possibilities) when people around the world can read the plans. Monopolies are less likely to form when everyone understands the products companies make. Even cheaters and criminals have a tougher time, as any system that gets used can be checked against known archives of source code.

Opponents of this idea will claim that terrorists and dictators will take advantage of this permissive information sharing to create weapons. They will -- but this also happens now, under the old model of global restrictions and control, as has become depressingly clear. There will always be those few who wish to do others harm. But with an open approach, you also get millions of people who know how dangerous technologies work and are committed to helping to detect, defend and respond. That these are "knowledge-enabled" technologies means that knowledge also enables their control; knowledge, in turn, grows faster as it becomes more widespread.

These technologies may have the potential to be as dangerous as nuclear weapons, but they emerge like computer viruses. General ignorance of how software works does not stop viruses from spreading. Quite the opposite -- it gives a knowledgeable few the power to wreak havoc in the lives of everyone else, and gives companies that keep the information private a chance to deny that virus-enabling defects exist. Letting everyone read the source code may help a few minor sociopaths develop new viruses, but it also enables a multitude of users to find and repair flaws, stopping many viruses dead in their tracks. The same applies to knowledge-enabled transformative technologies: when (not if) a terrorist figures out how to do bad things with biotechnology, a swarm of people around the world looking for defenses will make us safer far faster than would an official bureaucracy worried about being blamed.

Consider: in early 2003, the U.S. Environmental Protection Agency announced that it was upgrading its nationwide network of air-quality monitoring stations to detect certain bioterror pathogens. Analyzing air samples, however, requires enormous processing time; this limits both the number of monitors and the variety of germs they can detect. If one of the monitors picks up evidence of a biological agent in the air, under current conditions it will take time to analyze and confirm the finding, and then more time to determine what treatments and preventive measures -- if any -- will be effective.

Imagine, conversely, a more open system's response. EPA monitoring devices could send data not to a handful of overloaded supercomputers but to millions of PCs around the nation -- a sort of "CivilDefense@Home" project. Working faster, this distributed network could spot a wider array of potential threats. When a potential threat is found, the swarm of computers could quickly confirm the evidence, and any and all biotech researchers with the time and resources to work on countermeasures could download the data to see what we're up against. Even if there are duplicated efforts, the wider array of knowledge and experience brought to bear on the problem has a greater chance of resulting in a better solution faster -- and of course answers could still be sought in the restricted labs of the CDC, FBI and USAMRIID. The more participants, the better.
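The hypothetical "CivilDefense@Home" rests on the same redundancy principle. A minimal sketch, with invented names and a made-up 90%-accurate detector standing in for real pathogen analysis: fan each sensor reading out to many volunteer nodes and confirm an alert only when a majority agrees, so individual errors wash out.

```python
import random

def node_classify(sample, rng):
    """One volunteer node's imperfect test: right 90% of the time.
    (A toy stand-in for actual biological-agent analysis.)"""
    truth = sample["contains_agent"]
    return truth if rng.random() < 0.9 else not truth

def swarm_confirm(sample, n_nodes=101, quorum=0.5, seed=42):
    """Fan the sensor reading out to n_nodes volunteers and confirm
    an alert only on a majority vote -- redundancy substitutes for a
    single trusted supercomputer."""
    rng = random.Random(seed)
    votes = sum(node_classify(sample, rng) for _ in range(n_nodes))
    return votes / n_nodes > quorum
```

With enough independent nodes, majority voting suppresses individual false positives and false negatives -- the same logic that lets real grid projects tolerate flaky volunteer machines by replicating each work unit.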

Both the threat and the response could happen with today's technology. Tomorrow's developments will only produce more threats and challenges more quickly. Do we really think government bureaucracies can handle them alone?

A move to a society that embraces open technology access is possible, but it will take time. This is not a tragedy; even half-measures are an improvement over a total clampdown. The basic logic of an open approach -- we're safer when more of us can see what's going on -- will still help as we take cautious steps forward. A number of small measures suggest themselves as good places to start:

• Re-open academic discourse. Post-9/11 security fears have led to restrictions on what scholarly researchers are allowed to discuss or publish. Our ability to understand and react to significant events -- disasters or otherwise -- is hampered by these controls.

• Public institutions should use open software. The philosophy of open source software matches our traditions of free discourse. Moreover, the public should have the right to see the code underlying government activities. In some cases, such as with word processing software, the public value may be slight; in other cases, such as with electronic voting software, the public value couldn't be greater. That nearly all open source software is free means public institutions will also save serious money over time.

• Research and development paid for by the public should be placed in the public domain. Proprietary interests should not be able to use government research grants for private gain. We all should benefit from research done with our resources.

• Our founding fathers intended intellectual property (IP) laws to promote invention, but IP laws today are used to shore up cultural monopolies. Copyright and other IP restrictions should reward innovation, not provide hundred-plus-year financial windfalls for companies long after the innovators have died.

• Those who currently develop these powerful technologies should open their research to the public. Already there is a growing "open source biotechnology" movement, and key figures in the world of nanotechnology and artificial intelligence have spoken of their desire to keep their work open. This is, in many ways, the critical step. It could become a virtuous cycle, with early successes in turn bringing in more participants, more research, and greater influence over policy.

While these steps would not result in the fully-open world that would be our best hope for a safe future, they would let in much-needed cracks of light.

Fortunately, the forces of openness are gaining a greater voice around the world. The notion that self-assembling, bottom-up networks are powerful methods of adapting to ever-changing conditions has moved from the realm of academic theory into the toolbox of management consultants, military planners, and free-floating swarms of teenagers alike. Increasingly, people are coming to realize that openness is a key engine for innovation.

Centralized, hierarchical control is an effective management technique in a world of slow change and limited information -- the world in which Henry Ford built the Model T, say. In such a world, when tomorrow will look pretty much the same as today, that's a reasonable system. In a world where each tomorrow could see fundamental transformation of how we work, communicate, and live, it's a fatal mistake.

A fully open, fully distributed system won't spring forth easily or quickly. Nor will the path of a singularity be smooth. There is a feedback loop between society and technology -- changes in one drive changes in the other, and vice-versa -- but there is also a pace disconnect between them. Tools change faster than culture, and there is a tension between the desire to build new devices and systems and the need for existing technologies to be integrated into people's lives. As a singularity gets closer, this disconnect will worsen; it's not just that society lags technology, it's that technology keeps lapping society, which is unable to settle on new paradigms before having to start anew. People trying to live in fairly modern ways live cheek-by-jowl with people desperately trying to hang on to well-understood traditions, and both are still confronted by surprising new concepts and systems.

Change lags and lurches, as rapid improvements and innovations in technology are haphazardly integrated into other aspects of our culture(s). Technologies cross-breed, and advances in one realm spur a flurry of breakthroughs in others. These new discoveries and inventions, in turn, open up new worlds of opportunities and innovation.

If there is a key driving force pushing towards a singularity, it's international competition for power. This ongoing struggle for power and security is why, in my view, attempts to prevent a singularity simply by international fiat are doomed. The potential capabilities of transformative technologies are simply staggering. No nation will risk falling behind its competitors, regardless of treaties or UN resolutions banning intelligent machines or molecular-scale tools. The uncontrolled global transformation these technologies may spark is, in strategic terms, far less of a threat than an opponent having a decided advantage in their development -- a "singularity gap," if you will. The "missile gap" that drove the early days of the nuclear arms race would pale in comparison.

Technology doesn't make the singularity inevitable; the need for power does. One of the great lessons of the 20th century was that openness -- democracy, free expression, and transparency -- is the surest way to rein in the worst excesses of power, and to spread its benefits to the greatest number of us. Time and again, we have learned that the best way to get decisions that serve us all is to let us all decide.

The greatest danger we face comes not from a singularity itself, but from those who wish us to be impotent at its arrival, those who wish to keep its power for themselves, and those who would hide its secrets from the public. Those who see the possibility of a revolutionary future of abundance and freedom are right, as are those who fear the possibility of catastrophe and extinction. But where they are both wrong is in believing that the future is out of our hands, and should be kept out of our hands. We need an open singularity, one that we can all be a part of. That kind of future is within our reach; we need to take hold of it now.



Comments

Typical techno-utopian nonsense.
When I was a tiny little kid in the very early 60s, the papers were full of this crap. "Soon you'll have nuclear-powered vacuum cleaners! Rocket travel! Vacations on the moon!"
Never happened.
The harsh reality is that AI has hit a brick wall and is going nowhere. The entire project of creating sentient machines is misconceived and impossible at a basic level, since neuroscience research has shown that emotions (which arise from a body) prove essential in intelligent reasoning. Since emotions can never be generated or simulated by pure math, AI was and is a dead end. It's impossible. Marvin Minsky, Gelernter, and many others have pointed this out.
Nanotechnology has likewise hit a dead end and is likewise a pipe dream. Among basic physics problems preventing nanotech from working, there's the heat problem, the stickiness problem, and the self-assembly problem -- all impossible to solve unless someone changes the basic laws of physics.
So 2 of the 3 technologies mentioned in the above article are pure pseudoscientific self-delusion. That makes any alleged "singularity" a fantasy.


Posted by: laertes on 19 Feb 04

Wow, amazing and beautiful text...

Could be the Manifesto that we need at the precise moment..

Who is the author, Mr. Cascio?

Thanks,

Alfredo


Posted by: Alfredo Narváez on 19 Feb 04

I am, Alfredo. I'll edit to make that more clear.

Laertes, you are of course entitled to your opinion. However, there are numerous people who are currently working on the technologies you describe as "pipe dreams" who would disagree with you.


Posted by: Jamais Cascio on 19 Feb 04

As a human being on planet Earth, I'm in full agreement with the need for openness. Even if you buy into Laertes' cynicism and dismiss nanotech and AI, you'll find plenty of very real technologies whose potential transformative powers are difficult to overestimate.

But as a citizen of the United States I'm afraid of what that openness will do to my country.

The initiative towards openness won't come from the United States. We're at the top of the global heap right now. In the short run, openness can only reduce our standing in the world.

So openness, if it comes, will come from the people at the bottom of the global heap. From places who have much to gain and nothing to lose. And it will come at the expense of my country, as it tries (and fails) to uphold the existing brittle order.

The only thing worse for the United States than openness is the alternative. Either way, I think the United States is in for a hard century or two.


Posted by: Jeremy Lyon on 19 Feb 04

Hrm.

First, Laertes - I agree about technobabble.

However, you're dead wrong about nanotech: life has solved all of these problems well enough to be getting on with and even if that turns out to be an optimal solution, nanotech could still be the power of programmable life... Quite enough power to be getting on with, thanks.

Openness about self-replicating technologies is insanely dangerous. Let's face that up front: making a nuke requires enormous engineering infrastructure and kilos of tightly controlled isotopes.

Making a polio virus requires a requisition order and a DNA sequence: http://www.google.com/search?q=virus+made+in+lab&ie=UTF-8&oe=UTF-8

The difference between this, smallpox, and some genetically tweaked strain of the martian ebola flupox from Porton Down is knowledge: what to put in the damn machine.

Openness?

Sure: if you police every single one of the machines.

if not? I don't know that anybody has a good answer to this: Bill Joy doesn't, yet, he's still working on his book about this stuff.

I'm not knocking you for proposing an approach.

However, given what we know, and the problems as we understand them, I do think that conscious not-having-an-answer is important. I don't know how to make human life safe from these new threats, and I don't believe anybody else on the planet does yet. Time for creative ignorance.


Posted by: Vinay on 20 Feb 04

Well, this may not be understood, but I view the external world as a projection of consciousness, a form of externalized mind. The singularity is merely(!?!) widespread enlightenment. If you have ever had the opportunity to be with a saint, an enlightened person, you can see this in action. Be prepared to drop all of your concepts, your ego, and any other self-limiting beliefs. Mystics have always lived the singularity. It's really not such a big deal, at the same time that it is everything. Enjoy.


Posted by: ana ma roopa on 20 Feb 04

I think that a lot of this "singularity" talk should have a lot more to do with the current growth of the population. Perhaps we are forgetting about the singularity that is bound to occur once the Earth's inhabitable space has reached its maximum capacity for supporting life. Eventually humans will fill up the Earth and use enough of the resources that expansion and sustenance will no longer be possible.

more on this in a bit.. i have to go on lunch break.


Posted by: Paul Hindt on 20 Feb 04

Although the focus of the WER issue -- and therefore my essay -- was the singularity, the approach I suggest has broader implications. Alex suggested that I redraft the essay to speak directly to the wider subject; I'll do that in the next week or two.

Jeremy: I suspect you're right about the US being in for a difficult century, but you may be surprised about who's on top when it comes to these emerging technologies. Expertise in bio and nano engineering is more globally distributed than one might expect, and the US is actually experiencing something of a brain-drain regarding biotech, as top scientists move to Europe (or China) to do work with stem cells, etc.

Vinay, as you note, nobody is going to be able to police all the machines or scientists around the world. Despite its idealistic/"utopian" sheen, my proposal here is actually inherently quite pragmatic. We know we aren't going to be able to stop everyone who might wish to do harm, and we know we won't be able to stop every possible accident. What approach, then, can we take that will do the best job of responding to and defending against those attacks and accidents when they do come? Open systems have a better track record against smaller-scale "knowledge-enabled" attacks than do closed/proprietary systems. I'm more willing to trust a global community of collaborating bio/nano/info-technologists than a politicized government agency or a privatized firm which has to maintain shareholder value as its first priority.


Posted by: Jamais Cascio on 20 Feb 04

It seems like we are going to have to have a lot more of an evolution of our population in general before these technologies actually start to make any real difference to anyone. I agree to a certain extent on developing technologies that would help more people have a better quality of life or perhaps live a longer life, but only if the population starts to stabilize.


Posted by: Paul Hindt on 20 Feb 04

Also, it is quite apparent to me that we should prioritize our scientific resources for the time being, and work on alternative energy production.


Posted by: Paul Hindt on 20 Feb 04

These singularity outcomes are grey abstractions compared to the beauty of nature. For success, all rely on immense energy and material inputs. Proponents take no account of the law of unintended consequences. Hang on, techno-optimists, for the inevitable surprise.


Posted by: John Laumer on 21 Feb 04

Jamais:

I think what you call "Singularity" is what is generally known in philosophy as a "Paradigm Shift".

Singularity, as I understand it, is the counter-motion of the Big Bang and the expansion of the Universe. In ancient philosophical terms, it's the "Yin-Yang" principle of existence.

I will agree with you that the historical process has speeded up - empires once existed for upwards of 500 years - by this, I mean, empire = control of the known world. The British Empire rose and fell in approximately 200 years and the waning US empire has compressed that time by half.

This observation would support the idea that a major paradigm shift will occur in the near future. But the maddening thing is NO ONE, not even you, can predict, except in the vaguest terms (we can, however, observe the after-event), when or where it will happen. That is the nature of the phenomenon.

Nor will it happen overnight and it may have already happened ... we won't know until it becomes a tidal wave in human consciousness and right now, it is beyond human intelligence, as we know it, to define it.

Trust me, it is not something only the young can see -- as you seem to imply in your article. The sooner you and so many of the human race realize there are no "chosen ones," the sooner we will make real progress.

Keep up the good work.


Posted by: Stefan Thomas on 22 Feb 04


