There's a theory in cognitive science that suggests that one of the hallmarks of human consciousness is the ability to model another person's thoughts in one's own brain, and do so with reasonable accuracy. It's not simply being able to read expressions, although that's part of it; humans can imagine how another person's thought processes, which may differ significantly from their own, would play out in reaction to a given situation. If you think about it, this is an amazing capability, especially because we don't always do it consciously. We run sophisticated simulations of other people's minds within our own. This capacity allows us both to imagine how others would feel after we witness their circumstances -- that is, it allows us to experience empathy -- and to imagine how others would respond to our own statements and actions -- that is, it allows us to rehearse our behavior.
We are now in the process of building that same capability into the world in which we live.
We are building this capability through the intersection of three important lines of research, all of which should be familiar to WorldChanging readers: augmented reality, virtual worlds, and simulation.
As these technologies continue to overlap, they will have an ever-increasing importance to the idea of building an open future. One of the characteristics of an open future is flexibility across options, and one way to choose among many options is to rehearse and test each to see which is optimal. The capabilities we are developing will give us a chance to examine scenarios for possible outcomes in a much less formal and mechanistic way than is currently practiced as "scenario planning"; instead, scenario analysis and wind-tunneling choices could easily become a standard part of how we make our way through an increasingly complex and information-rich world.
"Augmented reality" is a mouthful of a term for what is gradually becoming a familiar experience: the annotation and observation of the world around us through the use of information and communication technologies. "Location-based" services such as denCity or Crunkies (or, as Mikki discussed earlier today, the "Dodgeball" service) are simple forms of augmented reality, as they provide points of connection between information networks and physical space. With the right kinds of mediating tools, we can leave notes for each other or access particular bits of information appropriate to our current location. Who's near me? What's good here? What might I easily miss? These are the kinds of questions that location-based tools attempt to answer. "Remote data" services such as Google Earth or SeaLabs are simple forms of augmented reality as well, as they provide technology-enabled extensions of our natural senses.
As augmented reality evolves, such information-by-request will be matched by constant ambient information, allowing us to keep track of bits of information we individually find useful without their demanding constant attention. The intent isn't to cut people off from their immediate physical experience, but to allow people to maintain non-physical contact with distant experiences -- the health of a sick relative, or weather forecasts, or traffic levels on one's blog. Although superficially this may appear to add to information overload, if done properly, it could actually be a moderating tool: rather than actively seeking out bits of information that may not be useful at that particular moment (and correspondingly worrying about missing something), we can allow the tools to monitor that information for us, drawing our attention only to changes that actually warrant it -- but still keeping the info within easy reach whenever we decide to check.
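The monitoring behavior described here can be made concrete with a small sketch. This is purely illustrative -- the class, names, and numbers are invented for this example, and a simple numeric threshold stands in for the much harder question of which changes "actually warrant our attention":

```python
class AmbientMonitor:
    """Toy model of an ambient information feed: it tracks a value
    quietly, stays checkable at any time, and interrupts us only
    when the value has moved enough since the last interruption."""

    def __init__(self, label, threshold):
        self.label = label
        self.threshold = threshold   # minimum change worth an interruption
        self.last_alerted = None     # value at the time of the last alert
        self.current = None          # always available if we choose to check

    def update(self, value):
        """Record a new reading. Return an alert string only when the
        change since the last alert meets the threshold; else None."""
        self.current = value
        if self.last_alerted is None or abs(value - self.last_alerted) >= self.threshold:
            self.last_alerted = value
            return f"{self.label}: now {value}"
        return None  # stays quietly on the periphery

# A traffic feed updates constantly, but we are interrupted only for big swings.
traffic = AmbientMonitor("commute delay (min)", threshold=10)
alerts = [traffic.update(v) for v in [5, 7, 9, 22, 24, 8]]
```

Of six readings, only three produce alerts (the first reading, the jump to 22, and the drop back to 8); the rest are absorbed silently, while `traffic.current` remains available for an on-demand check.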
Current tools for augmented reality are fairly cumbersome, as most rely on mobile phones, which stay in our pockets except when something demands our attention. This is hardly optimal for situations where we need ambient communication -- changes on the periphery of our senses, noticeable but easily ignored if need be. The utility of augmented reality is such that we will likely see substantial improvements in the physical interfaces in the near future.
Augmented reality requires a robust network of accessible information sources, as the system described would combine the ability to observe remote phenomena with the ability to provide asynchronous location-based information (that is, information transfer at a particular spot that doesn't require all parties to be there at the same time). These information sources could include both "blogjects" -- physical objects that provide rich networked information about themselves and their environments -- and participatory media, people carrying around cameras or recording devices that they allow the rest of the world to experience.
Interestingly, some "augmented reality" features are already present in virtual worlds.
Nearly all virtual world environments, from games like World of Warcraft to social networks like Second Life, provide an interface that places important but not always attention-demanding information along the screen's periphery, similar to what one might experience with real-world augmented reality gear in a few years. The complexity of the interface generally reflects current activity; for example, a WoW player on a Molten Core raid may have more information about teammates' health, and more links to tools and abilities, on screen than she would during small-group play.
Virtual worlds provide an artificial manifestation of physical proximity for non-local participants. It doesn't matter whether the people on the aforementioned Molten Core raid are actually located in San Francisco, US, Toronto, Canada, or Yorkshire, UK; they can interact with each other as if they were all in the same (virtual) location. To the degree that economic and social behavior has an information component, they can engage in relationships and commerce; as tools for fabricating real-world objects from virtual-space designs become available, these interactions can take on a more tangible aspect, too.
If augmented reality provides us with virtualized information about real-world spaces, virtual worlds provide us with immersive non-physical experiences in imagined spaces. But as the interface description above suggests, there's the potential for overlap: picture an augmented reality tool that informs the user of interesting events from fiction that took place at given locations. Projects such as ARQuake take the overlap of augmented reality and virtual worlds even further, overlaying the Quake game environment -- and opponents -- on top of physical reality.
But the intersection of virtual environments and augmented reality will get more interesting when the amount of AR data available is sufficient to build a relatively realistic model of the physical world that can be examined and navigated as if it were a virtual environment.
We've seen mapping applications that do something similar, offering 3D navigable spaces that appear more-or-less identical to real world locations. That's just the beginning, though, as these current tools are static and lifeless. A fuller combination of virtual world and augmented reality would include the location-based information for geographic points as well as the information streams from blogjects and individuals who have opened their personal recording devices to outside observation. With enough participation and information density, one could build what would amount to a SimCity version of the real world, supported by extensive real-world data on behavior and locations.
At the same time, we are building a stronger understanding of how to create simulated environments that plausibly match reality. The Sim is not the City, of course -- simulation outcomes are always the result of the intersection of limited information and designer-determined rules. Better sources of information, and behavioral rules that emerge from observation rather than assumption, are likely to improve the capacity of simulations to provide us useful results. The denser information feeds of augmented reality and the interactive spaces of virtual worlds could well accelerate that improvement.
The goal here isn't to replace reality -- few of us want The Matrix -- but to give us the tools to make the best possible choices in our current reality.
Imagine a world in which political leaders, upon the presentation of a new economic, political or social strategy, not only had to spell out the details of how it would work and its financial feasibility, but had to offer up a simulation of why they believed that this was the best course of action. That simulation would have to be open and transparent, of course, so that interested citizens could examine the underlying assumptions and rules that made it work. Critics would, of necessity, create countervailing simulations, equally open to examination. In principle -- and one need not be a cynic to recognize that the following is not inevitable -- citizens could make decisions informed not just by what political leaders are promising, but by the assumptions and rationales that went into the promises in the first place.
Augmented Virtual Simulated Real Worlds
Now imagine that those same decision-support tools could be easily used by the citizens themselves. It's a shift comparable to the rise of home tools for image, video and text editing that rival the best professional tools of just a few years earlier. Rather than needing a massive technical office to assemble a simulated future, people could rely on software tutorials, "wizards" and a clear interface in order to play around with possible outcomes.
"iReality." "ParadigmShop." "Google Scenario."
The combination of augmented reality information, virtual world interactive environments, and complex simulations isn't inevitable, but it is a quite possible result of the further development of these three technologies. If done right, they could provide an incredibly useful tool for navigating the increasingly difficult choices we as a global society will be required to make in the coming years. How should molecular manufacturing nanotechnology be regulated? What are the repercussions of widespread access to biotechnology? What's the best way to provide food and water to those in need? Which carbon emission reduction strategies are likely to deliver the best combination of environmental and economic results?
Increased emphasis on making the right long-term decision could be one result of extended healthy lifespans. In the past, concern for future outcomes was couched in the language of repercussions for one's children and grandchildren; demonstrably, the degree of worry parents have for the lives their children will live varies considerably. But if the longer-term results of current actions can harm oneself, not just one's distant progeny, choices become more personal. Given the speed with which biological research pushes us towards a world of radical longevity, many of us, perhaps much to our own surprise, are likely to face the long-term repercussions of decisions we may have thought were matters for future generations.
Thinking of longer-term outcomes is not something that many of us do on a regular basis. This may be, in part, neurological; our brains evolved under conditions in which few of us lived much past 30. This may also be, in part, learned behavior -- we don't take the long view because we've never had practice or assistance in doing so. That's where tools like the ones discussed here could be all the more useful. They could be training wheels for responsible decision-making, helping us get in the habit of thinking about the long-term results of today's choices.
It's wild-eyed optimism to think this, to be sure -- but imagining it is the first step to making it real.
The Open Future: Living in Multiple Worlds is a part of our month long retrospective leading up to our anniversary on Oct. 1. For the next four weeks, we'll celebrate five years of solutions-based, forward-thinking and innovative journalism by publishing the best of the Worldchanging archives.