A consortium of European computer scientists is working on a project called NEW TIES -- New and Emergent World models Through Individual, Evolutionary and Social learning. Its goal is nothing less than to evolve an entirely new culture through the use of computer "agents" cooperating, competing and reproducing with each other in a vast simulated environment. My question is, have they thought through the implications?
The NEW TIES project appears on the surface to be a slightly less-colorful but more sophisticated version of The Sims (even though, as it turns out, the underlying display engine is actually from the first-person-shooter game Counter-Strike). Like The Sims, the various simulated people will have needs (such as food and sex) and capabilities (such as tool use and communication); the difference, however, is that the NEW TIES agents have the ability to learn, and to pass on their learning to subsequent generations.
New Scientist sums the project up nicely:
Each agent will be capable of various simple tasks, like moving around and building simple structures, but will also have the ability to communicate and cooperate with its cohabitants. Through simple interaction, the researchers hope to watch these characters create their very own society from scratch.
Every character in the simulated world will need to eat to survive, and will be able to learn from their environment through trial and error - learning, for example, how to cultivate edible plants with water and sunlight. In addition, characters will be able to reproduce by mating with members of the opposite sex, and their offspring will inherit a random collection of their parents' "genetic" traits.
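The inheritance mechanism described above -- offspring receiving a random mix of their parents' "genetic" traits -- resembles what evolutionary-computation researchers call uniform crossover with mutation. Here is a minimal sketch in Python; the trait names, dictionary representation and mutation scheme are illustrative assumptions, not the actual NEW TIES genome:

```python
import random

def reproduce(parent_a, parent_b, mutation_rate=0.01):
    """Combine two parents' trait dictionaries via uniform crossover.

    Each trait is inherited at random from one parent or the other;
    a small mutation rate occasionally perturbs a trait's value.
    (Illustrative only -- not the actual NEW TIES representation.)
    """
    child = {}
    for trait in parent_a:
        child[trait] = random.choice([parent_a[trait], parent_b[trait]])
        if random.random() < mutation_rate:
            child[trait] += random.gauss(0, 0.1)  # small random perturbation
    return child

a = {"speed": 1.0, "curiosity": 0.4, "sociability": 0.7}
b = {"speed": 0.6, "curiosity": 0.9, "sociability": 0.2}
child = reproduce(a, b)
```

Over many generations, selection pressure (here, the need to eat) acting on such randomly recombined traits is what the researchers hope will drive the population toward novel, adapted behavior.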
Ultimately, the NEW TIES project hopes to see the evolution of an entirely new culture, with its own language and rituals. While certainly ambitious, such goals are not outside the realm of possibility. Emergent behavior can have startlingly sophisticated results, and the various groups participating in the project have abundant experience in the creation of computer agents able to evolve new functions to meet (admittedly more limited) goals. They argue that it's only the development of fast distributed processing that allows this project to happen; over 50 computers are involved in the simulation. The NEW TIES software is open source, and can be downloaded from SourceForge.
It is most likely that what will result from this project will be valuable largely in the sense of seeing how difficult such an endeavor truly is, and that none of the goals of emergent culture and language will be met. At best, what might emerge are sets of behavior that superficially appear novel and "cultural," but upon examination have clear algorithmic roots.
But what happens if they succeed, and a simulated society emerges? At what point does it become unethical to turn the simulation environment off?
Is it when they develop emergent behaviors, not explicitly programmed in, to solve resource problems?
Is it when they develop their own language for "horizontal" transmission of novel behaviors (i.e., not passed down "genetically" from parent to child)?
Is it when they demonstrate "ritualized" behavior, which has no obvious function but is adopted by the agents as part of their "social" interactions?
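The distinction drawn in these questions -- between traits passed down "genetically" from parent to child and behaviors transmitted "horizontally" between unrelated agents -- can be made concrete in a toy model. In the sketch below, all class and method names are my own illustrative assumptions, not the NEW TIES API:

```python
import random

class Agent:
    """Toy agent separating inherited traits from learned behaviors.

    'genes' pass only from parents to offspring (vertical transmission);
    'behaviors' can be copied from any other agent during interaction
    (horizontal transmission). Illustrative only.
    """
    def __init__(self, genes, behaviors=None):
        self.genes = dict(genes)
        self.behaviors = set(behaviors or [])

    def offspring(self, other):
        # Vertical: each child gene is drawn from one of the two parents.
        genes = {k: random.choice([self.genes[k], other.genes[k]])
                 for k in self.genes}
        return Agent(genes)  # learned behaviors are NOT inherited

    def imitate(self, other):
        # Horizontal: adopt a behavior observed in another agent.
        if other.behaviors:
            self.behaviors.add(random.choice(sorted(other.behaviors)))

teacher = Agent({"speed": 1.0}, behaviors={"plant-watering"})
child = teacher.offspring(Agent({"speed": 0.5}))
child.imitate(teacher)  # the behavior spreads by imitation, not descent
```

The ethically interesting threshold in the question above is when agents evolve a shared signaling system that makes this kind of imitation work at a distance -- that is, a language.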
To be clear, this is not to argue that this project has the potential to evolve independent machine intelligence or anything like that. The NEW TIES sims are highly unlikely to pass a Turing Test. But what could emerge if the project meets its goals is nonetheless something very new -- non-biological entities that meet the definition of life, entities that are not intelligent in any conventional sense yet have invented their own methods of symbolic communication and meaning.
Would turning the simulation off be akin to flushing an Ant Farm down the drain -- harsh, but of little ethical consequence in most moral systems? Or would it be a new form of genocide? If the latter, is it mitigated by the existence of backup files? Or is it entirely unlike death, and more akin to sleep (presuming the simulation would pick right up where it left off once restarted)?
Whether you think these questions are silly or serious, they are issues that our civilization will confront soon. Even if NEW TIES fails to reach its goals, it is quite possible that a subsequent project (with better learning algorithms and more powerful systems) will succeed. Or perhaps a version of the Aibo a generation or two down the road will have sufficient learning capability to demonstrate entirely novel, self-directed behavior. At what point do we have an ethical responsibility to non-biological creations?
I know that some of you will say, "never, they're just machines." But that position will become increasingly tenuous as biomimetic principles further infiltrate engineering and design. If the Aibo version 2010 behaves like a puppy, including learning to fetch, trying to avoid pain, and responding to the sound and image of its human companion, of what relevance is its material composition? If it matters that we made it, how does that position change when fabrication technology becomes sufficient to produce a new one at the touch of a button? How about if the new one's behavior is based on a template combined from two (or more) existing Aibo 2010s? Or if it can learn from watching the behavior of its "parent?"
These are familiar questions to readers of science fiction, who will undoubtedly be able to offer up much more compelling and troubling scenarios. But this is another element from science fiction likely to show up in reality far sooner than anyone might have supposed. If the NEW TIES project succeeds, we will confront the ethics of shutting down a new and evolving society (that happens to live on a network of computers) within the next couple of years.
I don't have an easy answer for this. By even raising the question, I'm probably biased towards treating a successful NEW TIES society in a manner similar to how we'd like to treat higher primates, and doing everything in our power to avoid their loss. But I recognize that this is not the only "good" answer, and further that I'm not sure where the boundary between "it's just a machine" and "it's non-biological life" lies.
Perhaps I'm just projecting. A small number of philosophers argue that, if you accept the premise that a sufficiently advanced human society will inevitably run incredibly sophisticated simulations of history and societal evolution, with functionally independent computer agents interacting, learning and evolving, then it is infinitely more likely that we -- you and I and everything around us -- are actually living in one of those simulations instead of being the first "real" human society. After all, for those on the inside of the sim, there would be no way to tell.
With that in mind, developing broad cultural values that simulated societies of a certain level of sophistication should be treated as the functional equivalent of biological life, and not simply shut down when the experiment's over, would be very much in our own best interests.
David Brin wrote a short story, "Stones of Significance," that describes a society dealing with these issues. A major social issue: whether fictional characters deserve a chance to take a turn being "reified" (given flesh-and-blood bodies), and how many computing cycles and how much memory are allocated to a burgeoning population of virtual beings.
The doctrine of vitalism was discredited over the course of the 19th century as biology became formalized. With the new ideas of evolution, genetics and biochemistry, it became easier and easier to view organisms as machines, staggeringly complex machines emerging from blind selection. There is no special substance, outside the realm of empiricism, that sharply divides the continuum between life and non-life.
Vitalism lingers still in the study of the brain and mind, but as these areas became formalized in the 20th century, this last bastion of vitalism (the speculations of Penrose and Chalmers aside) has come under increasing assault. Most serious neuroscientists, philosophers and psychologists believe the mind is an emergent process running within the machine that is the brain. There is no special substance, beyond the realm of empiricism, that sharply divides the continuum between mind and non-mind.
How complex, and in what way, does a machine have to be before we class it as an organism? How complex, and in what way, does it have to be before we class it as sapient? These questions are going to be asked more and more frequently as the century rolls on.
It became easy to see organisms as machines because we were in the industrial age, when everything was a machine. Now, in the digital era, it's easy to see organisms as virtual entities. We may be both things and more, but don't forget the phenomenon of subjectivity.
Many things may emerge from virtual societies (as from any complex dynamic system), but will the characters have true emotions, and will they experience their own subjectivity?
Software is a machine. Networks are machines. I think my point stands.
Perhaps I'm being too broad with my definition of machine as any system with parts that change state, but I tend this way because a lot of philosophy has suffered from a machine definition that is too narrow.
Subjectivity, motivations and emotion (although not necessarily similar to those of humans) will be an essential part of any brain, natural or artificial, that rivals the complexity of mammalian brains. Again, I don't think this invalidates the premise of strong AI or strong AL.
I didn't say that it would invalidate any premise. I just asked a question that I guess we can't answer yet. Acting as if you have emotions is not the same thing as truly experiencing them. Can a machine acquire those emotions just by imitating the human brain? We can't prove it or deny it yet, I guess.
There's no reason to think that complexity guarantees consciousness. It seems likely to me that a certain kind of organization is required for consciousness, as we define it.
If consciousness is not supernatural, then it will be possible to eventually define it and replicate it, using other materials, and alternative designs to that of humans.
That will be a conscious machine.
Correction to the 2nd sentence: It seems likely to me that a certain kind of organization is required for consciousness, as we *experience* it.
Greg Egan has a great run of short stories, in the collection Axiomatic, and a few novels, particularly Permutation City and Diaspora, exploring these ideas.
I agree with Nick. It's not merely a matter of complexity. It's also a matter of organization and structure.
Something might be more complex than the human brain* in a crude quantitative way but still not experience consciousness, let alone human consciousness, because it's not organized in the right way.
In talking about emotions and consciousness, I think we need to generalize from the specific human example. Human emotional response is not exactly the same as chimp emotional response. Human self-awareness is not like the self-awareness of horses. We can verbalize and share our conscious experiences. Horses can't.
But are horses self-aware? Do they experience a kind of consciousness, if not a verbal one?
If we someday build creatures with brains organized in the right way and with sufficient complexity, will they experience emotions? I'd say they'd have to; emotions and motivations are central to conscious thought.
But would they experience emotions like people do? I doubt it. Their biology would be far different from ours. They'd have emotion and motivation but would they have analogs to human fear, humor, joy, rage, loyalty? I really don't know.
This takes us into the old philosophical problem concerning the lion who speaks. Because the lion's biology and experience is so different from ours, would we understand what he had to say?
* For example the universe is more complex than the human brain, but is the universe conscious? Maybe not because it isn't organized in the right way.
I haven't thought much about this, but here's my opinion, for what it's worth: if there were a point in the simulation when it became completely clear that these self-organizing patterns would never develop empathy or ethics, then it would be time to hit the off switch. They wouldn't be "life", they would be a game at best, "zombies" - animated corpses - at worst.
How would you go about determining that, David?
I haven't a clue, Jamais. Thinking about how to do that might be illuminating. Or perhaps I've proposed an impossible test. After all, could we ever conclude of any human being that he or she completely lacks empathy or ethics? (Many names suggest themselves, but only as lame jokes.)
It's impressive how we all in this thread seem to agree on the essential points of this issue, which generates so much discussion in scientific circles.
The main point is that the complexity of a system, plus its kind of organization, may be enough for some kind of self-awareness, feelings and so on to emerge from it.
It's also interesting that we don't have a sure procedure to determine whether some system is really "feeling" or just "acting." It was on the basis of just such an impossibility of proof that, for years, human rights were denied to black people and women under the claim that they didn't have a soul, remember?
I guess the biosphere itself could have some kind of self-awareness, given the degree of its interconnectedness, which is very similar to a brain's.
Not to mention the universe, which seems very deterministic from our perspective, but who knows... who knows what It may be thinking now...