Genetic algorithms are sort of the ur-biomimetic process. While other examples of biomimicry can emulate certain aspects of natural design or the functions of particular organisms or ecosystems, genetic algorithms reproduce evolution -- arguably the core biological process.
Genetic algorithms (GAs) have been used to make some pretty unusual products, and we often think of them as a way of coming up with solutions to otherwise intractable problems. But genetic algorithms can also have some fairly down-to-earth applications. And those applications, in turn, suggest some pretty radical possibilities for the Bright Green future.
Australia-based Optimatics is one of several companies trying to find practical ways to apply genetic algorithms. But Optimatics is specialized: it uses genetic algorithms to improve the efficiency of water provision.
Figuring out where to lay pipes and water routes in growing urban environments is tricky, and poorly-optimized water networks can be wasteful and costly. There are a number of traditional techniques (many involving simulations) for figuring optimal layouts, but Optimatics argues that the GA approach can provide up to 20% more efficient results than the best simulation-based system. The Optimatics website lists numerous case studies, as well as a detailed (and quite good) overview of how genetic algorithms work. It's a bit technical, but not overly so, and it gives a good feel for how the same processes that guide evolution can be applied to something as prosaic as plumbing.
But this has implications for more than plumbing. Regardless of whether Optimatics is actually good at what they do -- and this post isn't meant as an advertisement for them by any means -- the use of genetic algorithms for optimizing networks makes a great deal of sense. One of the key features that GAs have going for them is speed: thousands of generations of designs can come and go in a matter of minutes or hours. Urban networks, whether for water, power, communication, or even traffic, are not static; even if structural changes happen over the course of months or years, they do change. That means that a design considered optimal is likely to become sub-optimal in short order.
Optimization is a frequent, arguably near-constant, need. And genetic algorithms provide a good way of meeting that need. This will become increasingly important as we move towards greater use of "smart grids" and microgeneration for our power networks. With home solar, co-generation, plug-in hybrids, wind and more, individual nodes will shift between being net producers and net consumers of electricity, sometimes in relatively short order. Moreover, new sources and sinks for power will frequently alter the structure of the network. Rather than relying on established layouts and connection patterns, a GA-based smart grid could in principle come up with new optimizations as fast as the network structure changes.
Another important aspect of genetic algorithms is that they allow evolution to be replayed. It may be that a seemingly-optimal network (water, power, etc.) is actually a "local maximum" -- not the best possible structure, but merely the best of all broadly similar designs. A sophisticated GA approach would both push the evolution of the previous "optimized" network to fit the new conditions, and start over from scratch to see if there's a new, better-optimized layout unlike the previous structure, now more visible due to network changes.
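To make the idea concrete, here's a toy sketch of the process -- emphatically not Optimatics' actual method, just a generic GA with a made-up pipe-network cost function and invented numbers. It shows the three evolutionary operators (selection, crossover, mutation) and the "replay" idea: running evolution several times from scratch and keeping the best result.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# Hypothetical problem: choose a diameter for each of 8 pipe segments so
# the network meets a (made-up) demand figure at minimum cost.
PIPES = 8
DIAMETERS = [1, 2, 3, 4]   # available pipe diameters (arbitrary units)
DEMAND = 18                # total capacity the network must supply

def fitness(layout):
    """Lower is better; layouts that can't meet demand are penalized."""
    capacity = sum(d * d for d in layout)      # capacity grows with diameter^2
    cost = sum(d * 3 for d in layout)          # bigger pipes cost more
    penalty = max(0, DEMAND - capacity) * 50   # penalize unmet demand
    return cost + penalty

def evolve(generations=200, pop_size=40, mutation_rate=0.1):
    pop = [[random.choice(DIAMETERS) for _ in range(PIPES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, PIPES)   # single-point crossover
            child = a[:cut] + b[cut:]
            for i in range(PIPES):             # mutation
                if random.random() < mutation_rate:
                    child[i] = random.choice(DIAMETERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# "Replaying" evolution: independent restarts help escape local maxima.
best = min((evolve() for _ in range(5)), key=fitness)
print(best, fitness(best))
```

Thousands of candidate layouts come and go in a fraction of a second here; a real network model would make each fitness evaluation far more expensive, but the structure of the search is the same.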
In short, genetic algorithms are potentially useful tools for keeping up with a rapidly-changing context. As we learn more about how best to manage (and even "terraform") our fragile ecosystem, we will likely find that genetic algorithm-based environmental models are critically important to our overall success.
"One of the key features that GAs have going for them is speed"
In fact, GAs are among the slowest methods of optimization, and this is a major weakness. Like hill-climbing and other local-optimization methods, they can get stuck in a local rut and miss the best overall solution. They also require clever (or serendipitous) feature selection and a sophisticated allele-to-feature mapping to make sure crossover and mutation work reasonably.
See my Genetic Algorithm tutorial at: http://www.paraschopra.com/tutorials/ga/index.php
I agree with Chopra, but it's important to remember that there are other encoding representations for GAs, not only binary strings, which may not be the most useful for finding a (sub-)optimal solution.
And since the article talks about the dynamics of the problem being solved, we should also consider that there are other Evolutionary Algorithm methods that can deal with an optimum that moves as the time/variables of the problem change. One such method is Evolution Strategies (ES), an Evolutionary Algorithm that began in Germany in the 1960s and has been applied very successfully to aerodynamic optimisation problems and other real-world problems. Evolution Strategies were designed and developed on a very solid basis, using key concepts (global convergence, self-adaptation) to face optimisation problems and to emulate artificial evolution in a very elegant way.
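A minimal sketch of the self-adaptation idea mentioned above -- a generic textbook-style (1, λ) ES on a toy sphere function, not any particular historical implementation: each individual carries its own mutation strength sigma, which is mutated and selected along with the solution itself.

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def sphere(x):
    return sum(v * v for v in x)          # simple test function, minimum at 0

dim, lam = 5, 10                          # problem dimension, offspring count
tau = 1 / math.sqrt(dim)                  # learning rate for the step size
parent = [random.uniform(-5, 5) for _ in range(dim)]
sigma = 1.0                               # initial mutation strength

for _ in range(300):
    offspring = []
    for _ in range(lam):
        # self-adaptation: mutate the step size log-normally, then use the
        # mutated step size to perturb the solution vector
        s = sigma * math.exp(tau * random.gauss(0, 1))
        child = [v + s * random.gauss(0, 1) for v in parent]
        offspring.append((child, s))
    # comma selection: the best offspring replaces the parent outright
    parent, sigma = min(offspring, key=lambda cs: sphere(cs[0]))

print(round(sphere(parent), 6))
```

The step size shrinks automatically as the population closes in on the optimum -- no external schedule required, which is part of what makes self-adaptation elegant.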
I personally prefer Simulated Annealing and, more generally, Metropolis-Hastings samplers. Very simple. Nice convergence properties.
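For comparison, here's a minimal simulated-annealing sketch (my own toy example: minimizing a bumpy one-dimensional function with several local minima) showing the Metropolis acceptance rule in action:

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

def cost(x):
    # bumpy toy objective with several local minima
    return x * x + 10 * math.sin(3 * x)

def anneal(start=5.0, t0=10.0, cooling=0.999, t_min=1e-3):
    x, t = start, t0
    while t > t_min:
        candidate = x + random.gauss(0, 0.5)   # propose a random nearby move
        delta = cost(candidate) - cost(x)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-delta / T), which shrinks as T cools
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling                           # geometric cooling schedule
    return x

# a few independent restarts, keeping the best result
best = min((anneal() for _ in range(3)), key=cost)
print(round(best, 2), round(cost(best), 2))
```

Early on, the high temperature lets the walker climb out of shallow valleys; as the temperature drops, it settles into a deep minimum.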
Physical implementation is straightforward. I'll bet this is an algorithm nature uses all the time; we just haven't noticed yet. Your brain, for instance. Your brain is very Markov.
Well, there are many algorithms for solving a myriad of problems, and I think that if a problem can be solved by classical methods (Operations Research, Markov Chains, gradient-based techniques, Simulated Annealing), then we do not need to try another algorithm on it (no need to reinvent the wheel). But at the same time, there are plenty of problems for which those classical methods do not achieve good solutions, and there we have a great opportunity to experiment with new methods, like Evolutionary Computation (Genetic Algorithms, Evolution Strategies, etc.), and to verify the solutions those new methods give us. I find Markov Chains a little strange because their principal definition tells us that the transition to the next state depends only on the current state -- that is, the system is memoryless -- which can be very dangerous when applied to a system with a high degree of dynamics/non-linearity. So, I don't think our brains are Markovian.