Patrick di Justo is a New York-based science journalist. He is a contributing editor at Wired and a writer for Scientific American, Popular Science, and the New York Times, and he has a weekly radio segment about future science on public radio in New York.
A new study published in the journal Global Ecology and Biogeography is billed as the first real-world test of global warming climate models. It manages to be both discouraging and very enlightening about our current state of understanding of the interactions between climate and ecosystems.
In order to test the current models, University of Oxford researcher Miguel Araújo went back to the early 1970s. He gathered real-world 1970s data on climate and the geographic ranges of various bird species, then used 16 climate models to "predict" the climate and habitat changes for 1991. By comparing those predictions with the real-world data from 1991, Araújo was able to judge the accuracy of each model. What his team found was startling.
Habitat change models are usually a three-step process. First, each species is mathematically linked to its present climate range. Then a climate forecast is produced for some point in the future. Finally, each species is mathematically re-linked with the projected climate. Scientists then forecast whether the species' range will grow, shrink, or even disappear entirely.
Araújo discovered that no single model could accurately predict the 1991 population distribution. For 90% of species, the models could not agree on whether their geographic range would expand or contract. In the small minority of cases (10%) where all the models agreed about the direction of change, they had only a 50% chance of getting that direction right. "It would be just as accurate and a lot less hassle just to toss a coin," says co-author Dr. Richard Ladle.
But there's good news. The accuracy of the predictions drastically increases when different models are compared and used together to create a consensus projection. Using the same data set for British birds, the consensus prediction was shown to be vastly superior to any single model and could predict bird range expansion or contraction with an accuracy of over 75%.
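The consensus approach described above amounts to combining each model's predicted direction of change and taking the majority verdict. Here is a minimal sketch of that idea; the species names and model outputs are invented for illustration, not taken from the study.

```python
def consensus(predictions):
    """Majority vote across models: +1 means the range is predicted
    to expand, -1 means it is predicted to contract."""
    total = sum(predictions)
    if total > 0:
        return "expand"
    if total < 0:
        return "contract"
    return "no consensus"

# Hypothetical example: each list holds one species' predicted
# direction of range change from five different climate models.
model_outputs = {
    "skylark":   [+1, -1, +1, +1, -1],
    "chaffinch": [-1, -1, -1, +1, -1],
}

for species, preds in model_outputs.items():
    print(species, consensus(preds))  # skylark expand / chaffinch contract
```

The intuition is the same as averaging independent noisy measurements: individual model errors partly cancel out, so the ensemble's verdict is more reliable than any single model's.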
--Patrick di Justo
In order to test the current models, University of Oxford researcher Miguel Araújo went back to the early 1970s.
He should have stayed back there and used his knowledge of coming world events to gain power and wealth, then used it to end the world's reliance on oil back when it wasn't so late.
Patrick, was each model run once? You write that the combined output of the models was far more accurate than each alone. That's not surprising, especially for models that may contain stochastic (probabilistic) features. I'm wondering if any one model, run numerous times, became more accurate in the combined average of its results?
This also points to the power of peer review, multiple research teams researching the same subject, the "wisdom of crowds" thesis, and the argument for open-source science.
Ah, so the wisdom of crowds business translates to a wisdom of multiple perspectives combined - which then also supports diversity in teams.
I honestly don't know how many times each model was run.