A rather cool thing has been happening lately in the hot world of computer server farms: product manufacturers are serving up dramatic improvements in energy efficiency -- with scarcely an activist, regulator, or other pressure group to claim credit for it.
Except for customers, that is.
The problem, for the uninitiated, is that server farms -- those massive banks of computers that manage traffic for Web sites, email hosts, and company networks, among others -- are ravenous energy consumers. There are at least nine million servers in the U.S., operating 24/7, providing the bandwidth that allows businesses, individuals, and governments to store and serve up every type of data and media imaginable, from iTunes to IRS forms.
Servers are growing at an astonishing rate -- not just in numbers, but in speed, demanding ever-greater amounts of power. According to the research firm Gartner, there has been a significant increase in the deployment of high-density servers over the past twelve months, leading to huge power and cooling challenges for data centers. The energy needed for a rack of these high-density servers can be between 10 and 15 times higher than for a traditional server environment.
Here's just one amazing factoid, from a study last year by Lawrence Berkeley National Lab:
"A single high-powered rack of servers consumes enough energy in a single year to power a hybrid car across the United States 337 times."
That's not all. Additional power is needed to remove the huge quantity of heat generated by these newer machines. If the machines aren't cooled sufficiently, they can shut down, with potentially devastating consequences to affected businesses, agencies, or other organizations. All told, server farms consume many times more energy than office facilities of equivalent size.
For those who operate server farms, this has become a nontrivial issue. While energy costs represent less than 10% of a typical company's information technology (IT) budget, that could rise to more than 50% in the next few years, says Gartner. For companies like Google, whose massive computing infrastructure, by one estimate, gives it "the largest utility bill in the planet," the push to make servers run cooler and more efficiently has taken on added urgency. According to Google engineer Luiz André Barroso, writing in the September issue of the Association for Computing Machinery's Queue:
The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.
So the IT world is stepping up, with manufacturers of chips (AMD, Intel) and servers (Dell, HP, IBM, Sun) competing as feverishly on energy efficiency as they do on speed and other performance characteristics. (Click on the preceding links of each to see their respective take on energy issues.)
They're collaborating, too. "We’re looking for other companies, especially ones that have been leaders, to start to share what they know," Dave Douglas, VP Eco Responsibility at Sun Microsystems, told me recently. He said Sun plans to post its own energy use per building on the Internet. "It may or may not be useful to lots of people but I get lots of questions, ‘What is a reasonable amount of greenhouse gas emissions for office employees in a certain locality?' That kind of data is really hard to find today. If we start to have everybody sharing their best practices, share where they’re at, share which projects are working and which one’s aren’t, there’s a lot of value to be found in all that information. There’s a very strong parallel to the open-source community."
Electric utilities, for which the gluttonous energy consumption of servers represents a significant potential threat to power plant and grid stress, are getting into the act. Last week, Pacific Gas & Electric in California announced the first-ever utility financial incentive program to support "virtualization projects" in data centers. Virtualization allows multiple applications to run concurrently on computing equipment, thereby enabling customers to consolidate their data centers and remove a large portion of their existing servers. Qualifying PG&E customers can earn a rebate of up to $4 million per project site, based on the amount of energy savings achieved. In addition to the rebate, PG&E customers can expect to save $300 to $600 in annual energy costs for each server removed. Those savings nearly double when reduced data center cooling costs are taken into account. (PG&E may also be the first utility to set up a dedicated Web page focusing on the needs of high-tech companies.)
It should be noted that the politicians haven't been entirely removed from the picture. Last July, the U.S. House of Representatives approved a bill that calls for a six-month U.S. EPA study on data center efficiency. (The bill was referred to the Senate Committee on Energy and Natural Resources, where it sits awaiting approval from that body.) The specter of congressional scrutiny led an industry consortium called Standard Performance Evaluation Corporation in May to establish a set of benchmarks for servers. The consortium -- whose members include HP, IBM, and Sun -- are hoping the benchmarks will allow them to establish uniform energy-efficiency goals.
Governmental interest notwithstanding, all of the technological progress to date has come through voluntary action on the part of chip and server manufacturers, the result of healthy competition spurred by pressure by customers to solve a burning (and costly) problem.
It serves as an exemplary model of how industry players, simultaneously innovating, competing, and cooperating, can create profitable environmental solutions themselves -- a model those in other sectors would do well to copy.
Links to the Rocky Mountain Institute Data Center Charette (design conference) in 2003. The event focussed mainly on reducing power consumption. The intro of the final report is free, the report itself is $20, alas.
One another good initiative is the blackbox - a data centre in a shipping container from Sun.This reduces power costs (due to design decisions) and space requirements with added flexibility.
Note also the rise of powerful chips that do not need active cooling eg. the VIA 1 Ghz chip, used in some thin-clients - this gives an 80% saving in electricity usage and also reduces the heat output so reducing the need for cooling - can these be an alternative to end-user PCs (ie no disk drive, all our data held on the net), or as low-end blade servers? Look for example at the VXL itona range of thin clients
Seems the answers may come from the Laptop arena, where cooling is in the forfront. And as someone in a previous post was hinting towards, but more specifically, ""Flash" memory. Forward thinking might even suggest a time when data will be transfered via "laser to glass" technology, ie. again like "flash" no moving parts. And perhaps this technology will bleed over to processor design technology. A paradigm shift is needed as computing demands worldwide rise at an increasing rate.
It's not just datacentres that are burning up the Watts. I see that 7 of the 10 most powerful supercomputers in the world are in the US - http://www.top500.org/lists/2006/11 and that may well be repeated for much of the rest of the list of 500.
Looks like some interesting engineering opportunities for the right minds.
I think that Dave Douglas hits on an incredibly important point which I've been mulling about and asking different people on occasion: Does being green in a business context change the nature of business?
Traditionally it's all about beating the competition, but in the case of green businesses (wind power, for example), there are some core values at stake which contradict no-holds-barred competition. If you're in business because you want to do good for the environment, what are you accomplishing if only your business is successful at being green? True, some businesses probably have another agenda at stake: making money. But making money and doing good for the environment both arise out of increasing efficiency.
My sense is that information exchange is a mandatory component to a business's green strategy. Dave Douglas seems to agree with this line of thinking based on his statements about "open-sourcing" information. As for that competition issue, sharing information doesn't eliminate competition; it just shifts the focus of competition to different areas, such as customer service.
Getting rid of Disk Drives is a good idea, but we shouldn't replace them with Disk Drives on Cargo Containers that we access over the internet... that would be much more inefficient. A better way to go about it is to simply just go on the course we are on now, and that is Solid State hard drives. You can already get Flash Hard drives in 20 and 40 gigs now. RAM access speeds at 1/10th the power consumption. It's still very expensive now though.
Now it seems there is the beginning of a "green 500" for supercomputers - see http://green500.org/Home.html
Given electricity prices at present this could soon become a big deal for users such as universities.