Green Computing Update, Part 1: Data Centers
Jeremy Faludi, 7 Nov 07

It's been a while since we've done a comprehensive article on green computing. So much is happening in this industry that it won't all fit into one article. With this first update, I'll focus on data centers; next week's second update will focus on green personal computers, components, and services.

Before getting started, though, it must be said that greening the computer industry is touching off an unprecedented level of cooperation and information-sharing among companies, government, and laboratories. The Server Specs blog recently interviewed industry consultant Deborah Grove, who said it was unlike anything she'd seen in 24 years in high tech. "There is increased transparency... Executives are offering up what worked and what didn't work instead of everybody having to reinvent the wheel," said Deborah. "It's tangible in the sense of better products... Organizations are taking on the responsibilities that they can deliver on -- not worrying about turf, just trying to get the job done."

Deborah is largely referring to the Green Grid, a consortium of industry leaders (AMD, Intel, Dell, HP, IBM, Sun, and others who are normally competitors) formed to share data and strategies for greener data centers. The Green Grid's membership also includes the Pacific Gas and Electric Company (better known as PG&E), and the group recently announced a collaboration agreement with the U.S. Department of Energy.

Data Centers

Data centers (also called server farms) are where companies like Google or Amazon or internet service providers locate the hundreds or thousands of computer servers that provide their online services. As Worldchanging's Joel Makower has written before in a couple of great articles, data centers use massive amounts of electricity; large ones can draw megawatts of power, with each square meter using as much power as an entire average US home. An EPA report says data centers consume 1.5% of the total electricity used in the US.

It's not just the computers themselves that use all this power: the combined heat output of all these servers, hard drives and network gear is so large that massive air conditioning is required to keep it all from overheating. "Cooling is about 60 percent of the power costs in a data center because of inefficiency," said Hewlett Packard executive Paul Perez in Data Center News. "The way data centers are cooled today is like cutting butter with a chain saw." Cooling capacity is often the limiting factor of how big these systems can be -- I've talked with more than one engineer whose data center facility sat half empty or more; even though there was plenty of room for more servers, the building's air conditioning was maxed out.


How can data centers get greener?

Lawrence Berkeley Labs has published a white paper titled Best Practices for Data Centers. Researchers measured 21 facilities and recommended strategies that they found most successful. These include several HVAC (air conditioning) improvement strategies, water cooling, efficient power supply and conversion, and on-site power generation. Other useful design strategies the paper does not mention are virtualization and "blade" servers. A paper at GreenIT titled Greening the Data Center: A 5-Step Method actually lists many more than five strategies, similar to those suggested by the LBL, but a bit newer and written for the layman. One step specifically emphasized is getting to know your facilities managers, as they are the ones who know about your HVAC system and energy bills. CIO.com also has a list of six strategies, including not letting your cabling block the flow of your air conditioning system, and moving power and cooling systems outside the data center -- even outside the building.

Water cooling is both more efficient than air cooling and able to handle higher heat loads, simply because water conducts heat far better and has much higher thermal mass than air. It's been slow to catch on because administrators are paranoid about leaks (water and electronics certainly don't mix well), but systems are available now that have been proven reliable. IBM and HP have water-cooled server racks, and Knurr's even won a design award. The Pacific Northwest National Lab has gone further, proposing cooling via liquid metal so that the fluid can be pumped magnetohydrodynamically, with no moving parts.
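To put rough numbers on that claim, here's a minimal back-of-the-envelope sketch in Python. The heat load, temperature rise, and property values are textbook approximations I'm assuming for illustration, not figures from the article; it simply compares the coolant flow needed to carry the same heat away with air versus water.

```python
# Rough comparison of air vs. water as a coolant for a 20 kW server rack.
# All values are illustrative assumptions, not figures from the article.

heat_load_w = 20_000.0   # heat to remove, in watts (hypothetical rack)
delta_t = 10.0           # allowed coolant temperature rise, in kelvin

# Volumetric heat capacity = density * specific heat, in J/(m^3*K)
air_vol_heat_cap = 1.2 * 1005.0        # ~1.2 kg/m^3 * ~1005 J/(kg*K)
water_vol_heat_cap = 1000.0 * 4186.0   # ~1000 kg/m^3 * ~4186 J/(kg*K)

# Required volumetric flow rate: flow = heat / (volumetric heat capacity * delta_T)
air_flow_m3_per_s = heat_load_w / (air_vol_heat_cap * delta_t)
water_flow_m3_per_s = heat_load_w / (water_vol_heat_cap * delta_t)

print(f"Air flow needed:   {air_flow_m3_per_s:.2f} m^3/s "
      f"(~{air_flow_m3_per_s * 2118.88:.0f} CFM)")
print(f"Water flow needed: {water_flow_m3_per_s * 1000:.2f} L/s")
print(f"Water carries ~{water_vol_heat_cap / air_vol_heat_cap:.0f}x more heat "
      "per unit volume at the same temperature rise")
```

For the assumed 20 kW rack, the air system needs on the order of thousands of cubic feet per minute while the water loop needs less than half a liter per second, which is why water can handle much denser heat loads.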

Good power conversion minimizes the losses that occur when converting AC power to DC and back; this might not sound important, but it's one of the biggest inefficiencies in the system currently. In a server farm, most power coming from the wall outlet gets converted from AC to DC in an uninterruptible power supply (UPS) to safeguard against power outages or spikes; but then it is converted back into three-phase AC power and sent to the server racks, at which point it is once again converted from AC to DC, usually at the power supply for each server. Losses at each stage can be from 5 percent to 20 percent or more, and traditional power supplies stay on whether or not the components they power are actually being used.

Centralizing the power conversion can radically improve system efficiency, partly by eliminating the re-conversion from DC to AC and back to DC, and partly because a single large, efficient power supply can be cost-competitive with an inefficient one, whereas the small efficient power supplies found in individual computers still cost significantly more than inefficient ones. Every gain in power conversion efficiency pays off twice, since reducing power loss also reduces the heat load on the air conditioning system.
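As a rough illustration of why that conversion chain matters, here's a minimal sketch comparing the end-to-end efficiency of a conventional UPS-to-rack-to-server chain with a single centralized conversion stage. The per-stage efficiency figures are hypothetical, chosen from the 5-20 percent loss range quoted above.

```python
# End-to-end efficiency of a chain of power conversions is the product of the
# stage efficiencies. Stage figures below are hypothetical, picked from the
# 5-20 percent loss range mentioned in the article.

def chain_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Conventional chain: AC->DC in the UPS, DC->AC out of the UPS,
# then AC->DC again in each server's own power supply.
conventional = chain_efficiency([0.90, 0.90, 0.80])

# Centralized conversion: one large, efficient AC->DC stage feeding the racks.
centralized = chain_efficiency([0.92])

it_load_kw = 500.0  # hypothetical IT load
print(f"Conventional chain efficiency:    {conventional:.1%}")
print(f"Centralized conversion efficiency: {centralized:.1%}")
print(f"Power drawn for a {it_load_kw:.0f} kW IT load: "
      f"{it_load_kw / conventional:.0f} kW vs {it_load_kw / centralized:.0f} kW")
```

And since every watt lost in conversion shows up as heat, the gap widens further once the extra air-conditioning load is counted.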

Virtualization is a way of making one computer behave like several computers. Anyone who uses Parallels software to run Windows on a Mac has used virtualization and knows one of its advantages: the virtual machine is isolated from the rest of the computer, so even the worst blue-screen-of-death software meltdown won't harm anything outside the virtual machine. Internet service providers also like virtual machines because they can give users bite-sized administrative domains without maintaining buildings full of bite-sized hardware. They're useful for green computing, too, because they consolidate what would ordinarily be several CPUs, motherboards, disks, and so on into one machine. This reduces material use not only by letting several virtual machines share one physical machine, but also because a large virtual machine can be composed of many smaller, older physical machines, allowing hardware to remain in use longer. Virtualization also reduces energy use, because many components draw the same amount of power whether they're being used or sitting idle.
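Here's a minimal sketch of that energy argument. All of the server counts, utilization figures, and wattages are hypothetical; the point is that because an idle server draws most of its peak power, consolidating lightly loaded machines onto fewer virtualization hosts saves far more than the raw utilization numbers suggest.

```python
# Hypothetical before/after estimate of consolidating underused servers onto
# virtualization hosts. All numbers are illustrative, not from the article.

import math

num_physical_servers = 100   # lightly loaded standalone servers
avg_utilization = 0.10       # each runs at ~10% of capacity
idle_power_w = 200.0         # power drawn even when idle
peak_power_w = 300.0         # power drawn at full load

def server_power(utilization):
    """Assume power scales roughly linearly between idle and peak."""
    return idle_power_w + (peak_power_w - idle_power_w) * utilization

before_kw = num_physical_servers * server_power(avg_utilization) / 1000.0

# Consolidate the same total work onto hosts targeted at ~70% utilization.
target_utilization = 0.70
total_work = num_physical_servers * avg_utilization
hosts_needed = math.ceil(total_work / target_utilization)
after_kw = hosts_needed * server_power(target_utilization) / 1000.0

print(f"Before: {num_physical_servers} servers drawing {before_kw:.1f} kW")
print(f"After:  {hosts_needed} hosts drawing {after_kw:.1f} kW")
print(f"Saving roughly {(1 - after_kw / before_kw):.0%} of server power "
      "(before counting the reduced cooling load)")
```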

Blade Servers are a relatively new architecture for server racks, where each individual server is less self-contained and more dependent on the rack for power conversion, cooling, networking, and other elements that have traditionally been redundantly located in every server in a rack. This makes for both a more compact system and a more efficient system, because it allows for things like the centralized power conversion and water cooling mentioned above.

The new architecture is taking off, and even has its own industry association promoting it: Blade.org, which has touted its systems' efficiencies.

Dave Ohara, writing for Microsoft TechNet, takes more of a business and structural approach than one focused on specific system design strategies (aside from power metering). Here's his list of ten qualities of efficient data centers:


1. Meters are used to break down energy usage to the level of individual components (such as a 2U server, a 4U server, a switch, a SAN, and a UPS) and to determine which business units are charged for the power those components use (see the sketch after this list).
2. Energy usage is continuously monitored to determine peak and low energy demands.
3. Energy capacities are monitored on a total datacenter level all the way down to circuits to make sure all circuits are within acceptable limits.
4. The energy savings plan is documented and rewarded.
5. The energy savings plan is reviewed regularly and corrective action is taken to address failures.
6. How costs are charged back to business units is used to shape behavior, encouraging energy savings among independent business units; this must be driven at the executive level.
7. CPU throttling is enabled on the servers, and the performance lab measures the range of power consumed under a variety of loads.
8. Thermal profiling is used to identify hot spots and overcooling.
9. IT performance engineering includes energy efficiency measurements.
10. Feedback of live data is available to individual organizations, allowing them to react appropriately.
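To make item 1 (and the chargeback idea in item 6) concrete, here's a minimal sketch of what breaking metered energy down by component and rolling it up by business unit might look like. The component names, wattages, business units, and electricity rate are all made up for illustration; they are not from Ohara's article.

```python
# Hypothetical chargeback report: metered power per component, rolled up by the
# business unit that owns each component. All data below is invented.

from collections import defaultdict

# (component, owning business unit, average metered draw in watts)
metered_components = [
    ("2U web server rack",  "web",       3200),
    ("4U database server",  "finance",    650),
    ("core switch",         "shared-it",  400),
    ("SAN array",           "finance",   1800),
    ("UPS conversion loss", "shared-it",  900),
]

RATE_PER_KWH = 0.10     # assumed electricity rate, $/kWh
HOURS_PER_MONTH = 730   # average hours in a month

usage_by_unit = defaultdict(float)
for name, unit, watts in metered_components:
    kwh = watts / 1000.0 * HOURS_PER_MONTH
    usage_by_unit[unit] += kwh

print("Monthly energy chargeback:")
for unit, kwh in sorted(usage_by_unit.items()):
    print(f"  {unit:10s} {kwh:8.0f} kWh   ${kwh * RATE_PER_KWH:8.2f}")
```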


He also points out that newer server software often has power-throttling capability, which alone can provide significant benefits: "Windows Server [2008] with default energy savings enabled could reduce consumption by up to 20 percent" in some systems, he claims. We've also previously mentioned the company Verdiem, which makes power management software for IT systems. Verdiem claims its software has saved customers a total of over $43 million and eliminated almost 369,000 tons of CO2 emissions.

According to Data Center News, "HP plans to practice what it preaches in its own operations by consolidating 85 of its data centers worldwide into just six larger data centers, using virtualization, blade servers, combining applications and smart planning". Fast Company also mentioned IBM's Project Big Green, "a billion-dollar-a-year investment in technology to double the efficiency of servers that currently can run at only 30 percent capacity--a move that could save clients 40% in IT-power costs."


How green is your data center?

These are still early days, so it's hard to know how green your data center is compared with others. Lawrence Berkeley Labs has the best data so far (the paper above cites energy-use data for 21 centers, and more work has been done since then), and the Green Grid is developing a set of measurements for determining how efficient your data center is. Currently it defines only Power Usage Effectiveness and its reciprocal, Data Center Infrastructure Efficiency, but it plans to break these down into more granular measurements like Cooling Load Factor and Power Load Factor, and perhaps even efficiency standards for all the different types of gear in a data center (switch gear, chillers, uninterruptible power supplies, etc.), much as has been done for appliances with EnergyGuide labels.
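The two published metrics are simple ratios: Power Usage Effectiveness (PUE) is total facility power divided by the power actually reaching the IT equipment, and Data Center Infrastructure Efficiency (DCiE) is its reciprocal. A minimal sketch with made-up readings:

```python
# Green Grid metrics from two facility-level power readings (values invented).

total_facility_power_kw = 1800.0  # IT load plus cooling, lighting, conversion losses
it_equipment_power_kw = 1000.0    # power delivered to servers, storage, network gear

pue = total_facility_power_kw / it_equipment_power_kw    # Power Usage Effectiveness
dcie = it_equipment_power_kw / total_facility_power_kw   # Data Center Infrastructure Efficiency

print(f"PUE:  {pue:.2f}  (1.0 would mean every watt goes to IT gear)")
print(f"DCiE: {dcie:.0%} (share of facility power doing useful IT work)")
```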


The bigger picture

Clearly, greener data centers are a crucial part of the green computing world. However, they are still vastly outnumbered by personal computers. The Tech Target paper quoted above notes that


For every data center machine, there can be from 50 to more than 250 end-user computers. The average active, powered-on desktop computer consumes 100 to 300 watts of electricity. Much of the electricity that comes through the power cord of the computer is turned into heat and power conversion waste through the PC power supply.

Next week I'll talk about power supplies and other elements of green computing on the level of personal machines.



Comments

Good article, Jeremy, especially in capturing not only the procedures and facts but also the cooperative spirit among the people in the sector.


Posted by: Brian Hayes on 7 Nov 07

Thanks, Jeremy, for the very informative and timely essay on the state of the art of data center power efficiency. When you say "Centralizing the power conversion ...", I presume you mean running AC-powered motors to run DC generators that directly power the electronics and also charge the batteries for the UPS (supported by an internal-combustion-engine-driven generator). Also, these heavy heat loads are amenable to passive geothermal heat exchange.

I look forward to reading your essay on individual PC power efficiencies.


Posted by: Subbarao Seethamsetty on 8 Nov 07

Virtualization and the use of blades are really a huge win-win. My organization is currently undergoing a project we call HA/DR (High Availability/Data Redundancy), and the key feature of the project is to take all of our services and put them onto blades, utilizing virtualization to partition the hardware to appropriately host the software services. Our main motivation for doing this is greater reliability for our customers; we have virtually no room for downtime. So in addition to giving our customers the service they need, we are also driving our efficiency way up. The green aspect is mainly a side effect, so it is nice to know that in improving our operations, we are also reducing our impact on the environment.

Are there any other scenarios out there, not just in the IT sector, where the most readily available and cost-effective solution is also the most sustainable/greenest?


Posted by: Rob W on 8 Nov 07

Nice to see the opportunities for greening data centers so expertly and thoughtfully summarized. Great info, good links as well.


Posted by: Pedro H on 8 Nov 07

Interesting how virtualization is being discussed as the hot new thing by current Unix and PC users. These concepts have been part of daily life for us mainframe programmers since the early '80s with IBM's VM (Virtual Machine) operating system. IBM also had LPARs (logical partitions) for flexible splitting of CMOS-chip hardware, and combined with their loosely and tightly coupled CPU complexes (late '80s to early '90s), we had one of the most efficient computing installations. These CMOS chips have (even today) one of the best assembler languages ever designed (in the sixties; only the PowerPC assembler from 1989 comes even close), and assembler language supports the machine language of a chip. The reason to mention the assembler is to indicate the robustness of the hardware and OS (MVS, created by Fred Brooks, still the best today) of the computing environment.

All this experience is valuable if it can serve the needs of today's large installations like Yahoo, AOL and Gmail. The blame and responsibility rest squarely on IBM for not leveraging its historical assets to provide efficient computing environments. The reason, I personally believe, is that hot-shot computer science graduates squirrel away any information with the word "mainframe" in it into an NRB (not relevant bucket), and these whiz kids, now in their second generation and occupying influential positions, miss the point entirely.

It is possible that IBM itself is so layered with these computer science whiz kids that it is blind to its true assets. As an example, very few IBMers have ever heard of an operating system called TPF (one of IBM's operating systems). Like the jet engine, it is 30 years old and remains the best high-performance, high-volume, high-availability system TODAY. There is nothing on the planet that can meet its capabilities. Shame on IBM for not bringing it out into the open as an alternative to these ridiculous server farms that stretch as far as the eye can see.

I am hoping that there are some in the Worldchanging.com readership who can relate to and understand the impact of the points raised in this note. There is no need to "change" all the time; sometimes repurposing efficient existing systems will do as well.


Posted by: Subbarao Seethamsetty on 9 Nov 07

I hate to say it, but the simple solution would be to locate any new data centers either on very high mountains that experience low average temperatures or plant them in the Arctic. Actually, simply putting one in northern Canada or Alaska would probably save a great deal on air conditioning costs. It always amazes me that the HVAC experts can't seem to figure out how to draw cool, clean(?), FREE air in from outside instead of powering up the air conditioning system. I'm sure a simple fan (or, heaven forbid, a simple convection system using wind) would be a lot cheaper than running a compressor and pumping cold gas all over the place. Maybe we need an X-Prize for this!


Posted by: Cybermynd on 9 Nov 07

Building managers, HVAC designers and architects should be brought into the loop on this issue. The "waste" heat could be useful for maintaining building temperatures and/or energy recovery if considered as part of the entire building system. I suspect that in a lot of locations the data server room requirements are retrofit rather than built into the system as a whole, which is why geothermal is not typically considered when it probably should be.


Posted by: hphill on 9 Nov 07

Great article! One player in the game that wasn't mentioned: APC (a founding Green Grid member) has been championing right-sizing your data center using modular, scalable infrastructure for years. Their approach has always been to deploy only what you need as you need it. (In the way of full disclosure, I am an employee.)


Posted by: Heather on 11 Nov 07

As mentioned above, putting data centers in northern Canada has a lot of merit, and co-locating a greenhouse heated by the waste heat will increase efficiencies. And yes, it amazes me as well that we never consider drawing cool air in for refrigeration to reduce the electric load. Optical fiber cables today make remote Canadian locations a viable option. Those regions suffer high unemployment rates, so data center facilities would relieve some of the economic burden. Newfoundland, New Brunswick, northern Quebec (which could additionally host data centers from France) and northern Ontario are excellent choices.


Posted by: Subbarao Seethamsetty on 11 Nov 07

Exactly - what about heat-exchange technology, using the excess heat to warm onsite greenhouses (a nice place to eat lunch in, too, and to grow foods the cafeteria can sell or provide) or to heat the building itself? What about siting datacenters in colder locations? What about channelling outside air inside?

Related news: the green Top 500 list (ranking the most energy-efficient of the 500 fastest supercomputers) is due out, http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=mainframes_and_supercomputers&articleId=9045138&taxonomyId=67&intsrc=kc_top

And check this story out, for news of a pedal-powered supercomputer and some very energy-efficient new designs too, http://top500.org/blog/2007/10/27/human_powered_supercomputing


Posted by: zupakomputer on 15 Nov 07

Just an additional future-proofing concern - obviously things like power sources & delivery remain the same, but worth mentioning what's on the horizon (in some cases already here) - quantum computing & computers using superconductive fluids for information transfer.

(Re: mainframes & virtualisation above - I'm doing a computer course just now, but I never forgot about mainframes. I notice the older tech, and tape storage, are still very much preferred. As a design structure it shows up a lot, but it's referred to with different terms (e.g. an internet cafe where the input nodes are just a keyboard, monitor and mouse). Assembly to machine language (and up to the higher-level languages) has always concerned me, in terms of its role in how efficiently instructions are being carried out. There's a lot of conversion going on there, just as there is with the AC to DC to AC back to DC conversions. I'd like to see more focus on OSs that call on the hardware more directly.)


Posted by: zupakomputer on 17 Nov 07


