
Green data centers in four steps

2015.05.01 By Zhang Fan, Huawei Data Center Senior Advisor


Huawei has developed four steps to help operators plan, construct, and manage a new generation of green data centers. These steps will not only boost operating revenue for enterprises, they will also benefit the planet.

The growth of cloud computing has set off a new wave of data center (DC) construction across the globe. Statistics show that DC electricity consumption and carbon emissions currently account for more than 1% of the global total. As such, building green, energy-efficient data centers is not just a choice that enterprises must make; it is also a major social responsibility that enterprises and the industry as a whole must assume. As a promoter and practitioner of green, energy-saving technology, Huawei has years of experience constructing green data centers and has proposed the "Four Steps" lifecycle approach to energy savings to help operators build customized green data centers.

Step 1: Effective construction goals

To a large extent, the degree to which a DC is green and energy-efficient is determined in the early planning phase. In DC planning, the builder must determine the construction scale, standards, site selection, and phased planning, and specify important indicators such as usability, power density, and power usage effectiveness (PUE). Of these indicators, PUE, the ratio of total facility energy to the energy delivered to IT equipment, is the most important for measuring energy efficiency. In general, a lower PUE indicates higher energy efficiency. The PUE value is tied to construction scale, usability, power density, and of course, the natural climate of the local area. However, the greatest determining factor is site selection.
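The PUE ratio itself is simple to compute: total facility energy divided by the energy that actually reaches IT equipment. The sketch below uses illustrative figures that are not from the article:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by the
    energy delivered to IT equipment. A value of 1.0 would mean every
    kilowatt-hour goes to IT; lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures (not from the article): a facility drawing
# 12,200 MWh in a year, of which 10,000 MWh reaches IT equipment.
print(round(pue(12_200, 10_000), 2))  # 1.22
```

Everything above 1.0 is overhead, mostly cooling and power conversion losses, which is why the following steps focus on those two subsystems.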

The builder should first consider areas that stay cold for much of the year, or areas rich in water resources, such as riverbanks and coastal sites. The DC can then tap into natural cooling sources and reduce the system’s energy consumption. Site selection must also account for local humidity, air cleanliness, acidity, and similar factors to avoid the additional energy consumed by filtration and humidification.

For example, when helping China Mobile build its pilot data center in Harbin, Huawei weighed factors including the local climate and air quality and adopted an indirect heat exchange scheme with rotary air conditioning. The DC draws on cold outdoor air while preventing direct air exchange between the machine rooms and the outdoors, effectively avoiding issues such as winter humidification and acidic gas corrosion. Six months of third-party testing showed that the data center’s average PUE was 1.22, and the project received that year’s "DCD Greenest Data Center" award.

Step 2: Suitable cooling schemes

The use of natural cooling sources is a major development in green, energy-efficient data center construction. To fully utilize air, surface water, groundwater, and other free sources of cooling, engineers have designed many new types of cooling systems, such as direct ventilation, water source heat pumps, and direct cooling and evaporation from river water or seawater. Several of these have already demonstrated solid application potential. In areas with good air quality and suitable humidity, cool air itself can be used directly as a free cooling source. One example is the large-scale data center that Huawei designed for a customer in Istanbul, whose cooling system combines direct ventilation with air cooling. Because the data center requires only 215 hours of mechanical cooling per year, it achieves an average annual PUE of 1.28.
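A rough way to see how free-cooling hours drive annual PUE is a time-weighted average over the two cooling modes. In the sketch below, the per-mode PUE values are hypothetical illustrations; only the 215 mechanical-cooling hours come from the Istanbul example above:

```python
def annual_average_pue(free_cooling_pue: float,
                       mechanical_pue: float,
                       mechanical_hours: float,
                       hours_per_year: float = 8760.0) -> float:
    """First-order estimate of annual average PUE: a time-weighted
    average that assumes constant IT load and one representative PUE
    per cooling mode. Real facilities vary continuously."""
    free_hours = hours_per_year - mechanical_hours
    return (free_cooling_pue * free_hours
            + mechanical_pue * mechanical_hours) / hours_per_year

# Hypothetical per-mode PUEs of 1.27 (free cooling) and 1.68
# (mechanical cooling); 215 mechanical hours as in the Istanbul case.
print(round(annual_average_pue(1.27, 1.68, 215), 2))  # 1.28
```

The estimate makes the design lever obvious: because mechanical cooling runs for so few hours, the annual figure sits almost entirely at the free-cooling PUE.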

To extend the periods in which natural cooling sources can be used, the supply air temperature for server cabinets needs to be raised, which places higher demands on IT equipment. Over the last decade, major ICT vendors have revamped their product designs so that IT equipment can operate at supply air temperatures of 35 degrees Celsius or higher.

For air conditioning terminals, the trend is toward shorter air supply paths and lower heat exchange losses. Heat exchange between air conditioning systems and IT equipment will progress from room-level air conditioning to confined and isolated hot/cold aisles. For operations with relatively high power density, row-level air conditioning is used to shorten the air supply path and reduce losses. Cabinet-level schemes such as heat-pipe back panels and cold-water front panels are still in the early stages of commercial application, and pilots of server-level and even chip-level cooling schemes are currently under way.

The thinking behind such schemes is to bring the heat exchange media as close as possible to the IT equipment and thus improve heat transfer efficiency. Huawei has already completed lab verification of its liquid server cooling technology. At China Mobile's southern base project, Huawei implemented confined cold aisles, row-level air conditioners, and heat pipes alongside other new technologies, which together use 20 to 40% less energy than traditional schemes.

Step 3: Improve electrical efficiency via clean energy

The power supply and distribution gear are themselves major power consumers. In recent years, technicians have improved product performance and system architectures to greatly reduce this consumption. Solar, wind, and other forms of clean energy are also effective at reducing carbon emissions. Huawei designed a combined cooling, heating and power (CCHP) system using natural gas for Alestra’s Mexican data center. The natural gas generator provides 100% of the data center’s power supply, and its carbon emissions are significantly lower than those of coal-fired or oil-fired units. Meanwhile, the high-temperature exhaust gas and wastewater generated by combustion can be recycled to drive absorption cooling machines, whose cooling capacity meets roughly 70% of the data center’s cooling needs.
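As a back-of-the-envelope illustration of that waste-heat figure: an absorption chiller's cooling output is its heat input times its coefficient of performance (COP), i.e. cooling delivered per unit of heat supplied. All numbers in the sketch below are hypothetical assumptions, not data from the Alestra project; a COP of about 0.7 is typical of single-effect absorption chillers:

```python
def waste_heat_cooling_fraction(waste_heat_kw: float,
                                chiller_cop: float,
                                cooling_demand_kw: float) -> float:
    """Fraction of a data center's cooling demand covered by absorption
    chillers driven by recovered waste heat. COP is the chiller's
    cooling output per unit of heat input; the result is capped at 1."""
    cooling_supplied_kw = waste_heat_kw * chiller_cop
    return min(cooling_supplied_kw / cooling_demand_kw, 1.0)

# Hypothetical figures: 1,400 kW of recoverable waste heat, a
# single-effect chiller COP of 0.7, and 1,400 kW of cooling demand.
print(round(waste_heat_cooling_fraction(1_400, 0.7, 1_400), 2))  # 0.7
```

Under these assumptions the recovered heat covers about 70% of the cooling load, consistent with the coverage figure cited above.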

A major route to simplifying the power architecture is removing the UPS. While an uninterruptible power supply (UPS) provides reliable, high-quality power for IT equipment, it also consumes roughly 5% of the electricity that passes through it. More parties have begun trying new power technologies to replace conventional UPS schemes, including dynamic flywheel UPS, high-voltage direct current (HVDC) schemes, and battery-based cabinet-level backup power. Of these, HVDC schemes remove two AC-DC conversions, greatly improving efficiency. The data center that Huawei helped China Unicom build in Shanghai, for example, adopts a hybrid power supply of 240V HVDC and UPS, and the energy efficiency of its power system is as high as 96%. With further technical and engineering improvements, the efficiency of high-voltage DC power supply will continue to increase.
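The gain from removing conversion stages can be sketched by multiplying per-stage efficiencies along the power chain. The stage figures below are illustrative assumptions, not measurements from the deployments mentioned above:

```python
def chain_efficiency(*stage_efficiencies: float) -> float:
    """Overall efficiency of a power chain: the product of the
    efficiencies of each conversion stage in series."""
    overall = 1.0
    for stage in stage_efficiencies:
        overall *= stage
    return overall

# Hypothetical stage efficiencies: a double-conversion UPS path
# (AC-DC rectifier, DC-AC inverter, then the server PSU's AC-DC stage)
# versus an HVDC path with a single rectification step before the PSU.
ups_path = chain_efficiency(0.96, 0.96, 0.94)
hvdc_path = chain_efficiency(0.96, 0.96)
print(round(ups_path, 3), round(hvdc_path, 3))  # 0.866 0.922
```

Every conversion stage removed multiplies the chain by one fewer sub-unity factor, which is why dropping two AC-DC conversions yields such a visible efficiency jump.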

Step 4: Optimize operations & management

Routine maintenance, operations and management also have a major impact on the actual energy efficiency of data centers. On the one hand, effective routine maintenance will help ensure that equipment continues to operate at its best; on the other, cooling machines, cooling towers, air-conditioning fans, and other energy-consuming equipment exhibit a non-linear efficiency curve that makes it hard to determine control policies and related parameters at the time of design and construction. Therefore, data centers must constantly monitor and adjust these policies and parameters based on actual operations. What’s more, the acceptable range for room temperature and humidity that IT equipment and operations can sustain can only be determined after a certain duration of operation.

Once a data center is built and specialists study, record, and adjust operational data, optimal energy consumption can typically be achieved within two to three years. For example, after two years of continual adjustments and improvements to its data center in Beijing’s Jiuxianqiao, Baidu lowered the data center’s PUE from 1.40 at the design phase to 1.33.
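A small calculation shows why even a modest PUE drop matters. The 2 MW IT load below is a hypothetical figure chosen for illustration; only the PUE values are the Jiuxianqiao numbers quoted above:

```python
def annual_savings_kwh(it_load_kw: float,
                       pue_before: float,
                       pue_after: float,
                       hours_per_year: float = 8760.0) -> float:
    """Facility energy saved per year by lowering PUE while the IT
    load stays constant: the PUE delta times IT energy consumed."""
    return it_load_kw * (pue_before - pue_after) * hours_per_year

# Hypothetical 2 MW (2,000 kW) IT load; PUE improved from 1.40 to 1.33.
print(round(annual_savings_kwh(2_000, 1.40, 1.33)))  # kWh saved per year
```

Under these assumptions, a 0.07 PUE improvement saves on the order of 1.2 GWh of facility energy per year, which is why operators invest years of monitoring and tuning after commissioning.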
