According to the Global Carbon Project, an NGO, global carbon emissions in 2020 reached 34 billion tons. Because 34 billion tons is a hard figure to grasp, Scientific American blogger Caleb Scharf offers a more intuitive calculation using the carbon dioxide released by a burning forest: burning one acre of coniferous forest releases 4.81 tons of carbon dioxide, so 34 billion tons of carbon emissions is equivalent to burning nearly 30 million square kilometers of coniferous forest, or three quarters of the world's total forested area.
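As a sanity check, the figures above can be combined in a few lines. Only the acre-to-square-kilometer conversion factor is added here; the emissions and per-acre numbers are taken from the text.

```python
# Rough check of the forest-burning comparison.
ACRES_PER_KM2 = 247.105          # standard conversion: 1 km^2 ≈ 247.105 acres
co2_per_acre_t = 4.81            # tons of CO2 released per acre of coniferous forest burned
global_emissions_t = 34e9        # 2020 global carbon emissions, tons

acres = global_emissions_t / co2_per_acre_t   # ≈ 7.07 billion acres
km2 = acres / ACRES_PER_KM2                   # ≈ 28.6 million km^2
print(f"{km2 / 1e6:.1f} million km^2")
```

The result, roughly 28.6 million square kilometers, matches the "nearly 30 million" figure in the article.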
This alarming statistic highlights the need for a concerted global effort toward carbon neutrality. While 66 countries and regions have set carbon-neutrality goals to reduce emissions and protect the planet, only Bhutan and Suriname have achieved carbon neutrality so far. Meanwhile, the rapid development of cloud computing, big data, AI, and other digital technologies is making data centers ever hungrier for computing power and bandwidth. As major carbon emitters, data centers are under growing pressure to reduce their energy consumption.
More computing power at better efficiency
According to the June 2021 paper "Usage impact on data center electricity needs: A system dynamic forecasting model", the demand for data center energy will double or triple by 2030, reaching 750 TWh and accounting for 36% of the entire ICT industry's power consumption.
Figure 1: Increases in per-cabinet power density and per-port power consumption
Beyond using renewable energy and improving power supply and heat dissipation, data centers can also cut power consumption by improving computing power and shortening job completion time (JCT).
Released by China's Open Data Center Committee (ODCC), the Data Center Computing Power White Paper reports that the computing power of a data center is determined by server computing power, storage throughput, and network transmission efficiency. With the same number of servers, a network packet loss rate of just 0.1% can cause a 50% loss of computing power, wasting huge amounts of energy. With this in mind, improving network capabilities to raise the computing power delivered per unit of energy is essential to the green transformation of data centers.
Figure 2: Relationship between computing power and networks
(Source: Open Data Center Committee (ODCC), ODCC-2020-01008 Data Center Computing Power White Paper)
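Why such a small loss rate hurts so much can be illustrated with a deliberately simplified model (this toy formula is our own illustration, not the white paper's methodology): in RDMA-style transports that use go-back-N recovery, a single lost packet forces retransmission of an entire in-flight window, so the effective throughput collapses far faster than the raw loss rate suggests.

```python
# Toy go-back-N approximation (illustrative only, NOT the ODCC methodology):
# each lost packet triggers retransmission of a whole window, so the useful
# fraction of line rate falls as loss_rate * window grows.
def effective_throughput(loss_rate: float, window_pkts: int) -> float:
    """Fraction of line rate doing useful work under go-back-N recovery."""
    return 1.0 / (1.0 + loss_rate * window_pkts)

# A 0.1% loss rate with a hypothetical 1,000-packet in-flight window:
print(effective_throughput(0.001, 1000))  # 0.5 -> half the useful throughput
```

Under these assumed parameters, 0.1% loss halves the useful throughput, which is consistent in spirit with the white paper's 50% figure.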
CloudFabric 3.0 accelerates the green transformation of high-performance data centers
Built with low-carbon and energy-efficient CloudEngine series data center switches, Huawei's CloudFabric 3.0 hyper-converged data center network (DCN) solution draws on lossless networks to fully utilize computing power, shorten JCT, and reduce overall power consumption of IT in data centers.
Figure 3: Drawing on lossless networks to fully utilize computing power
Unique cutting-edge heat dissipation technologies: 54% lower power consumption per bit than the industry average
Powered by more than 50 patented heat dissipation technologies, Huawei's CloudEngine series data center switches offer the industry's highest heat dissipation efficiency, achieving per-bit power consumption on 400 GE ports that is 54% lower than the industry average.
- Using a wind tunnel model developed in-house, Huawei has created an innovative panel with large holes and narrow ribs for minimal wind resistance. This improves the heat dissipation efficiency of the perforations by 8%, allowing the fans to use 20% less power.
Figure 4: Industry-typical perforation design vs. Huawei's perforation design
- An evaporation-condensation phase-change mechanism quickly and efficiently transfers heat from the chip to the vapor chamber (VC) heat dissipation substrate, delivering four times the heat dissipation efficiency. Huawei has also developed a new ultra-thin material with a large thermally conductive area and a high thermal conductivity coefficient, improving thermal conductivity sixfold.
Figure 5: Industry-typical heat dissipation vs. Huawei's phase-change heat dissipation
- Sophisticated mixed-flow fans combine centrifugal and axial motion, both vertically and horizontally, delivering three times the heat dissipation efficiency of a traditional fan while consuming just half the power.
Figure 6: Industry-typical fan vs. Huawei's mixed-flow fan
A lossless Ethernet network reduces energy consumption per TFLOPS by 27.4% and per IOPS by 30%
Huawei's CloudFabric 3.0 hyper-converged DCN solution uses the iLossless 2.0 algorithm to implement the industry's first all-lossless Ethernet. The solution has diverse applications such as risk control and identification, weather forecasting, and geological exploration.
In gaming, for example, players can map their faces onto metaverse avatars using AI training and image recognition for a more immersive gaming experience. Games that rely on algorithm optimization, training, and recognition can go online faster and attract more players.
According to the Tolly Group, a leading global provider of testing and third-party validation and certification services, Huawei's lossless Ethernet network can process 478 images per second in TensorFlow GPU distributed training, eclipsing the industry average of 375 images per second. This 27.4% gain in computing efficiency means that, with the same number of servers, Huawei's lossless Ethernet network completes training markedly sooner. Not only does this accelerate the rollout of games, it also saves thousands of tons of carbon emissions for large data centers every year.
Figure 7: TensorFlow-GPU Distributed AI Training (Parameter Server Architecture) Test Results (Source: Tolly)
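The percentages follow directly from the two throughput figures. Note that, strictly speaking, 27.4% is the throughput gain; the corresponding reduction in wall-clock training time for a fixed workload is somewhat smaller.

```python
huawei_ips = 478    # images/s, Tolly test, Huawei lossless Ethernet
industry_ips = 375  # images/s, industry average

speedup = huawei_ips / industry_ips - 1      # ≈ 0.275, i.e. the article's ~27.4% gain
time_saving = 1 - industry_ips / huawei_ips  # ≈ 0.215, i.e. ~21.5% shorter training time
print(f"{speedup:.1%} higher throughput, {time_saving:.1%} shorter training time")
```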
In addition to accelerating computing, Huawei's lossless Ethernet network accelerates storage to deliver excellent performance in the financial, government service, and aerospace industries. The Tolly Group ran the industry-standard FIO storage testing tool against storage systems in a fully simulated real-world setting. Under different read patterns, storage systems using Huawei's lossless Ethernet network delivered at least 30% higher IOPS than the industry average. This means that nearly a quarter fewer storage nodes are needed to match the performance of typical industry solutions, with a corresponding reduction in storage system energy consumption.
Figure 8: Distributed storage FIO tool test results (source: Tolly)
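The node saving implied by the IOPS gain is worth spelling out: a 30% higher per-node IOPS means each node does 1.3x the work, so the same aggregate performance needs 1/1.3 of the nodes, roughly 23% fewer rather than a full 30%.

```python
iops_gain = 0.30                    # at least 30% higher IOPS per storage node
nodes_needed = 1 / (1 + iops_gain)  # fraction of nodes vs. the industry baseline
print(f"{nodes_needed:.0%} of the nodes, i.e. {1 - nodes_needed:.0%} fewer")
```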
A full suite of energy-saving and low-carbon switches
A green, low-carbon design runs through the entire lifecycle of Huawei's data center switches, from manufacturing to end-of-life (EOL), facilitating sustainable development goals.
- No polluting emissions: a green, waste-free design that requires less coating and no cleaning.
- No wasted materials: a material-saving design that minimizes material use and goes paperless.
- No over-testing: an energy-saving design that minimizes power consumption and carbon emissions during testing.
- No single-use materials: a recycling-focused design that makes parts easy to disassemble and sends zero waste to landfill.
Using green manufacturing techniques, Huawei CloudEngine series data center switches are designed to improve energy efficiency and reduce noise pollution. They are also among the first Ethernet switches to be certified by the China Environmental United Certification Center (CEC). Going forward, Huawei will continue to innovate to provide enterprise data centers with high computing power, accelerate their green transformation, and inject digital vitality into global carbon neutrality.