
Eastern Data and Western Computing: Building New Computing-first Networks

Balancing regional computing resources is at the heart of China's plan to build a national system of data centers.

By Li Jun, Chief Architect, Data Center Networks, Huawei
HUAWEI TECH 2022 Issue 02

In May 2021, China's National Development and Reform Commission (NDRC), Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), and National Energy Administration (NEA) jointly issued the Implementation Plan for Computing Hubs of the National Integrated Big Data Center Collaborative Innovation System. The plan proposes a new computing network system that integrates data centers, cloud computing, and big data, as well as Eastern Data and Western Computing demonstration projects that will enable high-quality, green data centers. 

Overall, the project will create an integrated national system of big data centers and strengthen the planning and intelligent scheduling of computing power in line with China's 14th Five-Year Plan for National Economic and Social Development and Vision 2035.

Integrated arrangement: Networks are key

As China's digital economy has accelerated, the imbalanced distribution of computing facilities between the country's eastern and western regions has become an increasingly serious problem. The east has high demand for computing applications and strong innovation capacity, but a shortage of land, hydropower, and other auxiliary resources. The west has a favorable climate and abundant energy sources, but its digital industry lags behind. The project can optimize the distribution of computing power and applications and help form a nationwide market where data is shared, flows freely, and is allocated on demand.

China will deploy and build hub nodes in the following areas: the Beijing-Tianjin-Hebei region, the Yangtze River Delta, and the Guangdong–Hong Kong–Macau Greater Bay Area in eastern China, and Chengdu-Chongqing, Guizhou, Inner Mongolia, Gansu, and Ningxia in central and western China.

The hub nodes will consist of data center computing clusters. Data center networks will interconnect ultra-large-scale computing and storage resources, taking advantage of the intensiveness and scale of networks to ensure an abundant supply of computing power. The hub nodes can exchange data and computing power over a wide area network (WAN), satisfying the needs of high-frequency, real-time interactive services locally while remotely supporting non-real-time computing tasks such as offline analysis, storage, and backup.

Networks will play a key role throughout computing power generation, transmission, and consumption, and in building the integrated big data center system.

Computing power generation and scheduling: Networks as a challenge

Storing and processing eastern China's data in the west poses a daunting challenge for network performance. Large-scale server clusters depend on networks for interconnection, so when multiple servers simultaneously send large numbers of packets to one server and exceed the caching capacity of the switch, packets are lost. The resulting retransmissions greatly compromise computing and storage efficiency. On RDMA over Converged Ethernet (RoCE) networks, a packet loss rate of just 0.1% results in a 50% decrease in computing power, a massive waste of server CPUs and a major barrier to improving computing power.

WANs act as channels connecting data in eastern China with computing power in western China, carrying hundreds of services for a vast number of enterprises. Each service has unique requirements for key capabilities such as bandwidth, latency, and computing power, so WANs must be able to efficiently schedule the services they carry to deliver the expected value of the Eastern Data and Western Computing project. The best-effort model of traditional IP networks can barely differentiate services for scheduling and thus fails to meet these varied requirements: network and cloud pool resources cannot be fully utilized, and enterprises cannot be allocated the optimal cloud pool, which increases the cost of enterprise cloud migration and degrades service experience.

Three innovations: Building IP networks for cross-regional computing power scheduling

Drawing on 30 years of expertise in network technology R&D and commercial projects, Huawei has launched an industry-leading IP network solution that supports cross-regional computing power scheduling. The solution comprises CloudEngine 16800 series data center switches and NetEngine 8000 F8 series WAN routers. It uses an innovative intelligent lossless algorithm and intelligent cloud-map algorithm to optimize the scheduling of computing power on both the generation side and the transmission side, achieving minimal loss, optimal performance, and efficient transport of computing power.

Innovation 1: Intelligent lossless algorithm

The key to addressing packet loss in data center networks is setting an appropriate congestion indication threshold for switch buffer queues. If the threshold is set too high, the sending servers' transmission rate cannot be reduced in time to relieve network congestion, greatly increasing packet loss and latency. If the threshold is set too low, the sending servers' transmission rate declines sharply and the network cannot achieve 100% throughput, wasting resources.

Service traffic models now vary greatly, and even with extensive testing and simulation, it is difficult to determine the optimal threshold from human experience alone. To address this, Huawei has developed intelligent algorithms for its data center network switches, which dynamically set the optimal queue threshold based on real-time network status information such as queue depth, bandwidth throughput, and traffic model. To ensure that the algorithm adapts to any scenario and traffic model, Huawei trained the model on millions of real service samples and tens of millions of random samples, achieving the optimal balance between zero packet loss, high performance, and low latency, as shown in Figure 1.


Figure 1: Distributed adaptive routing
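The dynamic thresholding idea can be illustrated with a toy heuristic. The function below is a hypothetical sketch, not Huawei's trained model: it simply lowers the congestion-marking threshold as link utilization or queue depth rises, trading a little throughput headroom for earlier backpressure, which is the trade-off the two failure modes above describe.

```python
def dynamic_ecn_threshold(queue_depth_kb: float,
                          link_utilization: float,
                          buffer_size_kb: float = 1024.0) -> float:
    """Return a congestion-marking threshold (KB) for a switch queue.

    Toy heuristic (not Huawei's algorithm): mark earlier when the link
    is busy or the queue is building, so senders slow down before the
    buffer overflows; mark later when the link is idle, so throughput
    stays high.
    """
    base = 0.5 * buffer_size_kb    # relaxed threshold when idle
    floor = 0.05 * buffer_size_kb  # never mark below this
    # "Pressure" combines the two congestion signals named in the text:
    # instantaneous queue depth and measured link utilization.
    pressure = max(link_utilization, queue_depth_kb / buffer_size_kb)
    return max(base * (1.0 - pressure), floor)
```

With a 1 MB buffer, a 10%-utilized link keeps a relaxed threshold of about 460 KB, while a 90%-utilized link drops it to about 51 KB; a production algorithm would learn these curves from traffic samples rather than hard-code them, which is exactly why Huawei trains the model rather than fixing the threshold by hand.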

High-performance computing and parallel computing require extensive computing power, and high-speed interconnection between massive numbers of computing nodes demands networks that support ultra-large-scale networking and low latency as well as zero packet loss. Today's mainstream data center network architecture is Clos (as shown in Figure 2), where networking scale is limited by switch port density: a level-3 Clos architecture built from 64-port switches supports a maximum of 65,536 server ports, yet more than 200,000 compute nodes must be connected to build 10-EB-level computing capabilities. Forwarding between computing nodes on a level-3 Clos network takes five hops; on a level-4 Clos network, networking cost rises steeply and forwarding takes seven hops, so the added latency greatly reduces computing efficiency.


Figure 2: Conventional data center Clos networking
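The scale limit above follows from the standard fat-tree construction of a folded Clos fabric, in which every switch below the top tier splits its ports evenly between downlinks and uplinks. A quick calculation (the function name is mine; the formula is the textbook one) reproduces the figures in the text:

```python
def clos_max_servers(ports: int, tiers: int = 3) -> int:
    """Maximum server-facing ports of a folded Clos (fat-tree) fabric
    built from identical switches of the given radix: each tier below
    the top contributes ports/2 downward-facing ports per switch,
    while the top tier uses its full radix downward."""
    return (ports // 2) ** (tiers - 1) * ports

print(clos_max_servers(64, 3))  # 65536: the level-3 limit in the text
print(clos_max_servers(64, 4))  # 2097152: level-4 clears 200,000 nodes,
                                # but at higher cost and seven-hop paths
```

This makes the dilemma concrete: level-3 falls short of the 200,000-node target, and level-4 overshoots it only by paying in switches and hops.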

To boost networking scale while minimizing costs and network latency, Huawei has introduced a directly connected topology (as shown in Figure 3) for Ethernet networking, breaking the limitations of the Clos architecture and realizing ultra-large-scale networking with a low network diameter (a small number of hops). Distributed adaptive routing technology utilizes unequal-cost paths for dynamic routing, ensuring low latency while improving bandwidth utilization. Once upgraded, Huawei CloudEngine series switches support directly connected topologies and adaptive routing: 64-port switches support zero-packet-loss networking with up to 270,000 servers, four times the industry average with level-3 Clos, while reducing the number of network hops and latency by 25%. At the same server networking scale, the number of NE nodes can be reduced by 20% to 40% compared with level-3 or level-4 Clos networking.


Figure 3: New directly connected topologies in data centers
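Adaptive routing over unequal-cost paths can be sketched as a per-hop choice: weigh each candidate path's hop count against its current congestion, so a lightly loaded detour can beat a congested minimal path. The cost function, path names, and load values below are illustrative assumptions, not the CloudEngine implementation.

```python
def adaptive_route(paths, queue_load, alpha=2.0):
    """Pick the best next path from unequal-cost candidates.

    paths: list of (name, hop_count); queue_load: name -> normalized
    congestion in [0, 1]. Minimizing hops + alpha * load lets a longer
    but idle path win over a shorter, congested one; alpha sets how
    many extra hops one unit of congestion is worth.
    """
    return min(paths, key=lambda p: p[1] + alpha * queue_load[p[0]])

paths = [("minimal", 2), ("detour", 3)]
# Uncongested network: the minimal path wins as usual.
assert adaptive_route(paths, {"minimal": 0.0, "detour": 0.0})[0] == "minimal"
# Minimal path congested: traffic adaptively shifts to the detour.
assert adaptive_route(paths, {"minimal": 0.9, "detour": 0.0})[0] == "detour"
```

Because the decision uses only locally observable load, each switch can make it independently, which is what makes the scheme distributed rather than centrally computed.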

Innovation 2: Intelligent cloud-map algorithm

Conventional WANs use shortest-path scheduling, which results in unbalanced link utilization when services are transmitted on the same path. Multi-path load balancing improves network utilization, but does not satisfy services' different network requirements such as latency, jitter, and reliability. Moreover, this method only considers network factors and not cloud pool factors such as computing usage, cost, and storage, resulting in the unbalanced use of cloud computing resources and the inefficient scheduling of computing resources (as shown in Figure 4).


Figure 4: Conventional WAN scheduling

To address this issue, Huawei has introduced the Edge-Disjoint KSP algorithm. This algorithm integrates network factors, such as latency, bandwidth, reliability, and availability, with cloud factors, such as computing usage, storage resources, and cost, to build a cloud-network map model. The algorithm recommends the optimal path for different services through dynamic parallel computation under multidimensional constraints, then defines and orchestrates the path labels of SRv6 packets according to these recommendations and carries the service data in the SRv6 packets. WAN routers across the network can forward services along the optimal paths based on service type, optimizing cloud pools, networks, and services simultaneously. Through this integrated scheduling of cloud and network factors, the algorithm can select the optimal cloud pool for each enterprise's requirements and balance cloud-network resources from multiple sources to multiple destinations, improving computing power transmission efficiency by more than 30%, as shown in Figure 5.


Figure 5: WAN-based cloud-network integrated scheduling
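The article names an Edge-Disjoint KSP algorithm without spelling it out. One common construction is to run a shortest-path search repeatedly over composite edge weights, deleting each found path's links before the next run so the k returned paths share no link. The sketch below follows that construction; the composite weighting and the toy topology are my assumptions, not Huawei's published design.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {u: {v: weight}}, where each weight is a composite cost
    (e.g. latency plus penalties for low availability or an expensive
    cloud pool). Returns (cost, [nodes]) for the cheapest path, or None."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == dst:  # reconstruct the path by walking predecessors
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None

def edge_disjoint_ksp(graph, src, dst, k):
    """Greedy edge-disjoint k-shortest paths: after each search, delete
    the links it used so no two returned paths share a link. A controller
    could map each candidate path to an SRv6 label stack for a different
    service class."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}
    paths = []
    for _ in range(k):
        found = dijkstra(g, src, dst)
        if found is None:
            break  # no more link-disjoint paths exist
        paths.append(found)
        for a, b in zip(found[1], found[1][1:]):
            g[a].pop(b, None)  # remove the links this path consumed
    return paths

# Toy topology: an eastern source "E" reaching a western cloud pool "W"
# through transit nodes "A" and "B" (all names hypothetical).
net = {"E": {"A": 1.0, "B": 2.0}, "A": {"W": 1.0}, "B": {"W": 1.0}, "W": {}}
```

On this toy graph the algorithm returns two disjoint candidates, E-A-W (cost 2.0) and E-B-W (cost 3.0); a scheduler could then assign latency-sensitive services to the cheaper path and bulk backup traffic to the other, in the spirit of the per-service path selection described above.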

Since the Chinese government proposed the guiding principle for building integrated big data centers in 2016, the concept of new computing-first networks has gained popularity in the industry. Huawei is actively involved in designing and building the nationwide hub nodes and remains committed to innovation in IP network solutions.