Design and Testing Considerations as Data Centers Incorporate 400 Gbit Ethernet

October 31, 2022 Anritsu Company

Demand for information has been growing for more than a decade, and nowhere is that more evident than in the exponential increase in data center traffic shown in Figure 1. To accommodate that spike in usage, which will only continue, 400 Gbit Ethernet is quickly replacing 100 Gbit Ethernet in data centers. This shift into the “fast lane” demands a new approach to network design, architecture, and testing.

Figure 1: Data center traffic has been rising exponentially since 2010. (Courtesy of Cedric Lam, Google.)

Among the biggest challenges for engineers as data centers evolve are signal integrity, network interoperability, and maintaining service level agreements (SLAs). As a result, data center operators and networking equipment manufacturers (NEMs) must optimize Ethernet technologies for speed, power, reach, and latency.

One way to meet these benchmarks is to rely on new network elements, such as optical transceivers and high-speed breakout cables. Data centers are also adopting multi-access edge computing (MEC) and network virtualization.

Another advance in data center architecture revolves around data center interconnects (DCIs), which must now support 400 Gbit Ethernet. DCIs link data centers to other nearby facilities, as well as to routers, leaf and spine switches, top of rack (TOR) and middle of row (MOR) switches, and servers.

Meeting the Need for Speed

Data transfer rates for server and compute elements are now typically 25 Gbit Ethernet, up from 10 Gbit Ethernet just a few years ago. Even this speed bump is modest compared with what is on the near horizon, when 100 Gbit Ethernet to the server is expected. These faster speeds are why engineers and data center operators must re-evaluate power, speed, reach, and latency. Here’s how:

Power – Data centers already operate close to the maximum power available to them. To reach the higher speeds associated with 400 Gbit Ethernet, data center designers must find innovative ways to use that available power more efficiently. This is a key reason optical transceivers are becoming a prominent component: they reduce power consumption while increasing bit rate.

Data Center Design – A holistic approach must be adopted to build a new generation of switches, routers, transceivers, network interface card (NIC) architectures, and physical designs that support 400 Gbit Ethernet and beyond. With open architectures becoming common in data center design, testing to verify compatibility between network elements is also taking on greater importance.

Latency – Latency key performance indicators (KPIs) are now tighter and application-specific. From streaming and file storage to e-commerce and social media, consumers expect a high quality of service (QoS) every time. Because so much of the user experience depends on latency, it must be carefully considered when deploying Ethernet connections; a minimal sketch of such a check follows this list.
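To illustrate what an application-specific latency KPI check might look like, the following Python sketch compares measured one-way latency samples against assumed per-application budgets. The budget values, application names, function name, and sample data are hypothetical and not taken from this article; real SLA figures vary by operator and service.

import math

# Hypothetical per-application latency budgets, in microseconds (assumed).
LATENCY_BUDGET_US = {
    "video_streaming": 50_000,      # 50 ms end-to-end budget (assumed)
    "e_commerce": 20_000,           # 20 ms (assumed)
    "storage_replication": 1_000,   # 1 ms between data centers (assumed)
}

def latency_kpi_met(app: str, samples_us: list[float]) -> bool:
    """Return True if the 99th-percentile latency stays inside the app's budget."""
    budget = LATENCY_BUDGET_US[app]
    ordered = sorted(samples_us)
    p99 = ordered[math.ceil(0.99 * len(ordered)) - 1]  # nearest-rank percentile
    return p99 <= budget

# Synthetic one-way latency samples, in microseconds, for a DCI link.
samples = [850.0, 870.0, 880.0, 900.0, 1250.0]
print(latency_kpi_met("storage_replication", samples))  # False: the 1250 µs outlier breaks the 1 ms budget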

The Interconnectivity Issue

Another byproduct of faster data transport speeds is a shift from centralized models to distributed alternatives that use high-speed, low-latency interconnections between resources. These interconnections distribute computing across multiple connected locations, creating pooled resources for compute-intensive applications. Such a shared approach brings multiple benefits.

Coherent, pluggable 400GBASE-ZR optical modules can transport 400 Gbit Ethernet over individual wavelengths across a variety of optical network devices. For designers and operators, ensuring 400 Gbit Ethernet network interoperability with multi-vendor pluggable modules becomes the challenge. While vendors follow approved industry standards, there are many ways to implement the management registers of optics and cables. For these reasons, interoperability testing to confirm that different customer deployments and multi-vendor configurations are supported is critical.
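As a simple illustration of the screening side of such interoperability testing, the Python sketch below checks whether a pluggable module’s form factor and media type are on a qualified list. The module fields would normally be read over the pluggable’s management interface (for example, a CMIS-style register map); here they are passed in already decoded, and the qualified-combination list and all names are assumptions for this example.

from dataclasses import dataclass

@dataclass
class ModuleInfo:
    form_factor: str   # e.g., "QSFP-DD" or "OSFP"
    media_type: str    # e.g., "400GBASE-ZR" or "400GBASE-DR4"
    vendor: str
    part_number: str

# Hypothetical combinations already proven through multi-vendor interop testing.
QUALIFIED = {
    ("QSFP-DD", "400GBASE-ZR"),
    ("QSFP-DD", "400GBASE-DR4"),
    ("OSFP", "400GBASE-ZR"),
}

def is_qualified(mod: ModuleInfo) -> bool:
    """Flag modules whose form factor / media type pairing has not been interop-tested."""
    return (mod.form_factor, mod.media_type) in QUALIFIED

zr_module = ModuleInfo("QSFP-DD", "400GBASE-ZR", "VendorA", "VA-400ZR-01")
print(is_qualified(zr_module))  # True for this assumed combination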

Many data center operators use the Network Master™ Pro MT1040A to verify their network is meeting KPIs. To ensure consistent link quality, the MT1040A can measure KPIs frequently and across multiple demarcation points that may be managed by different providers (Figure 2).

Figure 2: Ethernet MEC KPI for 5G applications.

Rise of Hyperscalers

In response to this explosive growth in data usage, hyperscale data centers are being rolled out. They incorporate advances in high-speed Ethernet optical interfaces so providers can increase their leaf-spine connections to 400 Gbit Ethernet. As data center operators upgrade leaf-spine connections and deploy equipment from multiple vendors, interoperability challenges arise.

NEMs and data center designers must balance reach against cost in these applications. Passive copper cables are a good example: they are economical, but at the expense of short reach. At the other end of the spectrum are multi-mode optical solutions that are more costly but offer extended reach.

New high-speed breakout cables support up to 400 Gbit Ethernet and reduce deployment costs, but they present performance and distance tradeoffs. They use the same pluggable interfaces as optics, such as quad small form-factor pluggables (QSFPs) or small form-factor pluggables (SFPs). Fanout cables, however, support the aggregate rate on one end and a series of disaggregated interfaces on the other, as sketched below.
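The following Python sketch illustrates the fanout concept only: one aggregate 400 GbE switch port mapped to four disaggregated 100 GbE server-facing interfaces. The port naming and the 4x100G split are assumptions; actual breakout options (for example, 2x200G or 8x50G) depend on the optics and platform.

def breakout(port: str, aggregate_gbps: int, lanes: int) -> dict[str, int]:
    """Map an aggregate port to its disaggregated child interfaces and their rates."""
    child_rate = aggregate_gbps // lanes
    return {f"{port}/{i + 1}": child_rate for i in range(lanes)}

print(breakout("Ethernet1", 400, 4))
# {'Ethernet1/1': 100, 'Ethernet1/2': 100, 'Ethernet1/3': 100, 'Ethernet1/4': 100}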

Other Network Design Factors

Data processing units (DPUs) and infrastructure processing units (IPUs) are also helping to shape the new era of data centers (Figure 3). DPUs are hardware accelerators that offload networking and communication workloads from the CPU. With the exponential increase in network traffic reaching the server’s network interface card, the gains made by software-defined networking (SDN) have put more stress on servers.

Figure 3: IPUs support the data center of the future. (Courtesy of Intel.)

IPUs accelerate and run SDN and management software in hardware, away from the server cores, so those cores can continue to run end-customer applications. IPUs also provide system-level security, control, and isolation, and their software framework offers a common look and feel that makes them easier to manage.

To accurately and repeatedly test optical interfaces such as those outlined here, NEMs need high-speed bit error rate testers (BERTs), such as the Signal Quality Analyzer-R MP1900A, that support Ethernet rates up to 800 Gbit/s. Pluggable optical host interfaces carry multiple lanes of PAM4 signals. Each lane carries forward error correction (FEC) patterns that are generated in the pluggable optic and converted to an optical waveform. The waveforms must be evaluated to verify that signal integrity is maintained over the physical medium, whether it is optical or coax. The MP1900A supports comprehensive FEC measurements to verify emerging network elements.
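As a minimal sketch of the kind of pass/fail judgement applied to BERT results, the Python example below computes the pre-FEC bit error ratio (BER) per PAM4 lane and compares it with an assumed correctability limit for the RS(544,514) FEC used with 400 Gbit Ethernet. The 2.4e-4 limit is a commonly cited planning figure, treated here as an assumption rather than a value taken from this article, and the error counts are synthetic.

PRE_FEC_BER_LIMIT = 2.4e-4  # assumed pre-FEC BER limit for RS(544,514)

def lane_passes(bit_errors: int, bits_measured: int) -> bool:
    """True if the lane's measured pre-FEC BER stays under the assumed limit."""
    return bit_errors / bits_measured < PRE_FEC_BER_LIMIT

# Synthetic per-lane error counts from a 1e12-bit measurement.
errors_per_lane = {0: 150_000_000, 1: 90_000_000, 2: 260_000_000, 3: 120_000_000}
bits_measured = 10**12
for lane, errors in errors_per_lane.items():
    print(f"lane {lane}: {'PASS' if lane_passes(errors, bits_measured) else 'FAIL'}")
# lane 2 fails: a BER of 2.6e-4 exceeds the assumed 2.4e-4 limit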

Lightwave recently published a paper discussing the evolving data center. You can download a copy of Ethernet in Data Center Networks to learn more.
