Programmable on-chip and off-chip network architecture on demand for flexible optical intra-Datacenters

Open Access

Abstract

The paper presents a novel network architecture on demand approach using on-chip and off-chip implementations, enabling programmable, highly efficient and transparent networking well suited for intra-datacenter communications. The implemented FPGA-based adaptable line-card with on-chip design, together with an architecture on demand (AoD) based off-chip flexible switching node, delivers single-chip dual L2 packet / L1 time shared optical network (TSON) server network interface cards (NICs) interconnected through a transparent AoD-based switch. It enables hitless adaptation between Ethernet over wavelength switched network (EoWSON) and TSON-based sub-wavelength switching, providing flexible bitrates while meeting strict bandwidth and QoS requirements. The on-chip and off-chip performance results show high throughput (9.86 Gbps Ethernet, 8.68 Gbps TSON), high QoS, as well as hitless switch-over.

©2013 Optical Society of America

1. Introduction

Intra-datacenter [1,2] network interconnections provide high data rate links, preferably over optical fibres, among servers to enable bandwidth-aggressive applications with the tight QoS requirements of parallel and distributed computing at massive scale. The requirement for highly connected servers is currently met by deploying hierarchies of L2/L3 switches at different levels within and between datacenter racks and clusters [1], with little concern for resource/spectral efficiency [2]. Meanwhile, optical networking, and specifically sub-wavelength networking that supports efficient optical transport over a wide range of bitrates, is a candidate solution for complementing the heavily electrical switching architecture of today's intra-datacenter networks [3]. At the same time, the semiconductor industry, with innovative FPGA-based hardware solutions offering increased logic gates and processing power, enables dynamic and programmable run-time reconfiguration of FPGA boards [4]. The NoC-based run-time reconfiguration approach can introduce new levels of flexibility to the network architecture by adapting to different functionalities through reconfiguration of the on-chip components and of the connections between the implemented Intellectual Property (IP) cores [5].

In this paper, we propose a novel programmable and reconfigurable network on-chip and off-chip design and implementation, which enables operation at bitrates from 100 Mbps up to ~8.7 Gbps using wavelength and sub-wavelength based technologies, improving network performance with transparent, highly resource-efficient, and ultra-low-latency communications that match the strict requirements of intra-datacenter networking. This work is based on an open-hardware infrastructure that allows for on-demand programmability, which in turn delivers flexibility, agility and efficiency as well as high network performance and high QoS delivery. The design implements an FPGA-based network line-card with a run-time reconfigurable on-chip design supporting 10G Ethernet and TSON [6] Tx and Rx. The line-card operation in Ethernet or TSON mode is supported by the flexible, AoD-based [7] optical transport layer built as an off-chip network. It supports guaranteed transport of high-capacity (9.8 Gbps per 10G NIC), ultra-low-latency (<10 µs) services over EoWSON, or slightly lower capacity (8.68 Gbps) yet statistically multiplexed optical sub-wavelength services over TSON. The hardware infrastructure delivers hitless switch-over between the two offered services in a dynamic fashion.

2. Reconfigurable network of on-chip and off-chip design

The novel concept of the reconfigurable network of on-chip and off-chip design is displayed in Fig. 1.

Fig. 1 (a) Legacy ToR in a Fat-Tree intra-datacenter network architecture. (b) Transparent network on-chip and off-chip to replace the legacy architecture. (c) Network off-chip, where optical components are selected for the Ethernet or TSON connectivity paradigms. (d) FPGA-based network on-chip implementation with modules attached to an electronic backplane.

The proposed solution presumes installing specially designed on-chip network line cards capable of Ethernet and TSON (optical sub-wavelength) operation on the servers (or PCI-E to Ethernet and TSON), and replacing the typical Ethernet Top-of-Rack switch with a more flexible all-optical switch supporting sub-wavelength switching for TSON and wavelength switching for Ethernet transport, built as a network-off-chip solution (Fig. 1(b)). The network-on-chip design is implemented on a high-performance Xilinx Virtex-6 HXT FPGA, which operates as a bi-functional line-card supporting 10G Ethernet and 10G TSON transport interfaces with 9.8 Gbps and 8.6 Gbps throughput, respectively. It enables hitless switching between Ethernet and TSON operation, so the user/application or the server itself can deploy either on request with no interruption (no packet loss) to the running services. The on-chip design exploits partial reconfigurability of the FPGA chip at run time, aided by MIMO buffers, to achieve zero-packet-loss operation. Using TSON, the network interface card can optically transmit sub-wavelength data employing time-shared statistical multiplexing; it can support rates as low as 100 Mbps, in equal steps, over 10 Gbps links, offering highly efficient traffic aggregation. Using Ethernet transmission, the server has a dedicated wavelength to transport its data over a wavelength switched optical network (WSON). The TSON configuration allows shared use of on-chip and off-chip resources, enabling multiple servers to access each wavelength channel. The EoWSON configuration allows the network-on-chip to operate at maximum throughput and transfer bulk data to a single location per channel.
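
As a rough illustration of this granularity (not the implemented FPGA logic), the sketch below maps a requested sub-wavelength bitrate onto a number of TSON time-slices, assuming the 91-slice, 1 ms frame and the 11-slices-per-1-Gbps figure quoted in Section 3; the function name and the per-slice figure are illustrative.

```python
# Illustrative sketch (not the authors' implementation): estimating how many
# TSON time-slices a requested sub-wavelength bitrate needs.
# Assumed parameters, taken from Section 3: a 1 ms TSON frame carrying 91
# time-slices on a ~10G line, with 11 slices (overheads included) giving ~1 Gbps.
import math

SLICES_PER_FRAME = 91        # time-slices per 1 ms frame (Section 3)
GBPS_PER_SLICE = 1.0 / 11    # ~0.09 Gbps per slice, i.e. roughly the 100 Mbps step size

def slices_for_bitrate(requested_gbps: float) -> int:
    """Number of time-slices per frame to allocate for a requested bitrate."""
    needed = math.ceil(requested_gbps / GBPS_PER_SLICE)
    if needed > SLICES_PER_FRAME:
        raise ValueError("request exceeds the capacity of one wavelength channel")
    return needed

print(slices_for_bitrate(1.0))   # -> 11, matching patterns P1/P2 in Fig. 5
print(slices_for_bitrate(8.0))   # -> 88 of the 91 slices for an ~8 Gbps service
```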

On the other hand, the network-off-chip design provides a flexible optical node and network architecture, following the AoD approach. In AoD, an optical backplane, e.g. a 3D-MEMS switch, interconnects a pool of components and enables reconfiguration of the transport layer by configuring internal cross-connections. The coordinated operation of the on-chip and off-chip networking enables bandwidth-flexible, efficient and transparent cross-connection within datacenters.

Network on-chip design: Network-on-chip design is an approach to designing the communication subsystem between sub-modules so as to achieve the desired performance and functionality. Several sub-systems (on-chip components, IP cores, etc.) for the separate functions of Ethernet RX, TSON RX, Ethernet TX and TSON TX have been developed in a modular manner. Using a deployed internal electronic backplane switch, each of the implemented sub-systems can be selected over the internal communication network to enable the preferred service and mode of operation. The different sub-modules of the network-on-chip are displayed in Fig. 1(b). The same figure illustrates how, with a select-and-place approach using the TSON TX and RX, Ethernet MAC and buffer subsystems, the line card is able to operate as an Ethernet-to-TSON or Ethernet-to-Ethernet interface. The implemented on-chip design allows a hitless switch-over between TSON and Ethernet modules by incorporating the Ethernet- or TSON-related blocks in the operation. The electronic switch-over is dynamically controlled by a server that is able to update the FPGA mode (ETH or TSON) LUT on demand. The selection of any of the combinations, for example Ethernet-TSON-Ethernet, is supported by the reconfiguration of the transport plane as well.
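
A minimal conceptual sketch of this select-and-place composition follows. The dictionary-based "backplane", module names and mode labels are our own illustration of the idea (the actual design is FPGA logic, not software); it simply shows how the same receive path and MIMO buffers can feed either the Ethernet or the TSON transmit block, so frames already buffered survive a mode change.

```python
# Conceptual sketch only: how the on-chip electronic backplane could compose
# the implemented IP cores (Ethernet RX/TX, TSON RX/TX, MIMO buffers) into a
# line-card datapath. Names and the dictionary-based "backplane" are hypothetical.

MODES = {
    # mode name -> ordered chain of sub-systems selected through the backplane
    "ETH":  ["eth_rx", "mimo_buffer", "eth_tx"],    # Ethernet over WSON
    "TSON": ["eth_rx", "mimo_buffer", "tson_tx"],   # Ethernet aggregated into TSON bursts
}

class LineCard:
    def __init__(self):
        self.mode = "ETH"        # mode LUT entry, updated on demand by the server
        self.buffer = []         # stands in for the MIMO buffers that hold
                                 # in-flight frames during a switch-over

    def set_mode(self, new_mode: str) -> None:
        """Hitless switch-over: buffered frames are kept and drained by whichever
        TX block is selected next, so nothing is dropped."""
        assert new_mode in MODES
        self.mode = new_mode

    def datapath(self) -> list:
        return MODES[self.mode]

card = LineCard()
print(card.datapath())      # ['eth_rx', 'mimo_buffer', 'eth_tx']
card.set_mode("TSON")       # e.g. triggered via the Ethernet control interface
print(card.datapath())      # ['eth_rx', 'mimo_buffer', 'tson_tx']
```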

Off-chip network design: The off-chip networking between the different FPGA-based line cards takes place through an AoD node [7], which is able to implement the required optical node architecture for EoWSON or to re-configure it to support TSON aggregation and switching within 20 ms (the optical backplane reconfiguration time). For transparent transport of Ethernet channels, wavelength switching can be adopted using a broadcast-and-select topology, by choosing couplers and AWG components with the backplane in between, as in Fig. 2(a). For TSON operation, fast optical (PLZT) switches need to be involved, as shown in Fig. 2(c), in order to multiplex time-sliced data sets from different servers and map them onto specific time-slices over one or a few wavelengths. Fibre switching is also available by using ports on the backplane MEMS switch.
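
The sketch below gives a simplified, hypothetical view of how such an AoD composition could be expressed: a service request selects components from the pool, and the 3D-MEMS backplane cross-connects them within the ~20 ms reconfiguration time quoted above. The component groupings loosely follow Fig. 2; the function and constant names are illustrative, not the real AoD controller.

```python
# Hypothetical sketch of AoD node composition (not the actual AoD controller).
# A service request selects components from the pool, which the 3D-MEMS
# backplane then cross-connects; groupings loosely follow Fig. 2(a) and 2(c).

BACKPLANE_RECONFIG_MS = 20                       # backplane reconfiguration time quoted above
COMPONENT_POOL = {"coupler", "awg", "plzt_switch"}

def compose_node(service: str) -> list:
    """Return the components the MEMS backplane should cross-connect for a service."""
    if service == "EoWSON":
        return ["coupler", "awg"]        # broadcast-and-select wavelength switching
    if service == "TSON":
        return ["plzt_switch"]           # fast switches to interleave time-slices
    if service == "fibre":
        return []                        # fibre switching directly on the MEMS ports
    raise ValueError(f"unknown service: {service}")

for s in ("EoWSON", "TSON", "fibre"):
    parts = compose_node(s)
    assert set(parts) <= COMPONENT_POOL          # only components from the pool are used
    print(s, parts, f"ready after ~{BACKPLANE_RECONFIG_MS} ms")
```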

Fig. 2 (a) Network off-chip for optical transport using broadcast and select, delivering EoWSON. (b) Network on-chip for building TSON and Ethernet interfaces. (c) Network off-chip for TSON operation.

3. Results

The FPGA-based network-on-chip experimental performance evaluation was carried out using a 10G data analyzer to apply different data rates to the FPGA, reaching the maximum throughput of the implemented design. Figure 3(a) shows the maximum theoretical and the achieved experimental Ethernet and sub-wavelength TSON throughput.

Fig. 3 FPGA-based TSON-Ethernet switching NoC results. (a) Throughput. (b) Ethernet/TSON latency.

TSON delivers 87.96% of the maximum measured Ethernet throughput (9.868 Gbps) for 1500 Byte Ethernet frames, and 94.38% of the maximum 64 Byte Ethernet throughput (7.69 Gbps). Figure 3(b) compares the measured ultra-low latency for Ethernet (<6.7 µs) with the very low latency for TSON (<160.6 µs). The TSON latency decreases at higher bitrates, as the data aggregation and time-slice transmission take less time.
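
As a quick reader-side sanity check (not measurement code), the reported percentages can be turned back into absolute TSON throughputs:

```python
# Sanity check of the reported throughput figures (values copied from the text).
eth_1500B = 9.868   # Gbps, max measured Ethernet throughput with 1500 B frames
eth_64B   = 7.69    # Gbps, max measured Ethernet throughput with 64 B frames

tson_1500B = 0.8796 * eth_1500B   # ~8.68 Gbps, the figure quoted in the abstract
tson_64B   = 0.9438 * eth_64B     # ~7.26 Gbps for minimum-size frames

print(round(tson_1500B, 2), round(tson_64B, 2))
```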

Figure 4 shows an intra-FPGA TSON to/from Ethernet switch-over, triggered via an Ethernet control interface. At point A the operation mode is switched from TSON to Ethernet, and at point B it is switched from Ethernet to TSON. Figure 4(a) shows the measured throughput when switching between TSON and Ethernet (input frame size 64 B). When switching from TSON to Ethernet, the bit rate increases for a short duration because the data inside the TSON aggregation FIFO are released immediately. Conversely, when switching from Ethernet to TSON, the bit rate decreases for a short duration, since it is necessary to wait at least one time-slice to aggregate the data into a burst before transmitting it.

Fig. 4 FPGA-based TSON-Ethernet NoC results. (a) Switch-over. (b) Tx/Rx frames.

The delay in switching from TSON to Ethernet can last as long as it takes to empty the transmission buffer, which can hold as much as 5 time-slices of data (8 x 1500 Byte packets, ~50 μs, given the available on-chip memory of 524 Kb and the 1 ms frame size in TSON). The switch-over from Ethernet to TSON is almost immediate, since the last Ethernet packet is released right away. Figure 4(b) shows that the number of received frames matches the number of transmitted frames, which proves the hitless feature of the design. Since the TSON design is based on aggregating Ethernet packets into an optical burst and transmitting it on pre-allocated time-slices, the allocation pattern can cause different delays of the optical bursts in the TX FIFOs. Figure 5 shows two sample time-slice allocation patterns: pattern 1 (P1) with contiguous allocation and pattern 2 (P2) with distributed slice allocation ("1" for allocated and "0" for not allocated). To reach a bitrate of 1 Gbps, P1 uses the first 11 of the available 91 time-slices (including the overheads), whilst P2 uses 11 time-slices distributed across the 91 time-slices (as shown in the first row of either pattern). The subsequent rows of each pattern show how the remaining time-slices are allocated to reach higher data rates.
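
To make the two patterns concrete, the short sketch below generates a contiguous (P1-like) and an evenly distributed (P2-like) 1 Gbps allocation over the 91 slices and checks the drain-delay figure. The exact distributed placement used in Fig. 5 may differ from this even spread, and the helper names are ours.

```python
# Sketch of the two 1 Gbps allocation styles of Fig. 5 plus a rough check of the
# ~50 us drain delay quoted above. Parameters come from the text (91 slices per
# 1 ms frame, 11 slices for ~1 Gbps); the even spread used for P2 is an assumption.

SLICES, NEEDED = 91, 11

def pattern_contiguous():
    """P1-like: the first 11 slices of every frame are allocated."""
    return [1 if i < NEEDED else 0 for i in range(SLICES)]

def pattern_distributed():
    """P2-like: 11 slices spread (roughly) evenly across the frame."""
    step = SLICES / NEEDED
    marks = {round(k * step) for k in range(NEEDED)}
    return [1 if i in marks else 0 for i in range(SLICES)]

slice_us = 1000 / SLICES     # ~11 us per slice in a 1 ms frame
print(round(5 * slice_us))   # ~55 us: 5 slices' worth of buffered bursts,
                             # consistent with the ~50 us drain delay above

print(sum(pattern_contiguous()), sum(pattern_distributed()))   # both allocate 11 slices
```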

Fig. 5 1 Gbps slice allocation patterns. (a) P1: contiguous allocation. (b) P2: distributed allocation.

Using P1 and P2, we have benchmarked the latency and jitter of TSON, which utilizes the aggregation mechanism. The effect of P1 and P2 with the minimum and maximum Ethernet frame sizes (64 and 1500 Byte) on the latency is shown in Fig. 6. As expected, P2 shows lower latency than P1 in both (a) and (b), since its available time-slices are distributed across each frame, whereas P1 only offers time-slices at the beginning of each frame (packets arriving in the middle of the frame have to wait). As the data rate increases, the buffering, aggregation and release of the packets take less and less time, so the latency of the implemented TSON design, which uses the aggregation mechanism, drops at higher bitrates.
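
The structural reason for this difference can be illustrated with a small, deliberately simplified comparison of the longest idle stretch a newly arrived packet may face under each pattern. This ignores FIFO depth and serialization and reuses the assumed even spread for P2, so it is a qualitative argument rather than a reproduction of Fig. 6.

```python
# Toy comparison (not a latency measurement): longest run of unallocated slices
# a packet may have to wait through under a contiguous (P1-like) and a
# distributed (P2-like) 1 Gbps allocation over 91 slices.

SLICES, NEEDED = 91, 11
p1 = [1 if i < NEEDED else 0 for i in range(SLICES)]            # contiguous (P1-like)
step = SLICES / NEEDED
marks = {round(k * step) for k in range(NEEDED)}
p2 = [1 if i in marks else 0 for i in range(SLICES)]            # distributed (assumed even spread)

def longest_gap(pattern):
    """Longest run of zeros, treating the frame as cyclic (wrap into the next frame)."""
    doubled = pattern + pattern
    best = run = 0
    for bit in doubled:
        run = 0 if bit else run + 1
        best = max(best, min(run, len(pattern)))
    return best

print(longest_gap(p1))   # 80 slices: a packet arriving after slice 10 waits until the next frame
print(longest_gap(p2))   # ~8 slices: another transmit opportunity is never far away
```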

Fig. 6 FPGA-based TSON latency for different time-slice allocations. (a) For 64 Byte Ethernet frames. (b) For 1500 Byte Ethernet frames.

The experimental jitter results for TSON under the different allocation patterns and frame sizes are shown in Fig. 7 and Fig. 8. Figure 7 shows, for minimum-size Ethernet frames, the percentage of frames received with jitter above 1 μs, in the intervals indicated in the legend. The bars show that, in both figures, the frames experience less jitter as the bitrate increases, due to the faster aggregation process (at >7 Gbps there are no green 33-100 μs bars in either P2 or P1, and the red bars in P2 shrink at higher bitrates). Whereas roughly 0.60% of the traffic at 1 Gbps experiences high jitter values of 33-100 μs in P2, as an impact of the more random time-slice allocation, P1 introduces such jitter values for less than 0.10% of the ingress traffic. Figure 8 shows the impact of the different patterns on the measured jitter for maximum-size Ethernet traffic; it plots the distribution of the jitter bars across the different bitrates for packets arriving with jitter values above 2 μs. It can be seen (most distinguishably at lower bit rates) that P2 (Fig. 8(b)) shows higher jitter values when its jitter bars at each bit rate are combined. Comparing Fig. 7 and Fig. 8, more than 99% of 64 Byte packets are received with <1 μs jitter, whilst about 87% of 1500 Byte packets are received with <2 μs jitter. The greater delay variation for the 1500 Byte packet size is due to the fixed-size TX FIFOs, which are limited to a maximum of 8 packets of 1500 Byte length, so it is very probable that more packets are deferred to the next time-slice for transmission, causing higher delay variations compared to smaller packets.

Fig. 7 FPGA-based TSON jitter for 64 Byte Ethernet frames with different time-slice allocation patterns. (a) Time-slice allocation pattern 1. (b) Time-slice allocation pattern 2.

Fig. 8 FPGA-based TSON jitter for 1500 Byte Ethernet frames with different time-slice allocation patterns. (a) Time-slice allocation pattern 1. (b) Time-slice allocation pattern 2.

4. Conclusions

This paper presents a novel on-chip and off-chip network design and implementation as a programmable and modular open-hardware solution for fast and flexible-bitrate switching, suitable for intra-datacenter optical networks. The proposed approach uses programmable line-cards capable of operating in 10G Ethernet or in TSON sub-wavelength mode, supported by a flexible optical switching/transport node based on AoD that performs wavelength switching (for Ethernet) or fast time-sliced wavelength switching (for TSON). Experimental results demonstrate hitless switch-over between Ethernet and TSON operation, very high throughput for TSON (8.68 Gbps, 87.96% of the maximum Ethernet throughput), ultra-low latency (<6.7 µs for Ethernet and <160.6 µs for TSON) and <2 μs jitter for 87% of the traffic. We have also studied the QoS performance of the proposed design under contiguous or distributed allocation of the time-sliced network resources for TSON sub-wavelength switching.

Acknowledgments

This work is supported by the EC through IST STREP project MAINS (INFSO-ICT-247706) and PIANO + ADDONAS, as well as EPSRC grant EP/I01196X: Transforming the Future Internet: The Photonics Hyperhighway.

References and links

1. A. Vahdat, L. Hong, Z. Xiaoxue, and C. Johnson, "The emerging optical data center," in Optical Fiber Communication Conference and Exposition and the National Fiber Optic Engineers Conference (OFC/NFOEC) (2011), pp. 1–3.

2. H. Liu, C. F. Lam, and C. Johnson, "Scaling Optical Interconnects in Datacenter Networks: Opportunities and Challenges for WDM," Google Inc., Mountain View, CA (2010). http://www.research.google.com/pubs/archive/36670.pdf.

3. L. Peng, C. Qiao, W. Tang, and C. Youn, "Cube-Based Intra-Datacenter Networks with LOBS-HC," in International Conference on Communications (ICC) (2011), pp. 1–6.

4. C. Albrecht, J. Foag, R. Koch, E. Maehle, and T. Pionteck, "DynaCORE—Dynamically Reconfigurable Coprocessor for Network Processors," in Dynamically Reconfigurable Systems (2010), pp. 335–354.

5. M. Hubner, L. Braun, D. Gohringer, and J. Becker, "Run-time reconfigurable adaptive multilayer network-on-chip for FPGA-based systems," in International Symposium on Parallel and Distributed Processing (IPDPS) (2008), pp. 1–6.

6. G. S. Zervas, J. Triay, N. Amaya, Y. Qin, C. Cervelló-Pastor, and D. Simeonidou, "Time Shared Optical Network (TSON): a novel metro architecture for flexible multi-granular services," Opt. Express 19(26), B509–B514 (2011).

7. N. Amaya, G. S. Zervas, B. R. Rofoee, M. Irfan, Y. Qin, and D. Simeonidou, "Field trial of a 1.5 Tb/s adaptive and gridless OXC supporting elastic 1000-fold bandwidth granularity," in European Conference and Exhibition on Optical Communication (ECOC) (2011), pp. 1–3.
