
Performance evaluation of time-aware enhanced software defined networking (TeSDN) for elastic data center optical interconnection

Open Access

Abstract

Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. We previously implemented enhanced software defined networking over an elastic optical network for data center applications [Opt. Express 21, 26990 (2013)]. Building on that work, this study considers time-aware data center service scheduling with elastic service time and service bandwidth according to various time sensitivity requirements. This paper proposes a novel time-aware enhanced software defined networking (TeSDN) architecture for elastic data center optical interconnection and introduces a time-aware resources scheduling (TaRS) scheme. TeSDN can accommodate data center services with the required QoS by considering the time dimensionality, and enhances cross stratum optimization of application and elastic optical network stratum resources based on spectrum elasticity, application elasticity and time elasticity. The overall feasibility and efficiency of the proposed architecture are experimentally verified on our OpenFlow-based testbed. The performance of the TaRS scheme under a heavy traffic load scenario is also quantitatively evaluated on the TeSDN architecture in terms of blocking probability and resource occupation rate.

© 2014 Optical Society of America

1. Introduction

Cloud computing and high-bitrate data center-supported services, such as video on demand, remote storage and processing, and online games, have attracted much attention from service providers and network operators due to their rapid evolution in recent years. As data center services are typically diverse in terms of required bandwidths and usage patterns, the traffic from such services is highly bursty and bandwidth-intensive. These services pose a significant challenge to data center networking, which must provide efficient interconnection with reduced latency and high bandwidth [1]. The traditional wavelength-division-multiplexing (WDM) optical transport network carries such services inefficiently because of its strictly fixed ITU-T wavelength grid and spacing. To support flexible service operation, the elastic optical network architecture has been proposed and experimentally demonstrated [2, 3]; it is enabled by orthogonal frequency division multiplexing (OFDM) technology [4] and can allocate the necessary spectral resources at a finer granularity through sub-wavelength, super-wavelength and multiple-rate data traffic accommodation tailored to a variety of user connection demands. Data center interconnection with elastic optical networks is therefore a promising scenario for allocating spectral resources to applications in a highly dynamic, tunable and efficiently controlled manner.

In addition, a large number of network-based data center applications require time-aware end-to-end quality of service (QoS) guarantees [5], with different time demands in different service categories. For instance, time-sensitive services (e.g., video conferencing) require low delay and the full requested bandwidth, while time-tolerant services (e.g., data backup) merely need to complete within a relatively loose time window, without a specified transmission duration or network bandwidth. From the operator's viewpoint, arranging the start time, duration and transport bandwidth of each service puts strong pressure on guaranteeing these various QoS levels. Given the technological heterogeneity and resource diversity, service delivery with time-aware resources scheduling is practically impossible when each stratum operates independently, i.e., without cross stratum optimization (CSO) [6], which allows global optimization and control across optical transport network and data center resources [7]. Recently, software defined networking (SDN) (e.g., the OpenFlow architecture [8]), as a centralized control architecture, has received much attention for supporting programmability of data center and network functionalities [9–11]. It provides flexibility by abstracting heterogeneous resources into unified interfaces for joint optimization [12–15]. The orchestration of elastic data center and inter-data center transport network resources using a combination of OpenStack and OpenFlow has been demonstrated [16]; it allows a data center operator to request optical lightpaths from a transport network operator to accommodate rapid changes in inter-data center workflows, and analyzes four types of typical workflows inside a data center.

Enhanced software defined networking (eSDN) over elastic optical networks for data center service migration has been reported and demonstrated in our previous works [17, 18]. Building on that work, and aiming at time-aware data center service scheduling, this paper proposes a time-aware enhanced software defined networking (TeSDN) architecture for elastic data center optical interconnection. Traditional service scheduling schemes allocate the optimal application resources from a data center server and transport the service with a specified bandwidth at its arrival time. This can waste available network and data center resources while hardly guaranteeing QoS in elastic data center optical interconnection. Different from these traditional strategies, we introduce a time-aware resources scheduling (TaRS) scheme on top of the proposed architecture, which schedules data center services with elastic service time and service bandwidth according to their various time sensitivity requirements. TeSDN can accommodate data center services with the required QoS by considering the time dimensionality, and enhances cross stratum optimization of application and elastic optical network stratum resources based on spectrum elasticity, application elasticity and time elasticity. The overall feasibility and efficiency of the proposed architecture are experimentally verified on our OpenFlow-based testbed [17]. The performance of the TaRS scheme under a heavy traffic load scenario is also quantitatively evaluated on the TeSDN architecture in terms of blocking probability and resource occupation rate.

The rest of this paper is organized as follows. Section 2 introduces the TeSDN architecture. The time-aware resources scheduling scheme under this architecture is proposed in Section 3. The interworking procedure of TeSDN with the TaRS scheme is described in Section 4. We then describe the testbed and present the numerical results and analysis in Section 5. Section 6 concludes the paper by summarizing our contribution and discussing future work in this area.

2. TeSDN architecture for elastic data center optical interconnection

The use of SDN enabled by the OpenFlow protocol has been widely studied in both packet-switched and circuit-switched networks, including optical networks employing packet, fixed and flexible grid technologies and devices [9, 13]. Our previous works [17, 18] present an enhanced SDN architecture over elastic optical networks that addresses data center service resources and users' QoS requirements along two dimensionalities, i.e., application and spectrum. Firstly, heterogeneous services are replicated over multiple data centers, so that a user request can be served from one of many potential data center application resources supporting the specified service (i.e., the anycast principle [19]); this flexible choice of application resources from data centers is called application elasticity. Secondly, from the spectrum's perspective, an elastic optical network can adjust physical-layer spectrum parameters (e.g., the modulation format) to accommodate a service according to the transport distance of its path, which enhances spectrum utilization and realizes spectrum elasticity. On top of these two dimensionalities, this paper focuses on a new one to enhance QoS and network performance: the time dimensionality. Note that this paper studies data center services of the storage migration type, which involve either the back-up or the transfer of data for future usage, to reduce access time for example [16]. For different kinds of data center services, the time-aware enhanced SDN architecture can provide an elastic start time, service time and corresponding transport bandwidth to meet various QoS requirements, thus implementing time elasticity. We investigate TeSDN to enhance the quality of service provisioning and improve application and network resources utilization, considering elasticity along the three dimensionalities of application, spectrum and time. In this section, the core idea and structure of the novel architecture are briefly presented; after that, the functional building blocks of TeSDN and the coupling relationships between them are described in detail.

2.1 TeSDN architecture for data center interconnection based on elastic optical networks

The TeSDN architecture for OpenFlow-based elastic data center optical interconnection is illustrated in Fig. 1. Elastic optical networks are used to interconnect the distributed data centers. As shown in Fig. 1, the architecture mainly consists of two stratums: the elastic optical resources stratum and the application resources stratum (e.g., CPU and memory). Each resource stratum is software defined with OpenFlow and controlled locally, in a unified manner, by a transport controller (TC) or an application controller (AC). To control the elastic optical networks for data center interconnection with the extended OpenFlow protocol (OFP), OpenFlow-enabled elastic optical nodes with OFP agent software are required; we refer to them as software defined OTNs (SD-OTNs), as proposed in [17]. The motivations for the TeSDN architecture are twofold. Firstly, TeSDN emphasizes cooperation between the AC and the TC to realize software defined paths (SDPs) with application and spectrum elasticity through global interworking of cross stratum resources. Secondly, based on the different time sensitivity requirements of services, TeSDN can schedule data center services with time elasticity to further optimize application and network resources utilization. On top of this functional architecture, a TaRS scheme is proposed in the AC to optimize application and network stratum resources by arranging the start time, transport time and corresponding transport bandwidth of each service. We implement one AC and one TC in the architecture for simplicity, owing to the limits of our experimental conditions. As network scale and service types grow, the information a single controller must maintain increases rapidly, which inevitably stresses controller performance. In fact, the architecture can be scaled up and distributed to accommodate larger networks: a separate controller can manage each domain, and this multi-controller architecture extends the scalability and flexibility of the network. We have studied this issue in our previous work [7]. We have also evaluated the practical performance of the controller in a large-scale elastic optical network testbed with 1000 virtual optical transport nodes, covering network scalability, communication bandwidth limitation and restoration time [20].

Fig. 1. The architecture of TeSDN for elastic data center optical interconnection.

2.2 Functional models of TeSDN for elastic data center optical interconnection

To realize the architecture described above, the application and transport controllers (AC and TC) have to be extended to support the TeSDN functions shown in Fig. 2. The basic responsibilities and interactions among the functional modules are as follows. The AC is responsible for maintaining application stratum resources, monitoring dynamic changes of application resources, and performing resource abstraction in the data center servers, while the TC maintains the optical network stratum information abstracted from the physical network and provisions SDPs with appropriate modulation formats in the elastic optical networks. Note that the AC manages the data center servers and their application resources through VMware software, which gathers the CPU and storage resources and configures and controls the virtual machines via an internal API in the data centers. The AC is also responsible for scheduling data center services with the TaRS scheme based on the time sensitivity of the requests, and for realizing cross stratum optimization of application and abstracted network resources by interworking with the CSO agent of the TC. In the TC, the network model abstracts the network topology through a path computation element (PCE), which can compute a network route or path on a network graph subject to computational constraints [21, 22]. The discovery and tunable spectrum control modules discover physical-layer network elements and control the tunable spectrum bandwidth and modulation format in the underlying network. When a data center service request arrives, the AC schedules its start time, service time and transport bandwidth through the TaRS module and forwards the result to the CSO module for global resources optimization. After the CSO computation, the AC selects the optimal server or virtual machine to allocate application resources to the request and determines the location of the application. According to the results, the AC transmits the application requirements to the TC through the application-transport interface (ATI). Upon receiving the service request from the AC through the ATI, the PCE module computes the SDP, and the modulation format of the service is determined and adjusted based on the SDP length; for a short SDP, precious spectrum bandwidth is economized by using a high-level modulation format. The end-to-end path is provisioned by controlling all corresponding SD-OTNs along the computed path using the extended OFP in the TC. Note that the OFP agent software embedded in each SD-OTN maintains an optical flow table, models the node information in software, and maps this content onto the physical hardware [17].

Fig. 2. The functional models of application and transport controllers.
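To make the distance-adaptive behavior concrete, the following minimal Python sketch selects a modulation format from the SDP length and derives the number of spectrum sub-carriers a demand needs. The reach thresholds, format levels and per-sub-carrier symbol rate are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical distance-adaptive modulation table: (max reach in km,
# format name, bits per symbol). Thresholds are illustrative only.
MODULATION_TABLE = [
    (600,  "16QAM", 4),   # short SDP: high-level format, fewer sub-carriers
    (1200, "8QAM",  3),
    (2400, "QPSK",  2),
    (4800, "BPSK",  1),   # long SDP: robust format, more sub-carriers
]

SUBCARRIER_GBAUD = 12.5   # assumed symbol rate per spectrum sub-carrier

def select_modulation(path_length_km: float):
    """Return the highest-level format whose reach covers the SDP length."""
    for reach_km, name, bits in MODULATION_TABLE:
        if path_length_km <= reach_km:
            return name, bits
    raise ValueError("SDP exceeds maximum reach")

def subcarriers_needed(demand_gbps: float, path_length_km: float) -> int:
    """Contiguous sub-carriers required to carry the demand on this SDP."""
    _, bits = select_modulation(path_length_km)
    return math.ceil(demand_gbps / (SUBCARRIER_GBAUD * bits))

# A short SDP economizes spectrum: the same 400 Gb/s demand needs
# 8 sub-carriers over 500 km (16QAM) but 16 over 2000 km (QPSK).
print(subcarriers_needed(400, 500), subcarriers_needed(400, 2000))
```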

3. Time-aware application and network resources scheduling

3.1 Problem statement

In the data center service (i.e., storage migration) accommodation phase in elastic optical networks, the traditional strategy allocates the optimal application resources from a data center server and transports the service on the corresponding SDP with a specified bandwidth at the arrival time. This can lead to two cases that degrade resource utilization and may even block the service request; Figs. 3(a) and 3(b) illustrate them with two instances. As shown in Fig. 3(a), assume there are two candidate destination servers (servers A and B). The storage utilization of server A is 70% and that of server B is 75% at the service arrival time tc. A short time later, the services held by the two servers have changed: some services have been released upon reaching their time deadlines and new ones have arrived, so at t1 the storage utilization of server A has risen to 95% while that of server B has dropped to 20%. Under the traditional allocation principle, server A (70% at arrival time tc) is chosen as the destination node; consequently, other requests that should be served from server A at t1 will be blocked, since the server's storage is then almost exhausted (95%). If, however, the resource assignment is deferred for a period of time (within the user's delay tolerance), server B (20% at t1) is chosen as the destination. Compared with the traditional allocation principle, the deferred scheme thus distributes resources better and uses them more efficiently. The second example concerns the service bandwidth, as shown in Fig. 3(b). Most data center services use a specified bandwidth over a fixed service time to transfer the overall data volume of the service (i.e., the product of service bandwidth and time is specified). Under the traditional fixed-bandwidth approach, the network cannot provide the service when less bandwidth is available. A feasible allocation scheme instead compresses the bandwidth to fit the available bandwidth and increases the service time, keeping the overall data volume constant. Similarly, when network resources are relatively plentiful, raising the provided bandwidth completes the service as soon as possible. The two mechanisms above provide time elasticity for data center services; based on this feature, we propose the time-aware resources scheduling scheme for the proposed architecture.

Fig. 3. Illustration of time-aware (a) data center application and (b) network resource allocation.
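The bandwidth trade-off of Fig. 3(b) keeps the overall data volume, i.e., the product of service bandwidth and service time, constant. A minimal sketch of this calculation, with all numbers illustrative:

```python
def elastic_schedule(volume_gbit, requested_gbps, available_gbps, deadline_s):
    """Fit a fixed data volume into the currently available bandwidth,
    trading bandwidth for service time so that B * t stays constant."""
    bandwidth = min(requested_gbps, available_gbps)  # compress if needed
    service_time = volume_gbit / bandwidth
    if service_time > deadline_s:
        return None  # even the compressed rate cannot meet the deadline
    return bandwidth, service_time

# 200 Gbit at the requested 40 Gb/s would take 5 s; with only 25 Gb/s
# free, the service is compressed to 25 Gb/s and stretched to 8 s
# instead of being blocked (deadline permitting).
print(elastic_schedule(200, 40, 25, deadline_s=10))   # (25, 8.0)
print(elastic_schedule(200, 40, 50, deadline_s=10))   # (40, 5.0)
```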

3.2 Time-aware resources scheduling scheme

The software defined data center interconnection with elastic optical networks is represented as a graph G(V, L, F, A), where V = {v1, v2, ..., vN} denotes the set of OpenFlow-enabled optical switching nodes, L = {l1, l2, ..., lL} the set of bi-directional fiber links between nodes in V, F = {ω1, ω2, ..., ωF} the set of spectrum sub-carriers on each fiber link, and A the set of data center servers; N, L, F and A also denote the numbers of network nodes, links, spectrum sub-carriers and data centers, respectively. To limit the complexity of the experiments, we consider that a service involves the allocation of one CPU task and the required storage space in an available server. Each service request from a source node s requires an overall data volume D, a storage space S, and carries a time requirement. According to their time sensitivities, we divide the services (i.e., storage migrations) into two categories: burst delay-sensitive and delay-tolerant services. A delay-sensitive service needs immediate accommodation, and the ith such request is described as SRi(s, D, S). A delay-tolerant service additionally carries the arrival time tc and tolerant latency T, and the ith such request is denoted SRi(s, D, S, tc, T). Request SRi+1 arrives after connection demand SRi in time order. The remaining requisite notations and their definitions are listed in Table 1.

Table 1. Notations and Definitions
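The two request categories above map directly onto a small data structure; the following sketch uses our own field names and is only one possible encoding of SRi(s, D, S) and SRi(s, D, S, tc, T).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceRequest:
    """SR_i(s, D, S) for burst delay-sensitive requests; a delay-tolerant
    request additionally carries its arrival time t_c and its tolerant
    latency T (both left as None for delay-sensitive requests)."""
    source: str                               # source node s
    data_volume: float                        # overall data volume D (Gbit)
    storage: float                            # required storage space S
    arrival_time: Optional[float] = None      # t_c (delay-tolerant only)
    tolerant_latency: Optional[float] = None  # T (delay-tolerant only)

    @property
    def delay_sensitive(self) -> bool:
        return self.tolerant_latency is None

burst = ServiceRequest("v1", data_volume=200.0, storage=50.0)
tolerant = ServiceRequest("v3", data_volume=400.0, storage=120.0,
                          arrival_time=0.0, tolerant_latency=30.0)
print(burst.delay_sensitive, tolerant.delay_sensitive)  # True False
```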

Based on the functional architecture described in Section 2, we present a novel time-aware resources scheduling (TaRS) scheme, implemented in the application controller, to realize data center service scheduling with time sensitivity requirements. For the existing SDPs and their corresponding data center destination servers, the remaining resources, i.e., the volume of idle storage space Sr, the remaining bandwidth Br and the related life time t, are recorded in the idle resource matrix IR:

$$\mathrm{IR} = \begin{bmatrix} S_{r1} & S_{r2} & \cdots & S_{ri} & \cdots & S_{rk} \\ B_{r1} & B_{r2} & \cdots & B_{ri} & \cdots & B_{rk} \\ t_{1} & t_{2} & \cdots & t_{i} & \cdots & t_{k} \end{bmatrix} \qquad (1)$$

In this matrix, each column vector Ri denotes the state of the remaining storage space Sri, bandwidth Bri and life time ti of the ith server and its related SDP. From another perspective, each row Ir of the matrix indicates the idle resources along the application, bandwidth and time dimensionality, respectively. For each incoming data center service request, we analyze its time sensitivity and classify it as either a burst delay-sensitive request SRi(s, D, S) or a delay-tolerant request SRi(s, D, S, tc, T), which contains the data volume and tolerant latency. For a delay-tolerant service, the idle resource matrix IR of existing data center servers and corresponding SDPs is searched in increasing index order for candidates with enough storage space Sri and a sufficient product of remaining bandwidth Bri and life time ti for the arriving service. If Sri ≥ S and Bri × ti ≥ D, there exist servers and corresponding SDP candidates with enough remaining application and network resources in the current network status; the candidate with the minimum cross stratum optimization factor, proposed in [17], is then chosen as the destination with its related SDP to provision the service. If there are not enough resources at the arrival time tc, the service waits for resources to become available until the maximum waiting delay tw is exceeded; tw is given by Eq. (2), which accounts for the average transport bandwidth E[Ba(τ0, i0)] and the guard time TG. Note that we use the statistics of previous bandwidth demands to estimate the average transport bandwidth for a coming request. For the ith demand occurring at time t, the bandwidth usage Bi(t) captures the traffic bandwidth volume consumed in the optical stratum. We therefore use the bandwidth expectation E[Ba(τ0, i0)] over the last i0 requests within the last time interval τ0; this expectation emphasizes the average bandwidth of the i0 connection demands recently experienced in the optical stratum, as expressed in Eq. (3). Here, fi(t) indicates the probability of occurrence of the ith request at time t, while tc and ic denote the current time and the current number of service requests, respectively. Once the waiting delay is exceeded, a new SDP is computed with cross stratum optimization [17] to accommodate the service.

$$t_w = T - \frac{D}{E[B_a(\tau_0, i_0)]} - T_G \qquad (2)$$

$$E[B_a(t,i) \mid t=\tau_0, i=i_0] = \frac{\sum_{t=t_c-\tau_0}^{t_c} \sum_{i=i_c-i_0}^{i_c} B_i(t)\, f_i(t)}{\sum_{t=t_c-\tau_0}^{t_c} \sum_{i=i_c-i_0}^{i_c} f_i(t)}, \quad t \in [t_c-\tau_0, t_c],\; i \in [i_c-i_0, i_c] \qquad (3)$$
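A direct transcription of Eqs. (2) and (3) into Python may make the bookkeeping clearer; the bandwidth history Bi(t) and occurrence probabilities fi(t) would come from the TC's monitoring statistics, so the toy inputs below are purely illustrative.

```python
def expected_bandwidth(B, f, t_c, tau0, i_c, i0):
    """Eq. (3): average transport bandwidth E[B_a(tau0, i0)] over the
    last tau0 time units and the last i0 requests; B[i][t] is B_i(t),
    f[i][t] is f_i(t)."""
    num = den = 0.0
    for t in range(t_c - tau0, t_c + 1):
        for i in range(i_c - i0, i_c + 1):
            num += B[i][t] * f[i][t]
            den += f[i][t]
    return num / den

def max_waiting_delay(T, D, e_ba, T_G):
    """Eq. (2): t_w = T - D / E[B_a] - T_G, the longest a delay-tolerant
    service can wait and still finish within its tolerant latency."""
    return T - D / e_ba - T_G

# Toy history: two requests observed at times 0..3 with equal weights.
B = {0: {t: 40 for t in range(4)}, 1: {t: 20 for t in range(4)}}
f = {0: {t: 0.5 for t in range(4)}, 1: {t: 0.5 for t in range(4)}}
e = expected_bandwidth(B, f, t_c=3, tau0=3, i_c=1, i0=1)     # 30.0
print(max_waiting_delay(T=20.0, D=300.0, e_ba=e, T_G=5.0))   # 5.0
```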

For a delay-sensitive service request, we first search the idle resource matrix IR of existing candidates for available resources using the same principle. When enough application and network resources are found, the data center server and related SDP with the minimum service transmission time among the candidates are used for provisioning. If no ready-made path is available, the TaRS scheme immediately sets up a new SDP for the service through the CSO strategy. The SDP release procedure is triggered as soon as the service time elapses, enabling a quick response for subsequent service provisioning. In summary, by analyzing the time sensitivity requirement and the current network and application resources utilization with the TaRS scheme, the application controller accommodates each service with an elastic bandwidth at an elastic service time. The flowchart of the TaRS scheme is shown in Fig. 4.

Fig. 4. The flowchart of TaRS scheme.
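Pulling the two branches together, here is a condensed sketch of the TaRS decision logic of Fig. 4, reusing ServiceRequest and max_waiting_delay from the sketches above. The idle resource entries and the cso_cost function are simplified stand-ins for the matrix IR and the cross stratum optimization factor of [17].

```python
def tars_schedule(req, idle_resources, cso_cost, now, e_ba=30.0, T_G=5.0):
    """Decide how to serve a request: reuse an existing (server, SDP)
    candidate recorded in IR, wait for resources, or set up a new SDP.
    Each entry of idle_resources carries the remaining storage 'S_r',
    remaining bandwidth 'B_r' and life time 't' of one IR column."""
    candidates = [r for r in idle_resources
                  if r["S_r"] >= req.storage
                  and r["B_r"] * r["t"] >= req.data_volume]
    if candidates:
        if req.delay_sensitive:
            # minimum service transmission time among the candidates
            return min(candidates, key=lambda r: req.data_volume / r["B_r"])
        return min(candidates, key=cso_cost)  # minimum CSO factor
    if req.delay_sensitive:
        return "SETUP_NEW_SDP"  # immediate CSO path computation
    # Delay-tolerant: wait up to t_w (e_ba and T_G are illustrative here).
    t_w = max_waiting_delay(req.tolerant_latency, req.data_volume, e_ba, T_G)
    return "WAIT" if now - req.arrival_time < t_w else "SETUP_NEW_SDP"
```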

4. Interworking procedure for TeSDN

In this section, we illustrate the interworking procedure of TeSDN based on the TaRS scheme in the proposed architecture. As shown in Fig. 5, for real-time flow monitoring, the TC periodically sends a traffic monitor request to each SD-OTN using an OFPT_FEATURES_REQUEST message and obtains the status information of each node in the corresponding OFPT_FEATURES_REPLY message. When a new data center service request arrives, the AC interworks the data center application resources and sends the service request to the TC to ask for network resources information. After the session is established, the AC captures the network resources utilization received from the TC over a period of time, and the TaRS scheme in the AC then chooses the optimal destination node with a suitable bandwidth and related transmission time according to the service's time requirements. When the TC receives the setup request returned by the AC, it calculates a path with the optimal modulation format based on the transmission distance and the optical network bandwidth information. The end-to-end SDP is set up by controlling the corresponding SD-OTNs along the computed path with OFPT_FLOW_MOD messages, and all corresponding SD-OTNs report their setup status to the TC through OFPT_PACKET_IN messages. Note that these existing OpenFlow message types are reused to simplify our implementation; new message types will be defined to support new functionalities in future work. When the TC obtains a setup success reply from the last SD-OTN, it returns the setup reply to the AC and provisions the SDP, while recording the setup time and the service duration. The SDP release procedure is triggered as soon as the service time elapses. After that, the TC sends an update message to the AC to synchronize the application usage.

Fig. 5. Interworking procedure of TeSDN for data center service.
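The exchange in Fig. 5 can be summarized compactly as an ordered trace; the UDP entries correspond to the proprietary AC-TC interface described in Section 5, and the OFPT_* entries are the reused OpenFlow message types.

```python
# One service lifetime in the TeSDN interworking procedure (Fig. 5),
# as (sender, receiver, message) tuples in time order.
INTERWORKING = [
    ("TC",     "SD-OTN", "OFPT_FEATURES_REQUEST (periodic traffic monitor)"),
    ("SD-OTN", "TC",     "OFPT_FEATURES_REPLY (node status)"),
    ("AC",     "TC",     "UDP: network resources request"),
    ("TC",     "AC",     "UDP: abstracted network resources status"),
    # ... AC runs TaRS: destination, bandwidth and service time chosen ...
    ("AC",     "TC",     "UDP: SDP setup request"),
    ("TC",     "SD-OTN", "OFPT_FLOW_MOD (setup, extended match fields)"),
    ("SD-OTN", "TC",     "OFPT_PACKET_IN (setup status report)"),
    ("TC",     "AC",     "UDP: setup reply (SDP provisioned)"),
    # ... TC times the service duration ...
    ("TC",     "SD-OTN", "OFPT_FLOW_MOD (release)"),
    ("TC",     "AC",     "UDP: application usage update"),
]

for sender, receiver, message in INTERWORKING:
    print(f"{sender:>7} -> {receiver:<7} {message}")
```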

5. Experimental setup, results and discussion

To experimentally evaluate the proposed architecture, we set up TeSDN for data center interconnection based on elastic optical networks, comprising both control and data planes, on our testbed, as shown in Fig. 6. In the data plane, four OpenFlow-enabled elastic optical nodes are built from Huawei OptiX OSN 6800 equipment, each comprising flexible ROADM and ODU boards and the corresponding tributary cards [17], which makes it possible to switch or transport signals elastically in the optical network. We developed the OFP agent software against the equipment API to control the related hardware through OFP. The data centers and the other nodes are implemented on an array of virtual machines created with VMware ESXi V5.1 running on IBM X3650 servers. Since each virtual machine has its own operating system, CPU and storage resources, it can be treated as a real node; system virtualization thus makes it easy to set up the experimental topology, which comprises 200 nodes divided into four mesh domains [6]. In the OpenFlow-based TeSDN control plane, the TC supports the proposed architecture and is deployed in three virtual machines for elastic spectrum control, PCE computation and resource abstraction, while the database management servers maintain the traffic engineering database, the management information base, and the configuration of the database and transport resources. The AC server runs the TaRS scheme with the CSO model and monitors the application resources of the data center servers. We deploy a service information generator in the AC, which can issue batches of data center services for the experiments. A front-end interface implemented with TWaver Flex provides user management and resources visualization for the data centers and the elastic optical network.

Fig. 6. Experimental testbed and demonstrator setup.

We have designed and experimentally verified SDP provisioning for data center services with time elasticity in elastic data center optical interconnection based on TeSDN; the experimental results are shown in Figs. 7 and 8. The destination data center is calculated by the AC based on the application utilizations among data centers and the current network resources, while the elastic transport bandwidth and corresponding service time of each data center service are determined by the TaRS scheme based on the time requirement and the resources status. The SDP for service accommodation is then set up from the source to the destination node, with the spectrum bandwidth and corresponding modulation format tuned according to the SDP distance. In a traditional data center optical interconnection architecture, mapping service requests to OpenFlow-based optical flows requires complex schemes to translate a service demand into transport parameters that can be scheduled in the optical network. In the proposed TeSDN architecture, considering the real application scenarios and the complexity of the experiments, the service requests are generated in the application controller for simplicity. The front-end interface of the testbed for resources visualization is partly shown in Fig. 7. It is used to verify the service bandwidth and related time elasticity of each SDP according to the various time demands, as well as the accommodation status of data center and optical network resources. As shown in Fig. 7(a), the two kinds of data center services, i.e., delay-tolerant and delay-sensitive, can be provisioned concurrently in the TeSDN architecture: three delay-sensitive services (shown in green) and two delay-tolerant services (shown in yellow) are accommodated on SDPs with different bandwidths and service times in the virtual topology view of the interface. Figure 7(b) shows the current application resource status of the data center servers and the corresponding information of the serving SDPs, including the detailed path, service bandwidth and related service time.

Fig. 7. The front-end graphical user interface of the testbed: (a) the topology tab and (b) the information tab.

Fig. 8. (a) The capture of the OpenFlow message sequence and (b) the extended flow table modification message for TeSDN.

The experimental results are further detailed in Figs. 8(a) and 8(b). Figure 8(a) shows the capture of the OpenFlow message exchange sequence for TeSDN obtained with Wireshark deployed in the TC. In the figure, 10.108.50.74 and 10.108.65.249 are the IP addresses of the AC and the TC respectively, while 10.108.50.21 and 10.108.51.22 denote the corresponding SD-OTN nodes. For simplicity, our design and implementation use a proprietary protocol based on UDP for the interaction between the AC and the TC. Through this interworking, the AC obtains the elastic optical network resources utilization and performs the TaRS scheme according to the different time demands. When the TC receives the setup request via a UDP message, it computes an end-to-end SDP for the service, and then provisions the elastic lightpath with a distance-adaptive modulation format by controlling all corresponding SD-OTNs along the calculated SDP through OFPT_FLOW_MOD messages. After the SDP setup completes, the corresponding SD-OTNs report the setup status to the TC through OFPT_PACKET_IN messages. Once the service time has elapsed, the path is released with an OFPT_FLOW_MOD message, and the TC updates the application usage with a UDP message to keep the AC synchronized. These experimental results correspond to the procedure depicted in Fig. 5. As shown in Fig. 8, the path setup latency for a flow is around 12.7 ms, while the path release latency is around 11.9 ms; the release is faster because the release procedure is triggered directly by the service timer, realizing a quick response for service provisioning. To satisfy the requirements of the TeSDN architecture in elastic data center optical interconnection, the OFPT_FLOW_MOD message is extended to carry the path setup and release commands to the elastic optical nodes. The corresponding extensions cover the OFP_Match structure and the Command field. In the OFP_Match structure, 16 bits of channel spacing, 16 bits of central frequency, 16 bits of grid, 16 bits of tributary slots, 32 bits of spectrum bandwidth and 32 bits of time slots represent the features of an SDP in elastic data center optical interconnection. Add, switch, drop, configure and release are the actions implemented by extending the Command field. Optical nodes parse the OFP_Match and Command information to control the path setup and release. Figure 8(b) shows a snapshot of the extended flow table modification message for SDP provisioning, which verifies the OFP extensions for TeSDN in elastic data center optical interconnection.
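The field widths above suggest a 16-byte extended match; a minimal packing sketch follows. The byte order, field order and command codes are our assumptions for illustration, and the value encodings (e.g., frequency units) are not specified in the paper.

```python
import struct
from enum import IntEnum

class SDPCommand(IntEnum):
    """Actions of the extended Command field (numeric codes assumed)."""
    ADD = 0
    SWITCH = 1
    DROP = 2
    CONFIGURE = 3
    RELEASE = 4

# Extended OFP_Match layout from the text: four 16-bit fields followed
# by two 32-bit fields; big-endian, as is conventional in OpenFlow.
EXT_MATCH = struct.Struct("!HHHHII")

def pack_ext_match(channel_spacing, central_freq, grid,
                   tributary_slots, spectrum_bw, time_slots):
    """Serialize the extended match fields of an OFPT_FLOW_MOD message."""
    return EXT_MATCH.pack(channel_spacing, central_freq, grid,
                          tributary_slots, spectrum_bw, time_slots)

# Placeholder integers that only demonstrate the 16-byte field layout.
payload = pack_ext_match(1, 2935, 0, 4, 50, 120)
print(len(payload), SDPCommand.RELEASE.name)  # 16 RELEASE
```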

We also evaluate the performance of TeSDN in elastic data center optical interconnection under a heavy traffic load scenario through simulations, and compare the TaRS scheme with the TA-CSO scheme proposed in [17], the traditional CSO (TCSO) scheme, and the physical layer adjustment (PLA) scheme [18]. With the TCSO scheme, the AC traditionally selects the destination data center server node based on the application status and network condition, considering global load balancing. The PLA scheme provisions the lightpath for a data center service with modulation format adjustment according to the transmission distance. The TA-CSO scheme both chooses the destination data center through optimization between the application and network stratum resources and applies the distance-adaptive modulation format with the transport parameters. In these three baseline schemes, a service request carries no time sensitivity requirement and is served with the required fixed bandwidth at its arrival time. In the TaRS scheme, for simplicity, we set i0, τ0 and TG to 5, 5 ms and 5 ms respectively, based on experimental experience. Along the computed SDP, the first fit assignment strategy searches for available spectrum sub-carrier resources from the lowest sub-carrier upward until the required number of contiguous sub-carriers is found (sketched at the end of this section). The service data volumes are chosen randomly from 50 Gbit to 400 Gbit, while the application resource usage needed in the data center is chosen randomly from 0.1% to 1% per service request. Requests arrive at the network following a Poisson process, and the results are extracted from 1 × 10^5 demands per execution.

The simulation results are shown in Figs. 9(a) and 9(b). Figure 9(a) compares the blocking probability of the four schemes. The TaRS scheme significantly reduces blocking probability compared with the TCSO, PLA and TA-CSO schemes, especially when the network is heavily loaded. The reason is that the TaRS scheme performs global optimization: it not only considers CSO of data center and optical network resources with the distance-adaptive modulation format, but on that basis also applies service time elasticity according to the delay sensitivity requirements and adjusts the service bandwidth according to the distribution of network resources. It can also be seen that the TA-CSO scheme achieves a lower blocking probability than the TCSO and PLA schemes, because it selects the data center with CSO and adjusts the spectrum bandwidth through the modulation format, which greatly increases the resources available for newly arriving demands. Figure 9(b) compares the resource occupation rate of the four schemes, where the resource occupation rate is the percentage of occupied resources out of the entire elastic optical network and data center application resources. The proposed TaRS scheme attains a remarkably higher resource occupation rate than the other schemes, mainly because more resources are occupied when the blocking probability is lower. Figure 9(b) also shows that the advantage of the TaRS scheme becomes more pronounced as the offered load increases, because a network under a higher workload urgently needs to jointly optimize optical network and application resources with an elastic service time schedule and flexibly provisioned bandwidth.

Fig. 9. (a) Blocking probability and (b) resource occupation rate among various schemes under the heavy traffic load scenario.
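The first fit spectrum assignment used in the simulations reduces to a linear scan for the lowest block of contiguous free sub-carriers; a minimal sketch, assuming a boolean occupancy map aggregated over all links of the computed SDP:

```python
def first_fit(occupied, needed):
    """Return the lowest start index of `needed` contiguous free
    sub-carriers, or None if no such block exists; occupied[i] is True
    when sub-carrier i is busy on any link along the computed SDP."""
    run_start, run_len = 0, 0
    for i, busy in enumerate(occupied):
        if busy:
            run_start, run_len = i + 1, 0
        else:
            run_len += 1
            if run_len == needed:
                return run_start
    return None

# Sub-carriers 0-1 and 4 busy: a 2-slot demand fits at index 2, while
# a 3-slot demand must start at index 5.
spectrum = [True, True, False, False, True, False, False, False]
print(first_fit(spectrum, 2), first_fit(spectrum, 3))  # 2 5
```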

6. Conclusion

To enhance network performance and meet the QoS requirements of data center services, this paper has presented a TeSDN architecture for data center interconnection with elastic optical networks. Additionally, unlike traditional service scheduling schemes, the TaRS scheme introduced for TeSDN schedules data center services with flexible service time and service bandwidth according to their various time sensitivity requirements. The functional architecture and the overall signaling procedure have been described. The feasibility and efficiency of TeSDN have been verified on our testbed, comprising both control and data planes. We have also quantitatively evaluated the performance of our approach under a heavy traffic load scenario in terms of blocking probability and resource occupation rate, comparing it with the TCSO, PLA and TA-CSO schemes. The experimental results indicate that TeSDN with the TaRS scheme can schedule services with time elasticity and effectively utilize cross stratum optical network and application resources in elastic data center optical interconnection.

Our future TeSDN work includes two aspects. One is to improve the TaRS scheme performance with dynamic parameters. The other is to investigate the relationship and interface between OpenStack orchestration and the application controller, and to implement network virtualization in data center interconnection with elastic optical networks on our OpenFlow-based testbed.

Acknowledgments

Preliminary work has been presented in Opt. Express 21(22), 26990-27002 (2013). This work has been supported in part by 863 program (2012AA011301), 973 program (2010CB328204), NSFC project (61271189, 61201154, 60932004), RFDP Project (20090005110013, 20120005120019), the Fundamental Research Funds for the Central Universities (2013RC1201), and Fund of State Key Laboratory of Information Photonics and Optical Communications (BUPT).

References and links

1. C. Kachris and I. Tomkos, "A survey on optical interconnects for data centers," IEEE Commun. Surv. Tut. 14(4), 1021–1036 (2012).

2. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, "Spectrum-efficient and scalable elastic optical path network: architecture, benefits, and enabling technologies," IEEE Commun. Mag. 47(11), 66–73 (2009).

3. O. Gerstel, M. Jinno, A. Lord, and S. J. B. Yoo, "Elastic optical networking: a new dawn for the optical layer?" IEEE Commun. Mag. 50(2), s12–s20 (2012).

4. W. Shieh, "OFDM for flexible high-speed optical networks," J. Lightwave Technol. 29(10), 1560–1577 (2011).

5. L. Velasco, A. Asensio, J. L. Berral, A. Castro, and V. López, "Towards a carrier SDN: an example for elastic inter-datacenter connectivity," Opt. Express 22(1), 55–61 (2014).

6. H. Yang, Y. Zhao, J. Zhang, S. Wang, W. Gu, Y. Lin, and Y. Lee, "Cross stratum optimization of application and network resource based on global load balancing strategy in dynamic optical networks," in Proceedings of Optical Fiber Communication Conference and National Fiber Optic Engineers Conference (OFC/NFOEC 2012) (Optical Society of America, 2012), paper JTh2A.38.

7. H. Yang, Y. Zhao, J. Zhang, S. Wang, W. Gu, Y. Ji, J. Han, Y. Lin, and Y. Lee, "Multi-stratum resource integration for OpenFlow-based data center interconnect [Invited]," J. Opt. Commun. Netw. 5(10), A240–A248 (2013).

8. L. Liu, W. R. Peng, R. Casellas, T. Tsuritani, I. Morita, R. Martínez, R. Muñoz, and S. J. B. Yoo, "Design and performance evaluation of an OpenFlow-based control plane for software-defined elastic optical networks with direct-detection optical OFDM (DDO-OFDM) transmission," Opt. Express 22(1), 30–40 (2014).

9. M. Channegowda, R. Nejabati, M. Rashidi Fard, S. Peng, N. Amaya, G. Zervas, D. Simeonidou, R. Vilalta, R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, A. Autenrieth, J. P. Elbers, P. Kostecki, and P. Kaczmarek, "Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed," Opt. Express 21(5), 5487–5498 (2013).

10. N. Amaya, S. Yan, M. Channegowda, B. R. Rofoee, Y. Shu, M. Rashidi, Y. Ou, E. Hugues-Salas, G. Zervas, R. Nejabati, D. Simeonidou, B. J. Puttnam, W. Klaus, J. Sakaguchi, T. Miyazawa, Y. Awaji, H. Harai, and N. Wada, "Software defined networking (SDN) over space division multiplexing (SDM) optical networks: features, benefits and experimental demonstration," Opt. Express 22(3), 3638–3647 (2014).

11. S. Das, G. Parulkar, and N. McKeown, "Why OpenFlow/SDN can succeed where GMPLS failed," in Proceedings of European Conference on Optical Communication (ECOC 2012) (Optical Society of America, 2012), paper Tu.1.D.1.

12. F. Paolucci, F. Cugini, N. Hussain, F. Fresi, and L. Poti, "OpenFlow-based flexible optical networks with enhanced monitoring functionalities," in Proceedings of European Conference on Optical Communication (ECOC 2012) (Optical Society of America, 2012), paper Tu.1.D.5.

13. L. Liu, R. Muñoz, R. Casellas, T. Tsuritani, R. Martínez, and I. Morita, "OpenSlice: an OpenFlow-based control plane for spectrum sliced elastic optical path networks," in Proceedings of European Conference on Optical Communication (ECOC 2012) (Optical Society of America, 2012), paper Mo.2.D.3.

14. R. Muñoz, R. Casellas, R. Martínez, and R. Vilalta, "Control plane solutions for dynamic and adaptive flexi-grid optical networks," in Proceedings of European Conference on Optical Communication (ECOC 2013) (Optical Society of America, 2013), paper We.3.E.1.

15. L. Liu, T. Tsuritani, I. Morita, H. Guo, and J. Wu, "Experimental validation and performance evaluation of OpenFlow-based wavelength path control in transparent optical networks," Opt. Express 19(27), 26578–26593 (2011).

16. T. Szyrkowiec, A. Autenrieth, P. Gunning, P. Wright, A. Lord, J. P. Elbers, and A. Lumb, "First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack and OpenFlow orchestration," Opt. Express 22(3), 2595–2602 (2014).

17. J. Zhang, H. Yang, Y. Zhao, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, "Experimental demonstration of elastic optical networks based on enhanced software defined networking (eSDN) for data center application," Opt. Express 21(22), 26990–27002 (2013).

18. J. Zhang, Y. Zhao, H. Yang, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, "First demonstration of enhanced software defined networking (eSDN) over elastic grid (eGrid) optical networks for data center service migration," in Proceedings of Optical Fiber Communication Conference and National Fiber Optic Engineers Conference (OFC/NFOEC 2013) (Optical Society of America, 2013), paper PDP5B.1.

19. J. Abley and K. Lindqvist, "Operation of anycast services," IETF RFC 4786 (2006), https://tools.ietf.org/html/rfc4786.

20. Y. Zhao, R. He, H. Chen, J. Zhang, Y. Ji, H. Zheng, Y. Lin, and X. Wang, "Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks," Opt. Express 22(8), 9538–9547 (2014).

21. A. Farrel, J. P. Vasseur, and J. Ash, "A path computation element (PCE)-based architecture," IETF RFC 4655 (2006), http://tools.ietf.org/html/rfc4655.

22. R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, and I. Morita, "An integrated stateful PCE/OpenFlow controller for the control and management of flexi-grid optical networks," in Proceedings of Optical Fiber Communication Conference and National Fiber Optic Engineers Conference (OFC/NFOEC 2013) (Optical Society of America, 2013), paper OW4G.2.
