Performance Measurements of MPLS Traffic Engineering and QoS

Tamrat Bayle <tamrat@hiroshima-u.ac.jp>
Reiji Aibara <ray@hiroshima-u.ac.jp>
Kouji Nishimura <kouji@hiroshima-u.ac.jp>
Hiroshima University
Japan

Abstract

Providing Quality of Service (QoS) and traffic engineering capabilities in the Internet is essential, especially to support the requirements of real-time and mission-critical applications. For this purpose, the current Internet must be enhanced with new technology that enables it to offer such capabilities for controlling its behavior as needed. Multiprotocol Label Switching (MPLS) is an emerging technology that plays a key role in IP networks by delivering QoS and traffic engineering features. In this paper, we first examine how MPLS improves the performance and scalability of service provider and carrier backbone networks. Since verifying its main features, such as traffic engineering and QoS functionality, and assessing their performance on an experimental network is an indispensable task, we then carry out functionality tests and performance measurements by setting up explicit label switched paths (LSPs) between ingress and egress label switching routers (LSRs) in an MPLS-capable experimental network. To ensure the provision of a specific class of service (CoS) in the MPLS network, we set up a system to mark packets at the ingress router and let them follow explicit LSPs that are associated with appropriate performance parameters for link bandwidth and output transmission queues. We also perform traffic distribution tests using explicit LSPs. The goal of this experiment is to evaluate how well MPLS traffic engineering and QoS can improve the performance of today's Internet, and to identify opportunities for improving and developing new mechanisms that ensure the provision of traffic engineering and QoS/CoS features in future networks. Our preliminary tests indicate the effectiveness of MPLS: through explicit LSPs, MPLS supports traffic engineering as well as the provision of differentiated levels of service.

Keywords: MPLS, IP, QoS/CoS, traffic engineering, explicit routing, LSP, performance measurements

1. Introduction

The overwhelming growth of the Internet and the growing popularity of real-time applications pose new challenges to the Internet community. Large Internet service providers keep growing and support an increasing variety of services, with different requirements coming both from different applications and from their customers. Service providers need commercially viable and scalable tools to make the most of their networks and to increase their revenues by supporting the needs of time- and/or mission-critical applications, because different applications have varying needs in terms of delay, delay variation (jitter), bandwidth, and packet loss.

For example, real-time applications such as voice over IP (VoIP) and video conferencing are extremely latency-sensitive; the timeliness of data delivery is of utmost importance. But in the Internet, where there is no predictable traffic control, these applications do not run effectively. Service differentiation for traffic flows and performance optimization of operational networks are critical if the Internet is to remain as successful as before. However, the Internet, and particularly its core protocol IP (Internet Protocol), was never designed with quality of service (QoS) in mind. It was originally designed as a research and educational resource, and the underlying technology that forms the backbone of today's Internet is still largely based on that philosophy. Times have changed considerably since then, and service differentiation using QoS mechanisms, combined with optimization of the operational network, has become an important issue if these applications are to run effectively and efficiently.

On the other hand, as the Internet is required to support different types of services, effective and efficient bandwidth management in IP networks becomes increasingly important, especially when deciding how to allocate the available network resources to optimize overall network performance. Yet when a network has to sustain a heavy traffic load with limited resources, having some congested links while others remain underutilized is almost inevitable. One of the main causes of such congestion in IP networks is the destination-based forwarding paradigm. Interior Gateway Protocols (IGPs), such as the Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) routing protocols, use destination-based forwarding without considering other network parameters, such as available bandwidth. In effect, all traffic between any two nodes traverses the IGP shortest path. Such a situation can create hot spots on the shortest path between two points while alternative routes remain underutilized, resulting in degraded throughput, long delays, and packet losses. In these circumstances, minimizing the effects of congestion by optimizing the performance of the operational network becomes critical. Traffic engineering plays a key role in this regard [1], in that it offers service providers a means for performance optimization and bandwidth provisioning. In fact, without traffic engineering, it is difficult to support QoS on a large scale and at reasonable cost [2].

Thus, the key to the traffic engineering problem is the ability to place traffic onto the network as flexibly as needed, so that the congestion that leads to poor network performance can be minimized; minimizing congestion by optimizing the distribution of traffic on a given network is the central goal of traffic engineering [3]. To address the QoS issue, however, the ability to introduce connection-oriented forwarding techniques into connectionless IP networks becomes necessary. This allows IP networks to reserve resources, such as bandwidth, over predetermined paths so that service differentiation can provide QoS guarantees.

It follows that providing QoS and traffic engineering capabilities in the Internet is essential, especially to support the requirements of real-time and mission-critical applications; without these capabilities, such applications lose much of their value and effectiveness. For this purpose, the current Internet must be enhanced with new technology that enables it to control its behavior as needed. Multiprotocol Label Switching (MPLS) [4] is an emerging technology that plays a key role in IP networks by delivering both QoS and traffic engineering features. MPLS has been developed and standardized by the Internet Engineering Task Force (IETF) to address these issues in a scalable and cost-effective way.

The rest of this paper is organized as follows. Following this section, we present an overview of MPLS. Traffic engineering using MPLS is briefly discussed in section 3, and the MPLS QoS feature is described in section 4; in both cases, we emphasize how MPLS improves the performance and scalability of service provider and carrier backbone networks. In section 5, after explaining the objective of our performance measurements, we briefly describe the experimental network configuration and the tools used to evaluate MPLS traffic engineering and QoS features. We then present measurement results demonstrating how MPLS traffic engineering minimizes the effects of congestion using explicit label switched paths (LSPs), and how service differentiation is applied using the Class of Service (CoS) value in the MPLS header. In the same section, we elaborate on how traffic engineering with an MPLS-enabled network reduces latency and increases throughput by minimizing congestion, offering a clear insight for building the Internet of tomorrow on MPLS. Finally, we present brief concluding remarks.

2. MPLS Background

An overview of Multiprotocol Label Switching (MPLS) is presented in [5], and its main applications are well described in [6]. MPLS is the industry-standard approach developed by the Internet Engineering Task Force (IETF) for reducing the complexity of forwarding in IP networks. In particular, it is an approach for achieving the simplified connection-oriented forwarding characteristics of layer 2 switching technologies while retaining the equally desirable flexibility and scalability of layer 3 routing [4]. By combining the best of network layer routing and link layer switching, MPLS introduces a new forwarding paradigm for IP networks and brings connection-oriented properties, similar to the traffic engineering capabilities of Asynchronous Transfer Mode (ATM), to IP networks, but in a scalable and cost-effective way. In addition, by eliminating the need for routers to perform a full address lookup for every packet, MPLS speeds up the forwarding of packets to their destinations.

MPLS uses a label-swapping forwarding paradigm known as label switching for flexible control over routing across the network. A router that participates in label switching is referred to as a label switching router (LSR). As an IP packet enters an MPLS domain, it is assigned a small fixed-length label that specifies its path and priority. Label assignment is typically based on the packet's membership in a forwarding equivalence class (FEC), a group of packets that require equivalent forwarding treatment across the same path. It is important to note that the label has only local significance at each hop, representing the next hop and the QoS requirements for packets belonging to the FEC. The path along which an MPLS packet travels is called a label-switched path (LSP). At each hop along an LSP tunnel through an MPLS domain, the packet gets a new label value that determines the outbound interface to the next hop and the packet's treatment. The LSP is thus defined by the transitions in label values as the label is swapped at each LSR, and since the mapping between labels is constant at each LSR, the LSP is in effect determined by the initial label value at the ingress LSR.



Figure 1: An MPLS example network

The label can be encoded into a packet in several ways, because MPLS is defined to be used over many layer 2 technologies [7], but typically an MPLS label stack header (also known as the MPLS shim header) consists of one or more 32-bit entries and precedes the payload, such as the original IP packet. The label itself is 20 bits wide, followed by a 3-bit EXP (experimental) field, also called the CoS field. As the name implies, the EXP field is used for experimentation, indicating the packet's treatment, such as queuing and scheduling. The encapsulation also includes a 1-bit bottom of stack (S) field: the S bit is set to 1 in the bottom entry of the stack, immediately before the original packet, and to 0 in all other entries. An LSR that pops a stack entry with S set to 1 treats the original packet using traditional IP routing. Finally, an 8-bit time to live (TTL) field is included in the shim header to assist in the detection and discarding of looping MPLS packets in LSPs. The TTL is set to a finite value at the beginning of the LSP and decremented by one at every LSR; the packet is discarded if its TTL reaches zero.
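
The bit layout just described can be made concrete with a short sketch. The following Python snippet is our own illustration, not part of the measurement tooling; it packs and unpacks one 32-bit shim entry using the field widths given above (20-bit label, 3-bit EXP, 1-bit S, 8-bit TTL):

    import struct

    def encode_shim_entry(label, exp, s, ttl):
        """Pack one 32-bit label stack entry:
        label (20 bits) | EXP (3 bits) | S (1 bit) | TTL (8 bits)."""
        assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
        word = (label << 12) | (exp << 9) | (s << 8) | ttl
        return struct.pack("!I", word)  # network byte order

    def decode_shim_entry(entry):
        (word,) = struct.unpack("!I", entry)
        return {"label": word >> 12, "exp": (word >> 9) & 0x7,
                "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

    # Example: label 13, CoS class 5, bottom of stack, initial TTL 64.
    print(decode_shim_entry(encode_shim_entry(13, 5, 1, 64)))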

The basic operation of an MPLS network is illustrated in Figure 1. Almost the same topology, including all the devices shown, is also used for our experimental analysis of MPLS QoS in section 5.4. In this network, Host A sends IP packets to destination Host C, while Host B sends IP packets to destination Host D. As IP packets enter the MPLS domain from both sources, the ingress LSR 1 determines which LSP to use for each packet and attaches a label to the outgoing packet accordingly; it is the label value assigned at the ingress router that actually determines which LSP the packet travels. LSR 1 then forwards each packet via the appropriate interface for the selected LSP. As the intermediate LSR 2 receives each packet, it decides, based on the incoming interface and label value, the outgoing interface and label value with which to forward the packet to the next hop. In this diagram, IP packets from Host A to destination Host C are mapped at the ingress router into label switched path 1 (LSP 1) by attaching label 13, and the intermediate LSR 2 forwards each such packet to LSR 3 after swapping label 13 for the new label 65. At the same time, packets from Host B to destination Host D are mapped into label switched path 2 (LSP 2) by attaching label 22. As these packets traverse LSP 2, LSR 2 examines the label in the MPLS header of each packet, looks up its MPLS forwarding table to match the incoming label 22, and swaps it for the outgoing label 16 before forwarding to the egress LSR 3. Finally, at the egress LSR 3, all labels are removed, and the packets are forwarded to their respective destinations using traditional IP forwarding.
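
To make the figure's forwarding behavior concrete, here is a minimal Python sketch of the label operations described above. The tables are transcribed by hand from Figure 1 (labels 13 and 65 for LSP 1, labels 22 and 16 for LSP 2); they are not taken from any router:

    # Ingress table at LSR 1: destination -> (initial label, LSP).
    INGRESS = {"Host C": (13, "LSP 1"), "Host D": (22, "LSP 2")}

    # Swap table at the intermediate LSR 2: incoming label -> outgoing label.
    SWAP_AT_LSR2 = {13: 65, 22: 16}

    def forward(dest):
        """Trace one packet: label imposition at LSR 1, swap at LSR 2,
        and pop at the egress LSR 3 (back to plain IP forwarding)."""
        label, lsp = INGRESS[dest]       # LSR 1 pushes the initial label
        label = SWAP_AT_LSR2[label]      # LSR 2 swaps; no IP lookup needed
        return f"{lsp}: delivered to {dest} with last label {label}, popped at LSR 3"

    print(forward("Host C"))  # LSP 1: label 13 -> 65
    print(forward("Host D"))  # LSP 2: label 22 -> 16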

Among its many benefits, MPLS enables service providers and carriers to efficiently offer traffic engineering and traffic management functions in IP networks. Thus, MPLS is key to enabling service provider and carrier backbone networks to simplify traffic engineering and traffic management in future IP networks. We discuss each of these briefly in the next two sections.

3. Traffic Engineering with MPLS

The challenge of traffic engineering is to make the most effective use of the available bandwidth in large IP backbone networks, that is, to place traffic over them so as to achieve the best possible performance even in the presence of congestion. MPLS addresses the traffic engineering issue by setting up explicit paths through the network using constraint-based routing. The requirements for traffic engineering over MPLS are specified in [8].

MPLS traffic engineering dynamically establishes and maintains an LSP tunnel across the MPLS domain using signaling protocols. The two signaling mechanisms used for distributing labels across an MPLS domain in the context of traffic engineering and QoS are the constraint-based routing label distribution protocol (CR-LDP) [9] and the resource reservation protocol with traffic engineering extensions (RSVP-TE) [10]. Explicit routing, or constraint-based routing, is particularly interesting for traffic engineering purposes.

The label distribution protocols (both CR-LDP and RSVP-TE) determine the path along which the LSP tunnel is established, based on its resource requirements and the available network resources, such as bandwidth. They then distribute the label binding information along the predefined route. At the ingress, the LSR assigns labels to packets as they enter the network, binding labels to packets based on FEC membership. This feature of MPLS allows scalable aggregation of flows with common requirements into a single FEC. It also makes it straightforward for MPLS traffic engineering to route traffic flows across an LSP tunnel by associating the resources required by a given FEC (or LSP) with the actual backbone capacity and topology.
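
The path computation step can be sketched as a constrained shortest-path search: prune every link whose available bandwidth is below the LSP's requirement, then run an ordinary shortest-path algorithm on what remains. The Python sketch below illustrates the idea on the three-router topology of Figure 2; the metrics and bandwidth figures are invented for illustration and do not describe any particular router implementation:

    from heapq import heappush, heappop

    # Topology as adjacency lists: node -> [(neighbor, IGP metric, available Mbps)].
    # The bandwidth figures here are hypothetical.
    TOPOLOGY = {
        "LSR1": [("LSR3", 1, 20), ("LSR2", 1, 80)],
        "LSR2": [("LSR3", 1, 80)],
        "LSR3": [],
    }

    def cspf(topology, src, dst, required_bw):
        """Constraint-based SPF: prune links that cannot satisfy the
        bandwidth constraint, then run Dijkstra on what remains."""
        heap, visited = [(0, src, [src])], set()
        while heap:
            cost, node, path = heappop(heap)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nbr, metric, bw in topology[node]:
                if bw >= required_bw and nbr not in visited:
                    heappush(heap, (cost + metric, nbr, path + [nbr]))
        return None

    # The direct LSR1-LSR3 link lacks headroom for a 50 Mbps LSP, so the
    # computed path detours via LSR2.
    print(cspf(TOPOLOGY, "LSR1", "LSR3", 50))  # (2, ['LSR1', 'LSR2', 'LSR3'])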



Figure 2: Traffic engineering using explicit LSPs

To illustrate how explicit LSPs are used to solve the traffic engineering problem, consider the example shown in Figure 2, where LSR 1 through LSR 3 are label switching routers and Host A through Host D are traffic sources and sinks. Assume 100 Mbps PVC connections among all routers, and further assume that the traffic from Host A to Host C is 100 Mbps and the traffic from Host B to Host D is also 100 Mbps.

With the MPLS explicit routing capability, the traffic from Host B to Host D can be assigned to LSP 2, which traverses the path LSR 1-LSR 2-LSR 3, while the traffic from Host A to Host C travels across LSP 1, which consists of the path LSR 1-LSR 3. As a result, the traffic flows between the hosts on one side (here, Host A and Host B) and the hosts on the other side (Host C and Host D) can be distributed over the network according to their demands, such as bandwidth guarantees, simply by establishing different LSPs. This makes it possible to attain efficient utilization of bandwidth as well as a significant performance gain, as the back-of-the-envelope calculation below illustrates; the whole idea is elaborated in the experimental analysis.
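
The following small calculation, using only the numbers assumed above, shows why this helps. Under destination-based IGP forwarding, both 100 Mbps flows would share the direct 100 Mbps LSR 1-LSR 3 link, offering it twice its capacity, while the LSR 1-LSR 2-LSR 3 path sits idle:

    CAPACITY = 100  # Mbps per PVC, as assumed above
    flows = {"Host A -> Host C": 100, "Host B -> Host D": 100}

    # IGP shortest path: both flows pile onto the direct LSR 1-LSR 3 link.
    offered = sum(flows.values())
    print(f"Shortest path only: {offered} Mbps offered on a {CAPACITY} Mbps link "
          f"({offered / CAPACITY:.0%} load)")  # 200% load -> congestion and loss

    # Explicit LSPs: LSP 1 on LSR 1-LSR 3, LSP 2 via LSR 2; each link carries
    # exactly one 100 Mbps flow, so neither path is oversubscribed.
    print("With explicit LSPs: each flow gets its own 100 Mbps path")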

4. MPLS CoS/QoS

The issue to overcome here is the need for QoS capability in the Internet, which arises from the fact that the current Internet offers only best-effort packet delivery: all packets are treated equally, regardless of an application's need for some level of resource assurance. This is mainly because IP, on which the Internet is based, was not designed as a connection-oriented protocol. Instead, IP provides a connectionless, or datagram, service, so no end-to-end guarantees of packet delivery can be given. To provide QoS guarantees in the Internet, all packets of a flow must traverse the same path, and some means of reserving resources along that path must exist. At a minimum, this gives the Internet the ability to differentiate between high-priority and lower-priority traffic in order to ensure guaranteed quality of service.

MPLS provides a robust QoS control feature for the Internet. In addition, the MPLS class of service feature can work in conjunction with the other QoS architectures defined for IP networks by the IETF: Integrated Services (IntServ) [11] using the Resource ReSerVation Protocol (RSVP) [12], and Differentiated Services (DiffServ) [13]. Combining MPLS with DiffServ is particularly interesting, as it provides the required levels of end-to-end QoS management in a scalable way; MPLS support for DiffServ is defined in [14]. In fact, both MPLS and DiffServ are real enhancements to IP networks, but they impose no requirements on each other, so they can also work independently.

The QoS feature of MPLS represents the capability to provide differentiated levels of service and resource assurance across an MPLS network. This capability typically includes a set of techniques for managing network bandwidth, delay, jitter, and packet loss. For example, the ability to mark packets with a certain priority, combined with buffer management and queuing schemes, ensures that voice traffic remains within acceptable bounds for packet loss, delay, and jitter. So, when a packet arrives at an LSR in an MPLS network, the label is used to determine the outbound interface and the new outbound label, while the CoS (EXP) field value is used to determine the packet's treatment, such as queuing and scheduling.

With MPLS QoS, there are two approaches to marking traffic for QoS control within an MPLS network; that is, when IP traffic enters an LSP tunnel, the CoS bits in the MPLS header are set in one of two ways. In the first, queuing information is encoded directly into the experimental (EXP) field of the MPLS shim header. Since the EXP field allows eight different CoS markings, the marking serves as the packet's CoS value; different packets can receive different markings depending on their requirements, and thus different treatments along the path. This approach is referred to as EXP-bit-inferred label switched paths (E-LSPs), to indicate that the QoS information is inferred from the EXP field.

In the second method, the label associated with an MPLS packet specifies how the packet should be treated, and all packets entering the LSP are marked with a fixed CoS value; that is, all packets entering the LSP receive the same class of service. This approach is known as label-inferred label switched paths (L-LSPs), to indicate that the QoS information is inferred from the MPLS label.
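
The difference between the two marking approaches can be summarized in a few lines of Python. This is an illustrative sketch only; the label and class values are hypothetical. With an E-LSP, one label carries packets of several classes and the EXP field varies per packet; with an L-LSP, the class is bound to the label, so each class gets its own LSP:

    def mark_e_lsp(label, packet_class):
        """E-LSP: one LSP for all classes; treatment inferred from EXP."""
        return (label, packet_class)  # (label, EXP) carried on the wire

    def mark_l_lsp(label_for_class, packet_class):
        """L-LSP: treatment inferred from the label; one LSP per class."""
        return (label_for_class[packet_class], packet_class)

    # E-LSP: classes 5 and 1 share label 13, differing only in their EXP bits.
    print(mark_e_lsp(13, 5), mark_e_lsp(13, 1))
    # L-LSP: class 5 is bound to label 13, class 1 to a separate label 22.
    print(mark_l_lsp({5: 13, 1: 22}, 5), mark_l_lsp({5: 13, 1: 22}, 1))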

Either way, MPLS offers an effective way to allocate network resources to traffic according to its requirements, at different levels of granularity. Since MPLS also allows dedicated paths to be set up and bandwidth to be reserved along them, achieving QoS guarantees becomes much easier. Using an explicit LSP in particular, MPLS provides precise QoS controls for delivering enhanced IP-based services.

5. Measurements of MPLS Traffic Engineering and QoS

This section presents the results of our MPLS traffic engineering and QoS measurements. A series of tests was run to evaluate the performance of TCP and UDP flows on an MPLS network; the tests also measured the effects of different MPLS feature settings on the performance of the traffic flows.

The goal of this experiment is to evaluate how well MPLS traffic engineering and QoS can improve the performance of today's Internet, and to identify opportunities for improving and developing new mechanisms that ensure the provision of traffic engineering and QoS/CoS features in future networks. To that end, we investigate the performance behavior of an MPLS network from both the traffic engineering and the service differentiation perspectives.

Through this experimentation, we verify how MPLS traffic engineering can optimize the use of the available bandwidth and minimize the effects of network congestion, and how the MPLS QoS feature can provide some level of resource assurance. More specifically, we verify how traffic is best mapped onto an explicit LSP in order to improve the performance of IP networks.

5.1 Experimental Network Configuration

The experimental topology we use to evaluate traffic engineering performance is shown in Figure 3; the devices employed are described below. The network topology shown here is the same as that of Figure 2 above, which we used to discuss how MPLS addresses the traffic engineering issue. Note that the diagram shows only the part of the actual experimental network relevant to our analysis; other devices are omitted.


Figure 3: Experimental network configuration for traffic engineering

All host computers (CPU: Intel Pentium II 300 MHz, RAM: 128 MB) are equipped with Fast Ethernet network interface cards (NICs) and run the FreeBSD 4.1 operating system. The three routers (referred to as LSRs from now on) are Juniper Networks M40 routers running JUNOS Internet Software 4.2, which supports the Juniper Networks MPLS implementation. The three LSRs are interconnected with OC-12 ATM links, although we use 80 Mbps ATM permanent virtual circuit (PVC) connections for our experiments. These ATM PVCs are treated as point-to-point links between the label switching routers by both MPLS and RSVP-TE. Hosts are connected to the MPLS domain over 100Base-T via Gigabit Ethernet switches (not shown in the diagram). The physical distance between LSR 1 and LSR 3, and between LSR 2 and LSR 3, is about 40 km, while LSR 1 and LSR 2 are 5 km apart.

Netperf version 2.1 [15] was used as the test tool in all measurements. Netperf measures throughput at both the TCP and UDP levels, so the measurements can model many real-life Internet applications, since such applications use the TCP and UDP protocols.
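
For readers unfamiliar with the tool, the essence of Netperf's TCP bulk-transfer (TCP_STREAM) test is to push fixed-size messages over a connection for a fixed time and report the achieved rate. The Python sketch below reproduces that idea over the loopback interface; it is our own simplification, not Netperf's implementation, and the port, message size, and duration are arbitrary choices:

    import socket
    import threading
    import time

    PORT, MESSAGE_SIZE, DURATION = 5001, 32 * 1024, 5.0  # arbitrary parameters

    def sink():
        """Receiver side: drain bytes until the sender closes (netserver's role)."""
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while conn.recv(65536):
                    pass

    def tcp_stream():
        """Sender side: push fixed-size messages for DURATION seconds, report Mbps."""
        threading.Thread(target=sink, daemon=True).start()
        time.sleep(0.2)  # give the listener time to start
        payload = b"x" * MESSAGE_SIZE
        sent, start = 0, time.time()
        with socket.create_connection(("127.0.0.1", PORT)) as s:
            while time.time() - start < DURATION:
                s.sendall(payload)
                sent += MESSAGE_SIZE
        elapsed = time.time() - start
        print(f"{sent * 8 / elapsed / 1e6:.1f} Mbps with {MESSAGE_SIZE}-byte messages")

    tcp_stream()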

5.2 Experiment Using MPLS Explicit LSPs

In this experiment, we attempt to minimize the effects of network congestion on network performance by utilizing the MPLS traffic engineering capability. We do so by applying explicit routing, in such a way that an explicit LSP is predetermined through the network, taking full advantage of the available network resources. MPLS together with the RSVP-TE signaling protocol offers this feature in our experimental network.

The first part of the experimental study involves performance measurements of TCP bulk data transfer between two PC hosts running the Netperf benchmarking tool, one as a client and the other as a server; that is, a TCP bulk stream is transmitted between Host A and Host C for the throughput measurement. Host B and Host D, on the other hand, are used only to introduce a reasonable level of congestion, with Netperf as the TCP bulk stream generator.

In the first scenario, we establish two explicit LSPs between LSR 1 and LSR 3, both directly following the IGP shortest path. In the second scenario, we again set up two explicit LSPs among the three LSRs, but this time IP packets from Host A to destination Host C are made to traverse label switched path 1 (LSP 1), routed via LSR 2, while packets from Host B to destination Host D are mapped onto label switched path 2 (LSP 2).

For the throughput test, we first use a Netperf TCP stream to measure the performance of the TCP flow from Host A to destination Host C after moderately congesting the shared link with another TCP bulk stream running between Host B and Host D, as in Figure 3. We then divert the flow from Host A to Host C onto an explicit LSP passing through LSR 2.

To show the effect of traffic distribution using MPLS traffic engineering on network performance, we also measure the throughput of the TCP flows from Host A to Host C and from Host B to Host D simultaneously, as they traverse LSP 1 and LSP 2, respectively. The results of the throughput measurements, along with a brief analysis, are presented in the next section.

5.3 Measurement Results and Analysis

Figure 4 shows the Netperf throughput measurements of the TCP flow between Host A and Host C as a function of message size, both over an MPLS explicit path and over the IGP shortest path. The bottom line shows the throughput when the traffic flow traverses the IGP shortest path while the other flow, from Host B to Host D, follows the same IGP path. The top line shows the throughput when the same traffic flow traverses the MPLS explicit path (LSP 1, routed via LSR 2). As can be seen clearly from Figure 4, when the traffic from Host A to Host C traverses the semi-congested link on the IGP shortest path, TCP throughput is low.


Figure 4: Throughput of TCP flow from Host A to Host C


Figure 5: Throughput of TCP for both flows

However, after diverting the traffic onto an alternate path using MPLS explicit routing, we observe a significant throughput improvement over the IGP shortest path. This is the result of using the explicit LSP that passes through LSR 2. Note that although this explicit LSP has one additional hop (LSR 2), it still provides much better performance than the traditional shortest-path IGP route, which is contended for by the other flow.

In addition to improving performance for the traffic flow between Host A and Host C, forcing the traffic onto an alternate path frees bandwidth for the other flow on the formerly congested link, as can be observed in Figure 5. Common to all results is that the throughput improves with increasing message size and, beyond a certain message size, remains fairly constant, as expected.

We also measure the average round-trip time (RTT) for both TCP and UDP streams using Netperf's request/response test. The performance metric here is the aggregate number of request/response packet pairs (transactions) per second, where one transaction is the exchange of a single request and a single response. From this result, we can infer the average round-trip latency incurred by each message. Figure 6 and Figure 7 below show the TCP and UDP average round-trip latencies, respectively. The round-trip latency increases dramatically on the congested IGP path, while it is minimal for packets traversing the MPLS explicit LSPs.
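
Since Netperf reports transactions per second and each transaction is exactly one request/response round trip, the conversion to average latency is just a reciprocal. A one-line helper makes this explicit (the numbers in the example are illustrative, not our measured values):

    def rtt_ms(transactions_per_sec):
        """One transaction = one round trip, so the mean RTT is 1 / rate."""
        return 1000.0 / transactions_per_sec

    print(rtt_ms(400.0))  # 400 transactions/s correspond to a 2.5 ms mean RTT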



Figure 6: TCP average round-trip latency


Figure 7: UDP average round-trip latency

In all measurement cases, we observe very significant performance improvements for traffic that follows the non-congested explicit LSPs. Without MPLS traffic engineering, all unlabeled packets take the shortest path while other links remain underutilized, and traffic traversing the congested IGP route incurs long delays and throughput degradation. This in turn shows how effectively MPLS can maximize the utilization of links and nodes throughout the network. The results of the delay analysis, in particular, strongly support our approach to the traffic engineering problem with MPLS.

It is thus clear that our experimental analysis demonstrates both the significant impact the traffic engineering problem has on network performance and how MPLS enables robust traffic engineering using explicit routing.

5.4 Experiment Using MPLS CoS/QoS

Figure 8 shows the configuration we use to measure MPLS CoS performance. In this configuration, each traffic flow is mapped onto an MPLS label switched path (LSP) that extends across the MPLS domain. In addition, each LSP is configured with a certain reserved bandwidth across the MPLS network, as well as with a different CoS value. This allows us to provide guaranteed bandwidth and different levels of service for the two flows.

The configuration is set up specifically to apply MPLS service differentiation along the same path. Since MPLS allows a bandwidth reservation to be specified for an LSP, and packets can be marked to indicate their loss priority, we first establish two explicit label switched paths, LSP 1 and LSP 2, with bandwidth reservations. In this particular test, we reserve 70% of the available 80 Mbps bandwidth (56 Mbps) for LSP 1, while the remaining 30% (24 Mbps) is reserved for LSP 2. The result of this test, shown in Figure 9, demonstrates how effectively one can reserve resources in advance and ensure guaranteed bandwidth.



Figure 8:  Network configuration for MPLS CoS test

On the other hand, since verifying the MPLS CoS (class of service) feature was one of our objectives, we then performed a series of further tests using the same network setup of Figure 8 to verify how the two flows benefit from this feature according to the CoS values they are assigned. Since we use the experimental (EXP) field of the MPLS header, there are eight possible classes (0-7) to assign. The class value typically determines which output transmission queue to use, what percentage of the queue buffer to use, what percentage of the link bandwidth to allocate, and which packet loss priority to apply in the presence of congestion. The CoS value is thus an important parameter affecting both the throughput and the latency of the two flows: a traffic flow with a higher priority class receives better treatment than one with a lower priority class.
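
The way a class value turns into forwarding behavior can be sketched as a classifier plus a weighted scheduler. The toy Python below is our illustration only: the 7:3 weights echo the 70%/30% reservation above, the class-to-queue mapping is hypothetical, and real router schedulers are byte-accurate rather than packet-counting:

    from collections import deque

    # Hypothetical policy: EXP class -> (queue, weight out of 10, loss priority).
    COS_POLICY = {5: ("high", 7, "keep"), 1: ("low", 3, "drop-first")}
    queues = {"high": deque(), "low": deque()}

    def classify(exp, packet):
        """Place a labeled packet on the transmission queue its class selects."""
        queues[COS_POLICY[exp][0]].append(packet)

    def serve_one_round():
        """Each queue may send up to its weight in packets per round,
        approximating a 70%/30% split of the link bandwidth."""
        sent = []
        for name, weight, _ in COS_POLICY.values():
            for _ in range(weight):
                if queues[name]:
                    sent.append(queues[name].popleft())
        return sent

    for i in range(10):
        classify(5, f"hi{i}")
        classify(1, f"lo{i}")
    print(serve_one_round())  # seven high-class packets, then three low-class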

The test involves configuring the ingress router LSR 1 so that it classifies the incoming flows by destination address and maps them onto LSP 1 and LSP 2 as they enter the MPLS domain. The two LSPs are configured with different CoS values so that the traffic is placed into the corresponding transmission priority queues. Since the CoS value is encoded in the MPLS header and remains in each packet until the header is removed at the egress router, all LSRs along a given LSP use the same CoS value set at the ingress router and treat the packets accordingly.



Figure 9: Bandwidth reservation over LSPs



Figure 10: Latency of TCP flows with different CoS values

Thus, to ensure the provision of a specific CoS in the MPLS network, we set up a system to mark packets at the ingress router and let them follow explicit LSPs that are associated with appropriate performance parameters for link bandwidth and output transmission queues.

In our configuration of Figure 8, each LSP is associated with a single CoS value, so all packets entering a given LSP receive the same level of treatment. The configuration also specifies what percentage of the total available bandwidth serves each queue, and sets separate drop profiles for the two flows via the CoS (EXP) field. That is, when a packet arrives at LSR 1, the EXP field is set to the appropriate value and is then used to determine the queuing and scheduling treatment at the downstream LSRs (LSR 2 and LSR 3).

The average round-trip latencies for the two TCP flows are shown in Figure 10. The traffic through LSP 1 (i.e., traffic from Host A to Host C) was offered a higher service level and delivered with low latency, while the traffic through LSP 2 (i.e., traffic from Host B to Host D) was assigned a lower service level and delivered with high latency. In other words, the flow identified as the higher priority class was allocated the priority and bandwidth it needed to run effectively, while the flow identified as the lower priority class was allocated less bandwidth and fewer network resources, and thus ran at lower performance.

The results also make it clear that service differentiation using the MPLS CoS value has a significant effect on application performance, and the effect is even more pronounced when the network is saturated (i.e., during congestion). In any case, the flow with the higher priority class always receives better treatment than the flow with the lower priority class.

This shows that LSRs in an MPLS network can effectively prioritize packets based on their classes and give appropriate treatment to time-critical traffic, such as VoIP and video streaming, which is extremely latency-dependent.

We note that all the empirical results presented here demonstrate the effectiveness of MPLS traffic engineering and QoS in achieving high performance in IP networks.

6. Conclusions

Providing Quality of Service (QoS) and traffic engineering capabilities in the Internet is essential, especially to support the requirements of real-time and mission-critical applications. For this purpose, the current Internet must be enhanced with new technologies, such as MPLS, that enable it to control its behavior as needed.

This paper first briefly discusses the core issues of traffic engineering and QoS, and presents how MPLS traffic engineering improves the performance and scalability of service providers' and carriers' IP backbone networks. MPLS also offers an effective way to allocate network resources to traffic according to its requirements, at different levels of granularity. Since MPLS allows dedicated paths to be set up and bandwidth to be reserved along them, achieving QoS guarantees becomes much easier; using explicit LSPs in particular, MPLS provides precise QoS controls for delivering enhanced IP-based services.

We investigated the performance behavior of an MPLS network from both the traffic engineering and the service differentiation perspectives. Through this experimentation, we verified how MPLS traffic engineering can optimize the use of the available bandwidth and minimize the effects of network congestion, and how the MPLS QoS feature can provide some level of resource assurance. More specifically, we verified how traffic is best mapped onto an explicit LSP in order to improve the performance of IP networks.

Beyond that, the results of our experimental study support our arguments about the effectiveness of MPLS traffic engineering. We therefore believe that MPLS will play a key role in future service provider and carrier IP backbone networks, and that its use in IP backbones will facilitate the development of several new services, such as support for real-time applications in the Internet. Further investigation of MPLS traffic engineering and QoS is still underway.

Acknowledgments

The authors would like to thank the Hiroshima Prefecture Government for placing the Hiroshima Information Triangle Network (HITN) research environment at our disposal; the experimental results presented in this paper are based on this testbed. This work was supported in part by the Research for the Future Program of the Japan Society for the Promotion of Science under the project "Integrated Network Architecture for Advanced Multimedia Application Systems" (JSPS-RFTF97R16301), and by the Internet Technology Research Committee (ITRC).

References

  1. G. Swallow, "MPLS Advantages for Traffic Engineering," IEEE Communications Magazine, December 1999.
  2. Z. Wang, "Internet QoS: Architectures and Mechanisms for Quality of Service," Morgan Kaufmann, 2001.
  3. D. Awduche, "MPLS and Traffic Engineering in IP Networks," IEEE Communications Magazine, December 1999.
  4. E. Rosen, A. Viswanathan, and R. Callon, "Multiprotocol Label Switching Architecture," RFC 3031, January 2001.
  5. R. Callon et al., "A Framework for Multiprotocol Label Switching," work in progress, September 1999.
  6. B. Davie and Y. Rekhter, "MPLS: Technology and Applications," Morgan Kaufmann, 2000.
  7. E. Rosen et al., "MPLS Label Stack Encoding," RFC 3032, January 2001.
  8. D. Awduche et al., "Requirements for Traffic Engineering over MPLS," RFC 2702, September 1999.
  9. B. Jamoussi et al., "Constraint-Based LSP Setup using LDP," work in progress, draft-ietf-mpls-cr-ldp-05.txt, February 2001.
  10. D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels," work in progress, draft-ietf-mpls-rsvp-lsp-tunnel-08.txt, February 2001.
  11. R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture: an Overview," RFC 1633, July 1994.
  12. R. Braden et al., "Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification," RFC 2205, September 1997.
  13. S. Blake et al., "An Architecture for Differentiated Services," RFC 2475, December 1998.
  14. F. Le Faucheur et al., "MPLS Support of Differentiated Services," work in progress, draft-ietf-mpls-diff-ext-08.txt, February 2001.
  15. R. Jones, "Netperf," Hewlett-Packard, http://www.netperf.org/netperf/NetperfPage.html, February 1995.