The Evolution of Quality of Service: Where Are We Headed?
By Paul Ferguson <firstname.lastname@example.org> and Geoff Huston <email@example.com>
Portions of text in this paper have been extracted from Quality of Service: Delivering QoS on the Internet and in Corporate Networks, by Paul Ferguson and Geoff Huston, published by John Wiley & Sons, January 1998, ISBN 0-471-24358-2.
It is hard to dismiss the entrepreneurial nature of the Internet today. This is no longer a research project. For most organizations connected to the Internet, it is a full-fledged business activity. Having said that, it is equally hard to dismiss the poor service quality that is frequently experienced. Rapid growth of the Internet, and increasing levels of traffic, make it difficult for Internet users to enjoy consistent and predictable end-to-end levels of service quality.
The Metrics of Service Quality
What causes poor service quality within the Internet? The glib and rather uninformative response is localized instances of substandard network engineering that cannot carry high traffic loads.
Perhaps the more appropriate question is, What are the components of service quality and how can they be measured? Service quality in the Internet can be expressed as the combination of network-imposed delay, jitter, bandwidth, and reliability. When we refer to differentiated service quality, we are referring to the differentiation of one or more of those basic quality metrics for a particular category of traffic.
Delay. Delay is the elapsed time for a packet to be passed from the sender, through the network, to the receiver. The higher the delay, the greater the stress that is placed on the transport protocol to operate efficiently. For the transmission-control protocol (TCP), higher levels of delay imply greater amounts of data held in transit in the network, which in turn places stress on the timers associated with the protocol. It should also be noted that TCP is a network-clocking protocol; the sender's transmission rate is dynamically adjusted to the flow of signal information coming back from the receiver via the reverse-direction acknowledgments (ACKs), which notify the sender of successful reception. The greater the delay between sender and receiver, the less responsive this feedback loop becomes, and therefore the less responsive the protocol is to short-term dynamic changes in network load. For UDP (user-datagram-protocol)-based applications that are not network clocked, delay is also relevant to service quality: for interactive voice and video applications, the introduction of delay causes the system to appear unresponsive.
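The relationship between delay and the amount of data TCP must hold in transit can be made concrete with the bandwidth-delay product. A minimal sketch in Python; the link rate and round-trip times are hypothetical:

```python
# Illustrative only: the data TCP must hold "in flight" grows with the
# round-trip delay, which is why high delay stresses the sender's timers
# and feedback loop. The rate and delays below are hypothetical.

def data_in_flight(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in transit to keep the pipe full."""
    return rate_bps * rtt_s / 8  # bits -> bytes

# A 1.5-Mbps path across a 250-ms round trip versus a 500-ms round trip:
print(data_in_flight(1_500_000, 0.25))  # 46875.0 bytes in flight
print(data_in_flight(1_500_000, 0.5))   # 93750.0 bytes in flight
```

Doubling the round-trip time doubles the volume of unacknowledged data the sender must manage, and stretches the feedback loop accordingly.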
Jitter. Jitter is the variation in end-to-end transit delay. High levels of jitter cause the TCP to make conservative estimates of round-trip time, causing the TCP to operate inefficiently when it reverts to time-outs to reestablish a data flow. High levels of jitter in UDP-based applications are unacceptable in situations where the application is real-time based, such as an audio or video signal. In such cases, jitter causes the signal to be distorted, which can be rectified only by increasing the receiver's reassembly playback queue, which in turn increases the delay of the signal, making interactive sessions cumbersome to maintain.
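One common way to quantify jitter is a running estimate of the variation between successive transit delays, in the style of the RTP (RFC 3550) interarrival-jitter formula. A small Python sketch; the delay samples are hypothetical:

```python
def smoothed_jitter(transit_delays_ms):
    """Running jitter estimate from successive one-way transit delays,
    in the style of the RTP (RFC 3550) interarrival-jitter formula:
    each new delay difference moves the estimate by 1/16 of the gap."""
    j = 0.0
    prev = None
    for d in transit_delays_ms:
        if prev is not None:
            j += (abs(d - prev) - j) / 16.0
        prev = d
    return j

# A steady path versus a bursty one (delays in milliseconds, hypothetical):
print(smoothed_jitter([50, 50, 51, 50, 50]))    # small -> smooth playback
print(smoothed_jitter([50, 120, 40, 200, 45]))  # large -> deep playback queue
```

The larger the estimate, the deeper the receiver's reassembly playback queue must be, and hence the greater the delay imposed on the signal.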
Bandwidth. Bandwidth is the maximal data transfer rate that can be sustained between two end points. It should be noted that this is limited not only by the physical infrastructure of the traffic path within the transit networks, which provides an upper bound to available bandwidth, but also by the number of other flows that share common components of this selected end-to-end path.
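The effect of other flows sharing components of the end-to-end path is often modelled as a max-min fair division of a bottleneck link. A simplified single-link sketch; the capacity and per-flow demands are hypothetical:

```python
def max_min_fair(capacity, demands):
    """Max-min fair share of one bottleneck link among competing flows:
    satisfy the smallest demands first, then split what remains evenly
    among the flows that still want more."""
    alloc = {}
    remaining = capacity
    for flow, demand in sorted(demands.items(), key=lambda kv: kv[1]):
        share = remaining / (len(demands) - len(alloc))  # even split of leftover
        alloc[flow] = min(demand, share)
        remaining -= alloc[flow]
    return alloc

# 10 Mbps shared by three flows demanding 2, 6, and 8 Mbps (hypothetical):
print(max_min_fair(10, {"a": 2, "b": 6, "c": 8}))
# {'a': 2, 'b': 4.0, 'c': 4.0}
```

The physical path bounds what any one flow can obtain; the number of sharing flows determines how far below that bound each flow actually lands.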
Reliability. Reliability commonly refers to a property of the transmission system; in this context it can be thought of as the average error rate of the medium. Reliability can also be a by-product of the switching system. A poorly configured or poorly performing switching system can alter the order of packets in transit, delivering packets to the receiver in a different order than that of the original transmission by the sender, or even drop packets through transient routing loops. Unreliable or error-prone network transit paths also cause retransmission of the lost packets. TCP cannot distinguish between loss due to packet corruption and loss due to congestion: packet loss invokes the same congestion-avoidance response from the sender, reducing the sender's transmission rate even though the network may have experienced no congestion at all. In the case of UDP-based voice and video applications, unreliability causes induced distortion in the original analog signal at the receiver's end.
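TCP's inability to distinguish corruption loss from congestion loss can be illustrated with a toy additive-increase/multiplicative-decrease window, in which any loss halves the sending window regardless of its cause. A deliberately simplified sketch (real TCP has slow start, time-outs, and other machinery omitted here):

```python
def aimd_window(events, init_cwnd=1):
    """Toy additive-increase/multiplicative-decrease congestion window.
    Each event is 'ack' (grow) or 'loss' (halve). The sender reacts the
    same way whether the loss came from congestion or from corruption."""
    cwnd = init_cwnd
    for ev in events:
        if ev == "loss":
            cwnd = max(1, cwnd // 2)  # multiplicative decrease on any loss
        else:
            cwnd += 1                 # (simplified) additive increase per ACK
    return cwnd

print(aimd_window(["ack"] * 10))             # 11: window grows steadily
print(aimd_window(["ack"] * 10 + ["loss"]))  # 5: one loss halves it
```

A single corrupted packet on an otherwise idle path triggers the same rate cut as genuine congestion, which is why error-prone media degrade TCP throughput so sharply.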
How Is Service Quality Implemented within the Internet?
The Internet is composed of a collection of routers and transmission links. Routers receive an incoming packet, determine the next hop interface, and place the packet on the output queue for the selected interface. Transmission links have characteristics of delay, bandwidth, and reliability. Poor service quality is encountered when the level of traffic selecting a particular hop exceeds the transmission bandwidth of the hop for an extended period of time. In such cases, the routers output queues associated with the saturated transmission hop begin to fill, causing additional transit delay (increased jitter and delay), until the queue is filled. The router is then forced to discard packets, which reduces reliability. This in turn forces adaptive flows to reduce their sending rate to minimize congestion loss, which reduces the available bandwidth for the application.
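The queue behavior described above can be sketched as a minimal tail-drop output queue: packets accumulate behind a saturated link (adding delay and jitter) until the buffer fills, after which arrivals are discarded (reducing reliability). A toy model; the buffer limit is chosen arbitrarily:

```python
from collections import deque

class TailDropQueue:
    """Minimal router output-queue model: packets queue up behind a
    saturated link and are dropped ("tail drop") once the buffer is full."""
    def __init__(self, limit):
        self.q = deque()
        self.limit = limit
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.limit:
            self.drops += 1   # queue full: packet discarded, reliability suffers
            return False
        self.q.append(pkt)    # queued: transit delay grows with queue depth
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

# Five arrivals into a three-packet buffer with no departures:
q = TailDropQueue(limit=3)
for n in range(5):
    q.enqueue(n)
print(len(q.q), q.drops)  # 3 2  (three queued, two dropped)
```

Adaptive flows interpret those drops as congestion signals and back off, which is the self-regulating behavior the paragraph describes.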
When we refer to the quality of a service, we are looking at those four metrics as the base parameters of quality, and it should be noted that a variety of network events can affect their values. In taking a uniform best-effort network service environment and introducing structures that allow some form of service differentiation, the tools that allow such service environments to be constructed are configurations within the network's routers designed to act on one or more of those basic quality metrics.
The art of implementing an effective quality-of-service (QoS) environment is to use those tools in a way that can construct robust, differentiated service environments.
Service Quality versus Quality of Service
Service quality can be defined as the delivery of consistently predictable service, including high network reliability, low delay, low jitter, and high availability. QoS, on the other hand, is a method of providing preferential treatment for some arbitrary amount of network traffic, as opposed to all traffic being treated as best effort. Providing such preferential treatment is an attempt to increase the quality level of one or more of the basic metrics for this particular category of traffic.
There are several tools available to provide that differentiation, ranging from preferential queuing disciplines to bandwidth reservation protocols and from ATM (asynchronous-transfer-mode)-layer congestion and bandwidth allocation mechanisms to traffic shaping. Each may be appropriate depending on what problem is being solved. We do not see QoS as being concerned primarily with attempting to deliver guaranteed levels of service to individual traffic flows within the Internet. While such network mechanisms may have a place within smaller network environments, the sheer size of today's Internet effectively precludes any QoS approach that attempts to reliably segment the network on a flow-by-flow basis. The major technology force that has driven the explosive growth of the Internet as a communications medium is the use of stateless switching systems that provide variable best-effort service levels for intelligent peripheral devices. Recent experience has indicated that this approach has extraordinary scaling properties. Stateless switching architectures can scale easily to gigabits per second while preserving full functionality, and the unit cost of stateless switching has decreased at a rate close to the basic scaling rate.
We also suggest that if a network cannot provide a reasonable level of service quality, then attempting to provide some method of differentiated QoS on the same infrastructure is virtually impossible. This is where traditional engineering, design, and network architecture principles play significant roles.
There are several mechanisms and architectural implementations that can provide differentiation for traffic in the network. We classify those mechanisms into three basic groups, which align with the lower three layers of the OSI reference model: the Physical, Link, and Network layers.
The physical layer -- also referred to as L1, or layer 1 -- consists of the physical wiring, the fiber optics, and the transmission media in the network itself. It is reasonable to ask how layer-1 physical media figure within the QoS framework, but the time-honored practice of constructing diverse physical paths in a network is, perhaps ironically, a primitive method of providing differentiated service levels. In some cases, diverse paths are constructed primarily so that network-layer routing can provide redundant availability should the primary physical path fail for some reason. Having more than one physical path to a destination can, in theory, also allow some arbitrary amount of network traffic to take the primary low-delay, high-bandwidth path while the balance of the traffic takes a backup path that may have different delay and bandwidth properties. Such a configuration, however, reduces reliability and increases jitter within the network unless the routing profile has been carefully constructed to stabilize the traffic segmentation between the two paths.
Some believe that traffic service differentiation can be provided with specific link-layer mechanisms -- also referred to as layer 2, or L2. Traditionally, this belief in differentiation has been associated with asynchronous transfer mode and frame relay in the wide area network (WAN) and with ATM in the local area network (LAN) campus.
ATM. ATM is one of the few transmission technologies that provides data-transport speeds in excess of 155 Mbps. It also provides a complex subset of traffic-management mechanisms, virtual circuit (VC) establishment controls, and various associated QoS parameters for those VCs. The predominant use of ATM in today's Internet networks is the result of the high data-clocking rate and multiplexing flexibility available with ATM implementations. There are few other transmission technologies that provide such a high-speed bit-rate clock.
Frame relay. Frame relay's origins lie in the development of ISDN (integrated-services digital network) technology, where frame relay originally was seen as a packet-service technology for ISDN networks. The frame relay rationale reflects the perceived need for efficient relaying of HDLC (high-level data-link control)-framed data across ISDN networks. With the removal of data-link-layer error detection, retransmission, and flow control, frame relay opted for end-to-end signaling at the transport layer of the protocol stack to undertake those functions. This allows the network switches to forward data-link frames without waiting for positive acknowledgment from the next switch, which in turn allows the switches to operate with less memory and to drive faster circuits with the reduced switching functionality that frame relay requires.
Frame relay is certainly a good example of what is possible with relatively sparse signaling capability. However, the match between frame relay as a link-layer protocol and QoS mechanisms for the Internet is not a particularly good one.
Frame relay networks operate within a locally defined context, using selective frame discard to enforce rate limits on traffic as it enters the network; discard is also the network's primary response to congestion. Frames are selected for discard without regard to any hints provided by the higher-layer protocols. The end-to-end TCP protocol uses packet loss as the primary signaling mechanism to indicate network congestion, but that signal is recognized only by the TCP session originator. The result is that when the network starts to reach a state of congestion, the manner in which end-system applications are degraded matches no particular imposed policy. In the current environment, frame relay offers no great advantage over any other link-layer technology.
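Frame relay's ingress rate enforcement can be sketched as marking traffic in excess of a committed information rate (CIR) as discard-eligible, with no reference to the TCP sessions riding above it. A simplified model; the CIR, measurement interval, and frame sizes are all hypothetical:

```python
def police_frames(frames_bytes, cir_bps, interval_s):
    """Mark frames above the committed information rate as discard-eligible,
    loosely modelling frame relay's selective-discard policing. Note that
    the choice is made purely on byte counts: the switch has no knowledge
    of which higher-layer flow each frame belongs to."""
    budget = cir_bps * interval_s / 8   # committed bytes per interval
    marked = []
    used = 0
    for size in frames_bytes:
        used += size
        marked.append(used > budget)    # True -> discard-eligible (DE)
    return marked

# Hypothetical numbers: three 4000-byte frames against a 64-kbps CIR
# measured over one second (an 8000-byte budget):
print(police_frames([4000, 4000, 4000], 64_000, 1.0))
# [False, False, True]
```

Because the marking ignores higher-layer hints, the frames discarded under congestion bear no relationship to any policy about which applications should degrade first.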
The seminal observation regarding the interaction of QoS mechanisms within various levels of the model of the protocol stack is that without coherence between the link-layer-transport signaling structures and the higher-level protocol stack, the result -- in terms of consistency of service quality -- is completely chaotic.
Network- and Transport-Layer Mechanics
Within the global Internet, it is undeniable that the common bearer service is the TCP/IP protocol suite; IP is therefore the common denominator. (The TCP/IP protocol suite is commonly referred to simply as IP, which has become the networking vernacular used to describe IP as well as ICMP, TCP, and UDP.) This thought process has several supporting lines of reasoning. The common denominator is chosen in the hope of using the most pervasive and ubiquitous protocol in the network, whether it be layer 2 or layer 3 (the network layer). Using the most pervasive protocol makes implementation, management, and troubleshooting much easier and yields a greater possibility of successfully providing a QoS implementation that actually works.
It is also the case that this particular technology operates in an end-to-end fashion, using a signaling mechanism that spans the entire traversal of the network in a consistent fashion. In most cases, IP is the end-to-end transportation service. Although it is possible to create QoS services in substrate layers of the protocol stack, such services cover only part of the end-to-end data path. Such partial measures often have their effects masked by signal distortion arising in the remainder of the end-to-end path, where they are not present, or they introduce other signal distortion effects of their own; as mentioned previously, the overall outcome of a partial QoS structure is generally ineffectual.
When the end-to-end path does not consist of a single pervasive data-link layer, any effort to provide differentiation within a particular link-layer technology most likely will not provide the desired result. This is the case for several reasons. In the Internet, an IP packet may traverse any number of heterogeneous link-layer paths, each of which may -- or may not -- possess characteristics that inherently provide methods to enable traffic differentiation. However, the packet also inevitably traverses links that cannot provide any type of differentiated services at the link layer, rendering an effort to provide QoS solely at the link layer an inadequate solution.
The Internet today carries three basic categories of traffic, and any QoS environment must recognize and adjust itself to these three basic categories. The first category is long-held adaptive reliable traffic flows, in which the end points alter the end-to-end flow rate in response to network behavior, with each flow attempting to obtain a fair share of the available resources on the end-to-end path; this rate adaptation performs optimally for long-held TCP traffic flows. The second category of traffic is a boundary case of the first -- short-duration reliable transactions -- where the flows are of very short duration and rate adaptation is not established within the lifetime of the flow, so that the flow sits entirely within the start-up phase of TCP's adaptive flow control. The third category of traffic is an externally controlled, unidirectional traffic flow, typically the result of compressing a real-time audio or video signal: the peak flow rate may equal the basic source signal rate, the average flow rate is a by-product of the level of signal compression used, and the transport mechanism is an unreliable UDP unicast flow.
Within most Internet networks today, empirical evidence indicates that the first category of traffic accounts for less than 1 percent of all packets; because the data packets are typically large, this category accounts for some 20 percent of the volume of data. The second category of traffic is most commonly generated by World Wide Web servers using the HTTP/1.0 application protocol. This traffic accounts for roughly 60 percent of all packets and a comparable relative level of volume of data carried. The third category accounts for some 10 percent of all packets; as the average packet size is less than one-third of that of the first two flow types, it currently accounts for some 5 percent of the total data volume.
In order to provide elevated service quality for those three common traffic flow types, three different engineering approaches must be used. Efficient carriage of long-held, high-volume TCP flows requires the network to offer consistent signaling to the sender regarding the onset of congestion loss within the network. Efficient carriage of short-duration TCP traffic requires the network to avoid sending advance congestion signals to the flow end points: given that those flows are of short duration and low transfer rate, any such signaling will not achieve any appreciable load shedding. Instead, it will substantially increase the elapsed time that the flow is held active, which results in poorly delivered service without any appreciable change in the relative allocation of network resources to service clients. Efficient carriage of externally clocked UDP traffic requires the network to be able to, at a minimum, segment the queue management of such traffic from adaptive TCP traffic flows, and possibly to replace adaptation with advance notification and negotiation. Such a notification and negotiation model could allow the source to specify its traffic profile in advance and have the network either respond with a commitment to carry such a load or indicate that it does not have the resources available to meet such an additional commitment.
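The consistent congestion signaling needed for long-held TCP flows is commonly approached with random-early-detection-style queue management, in which drop probability rises with the average queue length rather than waiting for the buffer to overflow. A minimal sketch; the thresholds and maximum probability are illustrative values, not recommendations:

```python
import random

def red_drop_probability(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    """Random-early-detection-style drop probability: zero below min_th,
    rising linearly to max_p at max_th, and certain drop beyond that.
    Early drops give long-held TCP flows a gentle, consistent signal
    to back off before the queue actually overflows."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def should_drop(avg_qlen, rng=random.random):
    return rng() < red_drop_probability(avg_qlen)

print(red_drop_probability(4))    # 0.0  -> no early signal yet
print(red_drop_probability(10))   # 0.05 -> occasional early signal
print(red_drop_probability(20))   # 1.0  -> forced drop
```

Note the tension described above: this early signaling helps long-held flows but penalizes short-duration transactions, so a QoS network would apply it selectively rather than uniformly.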
As a consequence, it should be noted that no single transport or network-layer mechanism will provide the capabilities for differentiated services for all flow types and that a QoS network will deploy a number of mechanisms to meet the broad range of customer requirements in this area.
Working within a Common Denominator
A number of dichotomies exist within the Internet that tend to dominate efforts to engineer possible solutions to the quality-of-service requirement. Thus far, QoS has been viewed as a wide-ranging solution set against a very broad problem area, which can often be considered a liability. Ongoing efforts to provide perfect solutions show that attempts to solve all possible problems result in technologies that are far too complex, have poor scaling properties, or simply do not integrate well into the diversity of the Internet. By the same token, close examination of the issues and of the technologies available reveals some very clever mechanisms. Determining the usefulness of those mechanisms, however, is perhaps the most challenging aspect in assessing the merit of any particular QoS approach.
Within the Internet it becomes an issue of implementing QoS within the most common denominator -- clearly the TCP/IP protocol suite -- because a single link-layer medium will never be used pervasively end to end across all possible paths. What about the suggestion that it is certainly possible to construct a smaller network of a pervasive link-layer technology, such as ATM? Although this is certainly possible in smaller private networks and perhaps in smaller peripheral networks in the Internet, it is rarely the case that all end systems are ATM attached, and this does not appear to be a likely outcome in the coming years. In terms of implementing visibly differentiated services based on a quality metric, using ATM only on parts of the end-to-end path is not a viable answer: the ATM subpath is not aware of the complete network-layer path, and it does not participate in the network- or transport-layer end-to-end signaling.
The simplistic answer to this conundrum is to dispense with TCP/IP and run native cell-based applications from ATM-attached end systems. This is certainly not a realistic approach in the Internet, though, and chances are that it is not realistic in a smaller corporate network, either: very little application support exists for native ATM. In theory, the same could have been said of the frame relay transport technologies of the recent past. In general, link-layer technologies are akin to viewing the world through a plumber's eyes. Every communications issue is seen in terms of point-to-point bit pipes. Each wave of transport technology attempts to add more features to the shape of the pipe, but the underlying architecture is a constant perception of the communications world as a set of one-on-one conversations, with each conversation supported by a form of singular communications channel.
One of the more enduring aspects of the communications industry is that there is no such thing as a ubiquitous single link-layer technology. Hence, there is an enduring need for an internetworking end-to-end transport technology that can straddle a heterogeneous link-layer substrate. Equally, there is a need for an internetworking technology that can allow differing models of communications, including fragmentary transfer, unidirectional data movement, multicast traffic, and adaptive data flow management.
For QoS to be functional, it may be necessary for all nodes in a given path to behave in a similar fashion with respect to QoS parameters or, at the very least, not to impose additional QoS penalties beyond those of conventional best effort into the end-to-end traffic environment. The sender -- or network ingress point -- must be able to create some form of signal associated with the data that can be used by downstream routers to potentially modify their default outbound interface selection, queuing behavior, and/or discard behavior.
The insidious issue here is attempting to exert control at a distance. The objective in this methodology is for an end system to generate a packet that can trigger a differentiated handling of the packet by each node in the traffic path, so that the end-to-end behavior exhibits performance levels in line with the end user's expectations and perhaps even a contracted fee structure.
This control-at-a-distance model can take the form of a guarantee between the user and the network. This guarantee would be possible if the ingress traffic conforms to a certain profile, the egress traffic maintains that profile state, and the network does not distort the desired characteristics of the end-to-end traffic expected by the requester. To provide such absolute guarantees, the network must maintain a transitive state along a determined path where the first router commits resources to honor the traffic profile and passes this commitment along to a neighboring router that is closer to the nominated destination and also capable of committing to honor the same traffic profile. This type of state maintenance is viable within small-scale networks, but in the heart of large-scale public networks such as the global Internet, the cost of state maintenance is overwhelming.
The alternative to state maintenance and resource-reservation schemes is the use of mechanisms for preferential allocation of resources, which creates varying levels of best effort. In the absence of end-to-end guarantees for traffic flows, the requirement for absolute state maintenance disappears, so that better-than-best-effort traffic with classes of distinction can be constructed inside larger networks. Currently, the most promising direction for such better-than-best-effort systems appears to lie in modifying the network-layer queuing and discard algorithms. These mechanisms rely on an attribute value within the IP (Internet protocol) packet's header, so that queuing and discard preferences can be applied at each intermediate node. The Internet service provider's routers must be configured to handle packets based on their IP precedence level or on similar semantics expressed by the bit values defined in the IP packet header. There are three aspects to this. First, the IP precedence field can determine the queuing behavior of the router, both in queuing the packet to the forwarding process and in queuing the packet to the output interface. Second, the IP precedence field can bias the packet discard processes by selecting the lowest-precedence packets to discard first. Third, any priority scheme used at layer 2 can be mapped to a particular IP precedence value.
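The precedence-based queuing and discard behavior just described can be sketched as a toy queue that serves higher-precedence packets first and, when the buffer is full, discards from the lowest precedence class. This is an illustrative model only, not any particular router's implementation, and the linear scans are kept for clarity rather than speed:

```python
class PrecedenceQueue:
    """Toy output queue that serves higher IP-precedence packets first
    and, when the buffer is full, discards the oldest lowest-precedence
    packet (or refuses the arrival if it is itself the lowest)."""
    def __init__(self, limit):
        self.pkts = []   # list of (precedence, pkt); FIFO within a class
        self.limit = limit

    def enqueue(self, precedence, pkt):
        if len(self.pkts) >= self.limit:
            # find the oldest lowest-precedence packet already queued
            victim = min(range(len(self.pkts)), key=lambda i: self.pkts[i][0])
            if self.pkts[victim][0] >= precedence:
                return False          # arrival is lowest precedence: drop it
            del self.pkts[victim]     # discard lowest precedence first
        self.pkts.append((precedence, pkt))
        return True

    def dequeue(self):
        if not self.pkts:
            return None
        # serve the oldest packet of the highest precedence class
        best = max(range(len(self.pkts)), key=lambda i: self.pkts[i][0])
        return self.pkts.pop(best)[1]

# A full two-packet buffer of best-effort traffic yields to a
# higher-precedence arrival (precedence values are hypothetical):
q = PrecedenceQueue(limit=2)
q.enqueue(0, "best-effort-1")
q.enqueue(0, "best-effort-2")
q.enqueue(5, "interactive")   # buffer full: oldest best-effort packet dropped
print(q.dequeue())            # interactive
```

This captures the first two aspects above (precedence-biased queuing and precedence-biased discard); the third aspect would simply set the precedence argument from a layer-2 priority field at the network edge.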
In conclusion, QoS is possible in the Internet, but it does come at a price of compromise. There are no perfect solutions. In fact, one might suggest that expectations have not been appropriately managed, since guarantees are simply not possible in the Internet -- at least not for the foreseeable future. What is possible, however, is delivering differentiated levels of best-effort traffic in a manner that is predictable and fairly consistent and that provides the ability to offer discriminated service levels for different customers and for different applications.