Performance Comparison of TCP Implementations in QoS Provisioning Networks

Hiroyuki KOGA <koga@cse.kyutech.ac.jp>
Kyushu Institute of Technology
Japan

Yoshiaki HORI <hori@kyushu-id.ac.jp>
Kyushu Institute of Design
Japan

Yuji OIE <oie@cse.kyutech.ac.jp>
Kyushu Institute of Technology
Japan

Abstract

Over the future Internet, real-time communication generating traffic such as constant bit rate (CBR) streams will spread widely, whereas the current Internet cannot yet provide quality of service (QoS) assurance for real-time communication. In QoS networks, CBR traffic will be given priority over non-real-time traffic such as TCP connections because of its stringent QoS requirements, and TCP connections will use the bandwidth left unused by CBR connections. Prioritized CBR traffic may therefore degrade TCP throughput in QoS networks. The performance of Tahoe TCP has already been examined in this context, but other TCP variants such as Reno TCP, NewReno TCP, and TCP with the SACK option, which are now very common, have not yet been investigated. In the present research, we clarify by means of simulations how these TCP variants behave in QoS networks and compare their performance. The results show that SACK TCP adapts very well to the changing available bandwidth and is very robust against fluctuation, i.e., burstiness, of the CBR packet arrival process.

1. Introduction

In the future Internet, real-time communication generating traffic such as constant bit rate (CBR) streams, which will be carried by UDP datagrams, will spread widely. However, the current Internet mainly supports non-real-time communication and so far cannot provide quality of service (QoS) assurance for real-time communication. The performance of CBR traffic sharing a link with Transmission Control Protocol (TCP) connections is very likely to degrade because TCP connections execute flow control to utilize as much available bandwidth as possible [1]. Therefore, in QoS networks, CBR traffic should be given priority over non-real-time traffic such as TCP connections because of its stringent QoS requirements, and TCP connections will use the bandwidth left unused by CBR traffic.

In QoS networks, a link is shared by traffic of several classes, each of which requires a different quality and is provided with a dedicated buffer at each node. Class-Based Queueing (CBQ) [2] was proposed as a packet scheduling method that avoids starvation of low-priority traffic such as TCP. We use CBQ as the QoS scheduler in this paper.
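To make the scheduling idea concrete, the following minimal Python sketch shows a two-class priority scheduler with per-class buffers in the spirit of the CBQ gateway used here. It is not the CBQ algorithm of [2] nor the NS-2 implementation used in our experiments; the class names and the 200-packet buffer match the simulation model of Section 2, and everything else is illustrative.

    from collections import deque

    class TwoClassScheduler:
        """Sketch of a two-class priority scheduler: the CBR class is always
        served first, and the TCP class uses whatever link capacity is left.
        Each class has its own finite buffer, as at the CBQ Gateway of Figure 1."""

        def __init__(self, buf_pkts=200):
            self.queues = {"cbr": deque(), "tcp": deque()}
            self.buf_pkts = buf_pkts              # per-class buffer (packets)
            self.drops = {"cbr": 0, "tcp": 0}

        def enqueue(self, cls, pkt):
            q = self.queues[cls]
            if len(q) >= self.buf_pkts:           # tail drop when the class buffer is full
                self.drops[cls] += 1
                return False
            q.append(pkt)
            return True

        def dequeue(self):
            # Strict priority: serve CBR first; TCP gets the leftover capacity.
            for cls in ("cbr", "tcp"):
                if self.queues[cls]:
                    return cls, self.queues[cls].popleft()
            return None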

In this context, the amount of link bandwidth available for TCP connections will change with variations in the amount of CBR traffic. If new CBR connections join a link, the bandwidth available for TCP connections will abruptly and drastically decrease, thereby causing multiple packet losses on TCP connections and, in turn, timeouts. Consequently, this can lead to throughput degradation of TCP connections. The performance of Tahoe TCP has already been examined [2]. However, the performance of other TCP variants such as Reno TCP, NewReno TCP, and TCP with the SACK option has not been studied yet. Reno TCP is the variant most widely deployed on current Internet hosts, and NewReno TCP [3] has been proposed to solve Reno's problem that multiple packet losses within one window cause a timeout and throughput degradation. SACK TCP [4][5] employs a more efficient retransmission mechanism, i.e., Selective-Repeat ARQ (Automatic Repeat reQuest), in contrast to the Go-Back-N ARQ used by Reno TCP and NewReno TCP.
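The practical difference between the two ARQ styles can be seen from the small Python sketch below. The sequence numbers are hypothetical and the functions reflect the simplified Go-Back-N versus Selective-Repeat view used in this paper, not the full TCP recovery state machines.

    def go_back_n_retransmit(window, lost):
        # Go-Back-N view: once the first loss is detected, everything from
        # that point in the window is resent, including packets that arrived.
        first_lost = min(lost)
        return [seq for seq in window if seq >= first_lost]

    def selective_repeat_retransmit(window, lost):
        # Selective-Repeat view: SACK blocks tell the sender exactly which
        # packets are missing, so only those are resent.
        return [seq for seq in window if seq in lost]

    # Hypothetical example: an 8-packet window with packets 3 and 6 lost.
    window = list(range(1, 9))
    lost = {3, 6}
    print(go_back_n_retransmit(window, lost))         # [3, 4, 5, 6, 7, 8]
    print(selective_repeat_retransmit(window, lost))  # [3, 6]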

Our major purpose is to investigate whether these TCP variants work well even in QoS networks, where the available bandwidth changes drastically from time to time. In other words, we are interested in whether some or all of them can adapt their window flow control mechanism in response to the changing available bandwidth so as to use it efficiently. To this end, in the present paper we clarify by means of simulations how these TCP variants behave in QoS networks and compare their performance.

2. Simulation model


Figure 1: Simulation Model

In our simulation, we employ CBQ as the QoS scheduler. The link is shared by 5 CBR connections and 20 TCP connections, as shown in Figure 1; the CBR connections have priority. The UCB/LBNL/VINT Network Simulator NS Version 2 [6] is used for this research.

In the figure, Node 1 is a CBQ Gateway connected to Node 2 through the bottleneck link, which has a bandwidth of 100 Mbps and a length of 1000 km, resulting in a 5 msec propagation delay. Of the link bandwidth, 30 Mbps is allocated to CBR traffic and 70 Mbps to TCP traffic. The CBQ Gateway is equipped with a buffer of 200 packets for each traffic class, and packet loss can occur at this gateway.

Hosts c1-c5 transmit CBR traffic to c6-c10, and t1-t20 transmit TCP traffic to t21-t40. Hosts c1-c5 are connected to Node 1 by links of 6 Mbps with a 5 msec propagation delay, as are the links connecting c6-c10 to Node 2; t1-t40 are connected to the nodes through 100 Mbps links. Among these, the links between Node 2 and each of t21-t40 all have a 5 msec propagation delay. When the propagation delay of all TCP connections is the same, their window flow control is very likely to become synchronized [1], which is very different from an actual environment. The propagation delay of the links connecting t1-t20 therefore varies from 3 to 7 msec in order to prevent such synchronization.

In this simulation, each TCP connection is assumed to carry a file transfer, while each CBR stream is generated by a 6 Mbps MPEG video stream. The TCP variants employed here are 4.3BSD-Reno, NewReno, and SACK, as mentioned above. Unless otherwise noted, CBR traffic is assumed to have a constant packet interval as a result of shaping with a hardware shaper.

The packet size is set to 500 bytes for both CBR and TCP. Each simulation experiment runs for 20 seconds.
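For reference, the parameters above and the quantities derived from them can be summarized as follows. This is a plain Python summary for the reader's convenience, not the NS-2 script actually used in the experiments.

    # Simulation parameters from Section 2 (summary only, not the NS-2 script).
    BOTTLENECK_BW    = 100e6    # bps, Node 1 -> Node 2
    BOTTLENECK_DELAY = 5e-3     # s, 1000 km link
    CBQ_ALLOC   = {"cbr": 30e6, "tcp": 70e6}   # CBQ bandwidth allocation, bps
    BUFFER_PKTS = 200           # per-class buffer at the CBQ Gateway
    PKT_SIZE    = 500 * 8       # bits, both CBR and TCP packets
    N_CBR, N_TCP = 5, 20
    CBR_RATE    = 6e6           # bps per CBR (MPEG video) stream
    SIM_TIME    = 20.0          # s

    # Derived quantities referred to later in the paper.
    cbr_pkt_interval = PKT_SIZE / CBR_RATE   # ~0.67 ms between CBR packets
    peak_cbr_load    = N_CBR * CBR_RATE      # 30 Mbps when all CBR streams are active
    print(f"CBR packet interval: {cbr_pkt_interval * 1e3:.2f} ms")
    print(f"Peak CBR load: {peak_cbr_load / 1e6:.0f} Mbps")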

3. Simulation results

In this section, we show the characteristics of TCP performance in QoS networks by means of simulation results. In particular, several TCP variants, which are or will soon be very common, are examined, and the differences among them are discussed.

Before investigating TCP performance in QoS networks, we give results for networks accommodating only TCP connections, for later comparison.

3.1. TCP performance in networks without CBR traffic

We examine the total TCP throughput over a link used only by TCP traffic. The resulting performance can be considered very similar to that in the current Internet, where TCP traffic is dominant. Figures 2, 3, and 4 show the throughput characteristics of Reno TCP, NewReno TCP, and SACK TCP. The available bandwidth is set to 82 Mbps for comparison with the results in the following subsections. As shown in Figure 4, SACK TCP achieves the maximum throughput almost all the time. As shown in Figures 2 and 3, Reno TCP and NewReno TCP also provide good performance, although SACK TCP outperforms them.


Figure 2: Total TCP throughput: Reno


Figure 3: Total TCP throughput: NewReno


Figure 4: Total TCP throughput: SACK

3.2. TCP traffic and one CBR stream of large bandwidth

We now consider the case where CBR traffic requiring a large bandwidth, such as a video stream, and TCP traffic share the bottleneck link. This case will be very common in the future Internet. We suppose that CBR traffic is transmitted at 30 Mbps from 0 to 6 seconds and from 14 to 20 seconds. Hence, the average rate of the CBR traffic over the 20 seconds is 18 Mbps, and the bandwidth available to TCP traffic is thus 82 Mbps on average, the same as in the previous subsection. Figures 5, 6, and 7 show the total throughput characteristics of Reno TCP, NewReno TCP, and SACK TCP, respectively. The CBR flow abruptly rejoins at 14 seconds and causes throughput degradation, particularly for the Reno TCP and NewReno TCP connections, whereas SACK TCP is very robust against the changing available bandwidth.
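The 82 Mbps figure follows from a simple time average, as the short check below illustrates.

    # Average CBR load and average bandwidth left for TCP in the
    # single-large-stream scenario (CBR active during 0-6 s and 14-20 s).
    link_bw  = 100.0                  # Mbps
    cbr_rate = 30.0                   # Mbps while the CBR stream is active
    active   = (6 - 0) + (20 - 14)    # 12 s of CBR activity
    duration = 20.0                   # s

    avg_cbr = cbr_rate * active / duration   # 18 Mbps
    avg_tcp = link_bw - avg_cbr              # 82 Mbps available to TCP on average
    print(avg_cbr, avg_tcp)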


Figure 5: Total TCP throughput: Reno


Figure 6: Total TCP throughput: NewReno


Figure 7: Total TCP throughput: SACK

Let us investigate the reason for this degradation. For this purpose, the number of timeout and Fast Recovery occurrences over all TCP flows is shown in Table 1, and the average throughput over 0 to 20 seconds is given in the "Large CBR" row of Table 3. From the table, we can see that NewReno TCP and SACK TCP suffer far fewer timeouts than Reno TCP; indeed, timeouts rarely occur in NewReno TCP. A timeout on a TCP connection prevents the sender from transmitting packets for a long duration, e.g., a few to several seconds, which results in the severe throughput degradation shown in Table 3. NewReno TCP successfully avoids timeouts by improving the window flow control mechanism. However, since it still employs Go-Back-N ARQ like Reno TCP, its throughput is limited, as shown in the table. SACK TCP achieves excellent performance even though it suffers more timeouts than NewReno TCP. Therefore, we see that the Selective-Repeat ARQ employed by SACK TCP is very effective in exploiting the changing available bandwidth.

Table 1: The number of Timeout and Fast Recovery
  Reno NewReno SACK
Timeout 121 2 13
Fast Recovery 520 442 466

3.3. TCP traffic and multiple CBR streams

We now consider the case in which multiple CBR streams and TCP traffic share the link. We assume that each CBR stream's transfer rate is 6 Mbps and that the sum of the CBR transfer rates is 30 Mbps at maximum and 18 Mbps on average. Therefore, the bandwidth available to TCP traffic is again 82 Mbps on average. Figures 8, 9, and 10 show the total throughput characteristics in this case for Reno, NewReno, and SACK. The total throughput is shown in the "Multiple CBR" row of Table 3. From the figures and the table, the changing CBR traffic has a great influence on the performance of Reno TCP and NewReno TCP, but not on that of SACK TCP.


Figure 8: Total TCP throughput: Reno


Figure 9: Total TCP throughput: NewReno


Figure 10: Total TCP throughput: SACK

As in the previous subsection, the number of timeout and fast recovery occurrences over all TCP flows is shown in Table 2. NewReno TCP experiences timeouts and fast recoveries less often than SACK TCP. Nevertheless, as shown in Table 3, SACK TCP outperforms NewReno TCP thanks to the effective Selective-Repeat ARQ it employs, as mentioned earlier.

Table 2: The number of Timeout and Fast Recovery
  Reno NewReno SACK
Timeout 108 6 10
Fast Recovery 519 446 488

 

Table 3: Total TCP throughput [Mbps]
  Reno NewReno SACK
Only TCP 78.4 78.6 81.0
Large CBR 76.9 78.2 80.8
Multiple CBR 73.6 76.9 80.3

Furthermore, Table 3 shows that even when the total average CBR bandwidth is the same, multiple CBR streams with small bandwidths have a greater influence on TCP performance than a single CBR stream with a large bandwidth. This is probably because multiple CBR streams change the bandwidth available to TCP more frequently, and TCP cannot adapt its flow control very well to such frequent fluctuations.

Finally, Table 4 shows the fairness of the TCP flows, defined as the coefficient of variation of their throughputs; a fairness of 0 corresponds to a completely fair share of the link among the TCP connections (a small sketch of this metric is given after the table). SACK TCP attains excellent throughput fairness.

Table 4: The fairness of TCP flows
  Reno NewReno SACK
Only TCP 0.27 0.24 0.20
Large CBR 0.29 0.27 0.19
Multiple CBR 0.17 0.11 0.11
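The fairness index used in Tables 4 and 7 is simply the coefficient of variation of the per-connection throughputs, as the minimal sketch below shows. The per-connection values in the example are hypothetical, and the population standard deviation is assumed since the paper does not state which convention is used.

    import statistics

    def fairness(throughputs):
        # Coefficient of variation of per-connection throughputs:
        # 0 means every connection receives an equal share of the link.
        return statistics.pstdev(throughputs) / statistics.mean(throughputs)

    # Hypothetical per-connection throughputs (Mbps), for illustration only.
    print(fairness([4.0, 4.0, 4.0, 4.0]))   # 0.0 -> perfectly fair
    print(fairness([6.0, 2.0, 5.0, 3.0]))   # larger value -> less fair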

3.4. Coexistence of Reno TCP with SACK TCP

In the previous subsections, we treated cases that were homogeneous in terms of the TCP variant; i.e., all 20 TCP connections used the same variant. In those cases, Reno TCP was the worst and SACK TCP the best among the variants treated: SACK TCP can aggressively utilize almost all of the available bandwidth, while Reno TCP cannot. We are thus very interested in what happens to Reno TCP connections when a link is shared by Reno TCP and SACK TCP connections. In this subsection, we therefore deal with a heterogeneous case in which the link is shared by 10 Reno TCP connections and 10 SACK TCP connections.

The corresponding total throughput is given in Table 5. Comparing Table 5 with Table 3, we see that the per-connection throughput of SACK TCP is larger in the heterogeneous case than in the homogeneous case, whereas that of Reno TCP degrades in the heterogeneous case. In addition, the total throughput of the heterogeneous case lies between the homogeneous-case totals of Reno TCP and SACK TCP. In other words, SACK TCP achieves better performance in the heterogeneous case at the cost of Reno TCP performance. Nevertheless, Table 5 shows that Reno TCP's share of the total throughput remains at approximately 40 percent in every case listed. Therefore, the effect of coexistence with SACK TCP on Reno TCP performance does not depend on the deployment of QoS networks.

Table 5: The total TCP throughput [Mbps]
  Reno (share) SACK (share) Sum
Only TCP 32.3 (40.4%) 47.6 (59.6%) 79.9
Large CBR 29.9 (38.3%) 48.2 (61.7%) 78.1
Multiple CBR 30.3 (39.2%) 47.1 (60.8%) 77.4

3.5. Effect of fluctuation of the interarrival times of CBR packets

As mentioned in Section 2, we have assumed that CBR traffic has a constant packet interval as a result of using a hardware shaper of fine granularity. This is an ideal model; in actual networks the interarrival times of CBR packets generally fluctuate. In this section, we consider the case where the interarrival times of CBR packets fluctuate or occasionally become bursty, for the following reason: CBR traffic passes through multiple gateways, at each of which CBR packets may be forced to wait. Thus, the interarrival times of the packets of a CBR stream will no longer be constant by the time they reach some downstream gateway.


Figure 11: The model of the interarrival times of CBR packets

We treat two cases, as shown in Figure 11. One is the case where each packet is randomly delayed by at most one packet generation interval of 0.67 msec; note that when 500-byte CBR packets are transmitted at 6 Mbps, their packet interval is 0.67 msec. The other is the case where some packets arrive back to back at the gateway; here, 15 packets are transmitted back to back every 10 msec. The former is indicated as "Fluctuation" and the latter as "Burst" in Tables 6 and 7. For the scenario in which TCP traffic shares the link with multiple CBR streams (multiple CBR), the total TCP throughput and the fairness of the TCP flows in these two cases are shown in Tables 6 and 7. Fairness in Table 7 is again defined as the coefficient of variation of throughput. Tables 6 and 7 show that the total throughput is hardly affected by the fluctuation of the interarrival times of CBR packets, particularly for NewReno TCP and SACK TCP, but that fairness can be improved by using a hardware shaper of fine granularity.
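The three arrival patterns of Figure 11 can be sketched as simple departure-time generators. This is a simplified illustration assuming the 0.67 msec interval and the 15-packet/10 msec burst stated above; back-to-back packets are approximated as leaving at the same instant.

    import random

    PKT_INTERVAL = 0.67e-3   # s, one 500-byte packet every 0.67 ms at 6 Mbps

    def hard_shaper(n):
        # Ideal hardware shaper: perfectly constant packet spacing.
        return [k * PKT_INTERVAL for k in range(n)]

    def fluctuation(n):
        # "Fluctuation" case: each packet is randomly delayed by at most
        # one packet generation interval.
        return [k * PKT_INTERVAL + random.uniform(0, PKT_INTERVAL) for k in range(n)]

    def burst(n, burst_len=15, burst_period=10e-3):
        # "Burst" case: 15 packets back to back every 10 ms (the average
        # rate is still 6 Mbps).
        return [(k // burst_len) * burst_period for k in range(n)]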

Table 6: The total TCP throughput [Mbps]
  Hard Shaper Fluctuation Burst
Reno 75.0 73.3 74.5
NewReno 77.5 77.8 77.6
SACK 80.5 80.3 80.6

 

Table 7: The fairness of TCP flows
  Hard Shaper Fluctuation Burst
Reno 0.16 0.23 0.22
NewReno 0.13 0.17 0.21
SACK 0.10 0.13 0.16

4. Concluding remarks

In this research, we have examined the performance of three TCP variants in QoS networks by means of simulations. In QoS networks, some traffic can have priority over TCP traffic, so that TCP uses the bandwidth left unused by the high-priority traffic; here, CBR traffic is treated as the high-priority traffic. The bandwidth available to TCP traffic can therefore change from time to time, and we have shown how the TCP variants are affected by this changing available bandwidth. Among them, Reno TCP degrades the most. SACK TCP outperforms the other two variants in QoS networks as well as in the current Internet. Moreover, we have studied how fluctuation of the interarrival times of CBR packets affects TCP performance: the total throughput of the TCP connections is not affected, but the fairness of throughput among them deteriorates. Nevertheless, SACK TCP is very robust against this fluctuation as well.

In summary, SACK TCP works very well in QoS networks in terms of throughput performance.

Acknowledgments

This work was supported in part by Research for the Future Program of Japan Society for the Promotion of Science under the Project "Integrated Network Architecture for Advanced Multimedia Application Systems" (JSPS-RFTF97R16301).

References

  1. Y. Hori, H. Sawashima, H. Sunahara, and Y. Oie, "Performance Evaluation of UDP Traffic Affected by TCP flows," IEICE Transactions on Communications, Vol.E81-B, No.8, pp.1616-1623, August 1998.
  2. S. Floyd and V. Jacobson, "Link-sharing and Resource Management Models for Packet Networks," IEEE/ACM Transactions on Networking, Vol.3, No.4, pp.365-386, August 1995.
  3. S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," RFC 2582, April 1999.
  4. K. Fall and S. Floyd, "Simulation-based Comparison of Tahoe, Reno, and SACK TCP," ACM Computer Communication Review, Vol.26, No.3, pp.5-21, July 1996.
  5. M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgement Options," RFC 2018, October 1996.
  6. UCB/LBNL/VINT Network Simulator NS, http://www-mash.cs.berkeley.edu/ns/.