Fairness Comparisons Between TCP Reno and TCP Vegas for Future Deployment of TCP Vegas

Kenji KURATA <k-kurata@ics.es.osaka-u.ac.jp>
Go HASEGAWA <hasegawa@econ.osaka-u.ac.jp>
Masayuki MURATA <murata@ics.es.osaka-u.ac.jp>
Osaka University
Japan

Abstract

According to past research, the TCP (Transmission Control Protocol) Vegas version can achieve higher throughput than the TCP Tahoe and Reno versions, which are widely used in the current Internet. However, a migration path must be considered before TCP Vegas can be deployed in the Internet. Focusing on the situation where TCP Reno and TCP Vegas connections share the bottleneck link, we investigate the fairness between the two versions, considering both drop-tail and RED (Random Early Detection) as scheduling disciplines at the router buffer. From analysis and simulation results, we find that fairness between TCP Reno and Vegas cannot be maintained at all with a drop-tail router: the performance of TCP Vegas is much lower than that of TCP Reno, contrary to what one would expect of TCP Vegas. The RED algorithm improves the fairness to some degree, but there remains an inevitable trade-off between fairness and throughput. It is true that TCP Vegas used alone can obtain high throughput, and it has the good feature of backward compatibility with older versions of TCP. Nevertheless, as our results clearly indicate, it is unlikely that the current version of TCP Vegas will penetrate the Internet.

1 Introduction

TCP (Transmission Control Protocol) is widely used by many Internet services, including HTTP (Hypertext Transfer Protocol, carrying the World Wide Web) and FTP (File Transfer Protocol). Even if the network infrastructure changes in the future, it is very likely that TCP and its applications will continue to be used. However, the TCP Tahoe and Reno versions (and their variants), which are widely used in the current Internet, are not perfect in terms of throughput and fairness among connections, as shown in past literature. TCP has therefore been actively studied, and many improved mechanisms have been proposed (see, for example, [1-4] and the references therein). Among them, the TCP Vegas version [5,6] is one of the most promising mechanisms because of its high performance. TCP Vegas enhances the congestion avoidance algorithm of TCP Reno. In essence, TCP Vegas dynamically increases and decreases its sending window size according to the observed RTTs (Round Trip Times) of the packets it sends, whereas TCP Tahoe/Reno keeps increasing its window size until packet loss is detected. The authors of [5] conclude through simulation and implementation experiments that TCP Vegas can obtain 40% or even higher throughput than TCP Reno.

However, we need to consider a migration path when a new protocol is deployed in the operating network, i.e., the Internet. It is important to investigate the effect of the existing TCP versions (Tahoe and Reno) on TCP Vegas in situations where these different versions of TCP co-exist in the network. The authors of [7] have pointed out that when TCP Reno and Vegas connections share the bottleneck link, the Vegas connection may suffer from significant unfairness. However, they assumed that only a single TCP Reno connection shares the link with a single TCP Vegas connection.

In this paper, we focus on the situation where multiple TCP Reno and TCP Vegas connections share the bottleneck link, and investigate the fairness between the two versions of TCP in order to assess the possibility of a future deployment of TCP Vegas. One important point we take into account is the underlying network assumed by TCP Vegas. When the original TCP Vegas was proposed in [5], the authors did not consider the RED (Random Early Detection) mechanism [8], which is now being introduced in operating networks, and it is unclear whether TCP Vegas remains effective when the router is equipped with RED. We therefore consider two packet scheduling mechanisms in our study: the RED router as well as the conventional drop-tail router. One contribution of this paper is an analysis of the throughput of TCP Reno and TCP Vegas in this setting that explains why TCP Vegas cannot obtain good throughput when sharing the link with TCP Reno. The accuracy of the analysis is validated by comparison with simulation results. Through the analysis and simulation results, we will show the following about the fairness between TCP Reno and Vegas. When the router employs the drop-tail discipline, TCP Vegas receives significantly lower, unfair throughput compared with TCP Reno. When the RED algorithm is applied, on the other hand, the fairness can be improved to some degree, but there still exists an inevitable trade-off between fairness and throughput: if the packet-dropping probability of RED is set large, the throughput of TCP Vegas improves, but the total throughput is degraded.

We believe that the subject treated in this paper is a good example of the issues involved in migrating from an existing protocol to a new one. It is true that TCP Vegas used alone can obtain higher performance than TCP Reno, and it has the good feature of backward compatibility with the older versions of TCP. Nevertheless, as our results clearly indicate, it is unlikely that the current version of TCP Vegas will penetrate the Internet.

The rest of this paper is organized as follows. Section 2 briefly introduces the congestion control mechanisms of TCP Reno and TCP Vegas. We next describe the network model used in our analysis and simulation experiments in Section 3. Section 4 presents the analytical results on the fairness between the two versions of TCP, which are validated by the simulation results in Section 5. Finally, we conclude the paper and discuss future work in Section 6.

2 Congestion control mechanisms of TCP

In this section, we summarize the congestion control mechanisms of two versions of TCP: TCP Reno and Vegas. For detailed explanation, refer to [9] for TCP Reno and [5,6] for TCP Vegas. The essence of the congestion avoidance mechanism of TCP is to dynamically control the window size according to the congestion level of the network. In what follows, we denote the current window size of the sender host at time t as cwnd(t).

2.1 TCP Reno

In TCP Reno, the window size changes cyclically in a typical situation: it continues to be increased until packet loss occurs. TCP Reno has two phases for increasing its window size: the slow start phase and the congestion avoidance phase. When an ACK (acknowledgment) packet is received by the sender-side TCP at time t + tA[sec], the current window size cwnd(t + tA) is updated from cwnd(t) as follows (see, e.g., [9]):

    cwnd(t + tA) = cwnd(t) + 1,           if cwnd(t) < ssth(t)   (slow start)
    cwnd(t + tA) = cwnd(t) + 1/cwnd(t),   if cwnd(t) ≥ ssth(t)   (congestion avoidance)     (1)

where ssth(t)[packets] is the threshold value at which TCP changes from the slow start phase to the congestion avoidance phase. When packet loss is detected through retransmission timeout expiration, cwnd(t) and ssth(t) are updated as [9]:

    ssth(t) = cwnd(t)/2,   cwnd(t) = 1     (2)

On the other hand, when TCP detects packet loss by the fast retransmit algorithm [9], it changes cwnd(t) and ssth(t) as

    ssth(t) = cwnd(t)/2,   cwnd(t) = ssth(t)     (3)

TCP Reno then enters the fast recovery phase [9] when the packet loss has been detected by the fast retransmit algorithm. In this phase, the window size is increased by one packet each time a duplicate ACK packet is received, and cwnd(t) is restored to ssth(t) when the non-duplicate ACK corresponding to the retransmitted packet is received.
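
To make these rules concrete, here is a small Python sketch (our illustration, not code from [9] or the authors) of the window updates of Eqs.(1)-(3); all window values are in packets.

    def reno_on_ack(cwnd, ssth):
        """Eq.(1): window growth on each newly acknowledged packet."""
        if cwnd < ssth:
            return cwnd + 1.0        # slow start: +1 packet per ACK
        return cwnd + 1.0 / cwnd     # congestion avoidance: about +1 packet per RTT

    def reno_on_timeout(cwnd, ssth):
        """Eq.(2): loss detected by retransmission timeout expiration."""
        return 1.0, cwnd / 2.0       # new (cwnd, ssth): window reset to one packet

    def reno_on_fast_retransmit(cwnd, ssth):
        """Eq.(3): loss detected by the fast retransmit algorithm."""
        return cwnd / 2.0, cwnd / 2.0   # new (cwnd, ssth): window halved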

2.2 TCP Vegas

As described in the previous subsection, in TCP Reno (and its predecessor, TCP Tahoe), the window size continues to be increased until packet loss occurs due to congestion. When the window size is throttled because of packet loss, the throughput of the connection is degraded. This cannot be avoided because of the essential nature of the congestion control mechanism adopted in TCP Reno: it can detect network congestion only through packet loss, and the loss occurs precisely because the connection has made its own window too large. If the window size is controlled appropriately, so that packet loss does not occur in the network, the throughput degradation due to a throttled window can be avoided. This is the key idea of TCP Vegas.

TCP Vegas controls its window size by observing the RTTs (round-trip times) of the packets that the sender host has sent. If the observed RTTs become large, TCP Vegas infers that the network is becoming congested and throttles the window size; if they become small, the sender host determines that the network is relieved of congestion and increases the window size again. Hence, in an ideal situation, the window size is expected to converge to an appropriate value. More specifically, in the congestion avoidance phase, the window size is updated as

    diff = cwnd(t)/base_rtt − cwnd(t)/rtt

    cwnd(t + tA) = cwnd(t) + 1,   if diff < α/base_rtt
    cwnd(t + tA) = cwnd(t),       if α/base_rtt ≤ diff ≤ β/base_rtt     (4)
    cwnd(t + tA) = cwnd(t) − 1,   if diff > β/base_rtt

where rtt[sec] is the observed round-trip time, base_rtt[sec] is the smallest of the observed RTTs, and α and β are constants. Intuitively, diff·base_rtt estimates the number of packets the connection keeps queued at the router, and Eq.(4) steers that estimate into the range from α to β[packets].

TCP Vegas has another feature in its congestion control algorithm: a slower slow start mechanism. The rate of increasing the window size in the slow start phase is one half of that in TCP Tahoe and TCP Reno; namely, the window size is incremented every other time an ACK packet is received. Note that the congestion control mechanism of TCP Vegas (Eq.(4)) implies that if the observed RTTs of the packets are identical, the window size remains unchanged.
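
The following Python sketch of Eq.(4) (again our illustration, with the commonly cited default thresholds α = 1 and β = 3 assumed) shows how TCP Vegas converts RTT measurements into a window decision.

    def vegas_update(cwnd, rtt, base_rtt, alpha=1.0, beta=3.0):
        """Eq.(4): adjust the window from the RTT-based backlog estimate.
        diff * base_rtt estimates the packets this connection queues at the router."""
        diff = cwnd / base_rtt - cwnd / rtt   # expected rate minus actual rate
        if diff < alpha / base_rtt:
            return cwnd + 1.0                 # estimated backlog too small: grow
        if diff > beta / base_rtt:
            return cwnd - 1.0                 # estimated backlog too large: shrink
        return cwnd                           # backlog within [alpha, beta]: hold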

According to [5], TCP Vegas can achieve over 40% higher throughput than TCP Reno. However, it is not clear whether TCP Vegas coexists well with TCP Reno. Our contribution in this paper is to compare the throughput of the two versions when they share the bottleneck link, in order to discuss the possibility of deploying TCP Vegas in the future Internet.

3 Network model

Figure 1: Network Model

Figure 1 shows the network model used in this paper. It consists of Nr sender hosts using TCP Reno (SR1, ..., SRNr), Nv sender hosts using TCP Vegas (SV1, ..., SVNv), a receiver host, an intermediate router, and links connecting the router to the sender/receiver hosts. The bandwidth of each link between a sender host and the router is bw[Mbps]. The bandwidth of the bottleneck link between the router and the receiver host is BW[Mbps]; in the analysis we use the corresponding rate expressed in [packets/sec], also denoted BW. The size of the buffer at the router is B[packets]. The propagation delays between the sender hosts and the router and between the router and the receiver host are denoted by τ1[sec] and τ2[sec], respectively. We denote the total propagation delay between the sender hosts and the receiver host by τ = τ1 + τ2. As the scheduling discipline at the router buffer, we consider the drop-tail and RED algorithms. We assume that the sender hosts always have an infinite amount of data to send.
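
For later reference, the model parameters can be collected in a few lines of Python; the 1000-byte packet size used to convert Mbps into packets/sec is our assumption, not a value fixed by the model.

    PKT_BYTES  = 1000                              # assumed packet size [bytes]
    BW_MBPS    = 1.5                               # bottleneck bandwidth [Mbps] (Section 5)
    BW_PKTS    = BW_MBPS * 1e6 / (8 * PKT_BYTES)   # bottleneck rate [packets/sec]
    TAU1, TAU2 = 0.0015, 0.005                     # propagation delays [sec] (Section 5)
    TAU        = TAU1 + TAU2                       # total propagation delay [sec]
    PIPE       = 2 * TAU * BW_PKTS                 # round-trip pipe capacity [packets]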

4 Analysis

In what follows, we use the network model depicted in Figure 1 and derive the average throughput of each TCP connection through a mathematical analysis. In the analysis, we assume that the throughput of each connection is proportional to its buffer occupancy at the drop-tail router. This assumption is also appropriate for the RED router, as we explain below. The validity of our approximate analysis will be assessed in Section 5.

4.1 Case of Drop-Tail Router

In Figure 2, we illustrate a typical evolution of the total number of packets queued at the router buffer when the drop-tail algorithm is used. Here, we assume that all TCP Reno connections behave identically. Since TCP Reno connections continue to increase their window sizes until packet loss occurs at the buffer, their window sizes change in cycles triggered by packet losses, even when the TCP Reno connections share the link with TCP Vegas connections. Assuming that all packet losses are detected by the fast retransmit algorithm, it takes one RTT[sec] for the sender-side TCP to detect a packet loss after it actually occurs at the router buffer. This delay corresponds to the flat part of the buffer occupancy shown in Figure 2.

Figure 2: A Typical Change of Buffer Occupancy at Drop-Tail Router

TCP Vegas connections, on the other hand, control their window sizes according to the observed RTTs of their packets. Each of them tries to keep its number of queued packets in the router buffer between α and β[packets] [4]. As RTTs become large, TCP Vegas connections keep decreasing their window sizes. TCP Reno connections, however, continue to increase their window sizes regardless of the increased RTT, which forces the window sizes of the TCP Vegas connections down until each keeps only α to β[packets] queued in the buffer; see Eq.(4). Since the bandwidth-delay product of the model is small, the window of a Vegas connection is then dominated by its queued packets, and Wv[packets], the total of the window sizes of the Nv TCP Vegas connections, is bounded as

    Nv·α ≤ Wv ≤ Nv·β     (5)

We determine W̄v[packets], the average value of Wv, from Eq.(5) as

    W̄v = Nv·(α + β)/2     (6)

which is a reasonable approximation given its behavior.

TCP Reno connections continue to increase their window sizes until the router buffer becomes full and packets are eventually lost. At that moment the windows of all connections together account for the full buffer plus the packets in flight on the links, so Wr[packets], the total of the window sizes of the TCP Reno connections when packet loss occurs at the router buffer, can be obtained as:

    Wr = B + 2τ·BW − W̄v     (7)

where 2τ·BW[packets] is the number of packets in flight on the links.
The number of packets lost during the buffer overflow period is Nr[packets]: from Eq.(1), each TCP Reno connection increases its window by 1[packet/RTT] in the congestion avoidance phase, as explained in Section 2, and the overflow lasts one RTT, so the Nr connections overshoot the buffer by Nr packets in total. By assuming that the packet loss probability of each connection is proportional to its window size, we obtain Lr[packets] and Lv[packets], the numbers of packets lost by the TCP Reno and TCP Vegas connections during the overflow period, respectively, as:

    Lr = Nr · Wr/(Wr + W̄v),   Lv = Nr · W̄v/(Wr + W̄v)     (8)

Each TCP Reno connection that detects a packet loss halves its window size according to the fast retransmit algorithm. Therefore, Wr'[packets], the total window size of the TCP Reno connections just after the buffer overflow, can be determined from Eqs.(3) and (8), assuming the Lr lost packets belong to distinct connections, as:

    Wr' = Wr − Lr · Wr/(2Nr)     (9)

From Eq.(1) (and Figure 2), the total window size then grows linearly between Wr' and Wr, so the following holds for W̄r[packets], the average value of the total window size of the TCP Reno connections:

    W̄r = (Wr' + Wr)/2     (10)

Accordingly, we obtain ρ̄r[packets] and ρ̄v[packets], the average numbers of packets at the router buffer belonging to the TCP Reno and TCP Vegas connections, respectively, by subtracting each group's share of the 2τ·BW packets in flight from its average window size:

    ρ̄r = W̄r − 2τ·BW · W̄r/(W̄r + W̄v),   ρ̄v = W̄v − 2τ·BW · W̄v/(W̄r + W̄v)     (11)

We finally obtain γ̄r[packets/sec] and γ̄v[packets/sec], the average per-connection throughputs of the two versions of TCP, as:

    γ̄r = (BW/Nr) · ρ̄r/(ρ̄r + ρ̄v),   γ̄v = (BW/Nv) · ρ̄v/(ρ̄r + ρ̄v)     (12)

since we have assumed that throughput is proportional to buffer occupancy at the router.
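
The drop-tail derivation can be checked numerically. The Python sketch below implements Eqs.(5)-(12) as we have reconstructed them above; the default bandwidth of 187.5[packets/sec] corresponds to BW = 1.5[Mbps] under our assumed 1000-byte packets.

    def droptail_throughput(Nr, Nv, B, alpha=1.0, beta=3.0,
                            bw_pkts=187.5, tau=0.0065):
        """Average per-connection throughput [packets/sec] of Reno and Vegas
        under a drop-tail router, following Eqs.(5)-(12)."""
        Wv = Nv * (alpha + beta) / 2.0                 # Eq.(6): Vegas windows pinned
        pipe = 2 * tau * bw_pkts                       # packets in flight on the links
        Wr_peak = B + pipe - Wv                        # Eq.(7): Reno total at overflow
        Lr = Nr * Wr_peak / (Wr_peak + Wv)             # Eq.(8): Reno share of the losses
        Wr_after = Wr_peak - Lr * Wr_peak / (2 * Nr)   # Eq.(9): affected windows halve
        Wr = (Wr_peak + Wr_after) / 2.0                # Eq.(10): sawtooth average
        share_r = Wr / (Wr + Wv)                       # windows split the in-flight share
        rho_r = Wr - pipe * share_r                    # Eq.(11): Reno buffer occupancy
        rho_v = Wv - pipe * (1.0 - share_r)            #          Vegas buffer occupancy
        thr_r = bw_pkts * rho_r / (rho_r + rho_v) / Nr # Eq.(12): throughput proportional
        thr_v = bw_pkts * rho_v / (rho_r + rho_v) / Nv #          to buffer occupancy
        return thr_r, thr_v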

4.2 Case of RED Router

The RED algorithm drops incoming packets with a preset probability when the number of packets in the buffer exceeds a certain threshold value [8]. For simplicity of the following analysis, we assume that all packet losses occur with probability p through the RED algorithm and that no buffer overflow takes place.

Even with the RED algorithm, TCP Reno connections continue to increase their window sizes until packet loss occurs. Therefore, as in the drop-tail case, the TCP Vegas connections cannot open their windows and keep only α to β[packets] queued per connection, so the following holds for Wv and W̄v:

    Nv·α ≤ Wv ≤ Nv·β,   W̄v = Nv·(α + β)/2     (13)

Each TCP Reno connection, on the other hand, changes its window size cyclically, triggered by packet losses, as in the drop-tail router case. Since, by our assumption, every arriving packet is dropped with probability p, a connection can send 1/p packets in one cycle (between two packet-loss events) on average. We define Np as the number of packets transmitted during one cycle, which is thus given by:

    Np = 1/p     (14)

Unlike in the drop-tail router case, we focus here on a single TCP Reno connection, because we assume that all TCP Reno connections behave identically under the stochastic packet-dropping algorithm employed by RED.

Although the RED algorithm can eliminate bursty packet losses, retransmission timeout expirations cannot be avoided completely [10]. Even if timeout expirations happen only rarely, their effect on throughput is not negligible, so we must take the resulting throughput degradation into account. We denote by pto the probability that a packet loss within the window leads to timeout expiration. Using W̄, the average window size of a TCP Reno connection when packet loss is detected, and noting that the fast retransmit algorithm requires three duplicate ACKs, we determine pto by the following simple equation:

    pto = min(1, 3/W̄)     (15)

In what follows, we distinguish two cases of detecting packet loss: retransmission timeout expiration (TO case) and fast retransmit (FR case), because a different algorithm for changing the window size is used in each case.

In the TO case, that is, when packet loss is detected by retransmission timeout expiration, the window size is reset to 1[packet]. It is then updated according to the slow start phase (Eq.(1)) until it reaches the threshold W̄/2[packets]. From Eq.(1), we can determine Tto,1[sec], the duration of the slow start phase, and Ato,1[packets], the number of packets transmitted in it, by the following equations:

    Tto,1 = rtt · log2(W̄/2),   Ato,1 = W̄/2 − 1     (16)

where rtt[sec] is the mean value of the RTTs of the connection's packets. Furthermore, we can easily obtain Tto,2[sec] and Ato,2[packets], the duration of and the number of packets transmitted in the following congestion avoidance phase, respectively, from Eq.(1) as:

    Tto,2 = rtt · W̄/2,   Ato,2 = (3/8)·W̄²     (17)

These equations follow from the fact that the window size is increased by 1[packet] per RTT[sec] in the congestion avoidance phase (Eq.(1)).

On the other hand, if the TCP Reno connection detects the packet loss by the fast retransmit algorithm (FR case), the window size is halved to W̄/2 and the congestion avoidance phase starts again. That is, the duration of and the number of packets transmitted in the slow start phase (denoted Tfr,1 and Afr,1, respectively) are zero:

    Tfr,1 = 0,   Afr,1 = 0     (18)

Similarly, the duration of and the number of packets transmitted in the congestion avoidance phase (Tfr,2 and Afr,2) are represented as:

    Tfr,2 = rtt · W̄/2,   Afr,2 = (3/8)·W̄²     (19)

Consequently, weighting the TO and FR cases by pto, the following equations are satisfied for the number of packets transmitted during one cycle and for w̄[packets], the average window size during one cycle, from Eqs.(16)-(19):

    1/p = pto·(Ato,1 + Ato,2) + (1 − pto)·(Afr,1 + Afr,2)     (20)

    w̄ = (1/p) · rtt / ( pto·(rto + Tto,1 + Tto,2) + (1 − pto)·(Tfr,1 + Tfr,2) )     (21)

where rto[sec] is the retransmission timeout value of the connection, which lengthens a cycle ending in a timeout. Solving Eqs.(15) and (20) gives pto and W̄, and Eq.(21) then gives w̄, from which W̄r, the average value of the total window size of all TCP Reno connections, is easily obtained as:

    W̄r = Nr · w̄     (22)

Finally, ρ̄r and ρ̄v in the RED case can be determined in the same way as in the drop-tail router case, from Eqs.(11)-(12), (13), and (22).
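
Because Eqs.(15) and (20) are coupled through W̄, they are most easily solved numerically. The Python sketch below does so by bisection, under our reconstructed equation forms; the rtt and rto arguments are inputs to the model, not values derived by the analysis.

    import math

    def red_reno_window(p, rtt, rto):
        """Solve Eqs.(15)-(21) (as reconstructed above) for the loss-detection
        window W, the timeout probability pto, and the cycle-average window."""
        def cycle_packets(W):
            pto = min(1.0, 3.0 / W)              # Eq.(15)
            A_to = (W / 2 - 1) + 3 * W * W / 8   # Eqs.(16)-(17): TO-case packets
            A_fr = 3 * W * W / 8                 # Eqs.(18)-(19): FR-case packets
            return pto * A_to + (1 - pto) * A_fr

        lo, hi = 2.0, 1e4                        # cycle_packets is increasing in W
        while hi - lo > 1e-6:                    # bisection on Eq.(20)
            mid = (lo + hi) / 2
            if cycle_packets(mid) > 1.0 / p:
                hi = mid
            else:
                lo = mid
        W = (lo + hi) / 2
        pto = min(1.0, 3.0 / W)
        T_to = rto + rtt * (math.log2(W / 2) + W / 2)   # TO-case cycle length [sec]
        T_fr = rtt * W / 2                              # FR-case cycle length [sec]
        w_avg = (1.0 / p) * rtt / (pto * T_to + (1 - pto) * T_fr)   # Eq.(21)
        return W, pto, w_avg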

5 Numerical examples and discussions

In this section, we present numerical examples based on the analysis of the previous section, aimed at discussing the fairness between the two versions of TCP. Simulation results are also provided to assess the accuracy of the analysis. In what follows, we set τ1 = 0.0015[sec], τ2 = 0.005[sec], bw = 10[Mbps], and BW = 1.5[Mbps] as the network parameters. For the RED router, we set the threshold values to thmin = 5[packets] and thmax = 0.6·B[packets].

5.1 Case of drop-tail router

Figure 3 shows the average throughput of the TCP Reno and TCP Vegas connections as a function of the buffer size B[packets] of the drop-tail router. We consider three combinations of the numbers of TCP Reno and Vegas connections (Nr and Nv): Nr = 5, Nv = 5 in Figure 3(a); Nr = 5, Nv = 10 in Figure 3(b); and Nr = 10, Nv = 5 in Figure 3(c). Each figure shows both the analytical and the simulation results in order to validate the analysis presented in Section 4. The figures show that our analysis estimates the throughput well, regardless of the numbers of connections of the two TCP versions. When the router buffer is very small (< 20[packets]), however, our analysis under-estimates the throughput of the TCP Reno connections and over-estimates that of the TCP Vegas connections. This is because the assumption that each TCP Vegas connection keeps (α + β)/2 packets in the buffer (Eq.(6)) does not hold for such a small buffer; such very small buffer sizes are, however, not realistic.

Figure 3: Case of Drop-Tail Router. (a) Nr = 5, Nv = 5. (b) Nr = 5, Nv = 10. (c) Nr = 10, Nv = 5.

An important observation from Figure 3 is that the TCP Vegas connections suffer significantly lower throughput than the TCP Reno connections. This is due to the difference in buffer occupancy at the router: the TCP Reno connections can increase their window sizes until the buffer becomes full and packet loss occurs, whereas the TCP Vegas connections never keep more than β[packets] queued per connection, as described in Section 4. The observation is confirmed by our analysis. From Eqs.(7) and (10), the average window size of the TCP Reno connections becomes larger as the router buffer size B[packets] is increased, and this increase directly improves their throughput, as can be seen from Eqs.(11) and (12). On the other hand, the window size of the TCP Vegas connections remains unchanged regardless of the router buffer size (see Eq.(6)). Therefore, the buffer occupancy share of the TCP Vegas connections decreases as the router buffer size is set larger. That is, the larger the router buffer size becomes, the worse the fairness between the TCP Reno and TCP Vegas connections becomes, as the short illustration below shows.
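
A quick numerical check with the drop-tail sketch from Section 4 illustrates this trend (the specific numbers depend on our assumed packet size and parameters):

    for B in (40, 80, 160):
        r, v = droptail_throughput(Nr=5, Nv=5, B=B)
        print(f"B={B:4d}  Reno={r:6.1f}  Vegas={v:5.1f}  [packets/sec per connection]")
    # Reno's share grows with B while Vegas stays pinned near (alpha+beta)/2
    # queued packets, so the throughput ratio worsens as B increases.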

In this subsection, we have considered the drop-tail router. The RED router, by contrast, can inhibit bursty losses of packets from the same connection and thereby improve the fairness among connections. Whether this mechanism also helps in our situation is examined in the next subsection.

5.2 Case of RED router

We next show the case of the RED router in Figure 4, where the packet-dropping probability p is set to 1/30. The analytical results in the figure are independent of the router buffer size, because we have assumed that the packet-dropping probability is constant and that all packet drops are caused by the stochastic dropping of the RED algorithm, not by buffer overflow. The differences between the analysis and the simulation results become apparent when the buffer size is small, because in that region the throughput degradation caused by buffer overflow is not negligible. However, such a small buffer size is not realistic in an operating network, and our analytical results illustrate well how differently the two versions of TCP perform.

Figure 4: Case of RED Router (p = 1/30). (a) Nr = 5, Nv = 5. (b) Nr = 5, Nv = 10. (c) Nr = 10, Nv = 5.

We can observe from Figure 4 that the fairness between the two versions of TCP is greatly improved compared with the drop-tail case, while the total throughput of all connections is almost identical for large buffer sizes. This can be explained as follows. With the RED algorithm, TCP Reno connections cannot inflate their window sizes until the router buffer is fully occupied, since packet losses occur before the buffer becomes full due to the essential nature of the RED algorithm. This decreases the buffer occupancy of the TCP Reno connections, degrading their throughput, and correspondingly improves the throughput of the TCP Vegas connections. The observation is confirmed by our analysis: in contrast to the drop-tail case, the window size of TCP Reno is independent of the router buffer size, since the number of packets transmitted between two packet-loss events depends only on the packet-dropping probability p of the RED algorithm, as shown in Eq.(14). Therefore, the throughput of the two versions does not change even when the router buffer size becomes large.

From the above discussion, one may expect that the fairness can be improved further by increasing the packet-dropping probability, because the average window sizes of the TCP Reno connections then become smaller. This is partly confirmed by Figure 5, where we increase the packet-dropping probability to 1/10 (from 1/30 in the previous case); comparing these results with those in Figure 4 shows the fairness enhancement. Eq.(14) verifies it: the number of packets that the sender host can transmit in one cycle decreases as p becomes large, which lowers the average window size of the TCP Reno connections, since each inflates its window only until a packet loss is detected. The buffer occupancy of the TCP Reno connections therefore decreases and that of the TCP Vegas connections increases, since the window size of TCP Vegas is not affected by p. Hence, the fairness between the two versions of TCP is improved.

Figure 5: Case of RED Router (p = 1/10). (a) Nr = 5, Nv = 5. (b) Nr = 5, Nv = 10. (c) Nr = 10, Nv = 5.

As one can imagine, however, degradation of the total throughput cannot be avoided if the packet-dropping probability of the RED algorithm is set too high in pursuit of further fairness improvement. Figure 6 shows simulation results for the throughput of the TCP Reno and Vegas connections and the total throughput as functions of p (the packet-dropping probability of the RED algorithm), with the other parameters fixed at Nr = 5, Nv = 5, and B = 100[packets]. The figure shows that when the packet-dropping probability becomes large (> 0.01), the fairness between the two versions of TCP improves considerably, but the total throughput degrades. In other words, there exists an inevitable trade-off between fairness and throughput in the RED algorithm. Furthermore, it would be difficult to choose an appropriate value of p in an operating network, since the appropriate value depends on the numbers of active connections of the two TCP versions.

Figure 6: Throughput vs. Packet-Dropping Probability of RED Algorithm
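
The same trade-off can be explored with the RED sketch from Section 4.2; the rtt and rto values below are illustrative assumptions only.

    for p in (1/100, 1/30, 1/10, 1/5):
        W, pto, w_avg = red_reno_window(p, rtt=0.05, rto=0.5)
        print(f"p={p:.3f}  loss window={W:6.1f}  pto={pto:.2f}  avg window={w_avg:5.1f}")
    # Larger p shrinks the Reno windows (better fairness with Vegas) but also
    # shortens each cycle's packet budget 1/p, pulling total throughput down.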

In this paper, we have considered two versions of TCP. One is TCP Reno, an existing and widely used protocol. The other is TCP Vegas, a newly proposed protocol that achieves higher throughput than TCP Reno, as demonstrated in the original paper on TCP Vegas [6]. TCP Vegas also has the excellent feature of backward compatibility with the older versions of TCP, including TCP Reno. However, when the two versions of TCP share the bottleneck link, the performance of TCP Vegas is heavily degraded, which was not originally expected. For a new protocol to be deployed in the operating network, its migration path must be taken into account; in this sense, TCP Vegas does not seem to be successful.

However, there are several approaches to overcoming this problem. One possible solution is to improve the congestion control algorithm of TCP Vegas itself so that it can compete equally with TCP Reno; for this, the window of TCP Vegas should be increased as aggressively as that of TCP Reno. Another approach is to modify the RED algorithm at the router so that the router can detect mis-behaving connections, which correspond to the TCP Reno connections in the current context, and then eliminate the unfairness by intentionally dropping more packets from the mis-behaving connections than from the well-behaving ones. These approaches will be reported in forthcoming papers.

6 Conclusion

In this paper, we have investigated the fairness between TCP Reno and TCP Vegas when connections of the two versions share the bottleneck link. Through mathematical analysis and simulation experiments we have observed the following: TCP Vegas suffers serious performance degradation with drop-tail routers because of the difference in buffer occupancy at the router; RED routers can improve the fairness to some degree, but an inevitable trade-off between fairness and throughput remains.

References

  1. Z. Wang and J. Crowcroft, "Eliminating periodic packet losses in 4.3-Tahoe BSD TCP congestion control," ACM Computer Communication Review, vol. 22, pp. 9-16, April 1992.
  2. Michel Perloff and Kurt Reiss, "Improvements to TCP performance," Communications of the ACM, vol. 38, pp. 90-100, February 1995.
  3. Matthew Mathis and Jamshid Mahdavi, "Forward acknowledgment: Refining TCP congestion control," ACM SIGCOMM Computer Communication Review, vol. 26, pp. 281-291, October 1996.
  4. Go Hasegawa, Masayuki Murata, and Hideo Miyahara, "Fairness and stability of congestion control mechanisms of TCP," in Proceedings of IEEE INFOCOM '99, pp. 1329-1336, March 1999.
  5. Lawrence S. Brakmo, Sean W. O'Malley, and Larry L. Peterson, "TCP Vegas: New techniques for congestion detection and avoidance," in Proceedings of ACM SIGCOMM '94, pp. 24-35, October 1994.
  6. Lawrence S. Brakmo and Larry L. Peterson, "TCP Vegas: End to end congestion avoidance on a global Internet," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1465-1480, October 1995.
  7. Jeonghoon Mo, Richard J. La, Venkat Anantharam, and Jean Walrand, "Analysis and comparison of TCP Reno and Vegas," in Proceedings of IEEE INFOCOM '99, March 1999.
  8. Sally Floyd and Van Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, August 1993.
  9. W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols. Reading, Massachusetts: Addison-Wesley, 1994.
  10. K. Fall and S. Floyd, "Simulation-based comparisons of Tahoe, Reno, and SACK TCP," ACM SIGCOMM Computer Communication Review, vol. 26, pp. 5-21, July 1996.