Characteristics of UDP Packet Loss: Effect of TCP Traffic

Hidenari Sawashima <hidena-s@is.aist-nara.ac.jp>
Nara Institute of Science and Technology
Japan

Yoshiaki Hori <hori@kyushu-id.ac.jp>
Kyushu Institute of Design
Japan

Hideki Sunahara <suna@wide.ad.jp>
Yuji Oie <oie@itc.aist-nara.ac.jp>
Nara Institute of Science and Technology
Japan

Abstract

On wide area networks (WANs), UDP has widely been used for real-time applications such as video and audio. UDP minimizes transmission delay by omitting connection setup, flow control, and retransmission. Meanwhile, more than 80 percent of WAN resources are occupied by Transmission Control Protocol (TCP) traffic. In contrast to UDP's simplicity, TCP adopts a flow control mechanism based on sliding windows. Hence, the quality of service (QoS) of real-time applications using UDP is affected by TCP traffic and its flow control mechanism whenever TCP and UDP share a bottleneck node.

In this paper, the characteristics of UDP packet loss are investigated through simulations of WANs carrying UDP and TCP traffic simultaneously. In particular, the effects of TCP flow control on the packet loss of real-time audio are examined to discover how real-time audio should be transmitted with minimum packet loss while competing with TCP traffic for bandwidth. We found that UDP packet loss occurs both more often and in longer successive bursts when the congestion windows of the TCP connections are synchronized. In that case, real-time audio applications obtain their best performance by sending small packets without reducing their transmission rates.

Keywords: packet loss, UDP, TCP, WAN, real-time communication.

Introduction

On WANs, many real-time applications, such as video and audio, have become available for experimental and practical use. Activity in this area, as well as the number of real-time applications, is increasing rapidly. However, no mechanism for guaranteeing the QoS of real-time applications on the Internet has been established yet; only best-effort service is available. As a result, applications must tolerate some degradation of QoS in terms of packet loss, delay, and delay jitter for messages transmitted over the network. IPv6 includes mechanisms to support real-time applications, but it will not be widely deployed for several years.

Traditionally, UDP, not TCP, has been used as the transport layer protocol for real-time applications. UDP is a much simpler protocol, without connection setup delays, flow control, or retransmission, and provides applications with a more raw interface to the network. This simplicity lets UDP meet the requirements of delay-sensitive real-time applications, which can implement their own flow control and retransmission schemes. Moreover, UDP supports multicast communication, which enables applications such as network conferencing.

Currently, more than 80 percent of the Internet's bandwidth[1] is consumed by TCP-based applications such as HTTP and FTP. TCP uses a sliding window flow control mechanism, under which network congestion is recognized by the detection of packet loss. When loss is detected, the packet is retransmitted. At the same time, TCP reduces its congestion window size, effectively reducing its output rate to avoid further congestion. In the absence of congestion, TCP increases its congestion window size and output rate.
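
To illustrate, the following Python sketch (ours, not part of any cited implementation) mimics the additive-increase/multiplicative-decrease dynamic just described; the per-round-trip granularity and the fixed loss schedule are simplifying assumptions.

    def aimd_window(rounds, loss_rounds, max_window=16):
        """Congestion window size (in packets) after each round trip."""
        cwnd = 1.0
        history = []
        for r in range(rounds):
            if r in loss_rounds:
                cwnd = max(1.0, cwnd / 2)         # multiplicative decrease on loss
            else:
                cwnd = min(max_window, cwnd + 1)  # additive increase per RTT
            history.append(cwnd)
        return history

    # The window climbs steadily and halves at each detected loss.
    print(aimd_window(12, loss_rounds={5, 9}))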

On the other hand, UDP, which consumes a large part of the remaining Internet bandwidth, has no flow control, as mentioned above. That is, UDP simply transmits messages through the network to a specified receiver's port.

When TCP and UDP share Internet bandwidth, the presence of one affects the performance of the other. UDP packet loss is especially affected by TCP traffic and its flow control mechanism, because TCP flow control keeps increasing the window size until packet loss occurs, provided the advertised window is large enough.

Furthermore, when TCP connections share a bottleneck node, the evolutions of their congestion windows tend to become synchronized.[2] This TCP synchronization, like the traffic synchronization caused by periodic routing table updates,[3] is harmful to network use. Some mechanisms have been proposed for avoiding synchronization,[4] but they are not widely used at present. Consequently, UDP streams can be expected to be strongly affected by TCP synchronization.

In one report,[5] UDP packet loss and delay performance are examined using measurements taken on the Internet, and the effect of synchronizing traffic caused by periodic routing table updates is discussed. In another report, UDP packet loss is analyzed by means of a queuing model that includes an Internet stream and a UDP audio stream.[6] In both cases, the amount of successive packet loss is reported to be small when the Internet is not highly loaded. Moreover, Forward Error Correction (FEC) has also been discussed as a way to minimize the impact of packet loss.[7]

Nevertheless, UDP packet loss on WANs must be studied with respect to the effects of TCP traffic, especially TCP flow control behavior. Studies should also be conducted on how real-time messages should be transmitted over raw UDP with minimum packet loss.

In this paper, we investigate the characteristics of UDP packet loss through simulations of UDP and TCP streams sharing a bottleneck node to a WAN. In particular, the effect of TCP synchronization on UDP audio packet loss is examined. The final goal of this paper is to investigate ways of minimizing the packet loss of real-time audio carried over raw UDP. For this purpose, UDP packet loss is examined as a function of several parameters: the number of coexisting TCP connections, the UDP packet size, and the UDP transmission rate.

In Section 2, the simulation scenario is described along with the parameters of the TCP and UDP streams. Section 3 presents the simulation results, focusing on the effects of TCP synchronization on UDP packet loss. Section 4 concludes the paper with a discussion of how UDP real-time audio should be transmitted with minimum packet loss.

Simulation model

UDP packet loss occurs most often at the bottleneck node between local area networks (LANs) and WANs, because heavy traffic concentrates at that node and overloads the capacity of the WAN. For this reason, our network model consists of a bottleneck node from LAN to WAN and source-destination pairs communicating through the node, as shown in figure 1. The simulator used below is REAL 4.0,[8] to which we added our own implementation of the UDP source.


Figure 1: Scenario for Wide Area Networks Simulation

The available bandwidth is fixed at 10 Mbps for the LANs and 1.5 Mbps for the WAN. The bottleneck node buffer is a FIFO, and its size is set to 16 packets. Although this buffer is relatively small compared with those of real network gateways, it makes the characteristics of UDP packet loss easy to observe.
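
The behavior of such a buffer can be sketched in a few lines of Python (our illustration of a generic drop-tail FIFO, not the internals of REAL):

    from collections import deque

    BUFFER_SIZE = 16  # packets, as in the simulation model

    class DropTailFIFO:
        """Drop-tail FIFO: each packet occupies one slot regardless of its byte length."""

        def __init__(self, size=BUFFER_SIZE):
            self.size = size
            self.queue = deque()
            self.dropped = 0

        def arrive(self, packet):
            if len(self.queue) < self.size:
                self.queue.append(packet)
            else:
                self.dropped += 1  # buffer full: the arriving packet is lost

        def depart(self):
            return self.queue.popleft() if self.queue else None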

The UDP traffic is parameterized according to Internet audio applications. The UDP packet size is thus set to 80, 160, or 320 bytes, with UDP and IP headers added. The transmission rate is set to 16, 32, or 64 Kbps. Several combinations of packet size and transmission rate are examined.
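
For reference, the packet rate implied by each combination follows directly from the payload size and the bit rate (headers excluded, matching how the rates are stated above); the helper below is our illustration.

    def packets_per_second(rate_kbps, payload_bytes):
        """Packet rate implied by an audio bit rate and payload size."""
        return rate_kbps * 1000 / (payload_bytes * 8)

    # 64 Kbps: 80-byte packets -> 100 pkt/s, 160 -> 50 pkt/s, 320 -> 25 pkt/s
    for size in (80, 160, 320):
        print(size, packets_per_second(64, size))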

The TCP version is 4.3BSD-Reno, which halves the current congestion window when congestion is detected. TCP traffic is assumed to be generated by FTP applications on the WAN; approximately 70 percent of WAN traffic consists of TCP bulk data transfers,[1] which can be modeled as FTP data transfers. The FTP packet size is 512 bytes, with TCP and IP headers added. The receiver's buffer size is set to 16 packets.

Regarding the number of coexisting TCP connections, we examined cases with 3, 6, 9, and 12 connections. In the 3-connection case, all connections can transmit their packets at the maximum window size because congestion does not occur at the bottleneck node. In the 12-connection case, by contrast, some connections can transmit only a few packets because serious packet loss causes repeated timeouts. The results below are therefore shown for the 6-, 9-, and 12-connection cases.

The network delay on each connection is specified for two cases: (1) the homogeneous case, in which all connections' network delays are uniform; and (2) the heterogeneous case, in which they differ. In the homogeneous case, every connection's network delay is fixed at 52 ms, assuming 1 ms on the sender-side LAN, 50 ms on the WAN, and 1 ms on the receiver-side LAN. In the heterogeneous case, the delays on the sender-side LAN and the WAN are fixed as in the homogeneous case, while the delays on the receiver-side LAN follow the pattern 1 ms, 3 ms, 5 ms, 7 ms, and so forth, increasing by 2 ms with each connection. In a sense, the homogeneous case is not very realistic. However, it is possible for several FTP connections to be set up over paths with nearly identical delays, or for several TCP connections to be set up between the same host pair at the same time, as with HTTP.
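
The delay assignment just described can be restated compactly (our summary, using 0-based connection indices):

    def connection_delay_ms(index, homogeneous):
        """One-way network delay of connection `index` (0-based)."""
        sender_lan, wan = 1, 50
        receiver_lan = 1 if homogeneous else 1 + 2 * index  # 1, 3, 5, 7, ...
        return sender_lan + wan + receiver_lan

    print([connection_delay_ms(i, homogeneous=False) for i in range(6)])
    # heterogeneous case -> [52, 54, 56, 58, 60, 62]; homogeneous -> 52 everywhere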

The results presented below are based on simulations of 200 simulation seconds. This duration can be considered long enough to estimate the characteristics of UDP packet loss: preliminary experiments with durations of up to 20,000 simulation seconds did not yield significantly different characteristics.

The characteristics of UDP packet loss are investigated in terms of the following items:

  1. The effect of TCP synchronization.
  2. The effect of the number of existing TCP connections.
  3. The effect of UDP packet size.
  4. The effect of the UDP transmission rate.

Simulation results

TCP synchronization

First, we examine TCP flow control behavior when only TCP streams are carried.

Figure 2 shows the congestion window size evolutions of 6 TCP connections in the heterogeneous case, and figure 3 shows those in the homogeneous case.

All figures in this subsection show results for a short interval, from 45 s to 50 s of simulation time, because the behavior during the rest of the simulation is very similar.


Figure 2: Window Size Evolutions of 6 TCP Connections (Heterogeneous Case)


Figure 3: Window Size Evolutions of 6 TCP Connections (Homogeneous Case)

In figure 2, some TCP connections transmit their packets at the maximum window size, while others must endure timeouts, which reduce their window sizes to only one packet. The window sizes of the connections are not correlated with each other. In contrast, as shown in figure 3, no connection can increase its window size to the maximum. In addition, the window size changes periodically within a limited range. That is to say, all the window-size evolutions depicted are clearly synchronized (i.e., TCP synchronization).

Figure 4 shows the queue length of the bottleneck buffer in the heterogeneous case (TCP nonsynchronization), and figure 5 shows that in the homogeneous case (TCP synchronization).


Figure 4: Queue Length Evolution of Bottleneck Node Buffer (Heterogeneous Case)


Figure 5: Queue Length Evolution of Bottleneck Node Buffer (Homogeneous Case)

In both cases, some oscillation is observed in the queue-length evolution of the bottleneck buffer, but TCP synchronization causes greater oscillations, as shown in figure 5. In the TCP nonsynchronization case, the average queue length is 9.3 packets in the 16-packet buffer, and the total throughput of the TCP connections is 1.37 Mbps (excluding headers). In the TCP synchronization case, the average queue length is 7.3 packets, and the total throughput of the TCP connections decreases to 1.24 Mbps, for the following reason: under TCP synchronization, packet loss occurs on all the TCP connections at almost the same time, causing all of their window sizes to be reduced significantly, as shown in figure 3. As a result, both the bandwidth available to the TCP connections and the buffer are likely to remain almost unused for some duration, as shown in figure 5.
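
In link-utilization terms (our arithmetic from the throughput figures above; it ignores header accounting, so the true utilization is slightly higher):

    WAN_KBPS = 1500  # bottleneck bandwidth

    for label, mbps in [("nonsynchronized", 1.37), ("synchronized", 1.24)]:
        print(f"{label}: {mbps * 1000 / WAN_KBPS:.0%} of the WAN bandwidth")
    # nonsynchronized: 91%, synchronized: 83%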

As mentioned before, the homogeneous case is the worst case in the sense that the network suffers TCP synchronization, which degrades performance severely. In addition, our results show that the queue length changes in a rather cyclic manner even in the heterogeneous case.

Figures 2 through 5 show the 6-connection case, but the same behaviors are observed in the 9- and 12-connection cases. As the number of coexisting TCP connections increases, the TCP window sizes are kept within a narrower range, and the average queue length and the total throughput of the TCP connections decrease.

The characteristics of UDP packet loss: TCP traffic effect

Next, we treat the case where a UDP stream and TCP streams are transmitted over the network simultaneously, and examine the characteristics of UDP packet loss. In the following simulations, UDP packet loss is evaluated in two cases: the TCP nonsynchronization case and the TCP synchronization case.

UDP packet size and the number of existing TCP connections

In this subsection, the effects of the UDP packet size and the number of coexisting TCP connections on UDP packet loss are investigated.

The UDP stream is transmitted at a rate of 64 Kbps alongside 6, 9, or 12 TCP connections, and the UDP packet loss rate in each case is examined. The UDP packet loss rate is defined as the ratio of the number of lost packets to the total number of transmitted packets. Figure 6 shows the UDP packet-loss rate in the heterogeneous case (TCP nonsynchronization) for UDP packets of 80, 160, and 320 bytes. Figure 7 shows the loss rate in the homogeneous case (TCP synchronization).


Figure 6: Effect of the Packet Size on UDP Packet Loss (TCP Nonsynchronization)


Figure 7: Effect of the Packet Size on UDP Packet Loss (TCP Synchronization)

UDP packet loss occurs very often under TCP synchronization, especially when the UDP packets are 160 or 320 bytes, as shown in figure 7. To examine the packet loss in detail, the evolution of successively lost packets is measured for 320-byte UDP packets transmitted at 64 Kbps alongside 6 TCP connections. The result for the TCP nonsynchronization case is shown in figure 8, and that for the TCP synchronization case in figure 9.


Figure 8: Evolution of the Number of Successively Lost UDP Packets (TCP Nonsynchronization)


Figure 9: Evolution of the Number of Successively Lost UDP Packets (TCP Synchronization)

In the TCP nonsynchronization case (figure 8), packet losses are isolated and infrequent; at most, only two packets are lost in succession. In the TCP synchronization case, however, packet losses occur in succession and very often; in some instances, 4 packets are lost successively. This means that the congestion caused by TCP synchronization is not resolved within the time UDP takes to transmit 4 packets. Therefore, TCP synchronization causes UDP packet loss both successively and frequently.
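
The successive-loss counts plotted in figures 8 and 9 amount to run lengths over a per-packet loss trace; a sketch of that bookkeeping (our illustration) follows.

    def loss_run_lengths(trace):
        """Lengths of runs of successively lost packets in a boolean trace."""
        runs, current = [], 0
        for lost in trace:
            if lost:
                current += 1
            elif current:
                runs.append(current)
                current = 0
        if current:
            runs.append(current)
        return runs

    # One burst of 2 and one burst of 4 successively lost packets:
    print(loss_run_lengths([False, True, True, False, True, True, True, True]))  # [2, 4]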

As for the effect of the number of coexisting TCP connections, figures 6 and 7 show that UDP packet loss increases, particularly in the TCP synchronization case, as the number of connections increases. The reason can be described as follows:

With more TCP connections, a larger number of packets can arrive at the node simultaneously, severely congesting the buffer and making packet loss occur more often. In the TCP synchronization case in particular, packets are very likely to arrive at the bottleneck node from all TCP connections simultaneously, which degrades the UDP packet-loss performance still further.

In figure 7, the loss rate for 80-byte UDP packets is far less affected by TCP synchronization than that for the other packet sizes. To clarify this, figure 10 shows the TCP congestion window size evolutions of the 6-connection case with 320-byte UDP packets, and figure 11 shows them with 80-byte UDP packets.


Figure 10: Window Size Evolutions of 6 TCP Connections (TCP Synchronization) with 320 Byte-UDP


Figure 11: Window Size Evolutions of 6 TCP Connections (TCP Synchronization) with 80 Byte-UDP

Although TCP synchronization occurs in both figures, the window size varies over a wider range with 320-byte UDP packets than with 80-byte packets. The average window size in the 320-byte UDP case is 10.2 packets, with a total TCP throughput of 1.28 Mbps, while in the 80-byte UDP case the corresponding values are 8.19 packets and 1.24 Mbps. Therefore, 80-byte UDP packets reduce TCP throughput to some extent and lower the network utilization. This is explained in the next paragraph.

UDP packets are transmitted at a rate of 64 Kbps here, so 80-byte UDP packets are sent at a rate of 100 packets/sec, whereas 320-byte UDP packets are sent at a rate of only 25 packets/sec. At the bottleneck node buffer, a packet is the unit of management irrespective of its length. Therefore, when the UDP packets are 80 bytes, TCP packets arriving at the node find the buffer congested more often than when they are 320 bytes. For this reason, the TCP window size with 80-byte UDP packets is prevented from growing as large as with 320-byte UDP packets; 320-byte UDP packets in turn suffer congestion of long duration due to the large TCP windows. That is, small UDP packets yield better packet-loss performance but worse TCP throughput. In determining an appropriate UDP packet size, there is thus a tradeoff between improving UDP packet-loss performance and degrading TCP throughput.

The effect of UDP transmission rate

In this subsection, the effect of the UDP transmission rate on UDP packet loss is examined using the packet size that gave UDP its best performance: 80 bytes.

Figure 12 shows the UDP packet-loss rate for transmission rates of 64, 32, and 16 Kbps, with the UDP stream and 12 TCP connections sharing the node.


Figure 12: Effect of the Transmission Rate on UDP Packet Loss (UDP: 80 Bytes with 12 TCP Connections)

As shown in figure 12, in the TCP nonsynchronization case there are no significant differences in the UDP packet-loss rate across transmission rates. In the TCP synchronization case, by contrast, the packet loss is dramatically reduced as the transmission rate increases, and for transmission rates greater than 30 Kbps it is very close to that of the TCP nonsynchronization case.

This phenomenon can be explained as in Section 3.2.1: transmitting UDP packets at a high rate prevents the TCP connections from increasing their window sizes to large values.

Rate control of real-time video based on feedback of packet-loss information has been studied.[9] Rate control, which reduces the rate when packet loss is high, is very effective in moderating network congestion. However, our results show that rate control does not help reduce the packet-loss rate of the real-time application itself; in fact, UDP streams with lower transmission rates suffered worse packet-loss rates in our simulations.

TCP connections are so greedy that they try to use all the buffer capacity available at the bottleneck node. Thus, if more buffer capacity becomes available because the UDP transmission rate has been reduced, TCP acquires the freed bandwidth by increasing its window sizes further. As a result, the bandwidth available to UDP decreases as the UDP transmission rate decreases.

Therefore, for a real-time application using UDP, rate control produces the opposite of the intended effect on packet loss. Instead, increasing the UDP transmission rate while keeping the packet size small is a very effective way to reduce packet loss.

Conclusions

The characteristics of UDP packet loss have been investigated in terms of the effects of TCP flow control on a WAN where UDP and TCP coexist, with the following results:

First, we focused on the case in which only TCP connections use the bandwidth of the network. In particular, when the network delays of the connections are identical, all TCP congestion window sizes change in a synchronized manner (i.e., TCP synchronization). In this case, the queue length of the bottleneck node buffer evolves periodically and can stay full or almost empty for relatively long durations.

Second, we treated the case in which a UDP stream is added to the TCP synchronization case. UDP packet loss occurs more often, and in longer successive bursts, under TCP synchronization. This is because TCP synchronization can keep the node buffer full for relatively long durations, repeatedly and periodically. While the buffer is full, UDP packets are still transmitted at a constant rate and must be dropped successively. The UDP stream therefore suffers the harmful effects of TCP synchronization.

Third, as for the effect of UDP packet size, the UDP packet-loss rate is relatively low when the UDP packet size is small. In particular, small packets are very effective in moderating the severe packet loss that occurs under TCP synchronization.

Fourth, with respect to the effect of the UDP transmission rate, we have shown that the packet-loss rate is not reduced by using lower transmission rates. This can be explained as follows: TCP connections are capable of sharing all the available bandwidth among themselves through their flow control mechanism. Even if the UDP stream reduces its transmission rate, the freed bandwidth will promptly be consumed by the TCP connections. This is why reducing the transmission rate of UDP packets does not improve UDP packet-loss performance.

In this paper, a single UDP stream with varying parameters was dealt with, but we have also run simulations with more than one UDP stream; the results were almost the same as those shown here, although the UDP streams often affect one another.

From our simulation results, we conclude that when real-time applications use UDP as a transport protocol, the best performance is obtained by using small packet sizes at a rather high transmission rate.

However, it should be noted that small packets incur a relatively large overhead from the UDP and IP headers, which makes network use inefficient. In this sense, carrying traffic from real-time applications with good quality requires considerably more bandwidth than the applications themselves generate, while the required quality is still not guaranteed.
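
The overhead can be quantified directly: with a 20-byte IPv4 header and an 8-byte UDP header, the fraction of wire bytes spent on headers grows quickly as the payload shrinks.

    IP_HEADER, UDP_HEADER = 20, 8  # bytes

    def header_overhead(payload_bytes):
        """Fraction of wire bytes consumed by IP+UDP headers."""
        headers = IP_HEADER + UDP_HEADER
        return headers / (payload_bytes + headers)

    for size in (80, 160, 320):
        print(f"{size}-byte payload: {header_overhead(size):.1%} header overhead")
    # 80: 25.9%, 160: 14.9%, 320: 8.0%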

References

  1. http://www.nlanr.net/Flowsresearch/fixstats.21.6.html
  2. L. Zhang and D. Clark, Oscillating Behavior of Network Traffic: A Case Study Simulation, Internetworking: Research and Experience, vol. 1, pp. 101-112, 1990.
  3. S. Floyd and V. Jacobson, The Synchronization of Periodic Routing Messages, Proc. ACM SIGCOMM '93, pp. 33-44, September 1993.
  4. S. Floyd and V. Jacobson, On Traffic Phase Effects in Packet-Switched Gateways, Internetworking: Research and Experience, vol. 3, pp. 115-156, September 1992.
  5. D. Sanghi, A. K. Agrawala, O. Gudmundson, and B. N. Jain, Experimental Assessment of End-to-End Behavior on Internet, Proceedings of the Conference on Computer Communications, pp. 867-874, March/April 1993.
  6. J-C. Bolot, End-to-End Packet Delay and Loss Behavior in the Internet, Proc. ACM SIGCOMM '93, pp. 289-298, September 1993.
  7. J-C. Bolot and A. Vega Garcia, Control Mechanisms for Packet Audio in the Internet, Proc. IEEE Infocom '96, pp. 232-239, March 1996.
  8. S. Keshav, REAL: A Network Simulator, Tech. Rep. 88/472, Department of Computer Science, University of California, Berkeley, 1988.
  9. I. Busse, B. Deffner, and H. Schulzrinne, Dynamic QoS control of multimedia applications based on RTP, Computer Communications, pp. 49-58, January 1996.