
The Nature of the Beast: Recent Traffic Measurements from an Internet Backbone

K. CLAFFY <kc@caida.org>
Cooperative Association for Internet Data Analysis
USA

Greg MILLER <gmiller@mci.net>
Kevin THOMPSON <kthomp@mci.net>
vBNS/MCI
USA

Abstract

As described in last year's INET'97 paper, MCI has implemented a high-performance, low-cost monitoring system that captures Internet traffic (cell/packet headers) and performs analyses, and has deployed these monitors on OC-3 trunks within MCI's backbone and within the NSF-sponsored vBNS (very high performance Backbone Network Service). This publicly available tool facilitates measurement and analysis of high-speed OC-3, and now OC-12, trunks that carry hundreds of thousands of simultaneous flows. As a follow-up to last year's paper, we provide some new data analyses as well as comparisons with last year's data that may suggest trends in changing workload profiles. All the data in this paper is based on recent wide-area MCI Internet backbone traffic as recorded by the Coral monitors.


Introduction

Everything you've learned in school as "obvious" becomes less and less obvious as you begin to study the universe. For example, there are no solids in the universe. There's not even a suggestion of a solid. There are no absolute continuums. There are no surfaces. There are no straight lines.
-- R. Buckminster Fuller

Sustained, rapid growth, increased economic competition, and proliferation of new applications have combined to change the character of the Internet in recent years. The sheer volume of the traffic and the high capacity of the trunks have rendered traffic monitoring and analysis a more challenging endeavor. As described in last year's INET'97 paper [1], MCI has implemented a high-performance, low-cost monitoring system that captures Internet traffic (cell/packet headers) and performs analyses, and has deployed these monitors on OC-3 trunks within iMCI's backbone and within the NSF-sponsored vBNS. This publicly available tool facilitates measurement and analysis of high-speed OC-3, and now OC-12, trunks that carry hundreds of thousands of simultaneous flows. As a follow-up to last year's paper, we provide some new data analyses as well as comparisons with last year's data that may suggest trends in changing workload profiles. All the data in this paper is based on recent wide-area MCI Internet backbone traffic as recorded by the Coral monitors.

We reveal the characteristics of the traffic in terms of flow size by protocol, percentage composition of traffic by protocol and application, distributions of flow sizes, length of packet trains, and statistics on IP fragmentation, prefix length distribution, and address space utilization. Where applicable, we also compare this data to the data from August 1997 as described in the previous study [2].

Coral measurement architecture

The original Coral DOS architecture was described in detail in Apisdorf et al.'s INET'97 paper [1]. The goal of the Coral project is to address three incompatible trends:

  1. Current widely-used statistics-gathering tools, which are largely FDDI- and Ethernet-based, have difficulty scaling to higher speeds.
  2. ATM circuits at OC-3 and higher rates are increasingly used for high-volume backbone trunks and interconnects.
  3. Detailed, flow-based analysis is important for understanding usage patterns and growth trends, but such analysis is not generally possible with the data that can be obtained directly from today's routers and switches.

The current Coral implementation satisfies the need for a high-speed monitoring and flow analysis tool while meeting the project's two driving design constraints of flexibility and low cost. Coral is a programmable data collection and analysis tool that can be easily modified as we codify and refine our understanding of the desired statistics. Furthermore, it is inexpensive to build, which facilitates widespread deployment. Both the flow analysis code and monitor architecture are in the public domain. Work is underway to expand the family of Coral monitors to address different speeds (OC12, OC48, DS3), interface types (e.g., Digital gigaswitch), and operating systems (e.g., Unix).

Coral hardware

The Coral/OC3mon platform is an IBM personal computer clone with 256 MB of main memory, a 166 MHz Intel Pentium processor, an Ethernet interface, two ATM interface cards, and a 33 MHz 32-bit-wide PCI bus. The ATM interface card used in the current Coral/OC3mon implementation is the Fore Systems ATM network interface card (NIC) for the PCI bus. The Intel i960 processor on this interface card allows us to optimize Coral/OC3mon operation with custom firmware.

We attach the two Coral/OC3mon ATM NICs to an OC-3 optical fiber pair carrying IP-over-ATM traffic. We connect the receive port of each ATM card to the monitor port of an optical splitter. The splitter carries a fraction of the light from each fiber to the receive port of one NIC. Attached to an OC-3 trunk that terminates on a switching device (e.g., an ATM switch or a router), one of the Coral/OC3mon NICs sees all traffic received by the switching device, and the other NIC sees all traffic transmitted by the switching device. The Coral/OC3mon NICs capture traffic on the two directions of an OC-3 link independently.

Coral software

The custom-developed Coral/OC3mon firmware is implemented in C++ and assembly code and provides full flexibility in terms of collection and analysis capability. Coral/OC3mon supports three modes of data collection: raw cell trace capture, active flow reporting, and expired flow analysis. In raw trace mode, Coral/OC3mon captures either every cell, or the first cell of every packet (AAL5 frame), that appears on the link. In this mode, Coral/OC3mon does not analyze captured data, but simply produces a time-stamped raw cell trace. The maximum length of a raw cell trace collected by OC3mon is limited to the amount of RAM in the monitor. In the two flow analysis modes, OC3mon collects statistics regarding flows, either active or expired, where the definition of a flow is configurable. Note that Coral processes every cell/packet; there is no statistical sampling involved in the monitoring itself.

Measurement methodology

We define a flow as a unidirectional traffic stream with a unique [source-IP-address, source-port, destination-IP-address, destination-port, IP-protocol] tuple (see figure 1). Any flow for which the monitor has not seen a packet within the last 64 seconds is considered to be an expired flow. A flow for which a packet was seen within the last second is considered to be an active flow. The monitor reports statistics based on expired flows whenever polled by a collector. After reporting the expired-flow statistics to the collector, the monitor clears its stored state on all expired flows. It continues to maintain state information on flows that have not yet expired. These continuing flows, which have not yet been reported on, are referred to as known flows. The data presented in this paper is derived from expired-flow statistics reported by the monitors, which are polled around the clock at 5-minute intervals. Flows that have been in progress (known) for one hour are artificially expired by the monitor so that they can be reported. This mechanism for delimiting long-lived flows can affect the reported data by producing artificial traffic spikes.

Figure 1. Flow framework (courtesy of Hans-Werner Braun, NLANR/MOAT)
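To make this bookkeeping concrete, the following minimal sketch (ours, not the Coral source; all class, field, and function names and the per-packet entry point are assumptions for illustration) keys a table on the five-tuple above, expires entries that have been idle for 64 seconds, and force-expires flows that have been known for an hour:

    // Hypothetical sketch of the flow bookkeeping described in the text; the
    // names are ours, not Coral's.  A flow expires after 64 idle seconds, and
    // a long-lived (known) flow is force-expired at 1 hour.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <tuple>
    #include <vector>

    struct FlowKey {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  ip_proto;
        bool operator<(const FlowKey& o) const {
            return std::tie(src_ip, dst_ip, src_port, dst_port, ip_proto) <
                   std::tie(o.src_ip, o.dst_ip, o.src_port, o.dst_port, o.ip_proto);
        }
    };

    struct FlowState {
        uint64_t packets = 0, bytes = 0;
        double   first_seen = 0, last_seen = 0;
    };

    class FlowTable {
    public:
        // Called once per captured packet header; no sampling is involved.
        void Packet(const FlowKey& k, uint32_t ip_len, double now) {
            FlowState& f = flows_[k];
            if (f.packets == 0) f.first_seen = now;
            f.packets++; f.bytes += ip_len; f.last_seen = now;
        }
        // Move idle (>64 s) and long-lived (>1 h) flows to the expired list,
        // which the collector drains the next time it polls the monitor.
        void Expire(double now) {
            for (auto it = flows_.begin(); it != flows_.end(); ) {
                const FlowState& f = it->second;
                if (now - f.last_seen > 64.0 || now - f.first_seen > 3600.0) {
                    expired_.push_back(f);
                    it = flows_.erase(it);
                } else {
                    ++it;
                }
            }
        }
        std::vector<FlowState> DrainExpired() {
            std::vector<FlowState> out;
            out.swap(expired_);
            return out;
        }
    private:
        std::map<FlowKey, FlowState> flows_;    // known (not yet expired) flows
        std::vector<FlowState>       expired_;  // awaiting the next poll
    };

    int main() {
        FlowTable table;
        FlowKey k{0x0A000001u, 0x0A000002u, 1024, 80, 6};  // made-up TCP flow
        table.Packet(k, 44, 0.0);
        table.Packet(k, 1500, 1.0);
        table.Expire(100.0);  // 99 idle seconds > 64, so the flow expires
        std::printf("expired flows: %zu\n", table.DrainExpired().size());
        return 0;
    }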

The monitor does not maintain statistics on expired flows individually but instead aggregates them on a per-protocol basis. For this reason, flow statistics such as duration, byte volume, and length in packets are reported in terms of averages. Future plans include expanding the monitor's functionality to support collection of more detailed distribution information, as we recognize that averages can be of limited value in describing distributions of Internet traffic characteristics because of their wide variation.
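As a rough sketch of that aggregation step (again with invented names and stand-in values, not Coral's data structures), expired flows might be folded into one accumulator per protocol or application, from which only per-flow averages can later be reported:

    // Hypothetical sketch of per-protocol aggregation of expired flows.  Only
    // totals are kept, so the collector can report averages (packets, bytes,
    // and duration per flow) but not full distributions.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <string>

    struct ExpiredFlow { std::string proto; uint64_t packets, bytes; double duration; };

    struct ProtoTotals { uint64_t flows = 0, packets = 0, bytes = 0; double duration = 0; };

    int main() {
        // Stand-in expired-flow records, purely illustrative.
        ExpiredFlow samples[] = {
            {"tcp_http", 12, 9000, 4.2}, {"tcp_http", 7, 3100, 1.9},
            {"udp_dns",   2,  230, 0.1},
        };
        std::map<std::string, ProtoTotals> agg;
        for (const ExpiredFlow& f : samples) {
            ProtoTotals& t = agg[f.proto];
            t.flows++; t.packets += f.packets; t.bytes += f.bytes; t.duration += f.duration;
        }
        for (const auto& kv : agg) {
            const ProtoTotals& t = kv.second;
            std::printf("%s: %.1f pkts/flow, %.1f bytes/flow, %.2f s/flow\n",
                        kv.first.c_str(), double(t.packets) / t.flows,
                        double(t.bytes) / t.flows, t.duration / t.flows);
        }
        return 0;
    }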

FreeBSD Unix port

In response to community feedback, NLANR's (National Laboratory for Applied Network Research) Measurement and Operations Analysis Team (MOAT) [6] and the Cooperative Association for Internet Data Analysis (CAIDA) [5] have ported Coral/OC3mon to FreeBSD Unix (2.2.2-RELEASE). The Unix port is divided into two portions: the kernel-space device driver, which deals with the Fore PCA-200E card directly, and the user-space code, which performs the flow analysis and fulfills requests for summary reports. The Unix version differs slightly from the DOS version in the three modes of data collection; for details see the Coral home page [3]. Briefly, in the packet trace collection mode, Coral/DOS requires a manual process to start, terminate, and copy data from the monitor to a host. The Unix version is more amenable to automation since cron utilities are available from the operating system.

In flow analysis summary mode, Coral/DOS results are available by telnet to a port on the box; Coral/Unix requires that one manually send the process a signal.

Management and security issues also differ on the two platforms. There is as yet no encryption of any communication with the Coral/DOS machine (access, management, or data transfers); management is via out-of-band (terminal-based) access. These characteristics may limit Coral/DOS deployment in a broad environment with needs for remote management. In contrast, Coral/Unix uses a standard LAN-based connection to the Internet. While more conducive to many environments that might want tools for scheduling, encryption, and basic operating system support for a monitoring platform, the Coral/Unix implementation is less stable than the DOS version and is still undergoing testing.

Packet header trace format

The Coral output format for a raw trace of packet headers consists of concatenated blocks of data of approximately 1 MB each. Each data block consists of a 512-byte block header, followed by 17,408 packet entries of 60 bytes each. The block header includes information about which Coral monitor interface the data in this block is from and how many of the 17,408 entries are actually used. The 60-byte entries are depicted in figure 2. The DOS and Unix Coral versions differ in that the DOS version has the first 64 bits of each 60-byte entry byte-swapped. A flag to indicate byte-swapping resides in the first 16 bits of each block header, with -1 (0xffff) denoting the non-byte-swapped (Unix) version.

Figure 2. Coral packet trace format, byte-swapped version (courtesy of Hans-Werner Braun, NLANR/MOAT)

CAIDA/NLANR also provides scripts to convert from Coral format to ASCII, from Coral format to libpcap/tcpdump format [10], and to privatize source/destination IP addresses, all available from CAIDA's Web site [3].
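As a rough illustration of how such a trace might be walked (the 512-byte header, the 17,408 slots of 60 bytes, and the 0xffff flag come from the description above; everything else, including where the used-entry count lives in the header, is an assumption), a reader could process the file block by block:

    // Sketch of a reader for the raw-trace block layout described above.  The
    // offsets of header fields beyond the leading 16-bit byte-swap flag are
    // not specified here, so this sketch simply walks every 60-byte slot.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr size_t kBlockHeaderBytes = 512;
    constexpr size_t kEntriesPerBlock  = 17408;
    constexpr size_t kEntryBytes       = 60;
    constexpr size_t kBlockBytes =
        kBlockHeaderBytes + kEntriesPerBlock * kEntryBytes;  // roughly 1 MB

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s trace_file\n", argv[0]); return 1; }
        std::FILE* f = std::fopen(argv[1], "rb");
        if (!f) { std::perror("fopen"); return 1; }

        std::vector<unsigned char> block(kBlockBytes);
        size_t blocks = 0;
        while (std::fread(block.data(), 1, kBlockBytes, f) == kBlockBytes) {
            // First 16 bits of the block header: 0xffff marks the
            // non-byte-swapped (Unix) variant.
            uint16_t flag = static_cast<uint16_t>(block[0] | (block[1] << 8));
            bool byte_swapped = (flag != 0xffff);
            for (size_t i = 0; i < kEntriesPerBlock; ++i) {
                const unsigned char* entry =
                    block.data() + kBlockHeaderBytes + i * kEntryBytes;
                (void)entry;  // a real reader would decode the captured header
                              // here, un-swapping the first 64 bits if needed
            }
            std::printf("block %zu: byte_swapped=%d\n", blocks++, byte_swapped ? 1 : 0);
        }
        std::fclose(f);
        return 0;
    }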

OC12mon

As MCI's backbone transitioned from OC-3 to OC-12 speeds, a new version of the Coral monitor was needed. The new ATM OC-12 cards, designed by Applied Telecom, satisfy the project's constraints of performance and public code availability. The daughter card on the Applied Telecom PCI card for OC12mon can be replaced with one that does OC-3, including packet-over-SONET, if consistency of monitors across link speeds or the features of the Applied Telecom firmware are desirable.

OC12mons, now undergoing field tests on iMCI links, are designed to collect the first 2-3 cells per packet (3 cells yield 144 bytes of payload, including LLC/SNAP or other headers), which is useful for tcpdump/snoop type functionality. Capturing arbitrarily long raw traces, an important feature of the OC12mons, requires writing to disk while monitoring at line speed, which demands roughly 80 MB/s of disk write throughput (the OC-12 line rate of 622 Mbit/s corresponds to about 78 MB/s). As a result, under current technology, an OC12mon implementation requires a highly tuned RAID array using FibreChannel or equivalent capabilities.

All other hardware specifications for OC12mon are the same as for OC3mon, including optical splitter, monitor, keyboard, Ethernet interface, etc. MCI uses Pentium Pro CPUs for OC12mon, with no perceived problems. (Note that PCI cards are preferable to ISA cards here, since the latter tend to hold onto the bus longer.) Specifications and code are available from the Coral home page [3].

Measurement points

Two OC3mons are installed on OC-3 links within nodes on the iMCI backbone. Each OC3mon monitors traffic on a fiber pair between a core router and a backbone ATM switch. The first point serves as a junction for several backbone trunks as well as an access point for local customer traffic near a major U.S. East Coast city. The second point includes a U.S.-U.K. transatlantic DS-3 trunk, where the monitor sits on an OC-3 fiber pair between the router to which the international DS-3 trunk is homed and the backbone ATM switch. For both points, our measurements were taken on 13 April 1998, from midnight to midnight. Some additional raw trace data was taken from the domestic trunk for in-depth analysis not possible from the flow data alone.

Results

Tables 1 and 2 provide summary statistics on the highest volume TCP and UDP applications for the 24-hour period of Monday 13 April 1998. Table 1 shows the top ten TCP applications from the international measurement point sorted by byte volume. There were 2.3 billion total packets and 0.8 trillion bytes for the day. The top ten TCP applications constituted 122 million flows, 1.867 billion packets, and 708.440 billion bytes. The top ten UDP applications (table 2) constituted 17 million flows, 0.1 billion packets, and 14 billion bytes.

Table 1. Top ten TCP applications sorted by byte volume (24 hours of Monday 13 April 1998)
Proto  Src Port  Dest Port      Flows    Packets      Bytes
TCP    http      0           48766857  682980223   5.13e+11
TCP    0         http        69961315  935240039   7.38e+10
TCP    0         nntp           78521   57621854   3.82e+10
TCP    ftp-data  0             123702   31144523   2.97e+10
TCP    0         smtp         1632969   45616152   2.33e+10
TCP    nntp      0              70606   39495541   1.40e+10
TCP    5501      0               5319    5894572   7.27e+09
TCP    0         ftp-data      180794   29531305   3.68e+09
TCP    7070      0              27453    4799848   3.48e+09
TCP    smtp      0            1560102   34801980   2.01e+09

TCP TOTALS 2.054457e+09 packets and 7.775899e+11 bytes
TOTALS 2.337206e+09 packets and 8.144167e+11 bytes

Table 2. Top ten UDP applications sorted by byte volume (24 hours of Monday 13 April 1998)
Proto  Src Port  Dest Port      Flows    Packets      Bytes
UDP    domain    domain      13830555   44310822   4.72e+09
UDP    7648      7648            6270    5099081   1.61e+09
UDP    27910     0             416481    8896619   1.60e+09
UDP    0         27910         613932   19672069   1.21e+09
UDP    dtspc     dtspc         195932   20327183   1.17e+09
UDP    27500     0             283211    7962525   9.40e+08
UDP    22555     22555          15655    7520126   9.17e+08
UDP    domain    0            1648306    3476921   7.25e+08
UDP    0         27500         325784   11800234   6.64e+08
UDP    0         ntp            98739    8581132   6.52e+08

UDP TOTALS 2.497650e+08 packets and 2.938063e+10 bytes
TOTALS 2.337206e+09 packets and 8.144167e+11 bytes

Packet sizes

Figure 3a. Distribution of packet sizes

Figure 3b. Cumulative distribution of packet sizes, and of bytes by the size of packets carrying them

Figure 3a shows the distribution of packet sizes from a 24-hour time period on both directions of the measured trunk. As with graphs from previous years [1], this figure illustrates the predominance of small packets, with peaks at the common sizes of 44, 552, 576, and 1500 bytes. The small packets, 40-44 bytes in length, include TCP acknowledgment segments, TCP control segments such as SYN, FIN, and RST packets, and telnet packets carrying single characters (keystrokes of a telnet session). Many TCP implementations that do not implement Path MTU Discovery use either 512 or 536 bytes as the default Maximum Segment Size (MSS) for nonlocal IP destinations; adding 40 bytes of TCP and IP headers yields the 552-byte and 576-byte packet sizes [11]. A Maximum Transmission Unit (MTU) size of 1500 bytes is characteristic of Ethernet-attached hosts.

Figure 3b shows the cumulative distribution of packet sizes, and of bytes by the size of packets carrying them. This graph shows that almost 75% of the packets are smaller than 552 bytes, the packet size produced by the common default MSS. Nearly half of the packets are 40 to 44 bytes in length. Note, however, that in terms of bytes, the picture is much different. While almost 60% of packets are 44 bytes or less, constituting a total of 7% of the byte volume, over half of the bytes are carried in packets of size 1500 bytes or larger.
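The curves in figure 3b amount to a single pass over per-packet IP lengths. The sketch below is our own and assumes an input of one decimal IP length per line on standard input; it accumulates both the cumulative fraction of packets and the cumulative fraction of bytes by packet size:

    // Tally behind a cumulative packet-size/byte-volume plot: for each IP
    // packet size, count packets and bytes, then emit both running fractions.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        constexpr int kMaxIpLen = 65535;
        std::vector<uint64_t> pkts(kMaxIpLen + 1, 0);
        uint64_t total_pkts = 0, total_bytes = 0;

        int len;
        while (std::scanf("%d", &len) == 1) {
            if (len < 0 || len > kMaxIpLen) continue;
            pkts[len]++;
            total_pkts++;
            total_bytes += static_cast<uint64_t>(len);
        }
        if (total_pkts == 0) return 0;

        // Cumulative fraction of packets no larger than each size, and of
        // bytes carried in packets no larger than each size.
        uint64_t cum_pkts = 0, cum_bytes = 0;
        for (int s = 0; s <= kMaxIpLen; ++s) {
            if (pkts[s] == 0) continue;
            cum_pkts  += pkts[s];
            cum_bytes += pkts[s] * static_cast<uint64_t>(s);
            std::printf("%d %.4f %.4f\n", s,
                        static_cast<double>(cum_pkts)  / total_pkts,
                        static_cast<double>(cum_bytes) / total_bytes);
        }
        return 0;
    }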

Composition of traffic by protocol

Figure 4a. Composition of byte volume by IP protocol. Data from 24-hour period on 13 April 1998.

Figure 4b. Composition of byte volume by TCP application. Data from 24-hour period on 13 April 1998.

Figure 5a. Composition of packets by IP protocol. Data from 24-hour period on 13 April 1998.

Figure 5b. Composition of packets by TCP application. Data from 24-hour period on 13 April 1998.

Figure 6a. Composition of flows by IP protocol. Data from 24-hour period on 13 April 1998.

Figure 6b. Composition of flows by TCP application. Data from 24-hour period on 13 April 1998.

The graphs in figures 4-6 indicate the composition of the traffic over a 24-hour period on the measured link. Aside from a slightly greater proportion of Web traffic, this data is not significantly different from measurements taken in August 1997 [2]. The left-hand plots show traffic in terms of IP protocols; on the right are breakdowns by TCP and UDP applications. Focusing first on IP protocols, we observe that TCP still by far dominates the traffic mix. Over the course of a day, TCP averages about 95% of the bytes, 90% of the packets, and 80% of the flows on the link. UDP is the second largest category, at roughly 5% of the bytes, 10% of the packets, and 18% of the flows on average. The other IP protocols plotted are tunneled IPv6, encapsulated IP (IP-in-IP), ICMP, and an aggregate category for the remaining protocols labeled "other." These other protocols individually make up a negligible percentage of the overall traffic. ICMP constitutes the third highest packet percentage after TCP and UDP, but still makes up less than 2% of the overall packets and 0.5% of the overall bytes.

The figures on the right side depict the proportion of the most prevalent TCP and UDP applications measured over a 24-hour period. For each application, we combine client-to-server and server-to-client (and in the case of DNS, server-to-server) traffic into a single category. We see that the Web is the dominant application on the link, comprising up to 75% of the bytes, 70% of the packets, and 75% of the flows when client and server traffic are considered together. In measuring applications, we end up with a larger "other" category than when measuring IP protocols. The "other" category is spread among a wide range of TCP and UDP port numbers, no one of which represents a significant percentage of the traffic by itself. Among the most common port numbers in this category are 81, 443, 3128, 8000, and 8080, which are all Web-related, indicating that the Web may actually be slightly underrepresented in our measurements.

In addition to Web traffic, we identify five other applications that contribute an appreciable percentage of traffic: DNS, SMTP, FTP (data connections), NNTP, and telnet. In terms of flows, DNS traffic represents the second largest application at nearly 18% of the overall flows. However, DNS flows are small, accounting for less than 3% of the total packets and 1% of the bytes on average. SMTP averages 5% of the bytes, 5% of the packets, and 2% of the flows. FTP data connections, on average, constitute roughly 5% of the bytes, up to 3% of the packets, and less than 1% of the flows. NNTP represents 2% of the bytes and less than 1% of the packets and flows. Finally, telnet accounts for about 1% of the packets and less than 1% of the bytes and flows, a marked decrease from recent years as alternative interactive protocols (e.g., ssh, kerberos, rlogin) have increased in popularity[8,9].

Per flow averages (overall and per application)

Figure 7. Distribution of packets per flow

Figure 8. Distribution of packets per flow by protocol (log-log scale)

Figures 7 and 8 show the sizes of flows as measured in packets. Figure 7 graphs the packets-per-flow distribution for all flows; the distribution is remarkably long-tailed, so we truncate it here for viewability. Figure 8 focuses on the number of packets per flow as a function of specific TCP or UDP application. The vertical lines are box-and-whisker plots: the x represents the mean number of packets per flow for a 24-hour period. The top and bottom of the vertical lines indicate the maximum and minimum 5-minute averages over the 24-hour period, respectively. This figure shows on a log-log scale how small most of the transaction-style flows, e.g., HTTP, SMTP, DNS, are in contrast to the bulk data transfer-style flows, e.g., FTP-data, NNTP. Note that telnet flows can be composed of large numbers of packets but are much smaller in byte volume (graph not shown here).

Lengths of packet trains

We illustrate the behavior of sequences of packets categorized by packet size. Because examining every distinct packet size inhibits useful visualization and adds little over bucketing packet sizes (dividing the range of packet sizes into contiguous buckets, e.g., 0-44 bytes, 45-90 bytes, 91-180 bytes, 181-260 bytes, 261-576 bytes, 577-1120 bytes, and larger than 1120 bytes), we experimented with different bucket widths as we created histograms of train lengths. In figure 9, we bucketed packet sizes and analyzed sequence lengths from a five-minute trace on a domestic iMCI OC-3 link from November 1997; a sketch of the tally appears after figure 9.

The figures, one for each direction of traffic on the link, both indicate the presence of a considerable number of trains of packets of sizes in the 261-576 byte range. Of interest to router engineers are the long trains of very short packets; i.e., the top red line on each graph in figure 9 represents trains of packets, all of which were 44 bytes or less. Because a larger proportion of router processing overhead is per-packet than per-byte, sustained streams of relatively short packets pose a heavier workload on the router than when longer packets are interspersed.

Figure 9. Lengths of sequences of packets of approximately the same size (0.97 million packets in the forward direction [top graph]; 1.14 million in the reverse direction [bottom graph]) (5 minutes, domestic link, 5 November 1997).
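The bucketing and train-length tally described above can be sketched as a run-length count over the sequence of per-packet size buckets. The code below is ours (not the Coral analysis code) and assumes one decimal IP length per line, in arrival order, for a single link direction:

    // Map each packet size to one of the buckets named in the text, then count
    // how long each run ("train") of consecutive same-bucket packets lasts.
    #include <cstdio>
    #include <map>
    #include <utility>

    // Upper bounds (bytes) of the size buckets listed in the text.
    static const int kBucketTops[] = {44, 90, 180, 260, 576, 1120, 1 << 30};

    static int Bucket(int len) {
        int b = 0;
        while (len > kBucketTops[b]) ++b;
        return b;
    }

    int main() {
        std::map<std::pair<int, int>, long> hist;  // (bucket, train length) -> count
        int len, prev_bucket = -1, run = 0;
        while (std::scanf("%d", &len) == 1) {
            int b = Bucket(len);
            if (b == prev_bucket) {
                ++run;
            } else {
                if (prev_bucket >= 0) ++hist[{prev_bucket, run}];
                prev_bucket = b;
                run = 1;
            }
        }
        if (prev_bucket >= 0) ++hist[{prev_bucket, run}];
        for (const auto& kv : hist)
            std::printf("bucket=%d train_length=%d count=%ld\n",
                        kv.first.first, kv.first.second, kv.second);
        return 0;
    }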

IP fragment counts

Several denial-of-service attacks exploit IP fragments, which are commonly found in a heterogeneous wide-area traffic stream [7], and their presence has thus become an area of concern for Internet service providers. Figure 10 illustrates the percentage of total traffic that is composed of fragmented IP packets over the course of a week in April 1998 -- at most, 3% of total traffic.

Figure 10. Percentage of traffic composed of fragmented IP packets (13-20 April 1998)

IP address prefix length distribution

Figure 11. Traffic sourced from and destined to addresses of various prefix lengths (packets and bytes) (5 minutes, domestic link, 5 Dec 1997).

Figure 11 plots the distribution of prefix lengths for source and destination IP addresses of traffic in a five-minute packet trace from a domestic OC-3 iMCI trunk in December 1997. Most of the IP traffic is sourced from and destined to IP addresses of prefix lengths /16, /24, and /19: together these three lengths account for approximately 64.7% of the bytes and 67.9% of the packets by source address, and 62.9% of the bytes and 61.8% of the packets by destination address. For IP destinations, 30.9% of the packets and 36.8% of the bytes are destined to /16's, 16.2% of the packets and 13.4% of the bytes are destined to /24's, and 14.7% of the packets and 12.7% of the bytes are destined to /19's. For IP sources, 35.9% of the packets and 26.5% of the bytes are sourced from /16's, 16.8% of the packets and 20.8% of the bytes are sourced from /24's, and 15.2% of the packets and 17.4% of the bytes are sourced from /19's. In contrast, the distribution of routing table entries comprises a larger proportion of /24's. Note that /19 is the minimum allocation of provider-independent address space by the registries and that some providers by policy refuse to forward or support route prefixes longer than /25.
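A per-prefix-length attribution like figure 11's can be sketched as a longest-prefix match of each packet's addresses against a routing table, followed by a tally keyed on the matched prefix length. The toy table, the hex input format, and the linear search below are assumptions for illustration only; the Coral analysis itself may work differently:

    // Attribute packets to the prefix length of the longest matching route for
    // their source and destination addresses.  A real tool would load a full
    // backbone routing table and use a trie rather than a linear scan.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <vector>

    struct Route { uint32_t prefix; int len; };  // len = 0..32

    static int MatchLen(uint32_t addr, const std::vector<Route>& table) {
        int best = -1;
        for (const Route& r : table) {
            uint32_t mask = (r.len == 0) ? 0 : ~uint32_t(0) << (32 - r.len);
            if ((addr & mask) == r.prefix && r.len > best) best = r.len;
        }
        return best;  // -1 if no covering route
    }

    int main() {
        // Toy routing table (illustrative prefixes only).
        std::vector<Route> table = {
            {0xC0A80000u, 16},   // 192.168.0.0/16
            {0xC0A80100u, 24},   // 192.168.1.0/24
        };
        std::map<int, uint64_t> src_pkts, dst_pkts;  // prefix length -> packets

        unsigned src, dst, ip_len;
        // Assumed input: "src_addr dst_addr ip_length" per packet, with the
        // addresses given as 32-bit hexadecimal values.
        while (std::scanf("%x %x %u", &src, &dst, &ip_len) == 3) {
            int slen = MatchLen(src, table), dlen = MatchLen(dst, table);
            if (slen >= 0) src_pkts[slen] += 1;
            if (dlen >= 0) dst_pkts[dlen] += 1;
            (void)ip_len;  // a byte tally would be accumulated the same way
        }
        for (const auto& kv : src_pkts)
            std::printf("src /%d packets=%llu\n", kv.first, (unsigned long long)kv.second);
        for (const auto& kv : dst_pkts)
            std::printf("dst /%d packets=%llu\n", kv.first, (unsigned long long)kv.second);
        return 0;
    }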

Figure 12. Mean packet size as a function of prefix lengths (5 minutes, domestic link, 5 Dec 1997).

Figure 12 shows the average packet size as a function of prefix length. Note that in this data, addresses with prefix lengths of /24 and longer tend to send larger packets, presumably to destinations of shorter prefix length, which in turn receive packets of larger average size. The disparity between the average sizes of packets sent and received for /32 addresses may be due to BGP route updates from one or more routers whose addresses are announced as /32's.

Summary

We have presented statistics of recent iMCI Internet backbone data, taken in mid-April 1998, with some studies of packet traces from November and December 1997. The data was collected with Coral monitors developed by MCI under the auspices of the NSF-funded vBNS project [4] and in collaboration with NLANR/CAIDA [5,6]. The flow data is based on the monitors reporting on all expired flows, which are unidirectional, on back-to-back 5-minute intervals. We reported statistics on packet sizes, composition of traffic by protocol, per flow averages (overall and per application), traffic flow among countries, lengths of packet trains, IP fragment counts, IP address prefix length distribution, and IP address space utilization.

For the 24-hour period of 13 April 1998, a domestic iMCI backbone link saw 2.3 billion total packets and 0.8 trillion bytes, for which the top ten TCP applications constituted 122 million flows, 1.867 billion packets, and 708.440 billion bytes. The top ten UDP applications constituted 17 million flows, 0.1 billion packets, and 14 billion bytes.

Our measurements from April 1998 indicate that almost 60% of packets are 44 bytes or less, constituting a total of 7% of the byte volume; over half of the bytes are carried in packets of size 1500 bytes or larger.

The composition of traffic, in terms of both packets and bytes, is not significantly different from traces measured in August 1997. TCP still averages about 95% of the bytes, 90% of the packets, and 80% of the flows. UDP makes up most of the rest of the traffic, with IPv6, encapsulated IP (IP-in-IP), ICMP, and other protocols taking up around 3% of the traffic. HTTP is still the dominant application on this backbone link, comprising up to 75% of the bytes, 70% of the packets, and 75% of the flows when client and server traffic are considered together. There is a significant amount of traffic in the "other" category, spread among a wide range of TCP and UDP port numbers, many of which are also Web-related (e.g., port 8080). Notably, the telnet protocol accounts for about 1% of the packets and less than 1% of the bytes and flows, a marked decrease from recent years as alternative interactive protocols (e.g., ssh, kerberos, rlogin) have increased in popularity.

The distribution of flow sizes, as measured in packets, is long-tailed. Our measurements indicate that the majority of flows are still transaction-style, e.g., HTTP, SMTP, DNS, carrying much less traffic than the bulk data transfer-style flows, e.g., ftp-data, nntp.

We analyzed the length of packet trains composed entirely of packets of similar size. Because a larger proportion of router processing overhead is per-packet than per-byte, sustained streams of relatively short packets pose a heavier workload on the router than when longer packets are interspersed. These figures are thus important for router designers and engineers to frame their design specifications.

The percentage of total traffic that is composed of fragmented IP packets over the course of a week in April 1998 is at most 3%.

We also analyzed the distribution of traffic by IP address prefix length and found that the majority is sourced from and destined to IP addresses of prefix lengths /16, /24, and /19: approximately 64.6% for sources and 63.1% for destinations.

Many of these statistics are of significant interest to Internet engineers and equipment vendors trying to design routing and switching equipment based on current workload profiles and predicted trends. We expect such data will become only more important in the future as the Internet relentlessly evolves to higher speeds and more heterogeneous workloads.

Future work

Future possible avenues for Coral development/deployment include

  1. Expansion of the family of Coral monitors to address different speeds (OC12, OC48, DS3), interface types (e.g., Digital gigaswitch), OSes (e.g., Unix), and other requirements (e.g., IP over Sonet);
  2. More extensive performance testing to see when the boxes begin to fail to keep up with the traffic load;
  3. Sustainable security, privacy, and trust model;
  4. Enhancements for remote access, including support for automatic rebooting, remote interactive access via terminal servers, and automatic scheduled data collection in various customizable modes;
  5. Simplified ("plug-and-play") installation and maintenance, requiring only a single box at the local site, with no miscellaneous equipment or phone lines; and
  6. Functionality to support collection of more detailed distribution information.

Acknowledgments

The authors are grateful to Hans-Werner Braun for his commitment to Coral data analysis; Tracie Monk for her commitment to CAIDA; Rick Wilder for his support of the Internet measurement agenda; Nancy Bachman, Daniel McRobb, and David Moore, from CAIDA, and Renato Gragasin from MCI for their swat-team efforts and proofreading and feedback on visualization impact. And none of this analysis would have been possible without the incredible effort of Joel Apisdorf in developing and maintaining the Coral monitor.

This work has been supported through MCI and the vBNS project, sponsored by NSF grant NCR-9321047, and through CAIDA, funded partially by the National Science Foundation through grant #NCR-9711092 but mostly through contributions from its industrial membership.

References

  1. J. Apisdorf, k claffy, K. Thompson, and R. Wilder, "OC3mon: Flexible, Affordable, High-Performance Statistics Collection," Proceedings of INET'97. http://www.isoc.org/INET97/proceedings/F1/F1_2.HTM
  2. K. Thompson, G. Miller, and R. Wilder, "Wide Area Internet Traffic Patterns and Characteristics," IEEE Network, November 1997. http://www.vbns.net/presentations/papers/MCItraffic.ps
  3. Coral home page. http://www.caida.org/Tools/Coral/
  4. vBNS (very high performance Backbone Network Service). http://www.vbns.net
  5. Cooperative Association for Internet Data Analysis (CAIDA). http://www.caida.org
  6. National Laboratory for Applied Network Research (NLANR), Measurement and Operations Analysis Team (MOAT). http://moat.nlanr.net/
  7. G. Ziemba, D. Reed, and P. Traina, "Security Considerations - IP Fragment Filtering," RFC 1858. http://ds.internic.net/rfc/rfc1858.txt
  8. K. Claffy, H.-W. Braun, and G. Polyzos, "Long-term traffic aspects of the NSFNET," Proceedings of INET'93. http://www.caida.org/Papers/lta.html
  9. K. C. Claffy, H.-W. Braun, and G. C. Polyzos, "A parameterizable methodology for Internet traffic flow profiling," IEEE JSAC, April 1996. http://www.caida.org/Papers/pmi.html
  10. tcpdump, Lawrence Berkeley Laboratory Network Research Group. http://www-nrg.ee.lbl.gov/tcpdump.tar.Z
  11. W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, 1994.

Biographies

Gregory J. Miller has been a Senior Engineer in the vBNS Engineering Group at MCI since September 1996. His focus is on network performance measurement, traffic analysis, and IP and ATM Quality of Service mechanisms. Before joining MCI, he was a Senior Member of the Technical Staff at the MITRE Corporation. He received a BS degree from Loyola College in 1988, and the MS and PhD degrees from the University of Delaware in 1990 and 1993, all in computer science.

Kevin Thompson is a Senior Engineer in the vBNS Engineering Group at MCI. He supports statistics collection architecture and implementation for the vBNS. He was employed as an engineer at the MITRE Corporation in the Networking Center until 1995. He received a BS in Computer Science from the University of Virginia in 1987 and an MS in Computer Science from George Washington University in 1992.

k claffy is principal investigator for the Cooperative Association for Internet Data Analysis (CAIDA) and resident research scientist at the University of California, San Diego. kc's research interests include traffic analysis, impact of high-demand (e.g., multimedia) applications on the integrity of current infrastructure, equity among users, and changing financial structure of the Internet.
