
academic and fiscal

Many emerging Internet services are offered by companies whose primary business has thus far been telecommunications rather than IP. The NAP providers (as well as the vBNS provider) are good examples. As traditional phone companies, they are accustomed to having reasonable tools for modeling telephony workload and performance (e.g., Erlang distributions). Unfortunately, the literature on Internet traffic characterization, in both the analytical and performance measurement domains, indicates that wide area networking technology has advanced at a far faster rate than has the analytical and theoretical understanding of Internet traffic behavior.
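For concreteness, the classic telephony tool is the Erlang B formula, which gives the probability that a call is blocked on a trunk group of N circuits offered A erlangs of load; it underlies the ``acceptable blocking probability'' tables mentioned below. A minimal sketch in Python (illustrative only; the function and parameter names are our own):

    def erlang_b(circuits, offered_load):
        # Blocking probability of an M/M/N/N loss system, computed
        # with the numerically stable recurrence
        #   B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1)).
        b = 1.0
        for k in range(1, circuits + 1):
            b = offered_load * b / (k + offered_load * b)
        return b

    # A trunk group of 10 circuits offered 5 erlangs blocks roughly
    # 1.8% of call attempts; telephony engineering tables typically
    # target blocking probabilities below about 1%.
    print(erlang_b(10, 5.0))

No comparably settled formula exists for sizing an IP link, which is precisely the gap this section describes.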

The slower and more containable realms of years ago were amenable to characterization with closed-form mathematical expressions, which allowed reasonably accurate prediction of performance metrics such as queue lengths and network delays. But traditional mathematical modeling techniques, e.g., queueing theory, have met with little success in today's Internet environments. For example, the assumption of Poisson arrivals was acceptable for characterizing small LANs years ago. As a theory of network behavior, however, the tenacity of Poisson arrival assumptions, whether applied to packet arrivals within a connection, connection arrivals within an aggregated stream of traffic, or packet arrivals across multiple connections, has been quite remarkable in the face of their egregious inconsistency with collected data [3,4]. Leland et al. [5] and Paxson and Floyd [6] investigate alternatives to Poisson modeling, specifically the use of self-similar (fractal) mathematics to model IP traffic.
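One simple diagnostic behind these findings is the variance-time plot: when a count series is averaged over blocks of size m, the variance of the block means decays as 1/m for Poisson-like (short-range dependent) traffic, but only as m^(2H-2) for self-similar traffic with Hurst parameter H. The sketch below (Python with NumPy; a rough illustration under assumed parameters, not a calibrated traffic model) contrasts a Poisson count series with a superposition of heavy-tailed on/off sources, a construction associated with self-similar aggregate traffic in the literature cited above:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1 << 14  # length of each synthetic count series

    def onoff_aggregate(n_sources, length, alpha=1.5):
        # Superpose on/off sources whose on and off period lengths are
        # Pareto(alpha) distributed; with 1 < alpha < 2 the aggregate
        # is asymptotically self-similar with H = (3 - alpha) / 2.
        total = np.zeros(length)
        for _ in range(n_sources):
            t, on = 0, bool(rng.integers(0, 2))
            while t < length:
                dur = int(np.ceil(rng.pareto(alpha) + 1.0))
                if on:
                    total[t:t + dur] += 1.0
                t += dur
                on = not on
        return total

    def variance_time_slope(x, max_log_m=8):
        # Slope of log(variance of block means) versus log(m):
        # about -1 for Poisson traffic, about 2H - 2 (shallower)
        # for long-range dependent traffic.
        ms, vs = [], []
        for k in range(max_log_m):
            m = 1 << k
            blocks = x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1)
            ms.append(m)
            vs.append(blocks.var())
        return np.polyfit(np.log(ms), np.log(vs), 1)[0]

    poisson = rng.poisson(lam=10.0, size=n).astype(float)
    bursty = onoff_aggregate(50, n)
    print(variance_time_slope(poisson))  # close to -1.0
    print(variance_time_slope(bursty))   # shallower; near -0.5 asymptotically

The practical consequence is that bursty traffic does not smooth out under aggregation the way Poisson models predict, so buffer and capacity estimates built on those models can be badly wrong.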

There is still no clear consensus on how statistics can support research in IP traffic modeling, and there is skepticism within the community regarding the utility of empirical studies that rely on collecting real data from the Internet: some claim that because the environment is changing so quickly, any collected data is of only historical interest within weeks. There are those whose research is better served by tractable mathematical models than by empirical data that represent at most one stage in network traffic evolution.

A further contributing factor to the lag of Internet traffic modeling behind that of telephony is the early financial structure of the Internet. A few U.S. government agencies assumed the financial burden of building and maintaining the transit network infrastructure, leaving little need to trace network usage for the purposes of cost allocation. As a result, Internet customers never had much leverage with their service providers regarding quality of service.

Many of the studies for modeling telephony traffic came largely out of Bell Labs, which had several advantages: no competition forcing profit margins thin, and therefore the financial resources to devote to research, as well as a strong incentive to fund research that could ensure the integrity of the networks for which they charged. The result is a situation today where telephone company tables of ``acceptable blocking probability'' (e.g., the inability to get a dial tone when you pick up the phone) reveal standards significantly higher than our typical expectations of the Internet.

We do not have the same situation in the developing Internet marketplace. Instead we have dozens of Internet providers, many on shoestring budgets in low-margin competition, who view statistics collection as a luxury that has never proven its utility in Internet operations. How will statistics really help keep a NAP alive and robust, when traffic seems to change as fast as anyone can analyze it?

We are not implying that the monopoly provider paradigm is better; we are only observing aspects of the situation that got us to where we are today: we have no way to predict, verify, or in some cases even measure Internet service quality in real time.

There is some hope that some of the larger telecommunication companies entering the marketplace will eventually devote more attention to this area. The pressure to do so may not arise until the system breaks, at which point billed customers will demand, and be willing to pay for, better guarantees and data integrity.

The cost-benefit tradeoff is clear: undertake enough network research to secure a better understanding of the product the NAPs sell, without draining the operational resources needed to keep the network alive.

Failing that, the NAPs can fund enough research to show that they are being good community members, contributing to the `advancement of Internet technology and research' with respect to understanding traffic behavior, at the least cost in time and effort taken away from more critical engineering and customer service activities.






k claffy
Sat Apr 29 09:03:22 PDT 1995