The Network Is the Market: Financing Internet Bandwidth
John du Pre GAUNTT <email@example.com>
The nineteen-hour shutdown of America Online (AOL) on Wednesday, 7 August 1996, brought home to many decision makers the challenge of matching public expectations of speed and reliability to the underlying telecomms infrastructure upon which the Internet runs. Simply put, unless large users can effectively fix the speed, reliability, and cost of Internet bandwidth -- without having to build or lease infrastructure -- it is doubtful that many of the extravagant claims made for interactive commerce or entertainment can be realized. Such content or service providers will be in the same position as an airline that cannot fix its fuel cost but tries to compete nonetheless.
But if we compare the way we supply telecomms resources with the way we supply fossil fuel energy, several things become clear. There is no public forum where producers and consumers of bandwidth can effectively fix future prices. Instead, there is an effectively "closed" market for bandwidth, dominated by telecomms operators, in which large users must either build their own infrastructure, lease telecomms lines, or try to bargain for the best deal from network providers.
This is changing slowly through competition and the introduction of still embryonic forums for trading telecomms capacity. This paper will explore some of the possibilities for writing financial contracts that can be traded publicly where the underlying asset is bandwidth. It will draw from previous research on the conceptual basis of bandwidth trading and then highlight actual developments in the City of London.
There are two main reasons to trade bandwidth: (1) to hedge risk; and (2) to speculate on price changes. Hedgers and speculators in such a commodity can unlock the real costs of communications for electronic markets. These actors need not be telecomms operators, just as investment banks that trade petroleum futures do not own filling stations. Instead, we are talking about investors who are assuming the risk of price movements in order to control a resource that underlies an economic paradigm. In that sense, the management of a bandwidth-based derivative such as a future or an option need not be different from that of any other financial contract.
This paper will concentrate on specific examples from Europe where the European Virtual Private Network Users Association (EVUA) has launched an organizational shell to buy bulk capacity on behalf of the EVUA membership. It will also track the development of telecomms arbitrage operations as well as dedicated bandwidth brokerages such as Band-X, which is developing a spot market for international minutes while publishing an index of international outbound prices. Between these efforts, certain important pieces for constructing public markets for bandwidth capacity are slowly emerging.
The aim of this paper will be to highlight for an interdisciplinary audience some of the constraints at work when attempting to "price" bandwidth, show how the Internet is impacting the traditional model, and introduce some of the possibilities for network-based financial instruments with real examples.
The 15-day strike in August 1997 by workers at United Parcel Service (UPS) holds valuable lessons for the Internet. Within minutes of the walkout, UPS warehouses began filling with packages that were unable to continue to their final destination. During the strike, the company moved about 500,000 parcels per day, considerably less than the pre-strike average of 12 million. Additionally, many of those packages were sensitive to delay -- with sensitivity ranging from the failed overnight delivery promised by a mail-order retailer, to human skin graft material that had to be rushed to a surgeon.
Compare that situation with the events of 17 July 1997, when huge swaths of the global Internet suddenly became inaccessible. In this case, the Internet's domain name system (DNS) -- which translates user-friendly names such as economist.com into the numeric addresses used for routing -- failed because of a software glitch compounded by human error. Again, the core of an immensely influential system depended on the fortunes of a single company -- Network Solutions Inc. (NSI) of Herndon, Virginia.
The UPS strike and the NSI fiasco provide a glimpse of just how dependent modern societies have become on the smooth functioning of networks. Such dependency is, for the most part, invisible. However, it is when these networks fail, become congested, or are disrupted that society learns just how much of its well-being rides upon so little.
But there are some important differences between the way in which people perceive their dependency on UPS and their growing dependency on the Internet. People can see a correlation between the performance of the UPS network and the price structure that supports it. For its part, notwithstanding strikes, UPS knows that its business is delivering parcels -- so it knows how much that should cost. As such, there is a feedback system that informs users what their actions are going to cost and also tells UPS where it should invest its resources.
People, meanwhile, are being educated about the Internet in a rather different manner. They are being fed a seductive idea by technologists and policymakers that technical advance and market competition will ensure that information is "free" and that the future will be an age when bandwidth flows like water and becomes as easy to access as electricity.
And as with water or electricity, all an observer need do is survey existing bandwidth markets to uncover startling price differentials and areas where service simply does not exist. In the meantime, informed observers estimate that the demand for bandwidth has been doubling every three to four months; in the past 20 years, bandwidth demand has increased one million times; fax transmissions make up fully half of what is counted as voice traffic; and in a few years, given certain trends, 99% of all traffic will be carried on Internet Protocol networks.1
As for the price of bandwidth, the rest of the information economy must wait while most of the main Internet infrastructure owners -- incumbent telecommunications companies -- work out how to migrate away from a business model that distorts per-bit prices for bandwidth in order to preserve narrowband revenue streams while still making broadband service attractive to large users.
Yet there should be a point where bandwidth can be dynamically unbundled from the network without recourse to building or leasing dedicated infrastructure. While engineers look to protocol solutions for discriminating among classes of traffic on the Internet, work on pricing models that are both robust and simple tends to concentrate on either theoretically elegant though difficult auction systems or else pre-reservation schemes that add complexity to already-crowded header fields on Internet packets.
The move towards traffic discrimination and bandwidth pre-reservation illustrates a larger shift in Internet engineering towards performance over ubiquity. Whether or not this threatens the homogeneity of the Internet, in effect creating an Internet for the wealthy and an Internet "for the rest of us," is not known. Nor is there a price mechanism that indicates the cost that pre-reserving bandwidth imposes on the network as a whole.
Worst of all, most pricing scenarios operate at the level of national or regional markets. There is no effective global benchmark for comparing bandwidth prices in the same fashion as, for example, Brent Crude is invoked in global petroleum markets. In essence, if the Internet is to become the Global Information Infrastructure (GII), it would seem plausible that the price of bandwidth should evolve along lines similar to the prices of other global commodities such as energy, primary food stuffs, or textiles. This implies a new type of participation by markets, which have been hitherto limited to investing in network companies as opposed to network resources.
This paper will explore certain assumptions and possibilities for public trading of bandwidth. It will examine how the Internet is impacting traditional telecommunications pricing models and sketch some of the elements and future research required for quasi-public trading of network resources.
If one remembers the role of fossil fuel energy in the well-being of industrial civilization, one can imagine future perceptions of telecommunications platforms in the prosperity of information civilization. As more production and consumption decisions become predicated on the speed and reliability of interactive telecommunications infrastructure, the shift towards widespread commercial transactions over the public networks could redefine bandwidth as a volatile resource.
Rapid price swings are possible given that the communications costs for transacting in electronic markets made possible by the Internet are as yet unknown because the majority of present telecomms charging formulas are connection-oriented. Each call has a set-up phase during which a connection is established and maintained for the length of the call.2 Conceptually speaking, this model assumes that no one else can use the circuit. Thus, only a single accounting record is needed regardless of the session's duration.
This model is also useless for an electronic market. In a packet-switched environment, a communications session is broken into discrete packets which traverse the network separately.3 Accounting for server usage on the WWW requires a separate record for every "hit," which adds up rapidly even if a user perceives a continuous session. If telephone-style accounting were used, the equivalent of a one-minute call could generate over 2,000 accounting records and a ten-minute call over 20,000.4
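The arithmetic behind those figures can be sketched as follows. The packet rate is an illustrative assumption (roughly 34 packets per second for a voice-quality session), chosen only to reproduce the orders of magnitude quoted above:

```python
# Rough sketch of why per-packet accounting explodes relative to
# per-call accounting. The packet rate is an illustrative assumption.

PACKETS_PER_SECOND = 34  # assumed rate for a voice-quality session

def accounting_records(duration_seconds: int, per_packet: bool) -> int:
    """Number of accounting records generated by one session."""
    if per_packet:
        # One record per packet-level "hit".
        return duration_seconds * PACKETS_PER_SECOND
    # Circuit model: a single record regardless of duration.
    return 1

one_minute = accounting_records(60, per_packet=True)    # over 2,000 records
ten_minutes = accounting_records(600, per_packet=True)  # over 20,000 records
circuit = accounting_records(600, per_packet=False)     # 1 record
```

The contrast between one record and tens of thousands is what makes telephone-style billing machinery unworkable for packet traffic.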
As such, capacity planning on telecommunications systems has become an even more esoteric art than before. The former method of interrogating voice switches to build a model does not work for Internet traffic. Aside from the oft-heard stories of Internet connections that last for hours instead of minutes, on a more fundamental level, engineers are finding that data networks often exhibit fractal, or self-similar, characteristics. Basically, this means that data traffic is not periodic and therefore will not "smooth out" over time. Average throughput may look acceptable when analyzed over five-minute intervals yet exhibit unacceptable variability over five-second intervals. The upshot for the short-term is that network designs must be more conservative (in other words, expensive) in order to provide a consistent high-quality service.5
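The effect of self-similarity on capacity planning can be illustrated with a purely synthetic trace -- the burst pattern and throughput figures below are invented, not measurements:

```python
# Toy illustration of why the averaging interval matters for bursty
# traffic. The trace is synthetic: mostly idle, with short bursts.

# One throughput sample (Mbit/s) per second for five minutes.
trace = [0.0] * 300
for start in (30, 150, 270):          # three 5-second bursts
    for t in range(start, start + 5):
        trace[t] = 10.0               # 10 Mbit/s during a burst

# Averaged over the full five minutes, the link looks nearly idle.
five_min_avg = sum(trace) / len(trace)

# Averaged over 5-second sub-windows, the bursts dominate.
window_avgs = [sum(trace[i:i + 5]) / 5 for i in range(0, 300, 5)]
peak_5s = max(window_avgs)
```

Here `five_min_avg` comes out at 0.5 Mbit/s while `peak_5s` reaches 10 Mbit/s: a link sized to the five-minute average would be swamped twenty times over during each burst, which is why conservative (expensive) designs are needed.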
This is happening while the most important question facing Internet commerce has yet to be resolved -- how communications costs are to be allocated in a transaction setting. If a customer accesses the site of a seller, then the customer is paying for the communications. If that customer then decides to make a purchase of a product or service, should the final price of the item include their communications costs -- especially for an entertainment site providing bandwidth-hungry audio or video files? Would there be a discount for a customer who made a quick decision and thereby conserved resources or an extra charge for dithering over a purchase? How would network congestion in downloading the requested item be treated -- as part of the overall price or would there be scope for express delivery? What of wide-area collaborative applications such as gaming? Who pays the communications costs for returning an information-based item? Is there scope for a vendor to extend extra communications capability as part of a loyalty program?
It follows, therefore, that a transaction for even a simple item involves a complex communications dialogue that alternates between secure and insecure, priority and nonpriority, flat rate or usage sensitive and so on. Thus, it is likely that future charging mechanisms will need to have knowledge of the application protocol that is being used.
Yet, historically, telecommunications prices have been based on distance rather than the nature of an application. Theoretically, the further a signal had to travel, the higher the switching and transmission costs. This not only gave a basis for a pricing structure, but it also provided the means for cross-subsidizing telephony service. Whatever the past merits of using high international and long-distance rates to keep the cost of local service affordable, the practical reality is that this regime for pricing bandwidth assumed the predominance of voice telephony, the willingness of corporate users to pay higher bills, and isolated national markets -- assumptions all of which are now unraveling.
To stave off competitive threats by nontraditional bandwidth suppliers such as electricity companies, most telecommunications operators have taken the idea of "cost-based pricing" to the center of the bandwidth debate. New entrants are flashing their equipment invoices to show that they "know" their costs, while telecommunications economists are furiously publishing book and journal articles on how one can perform a proper cost estimation for an ex-monopoly provider migrating to an open market.
Be that as it may, it remains the case that the cost and price of bandwidth are more of a function of the alternatives rather than the actual cost of delivering it. In other words, the willingness to pay is going to be negotiated from a valuation point of view. "We have a service. They have a need and they are willing to pay a price," says one European supplier, "but that price will be compared to what other options are available and what value customers can make using our facilities."6
Granted that this sentiment may be widespread in the present telecommunications industry, the fundamental fact is that the cost of bandwidth has entered into the overhead of almost any business that hopes to use the Internet for electronic commerce. Therefore, it seems unlikely that the future price of bandwidth will be determined solely by the infrastructure or marketing costs of network operators.
So if we accept that the transaction model for Internet commerce will be electronic networks, and if we accept that the price of bandwidth in such a setting becomes a central business cost, it is feasible that bandwidth could be capitalized and traded in a public setting.
There are two main reasons to trade bandwidth: (1) to hedge risk; and (2) to speculate on its price. Hedgers and speculators not only create a new investment market, but can unlock information about the real cost of communications for an electronic market. A trader with undisputed access to network resources is not going to be a network operator just as an investment bank trading petroleum futures does not own filling stations. Instead, what we are talking about are investors who are acting on behalf of non-facilities-based carriers or even infrastructure owners in order to control a resource that is useful, difficult to substitute, and demanded by all who transact in an electronic market.
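The hedging half of that proposition reduces to a familiar piece of futures arithmetic. The sketch below uses hypothetical prices per unit of capacity; the point is only that a long futures position pins the buyer's effective cost at the contract price regardless of where the spot price ends up:

```python
# Sketch of a plain futures hedge applied to bandwidth.
# All prices are hypothetical, per unit of capacity.

futures_price = 100.0  # price locked in today for future delivery

def effective_cost(spot_at_delivery: float, hedged: bool) -> float:
    """Cost of capacity bought at spot, net of any futures payoff."""
    if not hedged:
        return spot_at_delivery
    # A long future gains if spot rises, loses if spot falls...
    payoff = spot_at_delivery - futures_price
    # ...so the net cost always collapses to the contract price.
    return spot_at_delivery - payoff

unhedged_high = effective_cost(130.0, hedged=False)  # exposed to the rise
hedged_high = effective_cost(130.0, hedged=True)     # locked at 100
hedged_low = effective_cost(80.0, hedged=True)       # still 100
```

A content provider using such a contract trades the chance of cheap spot bandwidth for certainty about its communications overhead -- exactly the position of the airline that can fix its fuel cost.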
Yet before a "bandwidth futures" pit can be realized, there is a host of questions that must be addressed. The primary question would involve the degree to which it is possible to not only model network demand, but also model price changes over time across multiple operators and markets in such a way that potential bandwidth contract buyers -- telco or not -- are able to base their decisions on the investment risk as opposed to the operational risks of actually delivering a network service. Should that admittedly major hurdle be overcome, there is the additional need for exchange mechanisms, underwriting, and risk analysis.
As far as modeling network traffic patterns, certain companies have started producing maps of international switched voice traffic, while others are attempting to do the same with Internet traffic. The two main organizations producing these cognitive maps are Telegeography: http://www.telegeography.com and Matrix Information and Directory Services (MIDS): http://www3.mids.org.
Washington, DC-based Telegeography is a leading publisher of reports on international telecommunications flows. It publishes statistics on the number of minutes of public switched traffic (in millions) for over 100 countries to indicate the top twenty telecommunications routes. It does this through its direct relationship with major telecommunications carriers who provide the raw data for Telegeography's annual reports. By aggregating this data over time, Telegeography produces maps of telecommunications traffic flows over the past decade in over 50 major telecommunications markets, while tracking tariff changes in the last five years. The consultancy claims that the world's cross-border telephone traffic grew 13% to reach 70 billion minutes in 1996. In value terms, the global market for international minutes increased to US$ 61 billion, an 11.5% jump over 1995.
While Telegeography tracks the flow of international minutes, Matrix Information and Directory Services (MIDS) attempts to track Internet usage itself with a service known as the Internet Weather Report (IWR). The IWR produces maps of the global Internet by using the "echo request" element of the Internet Control Message Protocol (ICMP) -- often represented by a user program called "ping" -- to query various Internet domains from its Austin, Texas-based servers. Taking its list of Internet domains from the Network Wizards Internet Domain Survey http://www.nw.com, the IWR is a sort of radar scan of the Internet in which the round-trip time of a ping between Austin and a particular domain is modeled into a latency map. The size of a circle indicates the latency of a particular site, running from small (low latency) to large (high latency). The IWR pings each domain five times and takes the average latency per node. MIDS then collects the average latencies for all nodes in each scan and makes a geographic map. To show change over time, MIDS uses Java to animate six scans per day.
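The IWR's aggregation step amounts to a simple average per node per scan. The sketch below uses invented round-trip figures in place of real ICMP echo replies; only the averaging method follows the description above:

```python
# Sketch of IWR-style aggregation: five round-trip samples per domain,
# averaged into one latency figure per node per scan. The domain names
# and RTT values (milliseconds) are invented for illustration.

samples = {
    "example.ac.uk": [310, 295, 330, 305, 300],
    "example.co.jp": [480, 510, 495, 500, 490],
    "example.com":   [80, 85, 75, 90, 70],
}

def scan_latencies(rtt_samples):
    """Average the five pings for each node, yielding one 'scan'."""
    return {node: sum(rtts) / len(rtts) for node, rtts in rtt_samples.items()}

latencies = scan_latencies(samples)
# Circle size on the map would then scale with each node's average.
```

Animating successive scans is then just a matter of replaying these per-scan dictionaries in sequence.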
In addition to the IWR, MIDS has launched a further Internet visualization tool called Tracemap: http://mids.alexa.com/test/tracemap/. Tracemap enables a user to visualize the route that packets take from a server in San Francisco to a domain name specified by the user. Each tracemap of a destination shows graphically and textually the number of hops and time between hops in milliseconds of the IP path from one host to another, as well as the time it took to get to its intended destination. As of February 1998, Tracemap can be used by anyone on the WWW as part of a beta test.
Granted the importance of traffic- or tariff-mapping services such as those supplied by Telegeography and MIDS, it is still the case that the only working spot markets for capacity exist in just three locations: 60 Hudson Street in New York, One Wilshire in Los Angeles, and the Telehouse in London. All of these "carrier hotels" offer facilities co-location and disaster recovery service as well as providing ISPs with links into Internet backbone nodes. As such, Web-based contracts for international minutes or bandwidth are largely restricted to using these major nodes. This does little to improve local access to bandwidth. However, the more trading that goes on in spot markets, the closer the world comes to "standard" prices for large bandwidth chunks.
Not surprisingly, there have been several attempts at starting bandwidth trading exchanges. Three companies in particular: Arbinet http://www.arbinet.com, Band-X http://www.band-x.com, and RateXchange http://www.rateXchange.com are using the Web to enable buyers and sellers of international minutes or bandwidth capacity to browse multiple bids and offers before meeting.
Band-X and its direct copy RateXchange share a business model in which users register with the respective services and are then allowed to browse bid/offer prices that specify international routes, connection points, and any special technical data, as well as the price. If a buyer or seller wishes to pursue a bid or offer to completion, the two parties are introduced by the broker, which receives a percentage commission on the final agreed price.
Band-X has taken the concept of the neutral broker one step further through the launch of its index of UK outbound traffic in September 1997 and the launch of a US-based index soon afterwards. The Band-X indices reflect movements in the wholesale prices of international telecommunications minutes.
The Band-X index is created on each of the top twenty routes by volume of international minutes. Launched with a base value of 100, the indices are combined to create a country composite index, within which each individual route is weighted according to its proportion of total outgoing international traffic.
The data for the indices are provided to Band-X by no fewer than five international carriers, who submit their wholesale selling prices for the last week of the previous month. The weighting for the composite index is calculated according to data provided by Telegeography. The composite figures are released at the end of each month. The individual route indices are released one month later, allowing privileged access by the contributing carriers.
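The index construction described above can be sketched numerically. The routes, prices, and traffic-share weights below are invented; only the method -- per-route indices rebased to 100 at launch, then combined by traffic share -- follows the text:

```python
# Sketch of a Band-X-style composite index. Each route's wholesale
# price is expressed as an index against its launch value (base 100),
# then the route indices are combined using traffic-share weights.
# All routes, prices, and weights below are invented for illustration.

base_prices = {"UK-US": 0.20, "UK-DE": 0.30, "UK-IN": 1.10}     # at launch
current_prices = {"UK-US": 0.16, "UK-DE": 0.27, "UK-IN": 1.21}  # this month
weights = {"UK-US": 0.5, "UK-DE": 0.3, "UK-IN": 0.2}            # traffic shares

# Per-route indices: 100 at launch, falling as wholesale prices fall.
route_index = {r: 100.0 * current_prices[r] / base_prices[r] for r in base_prices}

# Country composite: routes weighted by share of outgoing traffic.
composite = sum(weights[r] * route_index[r] for r in route_index)
```

With these made-up figures the UK-US route index falls to 80, UK-DE to 90, UK-IN rises to 110, and the composite lands at 89 -- a single number summarizing wholesale price movement across the country's top routes.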
Arbinet plans to be a bit more ambitious. A technology vendor as well as a possible bandwidth broker, Arbinet proposes to build an overlay network of programmable switches on carrier backbones that can intelligently route traffic according to the pricing and quality rules established by the individual carriers. Operators post their network availability and the prices they are offering at any given time to an Arbinet server or Central Local Node (CLN). Arbinet customers who have miniature versions of a CLN running in their networks can query the main server for the least-cost route based on a call's particular requirements. The Arbinet overlay system calls for a "Universal Switch" that when connected to individual carrier networks will comprise the virtual "Clearing Network." If an individual carrier wishes to join the Clearing Network, it will publish full information on route quality, times, rates, and restrictions to the Clearing Network database.
The Clearing Network operator (Arbinet) will manage all of the clearing aspects. Arbinet believes that the number of Universal Switch operators is unlikely to exceed a few thousand, served by a series of replicated servers. The main argument for joining a Clearing Network is that while the marginal cost of carrying additional traffic is almost nil, the marginal cost of adding capacity is high. Between those factors, carriers who currently exploit less than half of their existing capacity have an incentive to manage their network bandwidth and costs by publishing to the Clearing Network the rates and times at which they are willing to transit and/or terminate other carriers' traffic.
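The sort of least-cost query a miniature CLN might run against the Clearing Network database can be sketched as follows. The carriers, rates, and quality scores are invented, as is the idea of reducing a call's requirements to a single numeric quality floor:

```python
# Sketch of least-cost route selection over carrier postings of the
# kind the Clearing Network database would hold. All carriers, rates
# (per minute), and quality scores are invented for illustration.

postings = [
    {"carrier": "CarrierA", "route": "UK-JP", "rate": 0.45, "quality": 0.99},
    {"carrier": "CarrierB", "route": "UK-JP", "rate": 0.38, "quality": 0.95},
    {"carrier": "CarrierC", "route": "UK-JP", "rate": 0.30, "quality": 0.80},
]

def least_cost(route, min_quality):
    """Cheapest posting on a route that meets the call's quality floor."""
    eligible = [p for p in postings
                if p["route"] == route and p["quality"] >= min_quality]
    return min(eligible, key=lambda p: p["rate"])

# CarrierC is cheapest outright, but falls below the quality floor,
# so the query settles on CarrierB.
choice = least_cost("UK-JP", min_quality=0.9)
```

The interesting design point is that price and quality rules stay with the individual carriers; the clearing operator only matches postings against requirements.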
The upshot of the emergence of network mapping services, along with exchange schemes such as Band-X and Arbinet, is that several key elements for managing bandwidth-based contracts exist. Although it is the case that the data for the Band-X index are provided largely on the goodwill of telecommunications carriers, while the Arbinet system looks for a radical overhaul of the present settlement system, these are still early days of bandwidth trading. The key will be how these systems scale and how the price and volume information they are generating can be aggregated, analyzed, and integrated into workable risk models for investors -- telco or not.
The thrust of Peter Bernstein's comments is that we can assemble big pieces of information and we can assemble smaller pieces of information, but we can never get all the pieces together. As such, when information is lacking, we have to fall back on inductive reasoning. This is another way of saying that we must try to guess the odds.
Some of the most impressive research on inductive reasoning was done by the Nobel Prize-winning economist Kenneth Arrow. Early on, Arrow became convinced that people tend to overestimate the amount and value of information available to them. He drew this conclusion based upon his experience as a weather forecaster during the Second World War.
Arrow and some other officers were ordered to forecast the weather one month ahead but the statisticians found that such long-range forecasts were no better than pulling numbers out of a hat. Naturally, the forecasters asked to be relieved of this duty whereupon the reply noted that "The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes."8
If we look at the current state of bandwidth prices, we see many of the same tendencies at work. For undersea cables -- arguably the ultimate Internet backbones -- it is obvious that capacity forecasts by carriers and actual demand have lived in different worlds. Japan's KDD still tries to live down its late-1980s assertion regarding TPC-3's ability to handle all traffic until the year 2000 (its trans-Pacific route is now home to TPC-4, TPC-5, NPC, and other major systems).
Granted that KDD predicted TPC-3's capacity while the Internet was a rather minor factor, it is clear that demand for Internet bandwidth is collapsing the traditional investment model for telecommunications infrastructure. This leaves direct cable investor/operators in the uncomfortable position of attempting to bet on what they know to be a "sure thing" with a "not-sure-at-all" idea of how long it will take before they are paid back. As such, participation by non-telco investors in information infrastructure projects or in managing the price risk resulting from the completion of those projects is more important than ever. But those investors cannot be drawn without third-party information and the tools with which to model demand without directly operating a network.
The question facing network mappers or exchange providers is how well their systems scale not only in volume but in speed. The maps of Telegeography take almost one year to produce while the IWR shows latencies over five samples taken within a single day. Tracemap may be better in its granularity, but the fact remains that these mapping services remain mainly research tools and are not the kind of industrial-strength data-gathering networks that are needed to produce more educated guesses about how bandwidth demand changes over markets and time.
Likewise, the existing bandwidth exchanges must scale by many orders of magnitude so that global bandwidth price movements can be analyzed. Somewhere along the line, one can expect a major investment house to take a risk by underwriting trades made on these exchanges based on mapped traffic flows. As the number of licensed telecommunications carriers or ISPs increases from thousands to tens of thousands, it is within their interest to let aggregators -- capacity and/or finance -- deal on their behalf.
Whether the involvement of investment houses will be restricted to financing straightforward swaps between carriers, or whether more exotic instruments resembling futures or options will begin to emerge, is not clear at the moment. One can expect, though, that a major area for research will be developing risk models based upon data generated by the exchanges and/or maps. Should that transpire, the hitherto-limited capital and talent that have been analyzing trends in bandwidth demand and pricing will expand considerably. As the experience of UPS and NSI has shown, there is simply too much at stake.