INET'98

Abstracts for Track 1 - New Applications

Internet Security

LDAPv3 Versus X.511 DAP Security: A Comparison and How to Sign LDAPv3 Operations - Paper 132

Vesna HASSLER
Technical University of Vienna, Austria

In this paper we give a comparative overview of the security features of X.511 DAP (Directory Access Protocol) and LDAPv3 (Lightweight Directory Access Protocol, Version 3). We also propose a method to implement digital signatures for LDAPv3, since this functionality has not been addressed in the LDAPv3 documents so far.

Smart Access: Strong Authentication on the Web - Paper 256

Ton VERSCHUREN
SURFnet, Netherlands

Today's Internet is moving away from its original academic credo: free access to everything for all. Popular mechanisms for protecting a Web site (or part of one) from public access are filters on the IP address or username/password combinations. The first prevents the identification of a single individual from any PC on the Internet. The latter suffers from sniffing (the passwords travel unencrypted) and from publication; lists of usernames and passwords are popular. In short, there is a strong need for better identification (tell me who you are) and better authentication (prove to me who you are) of individuals.

One solution to this problem is the use of public key cryptography, whereby both a Web server and a client possess a private/public key pair that is used to create an encrypted communication path. An example is Netscape's Secure Sockets Layer (SSL). Although the technology has been available for quite some time now, the use of client certificates is minimal. The main reasons are the US crypto export regulations (exporting 40-bit instead of 128-bit keys makes the communication vulnerable to attacks) and the fact that Certificate Authorities (the issuers of the certificates) are not yet deployed on a large scale and do not interwork well.

Therefore, another approach was chosen. In the Netherlands, more and more college and university passes are being implemented on a multifunctional smartcard, the Student Smart Card or "Studentenchipkaart" (SCK). Multifunctionality here means the combination of several logical functions, both physical (the print on the card) and electronic (the data on the card): visual pass for identification, access, and library use; electronic purse; electronic identification; telephone card; and so on. Could this smartcard also be used as a means of strong authentication for online services? The answer is yes. With the help of a team of students under the supervision of IBM staff, a protocol was developed and implemented whereby a user with a smartcard, a smartcard reader, a PC, and a Web browser can authenticate himself to a Web server serving sensitive (i.e., nonpublic) data. The main advantage of this approach over the one based on public key cryptography is that no separate registration process is necessary to obtain, say, a key: all necessary data are already on the card when it reaches the student.

The applications above use a so-called two-party authentication mechanism, whereby the client talks directly to the server for its authentication. Consequently, every server needs a copy of the secret (triple DES) key on the smartcard. Obviously, this approach will not scale in a secure way. Therefore, a three-party authentication service is currently under development. SURFnet will act as the Trusted Third Party (TTP) for its customers who want to authenticate their users before they access their data.
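
As a rough illustration of the two-party scheme sketched in this abstract, the following Python fragment shows a generic challenge-response exchange against a shared card secret. It is a minimal sketch, not the paper's protocol: an HMAC stands in for the card's triple DES operation, and all names and key sizes are invented for the example.

```python
import hmac, hashlib, os

# Hypothetical shared secret that, in the scheme described above, would live
# on the student smartcard and (in the two-party variant) on every server.
CARD_SECRET = os.urandom(24)

def server_challenge() -> bytes:
    """Server side: issue a fresh random nonce to prevent replay."""
    return os.urandom(16)

def card_response(secret: bytes, challenge: bytes) -> bytes:
    """Card side: prove possession of the secret by keying a MAC over the
    challenge. The paper uses triple DES on the card; an HMAC is used here
    purely so the sketch runs with the standard library."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Two-party flow: the server itself holds a copy of the card secret, which is
# exactly the scaling/security problem the three-party (TTP) design removes.
nonce = server_challenge()
assert server_verify(CARD_SECRET, nonce, card_response(CARD_SECRET, nonce))
```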

How to Organize Companywide Authentication and E-Mail Encryption - Paper 313

Manfred BOGEN
Michael LENZ
Andreas REICHPIETSCH
Peter SIMONS
German National Research Center for Information Technology, Germany

In the last three years, encryption utilities like Pretty Good Privacy (PGP) and Privacy Enhanced Mail (PEM) have matured to a point where they have begun to receive widespread acceptance among users of electronic mail on the Internet and intranets. Many employees of research institutes, universities, and companies have started to use encryption and digital signatures to protect and to authenticate their e-mail.

To achieve maximum benefit from these security measures, though, the organization has to provide an infrastructure for its employees that includes trusted or untrusted key servers, a key certification authority, and a clear policy on the use of the new technology.

In this paper, the authors present a skeleton security policy on which others can base their custom-made solutions to the authentication problem. They also describe their experiences in establishing a certification authority within the German National Research Center for Information Technology (GMD) and in maintaining a certification authority for the individual network domain rhein.de.

The foundation of the work is the policy for certification authorities issued by the German Research Network (DFN), which is discussed and extended so that it suits the requirements of medium-size to large companies and organizations.

The paper also addresses the problem of handling different authentic keys for different applications -- such as encryption of electronic mail, Secure Shell (SSH) host keys, and Secure Sockets Layer (SSL) certificates for Web browsers -- and gives various practical hints for avoiding the pitfalls that lurk along the way.

The intention of this paper is to serve as the basis for a security handbook for other organizations wishing to establish such an authentication infrastructure.


Brokered Relationships

A Meeting Scheduling System for Global Events on the Internet - Paper 246

Ashir AHMED
Glenn MANSFIELD
Norio SHIRATORI
Tohoku University Japan

The Internet has made remote conferencing a widespread reality. This has added a new and challenging dimension to the problem of scheduling meetings and conferences. The stress on a participant due to the geographical time difference needs to be taken into account while scheduling a meeting.

In this work, a scheduling algorithm for both "open" conferences (which anyone from any part of the globe is free to join) and "closed" conferences (in which the participants are from a closed group) is considered.

The concept of "quorum" is introduced into the scheduling algorithm to make the algorithm more flexible and efficient. To make the system capable of processing multiple meetings concurrently with minimal wastage of timeslots, the bidding method of the traditional contract Net protocol is extended. 

A case study has been carried out with actual data from an international conference. It shows that the proposed algorithm generates schedules that cause significantly less stress, and does so more efficiently and with a higher success rate.
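
To make the quorum idea concrete, here is a toy Python sketch that scores candidate UTC time slots by the time-zone "stress" they impose on each participant and accepts a slot only when a quorum can attend. The stress function, tolerance threshold, and quorum rule are illustrative assumptions, not the algorithm evaluated in the paper.

```python
def local_hour(utc_hour: int, utc_offset: int) -> int:
    return (utc_hour + utc_offset) % 24

def stress(hour: int) -> float:
    """Hours of circular distance from a comfortable 13:00 local time."""
    d = abs(hour - 13)
    return min(d, 24 - d)

def schedule(offsets, candidate_utc_hours, quorum, max_stress=6):
    """Pick the candidate UTC hour with minimal total stress among attendees,
    considering only slots at which at least `quorum` participants can attend."""
    best = None
    for utc_hour in candidate_utc_hours:
        per_person = [stress(local_hour(utc_hour, off)) for off in offsets]
        attendees = sum(1 for s in per_person if s <= max_stress)
        if attendees < quorum:
            continue
        total = sum(s for s in per_person if s <= max_stress)
        if best is None or total < best[1]:
            best = (utc_hour, total, attendees)
    return best   # (UTC hour, total stress of attendees, attendee count) or None

# Example: participants in Japan (+9), Germany (+1), and the US east coast (-5).
print(schedule(offsets=[9, 1, -5], candidate_utc_hours=range(24), quorum=2))
```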

NYU Home: Combining Internet Tools into Personal Digital Agents - Paper 361

David ACKERMAN
Drew HAHN
Randy WRIGHT
New York University USA

The Internet allows for a custom-made world that teaches us to reinvent bureaucracies, even those of higher education, which are rich in conservative tradition. Too often, information services are focused on the needs of the bureaucracy and ignore personal utility. This paper describes a new application at New York University centered on the individual.

NYU Home creates personal digital agents that individual students and faculty can adapt to their particular needs. These digital agents search, organize, retrieve, and send information that is custom-designed to fit each student and faculty member.

NYU Home revolutionizes the way members of the NYU community get their information, making it one of the most useful and practical campus communication protocols. NYU Home's primary strength is its utility to the individual. Empowering individuals not only facilitates achieving the mission of an organization, but also allows the organization an opportunity to profile its customer and staffing needs and respond to a constantly changing marketplace.

NYU Home draws on the utility of basic Internet tools (authentication, e-mail, e-mail lists, the Web, and discussion groups). In this paper we describe combining and integrating them in creative ways, building a modular, scalable system: an SQL database containing user attributes; a Kerberos- and SSL-enabled Web server for encrypted authentication; e-mail notifications for those selecting the option; Web submission and presentation of both public and personal information; and access to e-mail lists and discussion groups selected by the system or the individual. The architecture and process, while particular to a New York University project in progress, contain a number of examples of general interest, where combining Internet tools creates synergies resulting in powerful information services.

Building Online Communities for High-Profile Internet Sites - Paper 378

Lee M. LEVITT
Laird POPKIN
David HATCH
News Internet Services USA

Corporate Internet sites, entertainment sites, and online services are all moving away from the static presentation of information to interactive communities, involving the members of the community in ongoing public dialog. The benefits to the site are twofold. First, the involvement engages the user more deeply with
the site, bringing visitors back more often and keeping them on the site longer. Second, the users themselves can generate a substantial portion of the site's content.

However, while community-building activities bolster the potential payback and profitability of advertising-supported sites and contribute to the brand development for sites that support products or services, they also present a number of social and technical challenges.

This paper and presentation explore these social and technical challenges inherent in designing, building, and managing online communities. The discussion is based in part on the News Corporation experiences in building and managing such communities for several high-visibility sites, including TV Guide (http://www.tvguide.com) and the UK online service LineOne (http://www.lineone.net).


Indexing and Searching the Web

Using META Tag-Embedded Indexing for Fielded Searching of the Internet - Paper 180

Philip COOMBS
Washington State Library USA

Full-text searching on the Internet has run its course. A new approach adding fielded searching is vital to the effectiveness of information discovery and retrieval in the years ahead. This paper presents the results of one year of operation of a statewide government locator service employing indexing embedded in META tags, a common attribute schema, and combined full-text and fielded searching applications. It provides evidence that author-indexed information is practical, viable, and powerful when embedded in the source files available on the Internet. This method has drawn interest and acclaim from governments and industry, as it demonstrates the critical role META tags will play in the Internet of the next few years.
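
A minimal sketch of the kind of harvesting such a locator service relies on: pulling author-supplied META tags out of a page so they can feed a fielded index alongside full text. The Dublin Core style tag names in the sample page are an assumption for illustration; the service's actual attribute schema may differ.

```python
from html.parser import HTMLParser

class MetaHarvester(HTMLParser):
    """Collect name/content pairs from <META> tags into searchable fields."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        a = dict(attrs)
        name, content = a.get("name"), a.get("content")
        if name and content:
            self.fields.setdefault(name.lower(), []).append(content)

page = """<html><head>
<meta name="DC.Title" content="State Parks Directory">
<meta name="DC.Creator" content="Washington State Library">
<meta name="DC.Subject" content="recreation; parks">
</head><body>...</body></html>"""

h = MetaHarvester()
h.feed(page)
print(h.fields)   # {'dc.title': [...], 'dc.creator': [...], 'dc.subject': [...]}
```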

Internet Information Retrieval: The Further Development of Meta-Search Engine Technology - Paper 041

Wolfgang SANDER-BEUERMANN
Mario SCHOMBURG
Computer Center of Lower Saxony and University of Hannover Germany

This paper first describes the state of the art of meta-search technology. It defines criteria for evaluating such applications and investigates existing meta-search engines. Second, it outlines our approaches to solving some problems of Internet information retrieval, undertaken at Hannover University. We have been running high-traffic meta-search engines for nearly two years (http://mesa.rrzn.uni-hannover.de/ and http://meta.rrzn.uni-hannover.de), and we describe our experiences and the developments we have made to achieve a higher degree of completeness and quality in Internet information retrieval.

One of the main ideas we are pursuing is the combination of the overwhelming mass of Internet data with manually reviewed information sources and our own ranking algorithms. Combining these leads to high-quality results based on an Internet search that is as complete as possible.
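
The merging step can be illustrated with a short Python sketch: results from several back-end engines are fused with a simple score that rewards agreement between engines and boosts manually reviewed sources. The weights, the reciprocal-rank fusion rule, and the stubbed engine results are assumptions made for the example, not the ranking algorithms used at Hannover.

```python
from collections import defaultdict

def merge(results_per_engine, reviewed_urls, agreement_weight=1.0, review_boost=2.0):
    """Fuse ranked URL lists from several engines into a single ranking."""
    scores = defaultdict(float)
    for ranked_urls in results_per_engine.values():
        for rank, url in enumerate(ranked_urls):
            scores[url] += agreement_weight / (rank + 1)      # reciprocal-rank fusion
    for url in scores:
        if url in reviewed_urls:
            scores[url] += review_boost                       # manually reviewed source
    return sorted(scores, key=scores.get, reverse=True)

engines = {
    "engine_a": ["http://a.example/1", "http://b.example/2"],
    "engine_b": ["http://b.example/2", "http://c.example/3"],
}
print(merge(engines, reviewed_urls={"http://c.example/3"}))
```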

A Framework for Developing Information Indexing, Filtering, and Searching Tools on the Internet - Paper 371

José CARDOZA
Pedro F. GONÇALVES
Alessandro LIMA
Luciana VALADARES
Cynthia TERCERO
Silvio L. MEIRA
Ana Carolina SALGADO
Fabio Q.B. da SILVA
Universidade Federal de Pernambuco Brazil

Many information-indexing, filtering, and searching tools have been and continue to be independently built for the Web and intranets, with redundant software development efforts and low intersystem runtime cooperation.

Runtime cooperation among such tools represents an important possibility for saving both local and global communication and computational resources, whereas development-time cooperation should reduce project costs. This paper presents an object-oriented development framework for this class of tools that explores
both development-time and runtime cooperation. We present the structure and implementation of the proposed framework, show how it can be used to support the development of cooperative systems, and discuss case studies of its use.

Panel: Can Standards Survive the Success of the Internet?


World Wide Web Application Management Systems

Palantir: A Visualization Tool for the World Wide Web - Paper 087

Nektarios PAPADAKAKIS
Evangelos P. MARKATOS
Athanasios E. PAPATHANASIOU
Foundation for Research and Technology - Hellas (FORTH) Greece

World Wide Web traffic increases at impressive rates, reaching up to several million hits (requests/clients) per day for busy Web servers. To serve all these clients effectively, it is necessary to have a good knowledge of their geographic distribution and access patterns. Understanding the geographic distribution of an organization's Web clients is essential in making important decisions that will reach the client base more effectively. For example, replication, caching, and advertisement have been widely used to improve information dissemination. However, these methods will be productive only if applied at strategic places on the Web, places that are close to the client base.

In this paper we present the design and implementation of Palantir, a tool that animates World Wide Web traffic. The tool displays the origin and magnitude of a Web server's hits either in real-time or in batch mode. It can synthesize the traffic to several Web servers so as to provide a global view of the hits in a multisite
organization. Using Palantir, a user can get a deep understanding of where a server's clients are located and thus how to reach them more effectively.
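
Before any animation can happen, hits have to be aggregated by origin; the following toy Python fragment does this from Common Log Format lines, using the country-code suffix of already-resolved hostnames as a crude stand-in for geolocation. This is an assumption for illustration only, not how Palantir itself resolves client locations.

```python
from collections import Counter

def origin(client_field: str) -> str:
    """Take the two-letter country-code suffix of a resolved hostname, if any."""
    suffix = client_field.rsplit(".", 1)[-1]
    return suffix.lower() if suffix.isalpha() and len(suffix) == 2 else "other"

def hits_by_origin(log_lines):
    """Tally one hit per Common Log Format line, grouped by client origin."""
    return Counter(origin(line.split()[0]) for line in log_lines if line.strip())

sample = [
    'pc1.uni-hannover.de - - [10/Feb/1998:10:00:01] "GET / HTTP/1.0" 200 1043',
    'host.forth.gr - - [10/Feb/1998:10:00:02] "GET /docs HTTP/1.0" 200 2310',
    '192.0.2.7 - - [10/Feb/1998:10:00:03] "GET / HTTP/1.0" 200 1043',
]
print(hits_by_origin(sample))   # Counter({'de': 1, 'gr': 1, 'other': 1})
```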

A Platform for the Development of Extensible Management Applications - Paper 374

Noemi RODRIGUEZ
Michele E. LIMA
Ana L. MOURA
Michael STANTON
Catholic University of Rio Brazil

This paper presents a platform for the development of extensible management applications based on the interpreted language Lua and several associated libraries. Applications developed on this platform can be easily extended, with no need for recompilation. A specific library developed for TCP/IP network management is described in detail. The paper discusses the flexibility provided by the resulting platform and its relationship to other similar projects.

Design and Implementation of an Agent System for Application Service Management - Paper 323

Yutaka IZUMI
Tomoya NAKAI
Suguru YAMAGUCHI
Nara Institute of Science and Technology Japan

Yuji OIE
Kyushu Institute of Technology Japan

SNMP-based network management systems are widely used; however, managed application servers are often passive in the sense that they report their status only in response to requests from a network management station. Such passive systems cannot quickly detect and fix certain problems in a network service, and SNMP-based systems cannot easily be attached to some managed nodes, such as an application server. These limitations seriously restrict flexibility in managing application servers. For this reason, we propose an agent system for application service management called the SMS (Service Management System). The SMS is designed as a daemon that wraps the managed application in order to provide functions such as monitoring and controlling application servers and their access control, in place of operators working with management systems at a network management station. This series of functions is called "control management" in this paper. Control management allows the SMS to detect and fix problems according to an action scenario and to attach easily to application servers. The action scenario of the SMS is given as a script file written in the SMAP (Service Management Agent Programming) language. The SMS also provides an SNMP interface for ordinary SNMP-based network management systems. In this paper, the design and implementation of the SMS are presented. In particular, we apply the SMS to firewall and World Wide Web services and evaluate its effectiveness.
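
As a loose illustration of the control-management idea, the Python sketch below shows a wrapper process that periodically checks a managed service and runs a corrective action from its scenario when the check fails. The scenario format and the placeholder recovery command are invented for the example; the SMAP scripting language and the SNMP interface described in the paper are not reproduced here.

```python
import socket, subprocess, time

SCENARIO = {
    "check": {"host": "127.0.0.1", "port": 80, "timeout": 2.0},
    "on_failure": ["echo", "restarting httpd"],   # placeholder recovery command
    "interval": 30,                               # seconds between checks
}

def service_alive(host: str, port: int, timeout: float) -> bool:
    """Consider the service healthy if its TCP port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_agent(scenario, max_cycles=3):
    """Monitor the wrapped service and apply the scenario's corrective action."""
    for _ in range(max_cycles):               # a real agent would loop forever
        if not service_alive(**scenario["check"]):
            subprocess.run(scenario["on_failure"], check=False)
        time.sleep(scenario["interval"])

# run_agent(SCENARIO)   # left commented out; it blocks while sleeping
```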


New Applications on the Internet

Earth Observation Data and Information Access: Internetworking for an International Application Demonstrator - Paper 049

Hermann Ludwig MOELLER
European Space Agency USA

Roberto DONADIO
European Space Agency Netherlands

The present paper describes how the synergy of Internet-centric middleware solutions based on CORBA and a supporting TCP/IP-based communications infrastructure can benefit large-scale, internationally distributed information services in the domain of earth observation from space.

Earth Observation Information Systems (EOIS) handle tens of millions of data and information items worldwide, representing several hundred terabytes of archived data, with new data from spacecraft entering a federation of systems at a rate above one terabyte per day and with user retrieval rates expected to exceed that rate considerably.

In preparation for future EOIS, a proof-of-concept initiative based on different technology research and demonstration projects has been set up by ESA and the EC, with the participation of Japanese partners (NASDA, for example), and European industry has created a system consisting of

  • a CORBA-based Earth Observation Information System Facility (EOCF)
  • an internetworking infrastructure based on mixed terrestrial/satellite communications platforms, comprising
      • a European backbone network
      • a Eutelsat/Intelsat-based interconnection within Europe and with Japan
      • a European data distribution and user access network based on digital video broadcasting (DVB) technology

Using an environmental application, oil-pollution monitoring in the Mediterranean Sea, as an example, the paper illustrates how the demonstrator system is being applied to the requirements of earth observation. The paper concludes with an outlook on possible extensions to new partners (in the United States, for example), the further validation of selected demonstrator components, and planned standardization efforts.

InternetCAR: Internet-Connected Automobiles - Paper 347

Keisuke UEHARA
Yasuhito WATANABE
Keio University Japan

Hideki SUNAHARA
Nara Institute of Science and Technology Japan

Osamu NAKAMURA
Jun MURAI
Keio University Japan

This paper describes the concept, experiments, and research of the InternetCAR (Connected Automobile) project operated by the WIDE Project. The goal of this project is to connect automobiles to the Internet to provide general Internet connectivity among automobiles and fixed nodes. One of the assumptions is that all of the (several hundred million) automobiles in the world are connected to the Internet. An automobile is a mobile object that provides space for a human being and has an electric power supply from its batteries. It also carries valuable sensor information, such as thermometer and speedometer values. Thus, when automobiles become Internet objects, a very large number of mobile sensors exist.

Hardware to retrieve the sensor information -- such as geographic location, velocity, and switch status (lights, wiper position, air conditioner, brakes, cruise control, and so on) -- was designed and implemented. A software and communication structure to support stable wireless connectivity to the Internet was also designed and implemented. The prototype system provides general Internet connectivity in an automobile and allows clients on the Internet to access information from automobiles. A sample application is a rain-condition monitoring system based on the wiper positions of Internet-connected automobiles together with their locations. This paper discusses the communication architecture, the hardware design, and the evaluations of the prototype systems. Plans for future experiments are also described.
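
A toy version of the rain-monitoring idea can be written in a few lines: reports of (latitude, longitude, wiper on/off) are binned into a coarse grid, and a cell is flagged as likely raining when most reporting cars have their wipers running. The grid size and threshold are illustrative assumptions, not values from the WIDE experiments.

```python
from collections import defaultdict

def grid_cell(lat: float, lon: float, size_deg: float = 0.5):
    """Snap a position to a coarse grid cell."""
    return (round(lat / size_deg), round(lon / size_deg))

def rain_map(reports, threshold=0.5):
    """Flag a cell as raining when the fraction of wipers-on reports reaches the threshold."""
    on = defaultdict(int)
    total = defaultdict(int)
    for lat, lon, wiper_on in reports:
        cell = grid_cell(lat, lon)
        total[cell] += 1
        on[cell] += int(wiper_on)
    return {cell: on[cell] / total[cell] >= threshold for cell in total}

reports = [
    (35.68, 139.76, True),   # Tokyo area, wipers on
    (35.70, 139.70, True),
    (34.69, 135.50, False),  # Osaka area, wipers off
]
print(rain_map(reports))
```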

Integrating Front-End Web Systems with Back-End Systems - Paper 382

Mitchell COHEN
IBM T.J. Watson Research Center  USA

This paper presents the implications of different storage and replication techniques for the key business objects being used by both front-end and back-end systems simultaneously. Much work has been done on data synchronization and replication in the database area as thoroughly discussed by Bernstein,
Hadzilacos, and Goodman. This paper will concentrate on the implications that the different data synchronization and replication paradigms have on common business objects needed for both a merchant server running on the Web and internal business systems. First, the need for integrating front-end and back-end systems from a business-object standpoint is established. Second, five paradigms of data storage and replication are defined. Finally, what each of the paradigms means for the different business objects is discussed.

Panel: Unexpected Outcomes?

Panel: Directories


Web Caching -- New Techniques

Combining Client Knowledge and Resource Dependencies for Improved World Wide Web Performance - Paper 409

John H. HINE
Victoria University of Wellington New Zealand

Craig E. WILLS
Worcester Polytechnic Institute USA

Anja MARTEL
Victoria University of Wellington New Zealand

Joel SOMMERS
Worcester Polytechnic Institute USA

Performance is an important area of World Wide Web research. We investigate ways of exploiting knowledge about client behavior and resource dependencies to improve performance. Our work explores two directions for making use of this information. One direction examines combining knowledge of client behavior with resource dependencies to reduce latency by prefetching selected resources. The other direction uses knowledge of dependencies to influence cache replacement decisions. Results of trace-driven simulations indicate the degree of performance improvement that can be obtained in both cases.

A Top 10 Approach for Prefetching the Web - Paper 276

Evangelos P. MARKATOS
Catherine E. CHRONAKI
Foundation for Research and Technology - Hellas (FORTH) Greece

In the World Wide Web, bottlenecks close to popular servers are very common. These bottlenecks can be attributed to the servers' lack of computing power and the network traffic induced by the increased number of access requests. One way to eliminate these bottlenecks is through the use of caching. However, several recent studies suggest that the maximum hit rate achievable by any caching algorithm is just 40% to 50%. Prefetching techniques may be employed to further increase the cache hit rate by anticipating and prefetching future client requests.

This paper proposes a Top-10 approach to prefetching, which combines the servers' active knowledge of their most popular documents (their Top-10) with client access profiles. According to these profiles, clients regularly request, and servers forward to them, their most popular documents. The scalability of the approach lies in the fact that a Web server's clients may be proxy servers, which in turn forward their Top-10 to their frequent clients, which may be proxies as well, resulting in a dynamic hierarchical scheme responsive to users' access patterns as they evolve over time. We use trace-driven simulation based on access logs from various servers to evaluate Top-10 prefetching. Performance results suggest that the proposed policy can anticipate more than 40% of a client's requests while increasing network traffic by no more than 10% in most cases.
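
The core of the Top-10 policy can be sketched compactly: the server ranks documents by access count, and a client or proxy periodically pulls that list and prefetches whatever it does not already hold. The function names and in-memory cache below are illustrative only, not the paper's implementation.

```python
from collections import Counter

def top_n(access_log_urls, n=10):
    """Server side: rank documents by request count."""
    return [url for url, _ in Counter(access_log_urls).most_common(n)]

def prefetch(top_list, cache: dict, fetch):
    """Client/proxy side: fetch only the popular documents not yet cached."""
    for url in top_list:
        if url not in cache:
            cache[url] = fetch(url)

server_log = ["/index.html", "/index.html", "/news.html", "/pics/logo.gif", "/news.html"]
cache = {}
prefetch(top_n(server_log, n=2), cache, fetch=lambda url: f"<contents of {url}>")
print(sorted(cache))   # ['/index.html', '/news.html']
```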

WebHint: An Automatic Configuration Mechanism for Optimizing World Wide Web Cache System Utilization - Paper 327

Hiroyuki INOUE
Takeshi SAKAMOTO
Suguru YAMAGUCHI
Nara Institute of Science and Technology Japan

Yuji OIE
Kyushu Institute of Technology Japan

A distributed cache system for the World Wide Web (WWW) infrastructure is introduced in order to reduce both network traffic and the latency of page retrieval for users. In this kind of system, the administrators of cache servers must determine by themselves, based on their level of expertise, how and which neighboring cache servers should be incorporated. However, it is very difficult for them to update the related configuration in a dynamic and timely manner in response to changes in the state of the network and/or the neighboring cache servers.

In this paper, we propose the WebHint system, which includes a "hint server" that automatically updates the information associated with neighboring cache servers so as to reduce both the number of packets exchanged among cache servers and the response time for end users. We extended the ICP (Internet Cache Protocol; RFC 2186) messages so that the cache servers and the hint server can communicate with each other. We added three types of messages: "hint notification," "hint query," and "hint reply." When a cache server has fetched or removed a WWW object from its cache, it notifies the hint server of the update and the details of its cache contents; the ICP notification messages are used for this purpose. Our extended ICP query message is used for searching the content database that the hint server manages. When a user accesses a WWW object via its local cache server, the cache server sends our extended ICP query message to the hint server. The hint server searches its contents database and then lets the cache server know which cache server has the object (or answers that no server has it). The hint reply ICP message is used for this reply.
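
The bookkeeping performed by the hint server can be modeled in a few lines of Python: caches notify it when they store or evict an object, and a query returns a cache that currently holds the object, or nothing. This models the semantics only; in the actual system these exchanges travel in the extended ICP messages described above.

```python
class HintServer:
    """Track which cache servers currently hold which URLs."""

    def __init__(self):
        self.holders = {}                      # URL -> set of cache server names

    def notify(self, cache_name: str, url: str, stored: bool):
        """Hint notification: a cache stored (True) or evicted (False) an object."""
        entry = self.holders.setdefault(url, set())
        if stored:
            entry.add(cache_name)
        else:
            entry.discard(cache_name)

    def query(self, url: str):
        """Hint query/reply: return a cache holding the object, or None."""
        entry = self.holders.get(url)
        return next(iter(entry)) if entry else None

hint = HintServer()
hint.notify("cache-a", "http://www.example.org/", stored=True)
print(hint.query("http://www.example.org/"))   # cache-a
hint.notify("cache-a", "http://www.example.org/", stored=False)
print(hint.query("http://www.example.org/"))   # None: no server has the object
```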

We have implemented the WebHint prototype and made a preliminary evaluation. The extended ICP message contains the cache status and detailed information about the object indicated by the URL; the hint data are carried in the payload of the ICP packet. We compared a cache system composed of a hint server and several cache servers with a conventional system composed of several neighboring cache servers and obtained the same hit rate on each system. When a new cache server is added, we confirmed that the hint server includes its information in the hint data and that all cache servers work together; similarly, when a cache server goes out of service, the hint server discards the related information. We also measured the performance of the hint server itself. The results show that the hint server program requires significantly fewer resources than a WWW cache server program, so a dedicated hint server is easy to install even on a low-performance computer such as a PC running Unix or an old workstation.

Distributed Multimedia

A Web-Based Real-Time Multimedia Application for the MBone - Paper 258

Myung-Ki SHIN
Electronics and Telecommunications Research Institute Korea

Jae-Yong LEE
Chungnam National University Korea

This paper presents a multicast multimedia Web application that allows Web users to join an MBone session and receive audio/video as well as text/HTML MIME media seamlessly. We attempt to identify a solution for applying multicast media to the Web that can be deployed immediately. This is accomplished by generating an HTML session page using SDP/SAP for session discovery and integrating RTP/RTCP into the Web for transmission of real-time streams. In addition, we describe the architecture for distribution of HTML pages via multicast. Our prototype implementation is built on HotJava, which makes it platform-independent, scalable, and easy to deploy, and there is no longer a need to install new programs to multicast various types of real-time media. This approach provides an ideal and scalable solution for the new media or application "x" proposed on the Internet multicast infrastructure in the near future.

Shared Window System for Large Groups Based on Multicast - Paper 278

Lassaâd GANNOUN
Jacques LABETOULLE
Institut Eurécom France

IP Multicast, lightweight sessions, and application-level framing are principles that guide the design of multimedia conferencing applications, but they do not provide specific solutions. In this paper, we use these design principles to guide the design of a shared window system over IP Multicast. In contrast to previously implemented systems, our shared window system addresses issues related to very large groups of participants and the dynamic joining and leaving of a high number of participants. Our shared window system offers a joint session protocol that scales to a large number of latecomers and also provides a floor control policy adapted to a large group size. A first version of the system implementation demonstrates the viability of the design decisions followed.

Virtual Emergency Task Force (VETAF) - Paper 212

Norbert SCHIFFNER
Fraunhofer Institute for Computer Graphics Germany

The universal advancement of network and graphics technology, new business models, and global infrastructure developments are transforming the solitary, platform-centered 3-D computing model. With the availability of global information highways, 3-D graphical intercontinental collaboration will become a part of
our daily work routine. Our research focuses on determining how computer networks can transform the distributed workplace into a shared environment, allowing real-time interaction among people and processes regardless of their locations.

The Fraunhofer Institute for Computer Graphics (IGD) in Germany and the Fraunhofer Center for Research in Computer Graphics (CRCG) in the United States are preparing for a new age of telecommunication and cooperation by concentrating their research efforts on implementing and using computer graphics technologies over a transcontinental network.

This paper describes the Virtual Emergency Task Force (VETAF) application, which combines the use of 3-D graphics with advanced network technology. With this application, a group of experts located throughout the world can meet to discuss a global crisis in a virtual environment specially designed to support their cooperation.


To Cache or Not to Cache

Web Caching Meshes: Hit or Miss? - Paper 255

Ton VERSCHUREN
SURFnet Netherlands

André de JONG
Henny BEKKER
Utrecht University Netherlands

Ingrid MELVE
UNINETT Norway

This paper describes a cost-benefit analysis that has been done for the SURFnet Web-caching mesh, consisting of around 20 "children" that connect to a single "parent." The parent itself has "neighbor" relationships with several other Dutch Internet service providers (ISPs) and several foreign ISPs. The analysis shows that on every level of the mesh -- institutional cache, top-level cache, and the whole mesh -- the benefits of caching exceed the costs in an economic sense. Furthermore, a significant reduction in latency can be achieved.
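
The underlying comparison is simple arithmetic: traffic kept off the expensive upstream link, valued at the transit price, against the cost of running the cache. The Python sketch below illustrates the shape of that calculation with made-up placeholder numbers, not figures from the SURFnet study.

```python
def monthly_benefit(traffic_gb, byte_hit_rate, transit_price_per_gb):
    """Value of the traffic that never crosses the upstream link."""
    return traffic_gb * byte_hit_rate * transit_price_per_gb

def worth_running(traffic_gb, byte_hit_rate, transit_price_per_gb, cache_cost_per_month):
    """Caching pays off when the saved transit exceeds the cache's running cost."""
    return monthly_benefit(traffic_gb, byte_hit_rate, transit_price_per_gb) > cache_cost_per_month

# Placeholder numbers: 2 TB/month of requests, 30% byte hit rate,
# 5 currency units per GB of transit, 1500 per month to run the cache.
print(worth_running(traffic_gb=2000, byte_hit_rate=0.30,
                    transit_price_per_gb=5.0, cache_cost_per_month=1500.0))  # True
```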

Is the Internet Heading for a Cache Crunch?  - Paper 230

Russell Baird TEWKSBURY
Marketworks Corporation USA

Today, the development of an international hierarchical cache system (global mesh) is well under way. These cache systems are designed to be interconnected to national, regional, local, and LAN-based caches, creating a worldwide caching infrastructure. As the Internet evolves into this international global mesh of caches, the Net's decentralized architecture will become centralized around these cache clearinghouses, providing far fewer points of access to the network. Proxy caching and the impending implementation of an international hierarchical cache system set the stage for abuses such as individual monitoring and surveillance, tampering, identity theft, censorship, intellectual property/copyright infringement, invasion of privacy, and governmental taxation.

The purpose of this paper is to objectively address these issues and to identify the possible impact of proxy caching and the global mesh on Internet users, content providers, and the future of electronic commerce and the Internet in terms of security, privacy, and censorship protection. The objective of the session is to present information about proxy caching's long-term effect on the responsible development of the Internet and to discuss alternative solutions.

With each passing day, the Internet is growing and becoming more congested. It is estimated that by the end of December 1996, the Internet consisted of more than 100,000 networks, connecting more than 13 million computers to tens of millions of users worldwide. Today, the number of networks and the number of computers connected to the Internet have more than doubled since last year. Consequently, Internet traffic jams and bottlenecks, or what are known as flashpoints and hot spots, have become daily occurrences.

Network administrators are faced with the difficult challenge of how to provide more efficient bandwidth and server utilization. In order to meet this challenge, many are turning to proxy caching as the solution. Some of the many Web cache projects include NLANR (National Laboratory for Applied Network Research) (United States); CHOICE Project (Europe); HENSA (United Kingdom); Academic National Web Cache (New Zealand); W3 CACHE (Poland); SingNet (Singapore); CINECA (Italy); and Japan Cache/JC (Japan). The negative byproducts associated with a proxy cache solution for the management of network congestion could prove to be detrimental to the advancement of the network itself.

This paper asserts that with a global mesh in place, the integrity of information will decline and data security risks will escalate, given that there is nothing to stop a cache owner from altering the source code of a Web document and then passing on the counterfeit version as if it were the original (cache poisoning, identity theft); reducing the quality of service (QoS) of access to Internet resources that do not support proxy caching; profiting from the exploitation and/or sale of confidential/proprietary information obtained through their cache; charging money for access to their cache content (taxation); and refusing to accept content or allow access to content (censorship).

When interviewed, Peter G. Neumann (Principal Scientist of SRI International; a Fellow of both the ACM and the IEEE; and moderator of the RISKS Forum) stated: "The problem of assuring data integrity is enormous. Worse yet, there will be all sorts of pointers to less than the most recent updated corrected versions and unverifiable bogus copies, which will enable misinformation to propagate. Cryptographic APIs, digital signatures, authenticity and integrity seals, and trusted third parties will not help." When asked, "Will these 'clearinghouses' of cached Web pages become primary targets for tampering, censorship and abuse?"
Neumann replied, "Absolutely yes. You might also expect that the FBI would want guaranteed surreptitious access to all of caches (for example, for setting up stings and for monitoring all accesses), much as they are seeking key-recovery mechanisms for crypto."

The apparent need on the part of network administrators for immediate gratification for congestion control should not take precedence over good judgment in the responsible development of the Internet. The theme of this paper suggests that privacy and security do not occur by happenstance, but by design; thus network
speed and security/privacy issues should not be considered mutually exclusive components in the design of the network. This paper presents industry expert opinions as to possible standards, procedures, or processes that may be implemented.

Monitoring the Performance of a Cache for Broadband Customers - Paper 303

Bob WARFIELD
Terry MUELLER
Peter SEMBER
Telstra Australia

Replication, or caching, of Web content offers benefits to both ISPs and users. Much of the traffic coming from the Web into an ISP's network is redundant in the sense that exactly the same content is being requested by a number of customers. The redundant traffic can be reduced by caching, hence saving costs for the ISP. Files served from the cache are served, on average, faster than from the Web. This improved speed can save customers time and money. Balanced against these two benefits are the problems of optimizing freshness of the cache contents and managing expansion of the cache as traffic demand grows.

This paper reports on work done to monitor the performance of a cache serving a group of users with broadband access via a hybrid fiber-coax cable system in Australia. Monitoring the cache performance concentrated on the following four major dimensions:

  • Improved speed
  • Freshness of cache contents
  • Cost savings through reduced traffic
  • Managing expansion of the cache capacity

The choice of parameters to monitor was tied closely to our understanding of quality of service for this group of customers. In particular, reducing the time required to download large files is an important aspect of using broadband access to the Internet.

A management system with a browser interface was developed to examine cache performance on a daily basis, with archival data and drill-down facilities. Actual performance is discussed in the paper, including improvements that were achieved.

The conclusion of the study reported in the paper is that cache performance can be improved by monitoring performance and fine-tuning caching parameters.

Panel: The Future of Killer Applications


