Abstract -- Intelligent Caching for WWW Objects
Application Technology Track, A6: Engineering the Web

Intelligent Caching for WWW Objects

Wessels, Duane (wessels@colorado.edu)

Abstract

Background

The World-Wide Web is growing at an amazing rate. Analysis of byte counts collected on the NSFNET backbone indicates that WWW traffic is growing at 25% per month. Currently it is second in quantity only to FTP, which continues to grow at 5% per month. If the NSFNET backbone were to continue in its current operation, WWW traffic would exceed that of FTP sometime around May of 1995.
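
As a rough illustration of this projection, the sketch below solves for the crossover point under the stated growth rates. The starting traffic shares are hypothetical; only the 25% and 5% monthly growth figures come from the measurements above.

    # Project when WWW byte counts overtake FTP on the backbone.
    # The absolute starting shares below are hypothetical; only the
    # growth rates (25%/month WWW, 5%/month FTP) are from the text.
    import math

    www_growth = 1.25    # WWW traffic multiplier per month
    ftp_growth = 1.05    # FTP traffic multiplier per month
    www_share = 0.15     # assumed current fraction of backbone bytes
    ftp_share = 0.25     # assumed current fraction of backbone bytes

    # Solve www_share * www_growth**t >= ftp_share * ftp_growth**t for t.
    months = math.log(ftp_share / www_share) / math.log(www_growth / ftp_growth)
    print("WWW overtakes FTP in about %.1f months" % months)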

Also at issue is server load. The developers of NCSA Mosaic hard-coded their home page as the default startup URL. This accounts for the majority of NCSA's more than three million accesses per week. The NCSA server load is so heavy that it requires nine dedicated workstations and a ``rotating DNS'' which returns a different IP address for successive queries of www.ncsa.uiuc.edu.
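
The ``rotating DNS'' amounts to round-robin address selection: each lookup of the same name is answered with the next address in a pool. A minimal sketch of the idea follows; the addresses are placeholders, not NCSA's actual servers.

    # Round-robin selection over a pool of server addresses, so that
    # successive queries of one hostname spread load across machines.
    # The addresses are placeholders from the documentation range.
    from itertools import cycle

    server_pool = cycle(["192.0.2.1", "192.0.2.2", "192.0.2.3"])

    def resolve(name):
        """Return the next address in the rotation for every query."""
        return next(server_pool)

    print(resolve("www.ncsa.uiuc.edu"))
    print(resolve("www.ncsa.uiuc.edu"))   # a different address this time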

A lot of wide area Internet bandwidth can be saved by placing caches inside organizational networks. The WWW model makes this relatively easy to do. Organizations with networks behind a firewall can easily implement caching because all external traffic goes through a single host.
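
For example, clients inside such a network can be pointed at the single caching host; the sketch below shows one way to do this, with a hypothetical proxy name and port.

    # Route a client's external requests through an organizational proxy.
    # The proxy hostname and port are assumptions for illustration.
    import urllib.request

    proxy = urllib.request.ProxyHandler({
        "http": "http://wwwcache.example.org:3128",
    })
    opener = urllib.request.build_opener(proxy)

    with opener.open("http://www.example.com/") as resp:
        print(resp.status, len(resp.read()), "bytes fetched via the proxy")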

Traditionally, caches are responsible for maintaining state information about their objects. The cache must query the server to learn whether an object is out-of-date. In general, server sites have very little control over how their data may be managed by Internet caches.
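
The usual validation step is a conditional request: the cache asks the origin server whether its copy is still current. A sketch of that exchange, with an illustrative URL and date, follows.

    # A conditional GET: the server replies 304 Not Modified if the
    # cached copy is still valid, otherwise it returns a fresh object.
    # The host, path, and date are illustrative only.
    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/index.html", headers={
        "If-Modified-Since": "Sun, 06 Aug 1995 00:00:00 GMT",
    })
    resp = conn.getresponse()

    if resp.status == 304:
        print("cached copy is still valid")
    else:
        body = resp.read()
        print("object changed:", len(body), "bytes")
    conn.close()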

An Improved Caching System

The work described here attempts to improve wide area caching in two ways: by reducing wide area network traffic and improving response time to the user. This is accomplished by separating the caching system into three components: a proxy server, a cache manager, and a remote cache daemon.

The proxy assumes that any object in its cache is valid. This eliminates the delay in issuing an "If-Modified-Since GET" request for cached data. Most of the cache maintenance functions are implemented outside of the proxy. This allows the proxy to be simple, small, and fast. Unlike the CERN server (and most other TCP/IP servers), this proxy does not fork a new process for each connection. Instead, threading techniques are used to implement a single-process, non-blocking server which can greatly reduce system load.
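
A minimal sketch of the single-process, non-blocking technique is shown below. It multiplexes all connections in one event loop rather than forking per request; it illustrates the general approach, not the author's proxy code, and the port and response are placeholders.

    # One process serves every connection through an event loop instead
    # of forking a child per request.  Illustrative only.
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(listener):
        conn, _ = listener.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, handle)

    def handle(conn):
        data = conn.recv(4096)                 # e.g. an HTTP request
        if data:
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\n")   # placeholder reply
        sel.unregister(conn)
        conn.close()

    listener = socket.socket()
    listener.bind(("", 8080))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)              # call accept() or handle()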

Objects are stored in a two-level cache. The first level is a short-term cache designed to hold objects for no longer than one day. The proxy copies eligible objects into the short-term cache as it retrieves them. Objects are removed from the short-term cache by the cache manager; each object is either deleted or moved into the long-term cache for better management.
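
A sketch of that sweep is given below. The directory names and the keep() decision are assumptions; the actual promotion criterion is the ranking metric described next.

    # Cache manager sweep: expire or promote objects from the short-term
    # cache once they have been there for a day.  Paths and the keep()
    # test are placeholders.
    import os
    import shutil
    import time

    SHORT_TERM = "/cache/short"
    LONG_TERM = "/cache/long"
    ONE_DAY = 24 * 60 * 60

    def keep(path):
        """Placeholder for the ranking decision described below."""
        return os.path.getsize(path) < 100000

    def sweep():
        now = time.time()
        for name in os.listdir(SHORT_TERM):
            path = os.path.join(SHORT_TERM, name)
            if now - os.path.getmtime(path) < ONE_DAY:
                continue                       # still within its one-day stay
            if keep(path):
                shutil.move(path, os.path.join(LONG_TERM, name))
            else:
                os.remove(path)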

The main role of the cache manager is to maintain the long-term cache. Whereas all eligible objects go into the short-term cache, the long-term cache is more selective. Objects are ranked by a metric based on when the object was last accessed, how frequently it has been accessed, and its size. Cache administrators can set parameters to place more or less emphasis on frequency, recency, and size.
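
The abstract does not give the exact formula, so the sketch below is only one plausible weighted combination of the three factors; the functional forms and default weights are assumptions.

    # A possible ranking metric: higher scores are more worth keeping.
    # Only the three factors (recency, frequency, size) come from the
    # text; the weighting scheme here is an assumption.
    import math
    import time

    def score(last_access, access_count, size_bytes,
              w_recency=1.0, w_frequency=1.0, w_size=1.0):
        age_days = (time.time() - last_access) / 86400.0
        recency = 1.0 / (1.0 + age_days)             # recent accesses score higher
        frequency = math.log(1 + access_count)       # diminishing returns
        smallness = 1.0 / math.log(2 + size_bytes)   # small objects are cheap to keep
        return w_recency * recency + w_frequency * frequency + w_size * smallness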

The third component of this caching system is the remote cache daemon. This program runs in conjunction with an HTTP daemon at remote server sites. The cache daemon provides a mechanism whereby information providers can grant or deny permission to cache their data throughout the Internet. The daemon's other function is to keep track of sites which have cached the server's objects. When the cache manager places an object into the long-term cache, the object is registered with the cache daemon at the remote site. The daemon regularly checks objects on the local filesystem to see if any have changed. If so, it informs the cache managers holding those objects that they have changed.
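
A sketch of the daemon's change detection follows: it remembers which cache sites registered each object, re-checks the files' modification times, and notifies the registrants when something changes. The data structures and the notify() transport are assumptions, not the protocol used in the paper.

    # Remote cache daemon sketch: track registrations per object and
    # report changed files to the caches holding them.  Illustrative only.
    import os
    import time

    registrations = {}   # path -> {"mtime": float, "sites": set of cache hosts}

    def register(path, cache_site):
        entry = registrations.setdefault(
            path, {"mtime": os.path.getmtime(path), "sites": set()})
        entry["sites"].add(cache_site)

    def notify(site, path):
        print("tell %s that %s has changed" % (site, path))   # placeholder

    def check_loop(interval=300):
        while True:
            for path, entry in registrations.items():
                mtime = os.path.getmtime(path)
                if mtime != entry["mtime"]:
                    entry["mtime"] = mtime
                    for site in entry["sites"]:
                        notify(site, path)
            time.sleep(interval)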