
A Tool for Configuring Multicast Data Distribution over Global Networks


April 28, 1995

Robert Voigt
Robert Barton
Shridhar Shukla


This paper addresses the challenge of constructing center-specific multicast trees in a global network. The described approach is an interaction-specific algorithm for data distribution center location using asymmetric traffic loads on network links. A fast and scalable algorithm for locating distribution centers based on the network load and the participants' locations is given. This algorithm is presented as part of a tool that allows a user to interactively build networks, specify the group of multicast participants, select manually or generate algorithmically a center for the multicast tree, and evaluate the tree quality. Simulation results on various topologies are presented showing that, with the above center location mechanism, algorithmically located center-specific trees yield lower tree cost and delay than center-specific trees with randomly selected centers.


1 Introduction

2 Existing Multicast Protocols

3 The Center Location Protocol

4 MAST - A Multicast AnalysiS Tool

5 Simulation Results

6 Conclusions and Future Work


Author Information

1 Introduction

Recently, high bandwidth applications, such as video conferencing, have penetrated the internet with requirements for a host to multicast the same information to multiple destinations. The implications of introducing such technology are just now being realized and yet, at the same time, developers are trying to exploit this new functionality and extend it to the global internet. Video conferencing and the audio and video sessions carried over the MBONE are examples of applications that use multicast.

The standard method for an internetwork to provide a multicast service is to construct a multicast delivery tree upon which information is routed amongst a known group of members. The purpose behind using a delivery tree is to minimize the number of copies of a single packet that the network must transmit, especially with the high volume of data originating from video sources. Whatever method is used to accomplish this data distribution, it must be efficient from both the network's and the end user's points of view.

There are two general approaches to multicast tree construction. The first uses source specific trees (SST), which involve finding the shortest path through the network from each sender to each receiver. The use of multiple source specific trees provides the minimum delay from each sender to every receiver but incurs high overhead. Center specific trees (CST), or shared trees, use one or more distribution centers in their construction of a single multicast tree. The use of a single minimum weight tree to connect a set of nodes in a graph is the well known Steiner Tree Problem [10]. A center specific tree is a low overhead method that sacrifices minimal end-to-end delay [21].

The current draft-standards for network level multicast being considered by the Internet Engineering Task Force (IETF) are Protocol Independent Multicast (PIM) and Core Based Trees (CBT). Both of these proposed approaches use centers to accomplish CST construction. Neither PIM nor CBT specifies an algorithmic method to select the location for the center which, it will be shown, can affect the quality of the resulting tree. Presently, the draft standards choose to have these locations selected administratively.

1.1 Motivation and Objectives

From the user's vantage point, efficient multicast data distribution means minimizing end-to-end delay. The network would prefer to minimize overhead, or the amount of state information needed to accomplish the task, while not creating traffic problems. A primary concern in a global network is the scalability of the solution. The solution lies in the efficient routing of the multicast data, which is directly dependent on the quality of the delivery tree. With these requirements as a basis, we identify the desirable features for multicast data distribution service in a global network.

The motivation for this work is as follows. The choice of a center location should not be a random process but should be based, to the extent possible, on some readily available a priori information about the participants. The tree centers should then be selected algorithmically based on this information. Until such a protocol can be integrated into the internet protocols, a tool is proposed to allow system administrators the ability to enter a topology, place some participants and allow an algorithmic solution to assist in the administrative placement of the centers. The specific objectives of this paper are:

  1. Describe a center location protocol that contributes to the construction of efficient center-specific trees using readily available knowledge about the participants.
  2. Describe a tool that allows for the flexible, interactive construction of networks which can be used to evaluate the existing protocols with intelligent vs. randomly placed centers.
  3. Compare the quality of the resultant center-specific trees with intelligently located centers to those with random placement.

1.2 Organization

The paper is organized as follows. Section 2 provides background in existing multicast protocols and their origins. Section 3 describes the protocol for center selection. Section 4 describes an interactive tool used to manually construct or randomly generate a network model and construct multicast trees. Section 5 details simulation results using the output of the topology/tree generation tool. Section 6 concludes with general results and directions for future research.

2 Existing Multicast Protocols

As stated above, the two general approaches to multicast tree construction are source specific trees and center specific trees. Much of the work in this area has focused on source specific trees, partly because they deliver packets at minimum delay and the required routing information is readily available. Multicast protocols to date have been unicast protocol specific. The newer versions that have been proposed remove this dependence allowing the multicast protocols to evolve at their own pace, independent of the unicast protocols.

For unicast routing, there are two distinct types of distributed routing algorithms, distance vector and link state. Distance vector routing requires that each node maintain the distance from itself to each possible destination. It does this by gathering information from its neighbors' distance vectors [16]. In link state routing, each router keeps a complete map of the topology and computes routes to each destination. It does this by sending advertisements to all routers, updating the network topology as it changes.
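The distance-vector update can be sketched as a single Bellman-Ford relaxation step. The sketch below is illustrative only; the data structures and names are assumptions, not details from the paper:

```python
# Sketch of one distance-vector update step (Bellman-Ford relaxation),
# assuming each node stores a dict of distances to every known destination.

INF = float("inf")

def merge_neighbor_vector(my_vector, neighbor_vector, link_cost):
    """Update my distance vector from one neighbor's advertised vector.

    Returns True if any entry improved (which would trigger a new
    advertisement to this node's own neighbors).
    """
    changed = False
    for dest, dist in neighbor_vector.items():
        candidate = link_cost + dist
        if candidate < my_vector.get(dest, INF):
            my_vector[dest] = candidate
            changed = True
    return changed

# A node two hops of cost from a neighbor that reaches D at cost 3
# learns a route to D at cost 5.
vec = {"A": 0}
merge_neighbor_vector(vec, {"A": 2, "D": 3}, link_cost=2)
```

Each node repeats this merge for every neighbor until no vector changes, at which point the distributed computation has converged.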

Multicast protocols have been based on work done on multi-destination delivery by Dalal and Metcalfe [4] and center based trees by Wall [19]. From these two works, respectively, source specific and center specific trees have emerged. Other areas of research continue to examine the distributed Steiner tree problem [9, 13] and propose solutions focusing on applicability to a specific application area [12].

The main focus in this paper is on the center specific tree and tools to locate a center. Since the quality of the tree is directly related to the proper placement of the distribution center, a strategy for finding the center is required. Some work has been done to locate the center prior to construction of the tree [17]. In addition, work has been done to "adjust" the center location after the tree has been constructed [22]. In these cases, assumptions had to be made either about a priori group knowledge or complete group topology information that may not be easily obtained or available.

2.1 Distance Vector Multicast

The present IP multicast is based on work done by Steve Deering [5] where the routers use the distance vector method for unicast route calculation, thus leading to the name Distance Vector Multicast Routing Protocol (DVMRP) [18]. DVMRP consists of constructing a source specific tree for each sender in a group; that is, each router maintains an entry for each source, group (S,G) pair.

The multicast tree is constructed when the first multicast packet is sent to the group using a truncated reverse path broadcast (TRPB) [6]. This is done by using a modification of the reverse path forwarding described in [4], where a packet arriving on the link the router would use to reach its source is forwarded out on all other outgoing links. TRPB reduces duplicates of multicast packets on those subnetworks that have more than one router attached and in the case where no members of a group exist on the attached subnets.

In DVMRP, the first multicast packet is flooded, using TRPB, to all nodes, which in turn decide if their attached subnets contain any members. If they do not, the router sends a prune message back towards the source of the multicast. For Reverse Path Multicast (RPM), the prune message is in the form of a non-membership report (NMR) [6]. If intermediate routers receive NMRs from each of their outgoing links, they in turn generate an NMR and send it back up the tree towards the source. This flood and prune strategy is one of the reasons why DVMRP is not considered a scalable method for multicast tree construction. Flooding of data is not a desirable feature in an internet environment, because it creates unnecessary overhead for uninvolved nodes.
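The forwarding decision underlying this flood and prune scheme can be sketched as follows. The interface names and the `pruned` set are illustrative assumptions, not details taken from the DVMRP specification:

```python
# Sketch of the reverse path forwarding (RPF) check at a router, assuming
# the router knows, per source, which interface lies on its own shortest
# path back to that source.

def rpf_forward(packet_source, arrival_iface, rpf_iface, out_ifaces, pruned):
    """Forward a multicast packet on all non-pruned links except the RPF one."""
    # Accept only packets arriving on the interface used to reach the
    # source; this is the reverse-path rule from Dalal and Metcalfe.
    if arrival_iface != rpf_iface[packet_source]:
        return []  # drop: not on the reverse shortest path
    # Flood out every other link that has not been pruned by an NMR.
    return [i for i in out_ifaces if i != arrival_iface and i not in pruned]

ifaces = ["if0", "if1", "if2"]
rpf = {"S": "if0"}
# A packet from S arriving on if0 floods out if1 and if2; after an NMR
# prunes if2, only if1 remains.
assert rpf_forward("S", "if0", rpf, ifaces, pruned=set()) == ["if1", "if2"]
assert rpf_forward("S", "if0", rpf, ifaces, pruned={"if2"}) == ["if1"]
```

The prune state (here the `pruned` set) is exactly the per-(S,G) overhead that makes DVMRP scale poorly for sparse, wide-area groups.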

2.2 MOSPF Multicast Tree Construction

Another approach to multicast is based on the Open Shortest Path First (OSPF) unicast routing algorithm [14]; it is called Multicast Extensions to OSPF (MOSPF) and is described in [15]. OSPF is a link state routing protocol which provides routers with a database describing the network topology and updates the database through the use of advertisements. In MOSPF, a new OSPF advertisement is added describing multicast locations [15]. Similar to OSPF link state changes, group information is broadcast across the network so that each router can maintain current state table information. Due to the broadcast nature of this information, MOSPF is also not considered to be a desirable internetwork-wide solution and is not considered scalable.

2.3 Protocol Independent Multicast

Protocol Independent Multicast (PIM) [4] is one of the two draft standards being considered by the IETF to solve the scalability problem of SSTs. Unlike DVMRP and MOSPF, PIM is designed to be independent of the underlying unicast protocol. The PIM architecture takes advantage of the natural hierarchy in the network and introduces two modes, dense and sparse. These two modes address the type of interaction desired relative to group composition and the size of the network. Dense mode PIM (PIM-DM) is similar to DVMRP and is a source specific tree based protocol. This method was found to be undesirable for groups whose members are distributed sparsely across a wide area [4]. Sparse mode PIM (PIM-SM) uses a center specific tree construction designed to address the scalability of its dense mode counterpart. The distribution center of PIM-SM is called a Rendezvous Point (RP). Sparse mode PIM tries to constrain the data distribution to minimize the number of network routers that receive it [4]. PIM-SM multicast tree construction also allows a hybrid mode, which includes both center and source specific trees, when the receivers request it.

PIM tree construction revolves around the selection of the RP. All senders for a group must register with the RP in the network. Receivers requesting to join the group set up the path from themselves to the RP. This is also how they learn about the senders in case they should later desire to form a shortest path tree between them and eliminate the RP for that source and receiver pair.

2.4 Core Based Trees

Core Based Trees (CBT) is a protocol for multicast tree construction which also uses a center specific, or shared, delivery tree [1]. CBT is similar to PIM-SM in that both initially choose a center from which they build the tree; in CBT, however, multiple centers are allowed for one group. In CBT a center is called a core. A group initiator (a user or administrator) sends out a CORE-NOTIFICATION message to all core routers for the group. This message initiates the building of a core tree which connects all the group's cores as a core backbone. A join request is then unicast out to a known core address. Any core router for this group that receives the request can act on it and reply with a join acknowledge. No action is taken by intermediate routers until a join acknowledge is sent back to the requesting node. The join acknowledge creates the tree branch [1].

At present, the IETF IDMR working group is undecided as to which of these two methods is the best choice for a scalable, wide area multicast protocol.

2.5 Missing Pieces and their Impact

Placement of the distribution centers, RPs and cores, is presently described by the PIM and CBT drafts as a task to be done "administratively." The PIM architecture document states that an RP can be any addressable entity in the internet and may be explicitly configured, discovered via address mapping, or looked up from a directory service [8]. CBT has adopted the "hand selection" approach to selecting the core, which entails an administrative decision based on the network topology and multicast group information [2]. In both cases, some administrative interaction is required. Recent work in the IETF has proposed that the group initiator in a PIM multicast select its designated router as the RP for the group, with others as backups in an ordered list. This method should perform well if the initiator is a highly active sender, but this is not guaranteed. The placement of the distribution centers has a direct effect on the efficiency of the multicast tree constructed. In turn, the efficiency of the tree affects the overhead of the network and the end-to-end delay for the user. At present the network administrator has no tools to assist in the placement of these centers.

3 The Center Location Protocol

Administrative location of the center can become problematic for several reasons. If the center is located far from the participants, a longer packet delay is experienced and excessive resources are consumed. If a good center is selected for a set of participants which later changes drastically, the center may severely impact the quality of the tree for the new participants. Administrative center selection is also likely to become difficult to manage when a large number of groups is involved. Also, to avoid traffic concentration it is preferable to distribute the centers over the topology instead of allowing one center to service multiple groups. Finally, the probability that the administratively located center will be a good one for all the participants decreases with the size of the participant set.

Considering that the large number of independent multicast groups to be eventually supported by these protocols are expected to vary widely in size, dynamics, geographical distribution, and end-user requirements, administrative location of centers appears to be a temporary solution at best. Ideally, centers should be located algorithmically in a group-specific manner based on the participants' locations and network load distribution to produce high quality distribution trees.

In an attempt to choose an ideal center, it is desirable to choose one which minimizes the distance from senders to the center and from the center to the receivers. A problem with finding such a center is that the information needed to pick this location is not easily obtained in a distributed network. In [17], a proposal for a center location protocol is given which uses a priori knowledge of the participants' relative distances. This information is usually not available or is time consuming to obtain.

3.1 General Approach

Our approach is based on the existence of a network-wide entity called a scheduler, similar to the session directory (sd) [11] tool, with which the participants of an interaction register prior to the start of the interaction. Using the registered information, the scheduler can determine the data distribution requirements of an interaction. In particular, the scheduler classifies the participants by collecting them into groups referred to as critical sets of participants (CSP). The exact method for this grouping can be used to set policy, such as how many senders share a single center or how many concurrent senders the network is capable of supporting given the senders' data rates.

3.2 The Tournament

One center is located for each CSP. A pair-wise selection process, referred to as a tournament, is set up among the network nodes corresponding to the participants in a CSP. The winner is determined to be the distribution center (to be used as an RP in PIM and a core in CBT). The use of CSPs permits the location of multiple centers for a single interaction.

The tournament mechanism relies on the scheduler to assign and communicate, to all participants in a CSP, a participant id, a CSP id for that CSP, the group id of the interaction, and a pairing of the participants for the initial round of the tournament for that CSP. The participants enter into a tournament with others with the same CSP id and group id. Although different types of initial round participant pairings are possible, our results indicate that the tournament outcome is not very sensitive to the initial round pairing chosen.

The paired participants play a match that results in one network node (which can differ from both players' nodes) advancing to the next round. The method used by the paired nodes to determine this intermediate winner offers the second avenue permitted by this protocol to impose a policy on the center location. One point to note is that the locations of nodes participating in the subsequent rounds are not known a priori. Thus, the locations for the second round onward are determined by the outcome of the previous tournament round. The difference between a regular tournament and this mechanism is that, in this mechanism, the winner of a match does not have to be one of the two players, but can be some third location.

The tournament mechanism arrives at the decision of the center's location quickly, in log2 N rounds, where N is the smallest power of two greater than or equal to the number of participants in a CSP. This feature permits the protocol to scale to CSPs with a large number of participants.

When the number of participants is not a power of two, the protocol inserts the required number of byes in the pairing for the initial round. The manner in which these bye positions are determined ensures that no byes are offered to the same participant in consecutive rounds. The reason for this feature is that if a participant or intermediate winner receives more than one successive bye, it will have a greater influence on the eventual center location.
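The tournament reduction can be sketched as below. The `play_match` function stands in for the policy-dependent match procedure of the protocol; the bye handling here is deliberately simplified and does not implement the consecutive-bye avoidance described above:

```python
import math

def run_tournament(participants, play_match):
    """Reduce a CSP to one winning node in ceil(log2 n) rounds.

    play_match(a, b) may return a third node (e.g. a midpoint of the
    shortest path between a and b); a bye (None) advances the other player.
    """
    n = len(participants)
    size = 1 << max(1, math.ceil(math.log2(n)))  # smallest power of two >= n
    bracket = list(participants) + [None] * (size - n)
    while len(bracket) > 1:
        nxt = []
        for a, b in zip(bracket[::2], bracket[1::2]):
            if a is None:
                nxt.append(b)
            elif b is None:
                nxt.append(a)
            else:
                nxt.append(play_match(a, b))
        bracket = nxt  # a full implementation would also rotate bye slots
    return bracket[0]

# With min() as a toy match function, the lowest node id always wins.
winner = run_tournament([3, 1, 4, 1, 5], min)
```

Because each round halves the bracket, the number of rounds grows only logarithmically with the CSP size, which is the scalability property claimed above.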

3.3 Policy Input and Skill Types

The selection of the winner for a match takes place according to some skill-type. Determining the exact nature of the skill-type permits a policy-related control input for this protocol. The maximum distance to either of the two participants of a match is an example of a skill-type, with the skill-level being higher if this maximum distance is lower. In general, skill-type refers to some attribute of individual network nodes with respect to the participants, and the number that quantifies this attribute for a particular node is its skill-level. For the given skill-type, each pair selects the intermediate node at the middle of the shortest path between them. The process is repeated for all rounds until a single node remains. The node selected as the winner for the CSP informs the scheduler of the CSP id and group id for which it won. The scheduler maintains a map of group ids, CSP ids, and their associated centers. When the winner from each CSP for the interaction has reported, the scheduler sends the registered participants a complete list of center locations for all CSPs in that group.
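One hypothetical realization of this match procedure is to let the middle node of the shortest path between the two players win. The BFS below assumes unweighted links for simplicity; a real network would use the unicast routing metrics:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in adj[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    return None

def midpoint_match(adj):
    """Match function whose winner is the middle node of the shortest
    path between the two players (which may be a third node)."""
    def play(a, b):
        path = shortest_path(adj, a, b)
        return path[len(path) // 2]
    return play

# On a five-node chain A-B-C-D-E, a match between A and E is won by C,
# a node that is neither player.
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
```

This illustrates why the tournament winner need not be a participant at all: repeated midpoint matches pull the surviving node toward the "center of mass" of the CSP.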

The protocol, as described here, makes no provision to relocate distribution centers during an interaction. It is assumed that the number of unanticipated participants joining a multi-party interaction does not grow excessively throughout the interaction. If this is not the case, a method to periodically execute the protocol with an associated hand-off mechanism between the old and the new centers could be used.

4 MAST - A Multicast AnalysiS Tool

We have designed a tool which simplifies the task of center selection and gives the network administrator the ability to make an informed decision on distribution center placement. In the absence of an automated and integrated internet solution, and until such a protocol gets adopted and implemented, this tool provides an interim method for intelligent center location. MAST is a menu driven tool with a graphical user interface (GUI) that allows the user to generate an internetwork topology, specify participants in a group interaction and their role (sender/receiver/both), construct different types of multicast trees and finally analyze multicast data distribution efficiency. In order to accurately describe a global network, MAST supports a two level hierarchical network like the present internet. Clusters of local nodes are tied together by inter-domain links to reflect an intra and inter-domain routing architecture.

4.1 The Topology Generator

MAST supports two methods for topology generation. The first is random generation of a network given the inputs shown in Figure 1. MAST supports asymmetric link loads to more accurately reflect those of the present internet. The topology generator uses the algorithm described by Waxman [20] to connect the nodes. In keeping with a hierarchical network, the connections for the intra-domain networks are calculated separately from the inter-domain connections. Given the parameters alpha and beta, a desired connectivity and node degree can be obtained.

Figure 1 Inputs to the Random Topology Generator
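The Waxman connection step can be sketched as follows. This uses one common parameterization of the Waxman edge probability, P(u, v) = alpha * exp(-d(u, v) / (beta * L)), where L is the largest inter-node distance; the exact roles of alpha and beta in MAST may differ:

```python
import math
import random

def waxman_edges(coords, alpha, beta, seed=0):
    """Connect nodes using the Waxman probability model (a sketch).

    coords maps node ids to (x, y) positions; alpha scales overall edge
    density, while beta controls how quickly probability decays with
    distance.
    """
    rng = random.Random(seed)
    nodes = list(coords)
    dist = lambda u, v: math.dist(coords[u], coords[v])
    L = max(dist(u, v) for u in nodes for v in nodes if u != v)
    edges = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if rng.random() < alpha * math.exp(-dist(u, v) / (beta * L)):
                edges.append((u, v))
    return edges

# Four nodes on a line; alpha = 0 yields no edges, larger alpha and beta
# yield denser graphs with mostly short edges.
coords = {i: (float(i), 0.0) for i in range(4)}
edges = waxman_edges(coords, alpha=1.0, beta=10.0)
```

Sweeping alpha and beta is how a generator of this style reaches a target average node degree, matching the tuning described above.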

This randomly generated topology can then be customized by adding and deleting links. Several views of the network are also available to help visualize the connectivity of the network including three dimensional views of the domains and the loads on the links.

The second method allows the user to manually build a custom network. Here, the user is given a blank page and is allowed to create domains, add routers, and connect them with intra-domain and inter-domain links. The delays in this case are based upon the Euclidean distances between the nodes. The participants in a group can be specified in the domain editor as senders, receivers, or both, and a center can be manually selected using this part of MAST.

The output of both of these tools is a set of adjacency matrices used to build the network objects, which allow for the accurate portrayal of an internetwork.

4.2 Multicast Tree Construction

The MAST tool incorporates a multicast tree construction component which allows for the construction of both center specific and source specific trees. Other trees which represent specific multicast protocols can also be constructed, such as DVMRP, PIM and CBT. Since the tool incorporates the concepts of routing tables with unicast routing information and multicast routing entries, any protocol which relies upon information found in these resources can be implemented in MAST through a standard set of templates.

4.3 Multicast Tree Distribution Center Selection

Another component of the tool is the ability to execute the center location protocol as specified above and select a center location based upon the outcome of the tournament. This is an important feature of the analysis of a center location strategy. Since the center can be placed manually in the custom topology generator, the outcome of that and other center placement techniques can then be evaluated and compared. The modularity of the tool allows for new strategies to be implemented and tested on the same topology.

4.4 The Object Oriented Model

The tool is, itself, an object oriented model of a realistic hierarchical network implementation. The basic classes and their relationships are described below. A key design goal was to make MAST objects look as much like actual network entities and resources as possible. The end result is then easier to compare and contrast with an actual implementation.

Figure 2. MAST Object Hierarchy

Network tools were implemented to facilitate the addition of multicast protocols. These tools include a least cost path function which computes a least cost path between any two nodes using Dijkstra's algorithm and then updates the appropriate unicast routing tables. Also, a trace route function which uses the information available in the routing tables is provided to return the path information to the calling node. All of these tools were designed to facilitate the implementation of multicast protocols as separate modules requiring only the information that would be available in an actual network, such as in the unicast routing tables. Border gateway routers are supported differently from internal routers to efficiently handle searches across domains. When a border router is encountered while calculating a least cost path, the internal network is treated as a cloud unless the destination address resides within that domain.
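A least cost path tool of this kind can be sketched with Dijkstra's algorithm, returning both the distances and the next-hop entries a unicast routing table would install. The border-router cloud handling is omitted here for brevity, and the data structures are assumptions:

```python
import heapq

def least_cost_paths(adj, src):
    """Dijkstra's algorithm over a weighted adjacency dict.

    Returns (dist, next_hop), where next_hop[d] is the first hop from src
    toward d, i.e. the entry a unicast routing table would install.
    """
    dist = {src: 0.0}
    next_hop = {}
    heap = [(0.0, src, src)]  # (cost so far, node, first hop from src)
    while heap:
        cost, node, first = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        if node != src:
            next_hop[node] = first
        for nbr, w in adj[node].items():
            c = cost + w
            if c < dist.get(nbr, float("inf")):
                dist[nbr] = c
                heapq.heappush(heap, (c, nbr, nbr if node == src else first))
    return dist, next_hop

# A triangle where the two-hop path A-B-C (cost 2) beats the direct
# A-C link (cost 5), so the table for A points to B for destination C.
adj = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
dist, nh = least_cost_paths(adj, "A")
```

The trace route function described above would then follow `next_hop` entries hop by hop from the source to the destination.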

4.5 Multicast Tree Performance Analysis

Unicast routing table information is entered through the use of least cost path calls or through physical adjacencies. Once this is done, distribution trees can be computed using the shortest forward paths from the sender to each receiver in the case of SSTs and from the center to each receiver in the case of CSTs. The trees can then be analyzed for the case of multiple concurrent senders for a group. Flows are computed for each link, where one flow corresponds to one upstream sender. To correctly account for the effect of multiple concurrent senders on the delay and cost, the number of flows on a link for a given number of concurrent senders is scaled down by the ratio of the number of senders using the link in their distribution tree to the total number of potential senders of the interaction. It is assumed that all senders are equally likely to send at any given time in the interaction. Thus, the number of flows on a link in the case of a given number of concurrent senders is scaled down by the probability that all the concurrent senders are on the sending side of the link.

The number of flows obtained in this manner is used for computing the delay and cost for the distribution tree. The link-level delay model used is that of an M/M/1 queue. The link-level cost model used is linear with the cost of sending a flow over an intra-domain and inter-domain link being unity and ten respectively. These models permit the use of delay as a metric applied by the end-user and cost as a metric applied by the network provider(s) for the quality of the distribution trees set up. Each link is assigned an initial delay based on the degree of asymmetry chosen in MAST and the Euclidean distance between the endpoints before any distribution trees are mapped on it.
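The link metrics just described can be illustrated as follows. The flow-scaling rule and the rate values here are simplified assumptions made for the sketch, not the paper's exact formulation:

```python
# Sketch of the link models described above: M/M/1 delay and a linear
# cost of 1 per flow on intra-domain links and 10 on inter-domain links.

def mm1_delay(service_rate, arrival_rate):
    """Mean delay of an M/M/1 queue, 1 / (mu - lambda); requires mu > lambda."""
    assert service_rate > arrival_rate, "link is saturated"
    return 1.0 / (service_rate - arrival_rate)

def link_cost(flows, inter_domain):
    """Linear cost per flow: unity intra-domain, ten inter-domain."""
    return flows * (10 if inter_domain else 1)

def scaled_flows(senders_using_link, total_senders, concurrent):
    """Scale flows by the chance a concurrent sender is upstream of the
    link, assuming all potential senders are equally likely to be active."""
    return concurrent * (senders_using_link / total_senders)

# 4 of 10 potential senders lie upstream of a link and 2 are concurrent,
# so the link carries 0.8 expected flows.
flows = scaled_flows(4, 10, 2)
```

Summing `mm1_delay` over each receiver's path and `link_cost` over every tree link yields the end-user delay metric and the provider cost metric used in the comparisons of Section 5.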

5 Simulation Results

The main purpose of this section is to show the outcome of the topology generation tools and the multicast tree building tools, and to demonstrate the kinds of analysis possible using MAST. First, random topologies were generated using the MAST random topology generation tool, and two representative samples were chosen to exhibit the features and potential benefits. Second, to demonstrate the utility of the custom topology generator, a portion of the MBONE was modeled using a combination of the random generation tool and the custom tool. The custom generation tool was then used to place and connect the 25 domains in a manner that would resemble major MBONE sites in North America.

5.1 Performance on Random Topologies

Two topologies were generated, each with 50 nodes, 5 domains, and 10 nodes per domain, and with overall average node degrees of 3 and 4 respectively. The inter-domain average node degree in each case was 2.4. Interactions among a sparse and a dense group were mapped on the generated topologies. A sparse (dense) interaction corresponded to a relatively low (high) density of participants in a domain, with participants spread out over most of the domains.

The sparse interaction had 5 potential senders and 15 participants distributed among the five domains as {3, 0, 0, 0, 2} and {5, 4, 2, 1, 3} respectively. The dense interaction had a total of 10 potential senders and 40 participants distributed as {4, 2, 2, 1, 1} and {9, 8, 8, 7, 8} respectively. In both cases, each interaction was considered to form a single CSP.

Three types of distribution trees were constructed for interaction/topologies from the setup described above. They were as follows.

  1. CST based on a center mapped on a randomly selected sender: This was constructed to illustrate how a default administrative location of the center (as suggested in [7]) performs.

  2. A CST based on the center given by the center location protocol (CLP) using random pairing of all participants: This was constructed to determine the effectiveness of the way the CLP is expected to be implemented. It represents the minimal information needed by the scheduler to perform the initial pairing.

  3. SSTs for each of the potential senders: This was constructed to determine a base case for comparison between the different center-based schemes and also to determine how the center-based schemes compare with the source-specific trees.

The normalized average delay per receiver per sender is given in Figure 3. The average delay seen per receiver per sender in the case of a single concurrent sender on an SST is used as the normalization factor.

Figure 3(a) Sparse Interaction - Node Degree 3

Figure 3(b) Dense Interaction - Node Degree 4

Figure 3 Delay Comparison

It can be seen from the above representative runs of the multicast tree analysis portion of MAST that while the SSTs provide the lowest delays, as expected, a CST based on a center at a random sender clearly performs worse than the CSTs with centers located using the CLP.

5.2 Simulating the MBONE

The MBONE [3] is considered to be a fast growing and challenging environment for present and future multicast protocols. The number of multicast capable routers has grown immensely in the past year alone. To accurately predict the performance of the multicast distribution trees in a large internetwork, the MBONE was judged an appropriate and desirable model. It is believed that the MBONE has somewhere between 1000 and 1500 routers in the United States alone. Using these numbers, we generated a 25 domain, 1250 node internetwork with a spatial distribution that resembled the approximate layout of the MBONE. Each of the 25 domains contains 50 routers in a randomly generated network. This demonstrates the interaction between the random and custom generation parts of MAST. The domains were connected by hand in the custom tool. A sample group was placed on the topology to represent a typical MBONE session. The layout of the domains in the internetwork is pictured in Figure 4.

The group membership was placed in domains whose geographic locations were in proximity to those areas where there is typically MBONE activity. Active participants, who were both senders and receivers, resided in domains D1, D3, D4, D5, D10, D15 and D22. Receivers only were located in domains D2, D3, D23, D24 and D25. The total number of participants was 20, 10 of whom were potential senders.

Figure 4 Distribution of MBONE Domains

The domains were generated with an average node degree of 3 and 50 nodes per domain. An example of a domain is shown in Figure 5; the nodes that act as border routers are labeled "GATE" in the figure.

Figure 5 Typical Intra-Domain Connections
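A minimal sketch of how such a domain might be generated (this is not MAST's actual generator): a random spanning tree guarantees connectivity, extra random links bring the average node degree up to 3, and a couple of nodes are tagged as border routers, playing the role of the "GATE" nodes in Figure 5.

```python
import random

def random_domain(n=50, avg_degree=3, n_gates=2, seed=0):
    """Generate one random connected domain: n routers and
    n * avg_degree / 2 links. A random spanning tree guarantees
    connectivity; extra random links are added until the target
    average node degree is reached."""
    rng = random.Random(seed)
    target_edges = n * avg_degree // 2   # 50 * 3 / 2 = 75 links
    edges = set()
    nodes = list(range(n))
    rng.shuffle(nodes)
    # Spanning tree: attach each node to a previously connected one.
    for i in range(1, n):
        u, v = nodes[i], nodes[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    # Add random extra links up to the target count.
    while len(edges) < target_edges:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    gates = set(range(n_gates))          # hypothetical border routers
    return edges, gates

edges, gates = random_domain()
print(len(edges))  # 75 links -> average degree 3
```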

The same three types of distribution trees discussed in Section 5.1 were constructed for this topology, and similar results were obtained for the delays and costs of the resulting trees. The delay and cost results for the three methods are shown in Figure 6; the legend from Figure 3 applies.

The results confirm the need for group-specific center location: not only does it perform better than a randomly selected center, but the delays of the resulting CSTs for dense interactions with few concurrent senders also compare well with the delays of SSTs.
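To illustrate why an informed center choice matters, the following sketch (not the CLP itself) evaluates every node as a candidate center and picks the one minimizing the maximum shortest-path delay to the group members; the graph, link costs, and group are made up for the example.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path delays from src over a directed (possibly
    asymmetric) graph given as {node: [(neighbor, cost), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_center(adj, group):
    # Candidate center minimizing the worst-case delay to the group;
    # a random choice would ignore this metric entirely.
    return min(adj, key=lambda c: max(dijkstra(adj, c)[m] for m in group))

# Small example network with asymmetric link costs (hypothetical).
adj = {
    0: [(1, 1), (2, 4)],
    1: [(0, 2), (2, 1), (3, 5)],
    2: [(0, 4), (1, 1), (3, 1)],
    3: [(1, 5), (2, 2)],
}
group = [0, 3]
print(best_center(adj, group))  # 1
```

Here node 1 is chosen because its worst delay to any group member is 2, whereas, for instance, picking node 3 at random would give a worst-case delay of 5.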

Figure 6(a) Delay Results

Figure 6(b) Cost Results

Figure 6 Results for MBONE Topology

The normalized cost of the three types of distribution trees is given in Figure 6(b); the cost of an SST with a single concurrent sender is used as the normalization factor. Notably, given our cost model, when there are few potential senders and the number of concurrent senders is low, SSTs perform as well as or better than CSTs. With a large number of potential senders, however, SSTs have a slightly higher cost at low numbers of concurrent senders; as the number of concurrent senders rises, the cost of SSTs falls significantly below that of CSTs.

6 Conclusions and Future Work

This paper describes a tool that can be used to model a two-level hierarchical internetwork, construct multicast distribution trees on the model, and evaluate their delay and cost characteristics. In addition, a scalable center location mechanism and protocol were included to assist the network administrator in the intelligent selection of the center. The protocol was motivated by the need to locate cores in CBT or RPs in PIM in a group-specific manner instead of administratively. It accepts various policy inputs so that the center location protocol determines the type and number of centers most suitable for the interaction being set up; the use of CSPs and the appropriate definition of skill-types to be used in the tournament permit enforcement of these policies. The results generated for random and custom topologies establish the need for such a protocol and provide a comparison with SSTs as the number of concurrent senders varies. In addition, the results for a large topology similar to the MBONE clearly demonstrate the need for an informed center-placement decision.

The results presented are somewhat limited by the size and variety of the group make-up and must be verified for larger numbers of participants in follow-on work. Other attributes that can be used as skill-types, and their impact on the types of trees created, need to be investigated. The impact of policy-related inputs to the CLP must be characterized in realistic interactions.


References

[1] Anthony Ballardie, A New Approach to Multicast Communications in a Datagram Internetwork, PhD thesis, January 1995.

[2] Tony Ballardie, Core Based Tree (CBT) Multicast: Architectural Overview and Specification, Internet Draft, November 1994.

[3] Steve Casner, Frequently Asked Questions (FAQ) on the Multicast Backbone (MBONE), December 22, 1994.

[4] Yogen K. Dalal and Robert M. Metcalfe, Reverse Path Forwarding of Broadcast Packets, Communications of the ACM, Vol. 21, No. 12, December 1978.

[5] Stephen E. Deering, Host Extensions for IP Multicasting, RFC 1112, Internet Network Working Group, August 1989.

[6] Stephen Deering and David Cheriton, Multicast Routing in Datagram Internetworks and Extended LANs, ACM Transactions on Computer Systems, May 1990.

[7] Stephen Deering, Deborah Estrin, Dino Farinacci, Van Jacobson, Ching-Gung Liu, and Liming Wei, An Architecture for Wide-Area Multicast Routing, in Proc. ACM SIGCOMM, August 1994.

[8] Stephen Deering, Deborah Estrin, Dino Farinacci, Van Jacobson, Ching-Gung Liu, and Liming Wei, Protocol Independent Multicast (PIM): Motivation and Architecture, Internet Draft, draft-ietf-idmr-pim-arch-, January 11, 1995.

[9] R. G. Gallager, P. A. Humblet, and P. M. Spira, A Distributed Algorithm for Minimum-Weight Spanning Trees, ACM Transactions on Programming Languages and Systems, Vol. 5, No. 1, January 1983.

[10] F. K. Hwang, D. S. Richards, and P. Winter, The Steiner Tree Problem, North-Holland, 1992.

[11] Van Jacobson, The Session Directory Tool, Lawrence Berkeley Laboratory, 1992.

[12] Vachaspathi P. Kompella, Joseph C. Pasquale, and George C. Polyzos, Multicast Routing for Multimedia Communications, IEEE/ACM Transactions on Networking, Vol. 1, No. 3, June 1993.

[13] L. Kou, G. Markowsky, and L. Berman, A Fast Algorithm for Steiner Trees, Acta Informatica, Vol. 15, pp. 141-145, 1981.

[14] J. Moy, OSPF Version 2, RFC 1583, Internet Engineering Task Force, Network Working Group, March 1994.

[15] J. Moy, Multicast Extensions to OSPF, RFC 1584, Internet Engineering Task Force, Network Working Group, March 1994.

[16] Radia Perlman, Interconnections, Addison-Wesley, 1992.

[17] Shridhar B. Shukla, J. Eric Klinker, and Eric B. Boyer, Multicast Tree Construction in Network Topologies with Asymmetric Link Loads, Technical Report NPS-EC-94-012, September 30, 1994.