
Multicasting on British Telecom's Futures Testbed

Margarida CORREIA <margarida.correia@bt-sys.bt.co.uk>
Kevin SMITH <kevin.smith@bt-sys.bt.co.uk>
Chris GIBBINGS <chris.gibbings@bt-sys.bt.co.uk>
Mark BARRETT <mark.barrett@bt-sys.bt.co.uk>
Uma KULKARNI <uma.kulkarni@bt-sys.bt.co.uk>
Abdulrahman ADDAS <abdulrahman.addas@bt-sys.bt.co.uk>
BT Laboratories
United Kingdom

Abstract

In this paper, we describe recent experiments involving the delivery of IP multicast services on BT's "Futures Testbed" network. The Testbed is a broadband platform incorporating IP and ATM technology and supporting 700 users within BT's Research Department at Martlesham Heath. The Testbed also has broadband IP/SDH links off-site to London, Cambridge, Colchester, and Norwich, allowing a number of collaborative network and application experiments. This paper describes how multicast was implemented in our network and explains why it is important for the new generation of multimedia applications that work on a one-to-many basis. There is also a description of the multicast services being deployed on the Futures Testbed network, such as the delivery of high-quality video and audio and multipoint conferencing. These services are being used for the live, interactive distribution of conferences, meetings, and Business TV, as well as for collaborative work through application sharing. A study of different mechanisms for maintaining QoS for these services is presented. We have also tested the delivery of multimedia applications using different multicast mechanisms.

1. Introduction

The past few years have seen significant growth in the use of multicast services over internetworks, particularly for real-time applications. Multicasting allows the same data stream to be replicated and sent to a group of users, reducing the load on both the network and servers. TV-quality video is now easily achieved over local-area networks (intranets), and such applications present a view of what will soon be possible over the wide area, demonstrating the potential of TCP/IP protocols to realize the true information superhighway.

The "Futures Testbed" is a live network that currently serves over 700 users with 1300 machines (PCs, Unix workstations, and Apple Macintosh) at BT Laboratories with a combination of switched Ethernet and direct ATM connections. This network has been used for a wide range of future network studies. The network supports the delivery of live high-quality MPEG-1 video and audio to the desktop, as well as multipoint conferencing using M-JPEG and H.263 codecs.

In this paper we describe recent experiments involving the delivery of IP Multicast Services, including live video distribution and multiparty conferencing. This paper also discusses the implementation and monitoring of Quality of Service (QoS) for these applications. In section 2 an overview of the Futures Testbed network infrastructure is given. This is followed in section 3 by a description of IP multicast and how the Futures Testbed network has been adapted for multicast traffic. Section 4 describes the different types of multimedia applications running over the network using multicast. Section 5 discusses the issue of QoS delivery -- its implementation and management. These aspects are then combined to present an approach to building an intranet which supports full multimedia services with scalability.

2. The Futures Testbed Network

The Futures Testbed is a broadband platform incorporating IP and ATM technology [1]. The network covers offices in nine buildings separated by up to 1 km. At present we provide good-quality video and interactive working applications to each user's desk. An experimental area is used to test beta and pre-release versions of software and to integrate new network technologies before they are introduced to the rest of the network. The Futures Testbed is not restricted to the Martlesham site, but also has broadband IP and ATM over SDH (Synchronous Digital Hierarchy) links off-site to London, Cambridge, and Colchester (Essex University), allowing a number of collaborative network and application experiments. This extended network is called LEANet (London East Anglia Network).

The on-site network has an ATM backbone providing 155 Mbit/s links between multiprotocol routers that are interconnected with a Switched Virtual Circuit (SVC) mesh. The ATM backbone consists of Cisco LS1010 ATM switches with single-mode fiber between buildings and multimode fiber within buildings. Cisco Catalyst 5000 Ethernet switches are connected to the routers using LAN emulation (LANE) at 155 Mbit/s or Fast Ethernet at 100 Mbit/s. Most users have a 10 Mbit/s or 100 Mbit/s switched Ethernet link. Some ATM desktop connections of 25 and 155 Mbit/s are available for high-end workstations and multimedia servers (see Fig. 1).


Fig. 1. The Futures Testbed network topology

The experimental test area consists of a further 8 multiprotocol routers, gigabit Ethernet, and other flavors of IP over ATM technology such as cell-switch routers and IP switching.

The LEANet network connects the BT Laboratories network to various university campuses and other BT sites and provides the high bandwidth needed to deploy all kinds of multimedia services. This network consists of an SDH dual ring (2.48 Gbit/s) to which are connected ATM switches (ATM over SDH) and high-speed IP routers, which run IP directly over SDH.

This network forms a platform for the investigation of new and emerging protocols such as Protocol Independent Multicast (PIM), the Real-time Transport Protocol (RTP), the RTP Control Protocol (RTCP), the Distance Vector Multicast Routing Protocol (DVMRP), the Resource ReSerVation Protocol (RSVP), and Internet Protocol version 6 (IPv6). Also, the high capacity of the backbone and efficient multicast using point-to-multipoint SVCs allow high-quality multimedia services and scalability.

3. Multicast on the Futures Testbed Network

For applications that need to deliver data on a one-to-many basis, multicast is the most suitable type of delivery because of its use of group addresses. A group address is carried in the destination field of the IP header in place of a standard class A, B, or C unicast address; group addresses are class D addresses and range from 224.0.0.0 to 239.255.255.255. With this group addressing, multicast delivery and membership are bounded only by the scope of the multicast-enabled network and the Time-To-Live (TTL) parameter.
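As an illustration, the short Python sketch below sends a datagram to a class D group address and uses the standard IP_MULTICAST_TTL socket option to bound how far the traffic propagates; the group address, port, and TTL value are arbitrary examples rather than the Testbed's actual configuration.

    import socket
    import struct

    # Example group address and port, chosen arbitrarily for illustration.
    GROUP, PORT = "239.192.0.1", 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The TTL bounds the scope of delivery: packets are discarded once the
    # hop count is exhausted, so a small value keeps the traffic site-local.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 16))
    sock.sendto(b"multicast test payload", (GROUP, PORT))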

Hosts can dynamically join and leave multicast groups via IGMP (Internet Group Management Protocol) [2]. IGMP is the protocol used between hosts and their first-hop router. When a host wants to join a group, it transmits an IGMP report with the destination IP address set to 224.0.0.1 (a reserved address for the attention of all systems). On receipt of an IGMP report, a router adds the interface on which the report was received to the distribution tree for the specified group. For each multicast-enabled interface, the router periodically sends an IGMP host query message to learn which groups are still required and to maintain the multicast distribution trees. When a host receives an IGMP query, it responds with an IGMP report for each group it requires.
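In practice a host application does not construct IGMP messages itself: joining a group through the standard sockets interface causes the host's IP stack to issue the IGMP report described above. The following minimal Python receiver sketch (with an arbitrary example group and port) joins a group, reads one datagram, and then drops its membership; the leave behavior that results depends on the IGMP version, as discussed below.

    import socket
    import struct

    # Example group address and port, chosen arbitrarily for illustration.
    GROUP, PORT = "239.192.0.1", 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group makes the IP stack send an IGMP membership report, so the
    # first-hop router grafts this interface onto the group's distribution tree.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(2048)
    print("received %d bytes from %s" % (len(data), sender[0]))

    # Dropping membership lets an IGMP version 2 host send an explicit leave;
    # a version 1 host simply stops answering the router's periodic queries.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()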

At present three versions of IGMP have been specified, with most hosts currently supporting only version 1. Version 1 has a "fast join" facility: a host wishing to join a multicast group immediately sends an IGMP report for the group rather than waiting for the next periodic query from the first-hop router. When a host wishes to leave a group, it simply stops responding to the IGMP query messages for that group. If the router receives no IGMP report for a group after a preconfigured number of queries, and no downstream neighbors require that group on the interface, then the group is pruned from the interface. Version 2 adds a "fast leave" facility: a host wishing to leave a group sends a leave message to 224.0.0.2 (a reserved address for the attention of all routers). IGMP version 3 adds support for group-source reports, with which a host can choose the sources it wishes to receive data from. For audio and video multicast streams where each receiving host is also transmitting RTP [3] (Real-time Transport Protocol) messages, this facility will help to conserve bandwidth and save CPU cycles.

As previously stated, IGMP is used only between hosts and the first-hop router; for router-to-router distribution, a multicast routing protocol is used to build and maintain group distribution trees (see Fig. 2).


Fig. 2. Multicast protocol usage

There are a number of multicast routing protocols, and the decision of which to use is determined by the vendor equipment deployed, the topology of the network, the type of applications, the number of transmitting sources, and the number of receiving hosts. Some multicast routing protocols are:

  • DVMRP (Distance Vector Multicast Routing Protocol)
  • PIM-SM (Protocol Independent Multicast-Sparse Mode)
  • PIM-DM (Protocol Independent Multicast-Dense Mode)
  • MOSPF (Multicast Open Shortest Path First)
  • CBT (Core Based Trees)

MOSPF, which is basically the OSPF routing protocol with extensions, makes use of the SPF (Shortest Path First) algorithm. All the other multicast protocols can be split into two types: "broadcast and prune" and "explicit join." Broadcast and prune protocols (DVMRP, PIM-DM) typically build source-specific distribution trees (Fig. 3-A) while explicit join protocols (PIM-SM, CBT) build shared trees (Fig. 3-B) from a Rendezvous Point (RP).


Fig. 3. Building of multicast source-based and shared trees
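The toy Python sketch below (using an invented five-router topology rather than the Testbed's) illustrates the difference in mechanism: a broadcast and prune protocol delivers the first packets to every router and relies on prunes to cut delivery back to a source-based tree, while an explicit join protocol builds only the branch from a member towards the RP.

    from collections import deque

    # Invented router topology for illustration; A acts as both source and RP.
    LINKS = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "E"], "D": ["B"], "E": ["C"]}
    SOURCE, RP, MEMBERS = "A", "A", {"D"}

    def path(start, dest):
        """Hop-by-hop path found by breadth-first search, standing in for the
        unicast routing table consulted when forwarding joins."""
        parent, queue = {start: None}, deque([start])
        while queue:
            node = queue.popleft()
            for neigh in LINKS[node]:
                if neigh not in parent:
                    parent[neigh] = node
                    queue.append(neigh)
        hops, node = [], dest
        while node is not None:
            hops.append(node)
            node = parent[node]
        return list(reversed(hops))

    # Broadcast and prune: the first packets are flooded to every router ...
    flooded = set(LINKS)
    # ... then routers with no members downstream prune, leaving a source-based tree.
    source_tree = set().union(*(path(SOURCE, m) for m in MEMBERS))

    # Explicit join: nothing is delivered until a member's router joins towards
    # the RP, so only the joined branch (the shared tree) is ever built.
    shared_tree = set().union(*(path(m, RP) for m in MEMBERS))

    print("initially flooded to:", sorted(flooded))
    print("source-based tree:   ", sorted(source_tree))
    print("shared tree:         ", sorted(shared_tree))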

On the Futures Testbed the choice of multicast routing protocols is limited to those supported by our Cisco routers (PIM-SM, PIM-DM, and DVMRP). Currently we use both DVMRP and PIM-SM.

DVMRP is used in our connection to the "MBONE", which is a multicast network overlaid on the public Internet. The Internet does not natively support multicast, so the MBONE provides connectivity by tunneling the multicast data within unicast packets. However, for deployment on our ATM backbone, DVMRP did not seem the best choice because multicast usage started with a small number of participants and is only growing as the number and quality of applications increase. This initial "toe in the water" approach naturally lends itself to explicit join protocols rather than broadcast and prune protocols such as DVMRP. For this and other reasons we use PIM-SM [4]: it uses the unicast routing paths to deliver traffic, with no dependency on which unicast routing protocol is used; it is an explicit join protocol; and it supports both shared and source-specific trees, with the switch between them determined by the traffic level (packets per second) on a per-group basis. To provide a contiguous multicast network, the RP was initially located on the same router as the MBONE connection. However, due to issues with PIM-SM, the join period was slow (up to 60 seconds).

A solution to this (and the configuration now used on the Futures Testbed) is to have multiple RPs. To allow this while maintaining our MBONE connectivity, we implemented the "Administratively Scoped" [5] class D address space. From this address space we use the portion allocated for local/site usage, with each of our routers having a unique sub-range. With this configuration each router becomes the RP for its own address block, while the router with our MBONE link remains the RP for all globally reachable addresses. The benefit is that the first-hop router for a source becomes the RP for that group; by default the router then uses a source-specific tree, so data delivery is immediate rather than dependent on periodic updates.
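The sketch below gives a flavor of this RP selection in Python; the scoped sub-ranges and router names are invented for illustration and do not reflect the Testbed's actual allocation.

    import ipaddress

    # Invented per-router sub-ranges of the administratively scoped space (239/8).
    SCOPED_RP_BLOCKS = {
        ipaddress.ip_network("239.192.1.0/24"): "router-building-1",
        ipaddress.ip_network("239.192.2.0/24"): "router-building-2",
    }
    # Globally reachable groups keep the router with the MBONE link as their RP.
    GLOBAL_RP = "mbone-gateway-router"

    def rp_for_group(group):
        """Return the rendezvous point responsible for a multicast group."""
        addr = ipaddress.ip_address(group)
        for block, rp in SCOPED_RP_BLOCKS.items():
            if addr in block:
                return rp
        return GLOBAL_RP

    print(rp_for_group("239.192.1.20"))   # served by its local building router
    print(rp_for_group("224.2.127.254"))  # a globally reachable (e.g., MBONE) group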

Packet replication can be carried out in the router (layer 3) or by using the switch fabric (layer 2). Both Ethernet and ATM switches can copy packets effectively, but there are different factors to consider in the two cases.

The original means of connecting the routers on the Testbed ATM backbone was a mesh of PVCs. When a stream was requested by several routers, separate copies had to be made by the originating router and sent out of its ATM port. Although link capacity had not been seriously affected by these duplicate copies flowing out over the same link, the processor load associated with layer 3 copying was gradually increasing. Moving from PVCs to point-to-multipoint SVCs, where copying is carried out in the ATM switches, gave a significant reduction in processor load. We present quantitative results in section 5. The current implementation has a single shared multipoint SVC for all streams, so they are delivered to all downstream routers whether they want them or not. The next improvement is to move to separate VCs for large streams, so that joins and prunes can be carried out at the ATM level. Although this will not have a huge impact on CPU load, because routers can discard unwanted streams very efficiently, it will free capacity on some links.

Unlike ATM, Ethernet is good at broadcasting packets, and broadcasting has been the traditional approach to dealing with multicast, since the Ethernet switch has no knowledge of the negotiations for multicast stream delivery taking place at layer 3. However, flooding packets to all ports is not at all compatible with a network carrying, say, 10 Mbit/s of multicast traffic, and a new means of ensuring that multicast addresses are mapped correctly to switch ports is needed. Some Ethernet switches listen in to IGMP packets, thus duplicating some router functionality. Cisco has taken an alternative approach and developed a protocol called CGMP (Cisco Group Management Protocol), in which the router is responsible for sending a message (over a well-known multicast address) to the switch giving it the multicast-address-to-MAC-address mapping. The absence of explicit "leave" messages in IGMP version 1 is a problem in that all port mappings stay in place until the last host leaves, penalizing channel hoppers, who may end up with a lot of unwanted multicast traffic. However, more hosts are beginning to use IGMP version 2, where this problem does not arise.
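Whether a switch snoops IGMP itself or is told the mapping via CGMP, the underlying mapping from a class D group address to an Ethernet multicast MAC address is the one defined in RFC 1112 [2]: the low-order 23 bits of the group address are placed into the fixed 01:00:5e prefix. The short Python sketch below computes this mapping; because 5 bits of the group address are discarded, 32 different groups share each MAC address, which is one reason layer 2 filtering alone cannot fully separate groups.

    import ipaddress

    def multicast_mac(group):
        """Map an IPv4 multicast group to its Ethernet multicast MAC address by
        placing the low-order 23 bits of the group address into the fixed
        01:00:5e:00:00:00 prefix (RFC 1112)."""
        low23 = int(ipaddress.ip_address(group)) & 0x7FFFFF
        return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

    # 224.1.1.1 and 225.1.1.1 differ at layer 3 but map to the same MAC address.
    print(multicast_mac("224.1.1.1"))   # 01:00:5e:01:01:01
    print(multicast_mac("225.1.1.1"))   # 01:00:5e:01:01:01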

4. Multimedia applications

At present the Futures Testbed network is able to provide various multimedia applications including videoconferencing, Video-On-Demand (VOD), and high-quality Intranet TV. For the purpose of this paper we will describe the multicast applications used on our network.

White Pine's CU-SeeMe [6] is a videoconferencing product that allows simultaneous multipoint video and audio conferences, with the ability to use multicast traffic (see Fig. 4). The application supports different video codecs such as M-JPEG and H.263.


Fig. 4. CU-SeeMe

To deliver high-quality live and pre-recorded video and audio to the desktop, the Futures Testbed uses Precept's IP/TV [7] software.

IP/TV is a software product that enables real-time delivery of multimedia content over LANs and WANs to every desktop using IP multicast. For the Internet, codecs such as Vxtreme video, GSM audio, and H.261 may be used; on intranets, better results are achieved through the use of MPEG-1. The software is PC-based and consists of three modules: viewer, server, and program guide.

The viewer (Fig. 5) displays the list of available programs (including MBONE sessions), shows the user-selected channel, and allows the user to send questions (e.g., while watching a live transmission). It also provides a QoS indicator for the application and the network.


Fig. 5. IP/TV viewer

The server transmits its content based on parameters specified in the Program Guide. Slidecast is an additional feature, used when transmitting lectures or presentations, that shows any slides being presented to viewers in a separate window.

The Program Guide is a Web-based tool, hosted on a Web server (NT or UNIX), for administering and managing the IP/TV servers.

The services listed above have all been deployed on the Futures Testbed. About 80 people use these applications daily for easier working, access to information, and collaboration between different areas. We have 12 IP/TV servers on the network providing live and pre-recorded content, located in lecture theaters, seminar rooms, and university campuses, among other places. There are also channels with general information about the company and the world, as well as channels showing previous talks and lectures.

5. Quality of Service (QoS)

Irregular delivery of multimedia data has effects that are easily detected by users, such as jerky movement or blockiness as video is presented on their screen. These symptoms can be caused either by an overloaded PC or by a network problem, making desktop support much more difficult than for conventional non-real-time applications. Measuring QoS parameters is a way of distinguishing between these causes and is also important in developing a network that can perform adequately.

RTCP [3] is one means of providing real-time feedback on QoS parameters (for example, packets received and lost), reception quality, user names, and so on. This enables participating users to be logged, along with their session quality statistics. RTCP sends feedback to the sender and to the other recipients of a multicast stream, so recipients can compare their reception quality with that of others to help isolate problems.
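As an illustration of the kind of feedback RTCP carries, the Python sketch below computes the loss fields of an RTCP receiver report in the manner described in RFC 1889 [3]: the cumulative number of packets lost and the 8-bit "fraction lost" for the interval since the previous report. The sequence-number bookkeeping that yields the "expected" and "received" counts is assumed to be done elsewhere.

    def rtcp_loss_fields(expected_prior, received_prior, expected, received):
        """Loss fields for an RTCP receiver report (RFC 1889). 'expected' is
        derived from the extended highest sequence number seen, 'received' is
        the count of packets actually received; the *_prior values are the same
        counters at the time of the previous report."""
        cumulative_lost = expected - received
        expected_interval = expected - expected_prior
        received_interval = received - received_prior
        lost_interval = expected_interval - received_interval
        if expected_interval == 0 or lost_interval <= 0:
            fraction_lost = 0          # no loss (or duplicates) in this interval
        else:
            fraction_lost = (lost_interval << 8) // expected_interval
        return cumulative_lost, fraction_lost

    # Example: 1000 packets expected and 990 received since the last report.
    print(rtcp_loss_fields(0, 0, 1000, 990))   # (10, 2) -> fraction 2/256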

Although the need for QoS measurements is clear, there is no consensus on the benefits of prioritizing or reserving capacity for individual data flows in campus networks. As high-quality audio and video are very sensitive to network latency, it might be thought that there is a need to reserve network resources for this traffic. RSVP is one solution to this problem, allowing the dynamic reservation of bandwidth and assignment of priorities to various traffic types on IP networks. Routers and hosts act on the QoS request along the distribution path. However, it is thought that RSVP may not scale to allow fine-grained control over a campus network [8].

Another approach is to prioritize traffic according to bandwidth requirement or protocol type and then to try to ensure that delay-sensitive traffic is given a high priority. Various queuing mechanisms are available, but their impact on router processor load and the need for manual optimization are possible disadvantages.

Reservation or priority mechanisms are likely to be very important for wide-area links, but in the campus network, there is the attractive alternative of upgrading link capacity. As well as being relatively cheap, it also benefits all users rather than just those with delay-sensitive applications. A network with minimal prioritization and no reservation is also likely to be simpler to manage. The Futures Testbed is currently a completely "best effort" network with 155Mbit/s or 100Mbit/s backbone links and is therefore a good place to see how far this simple approach can be taken. This section describes our experience with broadband (2 to 3 Mbit/s) multicast applications.

The primary multicast application used was Precept IP/TV. At any time, 12 to 15 Mbit/s of MPEG traffic is being transmitted over the network. Reception quality has been monitored using RTCP and Precept's StreamWatch software, and it has been shown that high-quality video can be received over the entire Testbed. Low-specification PCs have sometimes shown jerky video with a low frame rate, but RTCP messages confirm that this is not caused by the network and is instead due to problems higher in the application stack.

CGMP is running on all Ethernet switches, ensuring that multi-megabit streams go only where they are wanted. Before CGMP, even the relatively low level of multicast traffic from the MBONE could cause problems (mainly to dial-in servers) when it was broadcast by the switch to all ports.

Over the ATM backbone the router interconnection has been changed from a PVC mesh to a point-to-multipoint SVC mesh. This change was made because of concerns over router processor load, and any reduction in link utilization has been incidental. The following experiments compared Layer 3 (IP) and Layer 2 (ATM) copying directly.

A 2Mbit/s stream from a video source was sent over LANE to a Cisco 4700 router. This then distributed the video to downstream routers via Cisco LS1010 ATM switches using point-to-point (unicast) SVCs, as shown below in Fig. 6. Copying was carried out in the router.


Fig. 6. Layer 3 multicast packet replication

The experiments were then repeated with a point-to-multipoint SVC between the routers, which means that the Layer 2 device (ATM switch) was copying the stream. For both experiments, CPU loads were recorded for 1-4 subscribed routers and are shown in Graph 1.


Graph 1. CPU load when the router is using fast switching

As expected, copying at Layer 2 is much more efficient than copying at Layer 3. As well as the low router CPU load with L2 copying, it should be noted that the switch CPU load did not increase significantly. Similar results were also obtained with the router configured for process switching, but all the loads are higher because the router is examining the whole packet rather than just the headers. Note that the benefits of Layer 2 replication apply equally to Ethernet switches, which also tend to have very efficient copying mechanisms.

Using the multipoint SVCs for delivery of IP multicast has reduced CPU load to <15% on the core 4700 routers, and we conclude that the Futures Testbed can deliver multimedia services at high quality to all users, independent of their location on the network. The fact that our network is still working on a best-effort basis and is able to support these highly demanding multimedia services indicates that the "best effort, upgraded link" strategy may be viable over the next few years, especially as both ATM and Ethernet at >600Mbit/s become widely available.

For these experiments we did not have enough traffic to load the network sufficiently to see degradation of quality. Future work will include tests with background traffic from a traffic generator as well as the implementation of the QoS techniques described earlier. Our experiments have shown that Layer 2 replication can drastically reduce router processor load, especially with 1:3 or greater fan-out, and this is an important consideration in scaling IP multicast.

6. Conclusions

We have shown that good quality video can be sent over a campus network to many desktops using IP multicast. Two essential components -- a high capacity backbone and switched Ethernet to the desktop -- are becoming standard in new installations anyway. IP multicast of large streams also requires efficient packet replication and a mechanism to ensure that streams are only distributed to hosts that require them. Efficient replication is largely a matter of economics -- we have shown that it is better to copy at Layer 2 in Ethernet and ATM switches than to upgrade a router -- but in some cases may involve using network capacity effectively, which also favors Layer 2 replication. We have used Cisco's CGMP to ensure that streams do not degrade network performance for other network users.

Although Quality of Service guarantees for video and audio streams are important where a large amount of traffic is contending for limited link capacity, this appears to be mainly a problem for wide-area links. In the campus backbone it is relatively easy to upgrade desktops to full duplex 100Mbit/s Ethernet and the backbone to 622Mbit/s ATM or (soon) 1Gbit/s Ethernet. On the other hand, QoS mechanisms require more router processing power and potentially an expensive upgrade. We have used a strategy of keeping link utilization low rather than introducing complexity into routers and switches. So far this has been successful, but network-measuring tools are required to monitor performance as network load grows.

Our experiences with applications have shown that video makes high demands on PC processors, and users will often perceive PC problems as network problems. Desktop support staff will therefore need a range of tools to investigate problems. RTCP feedback from multicast clients, used by Precept's StreamWatch monitor, is likely to become increasingly used. Video encoding quality is also important, and users are very sensitive to certain defects. The trade-off between encoding time and bit rate will make delay-sensitive applications like video-conferencing use greater bandwidth than "one-way" video. Implementing a new generation of real-time workgroup products is an exciting challenge for both application and network designers.

References

[1] J. W. R. Barnes, J. Chalmers, P. Cochrane, D. Ginsburg, I. D. Henning, D. J. Newson, and D. J. Pratt, "An ATM network futures testbed," BT Technology Journal, vol. 13, no. 3, July 1995.

[2] S. E. Deering, "Host extensions for IP multicasting," IETF RFC 1112, August 1989.

[3] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications," IETF RFC 1889, January 1996.

[4] D. Estrin, D. Farinacci, A. Helmy, D. Thaler, S. Deering, M. Handley, V. Jacobson, C. Liu, P. Sharma, L. Wei, "Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol Specification," IETF RFC 2117, June 1997.

[5] D. Estrin, D. Thaler, A. Helmy, "PIM Multicast Border Router (PMBR) specification for connecting PIM-SM domains to a DVMRP Backbone," IETF Draft, February 1997.

[6] White Pine CU-SeeMe Web page, www.cuseeme.com

[7] Precept IP/TV Web page, www.precept.com

[8] I. Henning, S. Sim, C. Gibbings, M. Russell and P. Cochrane, "A Testbed for the Twenty-first Century," Proceedings of the IEEE, vol. 85, no. 10, October 1997.
