Fujikawa Kenji <magician@kuis.kyoto-u.ac.jp>
Kyoto University
Japan
Ohta Masataka <mohta@necom830.hpcl.titech.ac.jp>
Tokyo Institute of Technology
Japan
Ikeda Katsuo <ikeda@kuis.kyoto-u.ac.jp>
Kyoto University
Japan
Keywords: IP, ATM, PLASMA, IP multicast, RSVP, CSR, QoS, point-to-point link, OLU network.
Asynchronous Transfer Mode (ATM) networks are promising for their broad bandwidth and QoS, and several IP-over-ATM models that support IP multicasting have been proposed and are being developed. However, these models require several kinds of servers, so such a LAN is not autoconfigurable the way an Ethernet or FDDI LAN is. In addition, the IP multicast mechanism of such a LAN is much more complicated than that of Ethernet or FDDI. The need for servers and the complexity of IP multicasting impair the applicability and reliability of a LAN. Use of UNI/NNI (User Network Interface/Network Network Interface) signaling[1] is the main cause of this problem.
We propose a method, named Point-to-point Link Assembly for Simple Multiple Access (PLASMA), which provides a simple and straightforward multicast mechanism in a subnet, such as an IP/ATM subnet. PLASMA also assures the QoS of transport by using the Resource ReSerVation Protocol (RSVP)[2] over IP/ATM networks. For this purpose, we add a feature to the RSVP protocol and introduce Cell Switching Routers (CSRs)[3, 4], which are routers with cell switching fabrics, into IP/ATM LANs.
We briefly review the two kinds of IP multicasting provided by LAN Emulation (LANE)[6] and by the Multicast Address Resolution Server (MARS) model[7], each of which offers an IP multicast mechanism over ATM.
LANE is a method that implements the applications of current LANs over ATM: a LAN based on LANE behaves as if it were an Ethernet or FDDI LAN. In LANE, a LAN is managed by a LAN Emulation Configuration Server (LECS) and a LAN Emulation Server (LES), and broadcasting and multicasting are provided by a Broadcast and Unknown Server (BUS). A host that wants to broadcast or multicast IP packets has to send them to a BUS, and the BUS then transmits the packets to all the hosts in the LAN over point-to-point VCs (Virtual Channels) or a point-to-multipoint VC.
A MARS server is a multicast extension of an ATM-ARP server. In the MARS model, each sender sets up one VC per multicast address; that is, a set of point-to-multipoint VCs from the different senders is dedicated to each multicast address. (There is also an approach that employs a MultiCast Server (MCS)[8]; the discussion of it is almost the same as that of LANE.) A MARS server holds the information on which hosts have joined which IP multicast address and is responsible for notifying the hosts that send data to a multicast address of the receivers' ATM addresses. In addition, a MARS server has to re-send this information to the senders every time it changes. To maintain a set of point-to-multipoint VCs toward all the receivers, every receiver must notify the MARS server of the multicast address it joins, and every sender must keep the information on all the receivers and must send SETUP and ADD PARTY signaling messages.
Thus, multicasting in both LANE and MARS requires several kinds of servers. Such superfluous elements are unfavorable with respect to a LAN's applicability and reliability: autoconfiguration of the LAN becomes almost impossible, its configuration becomes more complicated, initial setup of the servers is required, and many types of bottleneck emerge. For instance, introducing a server like the BUS prevents the ATM cell switching fabric from being used. In such an IP/ATM subnet, QoS can hardly be assured because a VC cannot be assigned to each data flow path.
PLASMA utilizes a layer 2 (L2) label switching architecture, which detects the destination(s) of an L2 data frame from its L2 label and forwards the L2 frame to the destination(s), rewriting the value of the L2 label. In ATM, an L2 frame and its L2 label correspond to a cell and the VPI/VCI value of the cell, respectively. In addition, ATM switches provide hardware-level L2 label switching. For other point-to-point link networks, we are proposing a method that places L2 labels in Point-to-Point Protocol (PPP)[9] frames and a software-level L2 label switching architecture.
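As an illustration, the label-switching behavior just described can be sketched in a few lines. The table layout and all names here are our own assumptions for exposition, not part of any PLASMA or ATM specification.

```python
# Illustrative sketch of L2 label switching: a frame's destination(s) are found
# from its label, and the label is rewritten on each output. In ATM terms, the
# label plays the role of a cell's VPI/VCI value.
from dataclasses import dataclass

@dataclass
class Frame:
    label: int      # in ATM, the VPI/VCI value of the cell
    payload: bytes

class LabelSwitch:
    def __init__(self):
        # (ingress port, ingress label) -> list of (egress port, egress label);
        # more than one entry models a point-to-multipoint branch.
        self.table = {}

    def install(self, in_port, in_label, outputs):
        self.table[(in_port, in_label)] = outputs

    def forward(self, in_port, frame):
        # Yield (egress port, frame) pairs with the label value rewritten.
        for out_port, out_label in self.table.get((in_port, frame.label), []):
            yield out_port, Frame(out_label, frame.payload)
```

Installing two outputs for one ingress label models the multicast branching that PLASMA builds on.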
Data flow paths are created as a result of L2 label switching at the nodes en route. Each node employs the PLASMA Protocol (PLASMAP), which advertises L2 label switching information in a single IP subnet. Here a "node" is defined as an entity that sends and receives PLASMAP messages and sets up L2 label switching fabric. A PLASMA node does not have to be assigned its own identifier for processing PLASMAP unless it is an endpoint of data transportation on layer 2. Therefore, PLASMA enables autoconfiguration of IP subnets; all users have to do is connect PLASMA nodes. For example, in IP/ATM networks based on PLASMA, ATM switches are PLASMA nodes just as ATM hosts are, but the switches are not required to have their own identifiers.
In a PLASMA network, where PLASMAP messages are exchanged, nodes can be connected in any topology. In particular, a PLASMA network may contain loops, which improves network flexibility.
Table 1 shows the key fields of the PLASMAP messages.
Message | Key fields
--------|------------------------------------------------------------------
JOIN    | Join addresses
NOTIFY  | Source address, Flow ID, Hop count, Destination address, Flow spec
ACCEPT  | Source address, Flow ID, L2 label
Nodes create a data flow path, that is, begin to receive and/or send the data, when they are sending NOTIFY messages related to the data and receive the related ACCEPT messages. If a node is a pure receiver, it is not required to receive ACCEPT messages. Each intermediate node uses its L2 label switching fabric to forward the data.
Nodes send PLASMAP messages periodically, and they expire a data flow path after a defined period during which no related PLASMAP messages are received. Data flow paths in PLASMA are therefore "soft-state."
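A minimal sketch of this soft-state behavior, assuming an explicit refresh/expiry table; the hold time and all names are illustrative, not taken from the protocol.

```python
# Soft-state sketch: a data flow path stays alive only while related PLASMAP
# messages keep arriving; it expires after a hold time with no refresh.
class SoftStateTable:
    def __init__(self, hold_time=30.0):
        self.hold_time = hold_time
        self.last_refresh = {}   # flow ID -> time of last related PLASMAP message

    def refresh(self, flow_id, now):
        # Called whenever a related PLASMAP message arrives for this path.
        self.last_refresh[flow_id] = now

    def expire(self, now):
        # Remove and report every data flow path not refreshed within hold_time.
        stale = [f for f, t in self.last_refresh.items()
                 if now - t > self.hold_time]
        for f in stale:
            del self.last_refresh[f]
        return stale
```

No explicit teardown message is needed under this design: a path that stops being advertised simply ages out.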
Each node is required to process NOTIFY, ACCEPT, and JOIN-ALL messages, and is recommended to process JOIN messages. A node that cannot process JOIN messages simply discards any that it receives.
A node that cannot send JOIN messages is required to send at least a JOIN-ALL message to its peers. Seen from another angle, this implies that a node may send a JOIN-ALL message to any peer at any time instead of a JOIN message.
The join addresses placed in a JOIN message to be transmitted from one interface are determined from the join states of the node itself and those at the other interfaces; that is, the join addresses are the merger of the node's own join addresses and the join states at all the other interfaces. If a join state at another interface holds all addresses (meaning that the interface is receiving a JOIN-ALL message), a JOIN-ALL message is sent from the interface instead.
[Figure: a PLASMA network of eight nodes. N1 connects to N2, N3, and N6; N2 connects to N1, N3, N4, and N5; N5 connects to N2, N7, and N8, so that N1, N2, and N3 form a loop. N4 joins Address D, N5 joins D, N6 joins A, N7 joins A and B, and N8 joins B and C. N1 and N2 hold join states of all addresses (*) at the loop interfaces; N2 holds D toward N4 and A, B, C, D toward N5; N5 holds A, B toward N7 and B, C toward N8; N1 holds A toward N6. Legend: N1, N2, ..., N8: node; A, B, C, D: address; *: all addresses; (Join A,B): N7 joins A and B; [N5]AB--: N5 has a join state of A and B at this interface.]
Figure 1: Managing join states.
Figure 1 shows sample join states in a PLASMA network. In this network, for instance, Node N5 joins Address D and is receiving a JOIN message of all addresses (*), a JOIN message of Addresses A and B, and a JOIN message of Addresses B and C from Nodes N2, N7, and N8, respectively. As a result, Node N5 has the join states shown in the figure, each corresponding to one of the received JOIN messages, and is sending a JOIN message of Addresses A, B, C, and D to Node N2 and a JOIN message of all addresses (*) to Nodes N7 and N8.
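The merging rule above can be sketched as follows; the function and variable names are our own illustrative assumptions. Applied to Node N5's situation in Figure 1 (own join D; interface states * from N2, A,B from N7, and B,C from N8), it yields a JOIN of A, B, C, D toward N2 and a JOIN-ALL toward N7 and N8.

```python
# Sketch of JOIN merging: the addresses sent from one interface are the merge
# of the node's own joins and the join states at every *other* interface.
ALL = object()   # sentinel for "all addresses" (*), i.e. a received JOIN-ALL

def outgoing_join(own_joins, iface_states, out_iface):
    """Compute the join addresses for the JOIN message sent from out_iface.

    Returns ALL when a JOIN-ALL message should be sent instead, i.e. when the
    join state at some other interface holds all addresses.
    """
    merged = set(own_joins)
    for iface, state in iface_states.items():
        if iface == out_iface:
            continue             # never reflect an interface's state back to it
        if state is ALL:
            return ALL
        merged |= state
    return merged
```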
There are cases in which some of the join states of all addresses (*) temporarily become join states of Addresses A, B, C, and D; this occurs, for example, if the link between Nodes N1 and N3 breaks and resumes after some period of time. Even in this case, PLASMAP works correctly and in time makes the state converge to the one illustrated in the figure. PLASMAP supports this function very simply: when a node receives the same NOTIFY message from different interfaces, it makes the join states at those interfaces hold all addresses (*). This function avoids loops of redundant JOIN messages.
A node sends an ACCEPT message to the peer from which it received a related NOTIFY message if it either joins the destination address placed in the NOTIFY message or receives a related ACCEPT message from a downstream node.
[Figure: the same network as in Figure 1, with Node N6 sending a NOTIFY message for Address B. (a) Arrows show the NOTIFY message propagating from N6 to N1, onward to N2 and N3, and then to N5, N7, and N8; the copy sent from N1 to N2 is discarded. (b) Arrows show ACCEPT messages returning from N7 and N8 through N5, N2, and N1 to N6.]
Figure 2: Creating data flow path by NOTIFY and ACCEPT messages.
Figure 2(a) shows a sample network, where Nodes N7 and N8 join Address B and N6 sends a NOTIFY message so that it can send data to Address B. The NOTIFY message is finally delivered to Nodes N7 and N8 according to the procedures above; discarding the NOTIFY message sent from N1 to N2 avoids creating a loop of NOTIFY messages. Consequently, the ACCEPT messages are delivered as shown in Figure 2(b), and the data flow path is created along the reverse path of the ACCEPT messages.
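The creation of a data flow path by NOTIFY and ACCEPT messages can be sketched roughly as follows, using a topology like that of the figures. The graph, the names, and the breadth-first ordering are illustrative assumptions; real PLASMAP messages also carry flow specs and L2 labels, which are omitted here.

```python
# Rough sketch of Figure 2: a NOTIFY message floods away from the sender,
# duplicates arriving over a second interface are discarded (avoiding loops),
# and ACCEPT messages retrace the NOTIFY path to create the data flow path.
def flood_notify(links, src):
    """Breadth-first NOTIFY propagation; returns each node's upstream peer."""
    upstream = {src: None}
    frontier = [src]
    while frontier:
        nxt = []
        for node in frontier:
            for peer in links[node]:
                if peer in upstream:
                    continue     # duplicate NOTIFY: discard it
                upstream[peer] = node
                nxt.append(peer)
        frontier = nxt
    return upstream

def accept_path(upstream, receiver):
    """Reverse path of ACCEPT messages from a receiver back to the sender."""
    path = [receiver]
    while upstream[path[-1]] is not None:
        path.append(upstream[path[-1]])
    return path
```

With N6 as the sender, the ACCEPT path computed for receiver N7 runs back through N5, N2, and N1, matching the reverse-path construction described above.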
In IP/ATM networks based on PLASMA, IP unicasting and multicasting are simply implemented by using IPv4 (or IPv6) unicast and multicast addresses as PLASMA addresses. Servers such as MARS, LECS, LES, and BUS are not required. Therefore, PLASMA enables straightforward IP multicasting in IP/ATM networks without any servers. In addition, an IP/ATM LAN can be autoconfigured, since the ATM switches do not need their own identifiers.
In PLASMA with RSVP, QoS-specified transportation is implemented by using an independent data flow path for each service, while non-QoS-specified (i.e., best-effort) transportation is supported with a shared data flow path. Thus, PLASMA assigns an independent data flow path to each RSVP flow. The nodes along the path allocate a queue for the data flow, distinguish the data by its L2 label, and place the data in the dedicated queue.
Each RSVP sender transmits an RSVP PATH message, which is transferred via a non-QoS-specified data flow path, placing the flow ID of a PLASMA data flow path in the LIH (Logical Interface Handle) field. From the LIH field in the PATH message, a router learns that the ingress RSVP flow corresponds to a particular ingress data flow path. The router thereby learns the correspondence between the ingress and egress data flow paths, since it already knows the correspondence among the ingress PATH message, the egress PATH message, and the egress data flow path.
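Assuming a simple per-session table (all names here are hypothetical), the correspondence a router derives from the LIH field might be kept as follows:

```python
# Sketch of the LIH-based association: a router records the ingress flow ID
# carried in a received PATH message's LIH field, and the egress flow ID it
# used when forwarding that PATH message, binding the two per RSVP session.
class LIHBinder:
    def __init__(self):
        self.ingress_of = {}   # session -> ingress flow ID (from the PATH LIH)
        self.egress_of = {}    # session -> egress flow ID (local knowledge)

    def on_path_received(self, session, lih_flow_id):
        self.ingress_of[session] = lih_flow_id

    def on_path_forwarded(self, session, egress_flow_id):
        self.egress_of[session] = egress_flow_id

    def binding(self, session):
        # The ingress/egress data flow paths now associated with the session.
        return self.ingress_of.get(session), self.egress_of.get(session)
```

Once both entries exist for a session, the router can splice the ingress and egress data flow paths for that RSVP flow.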
Cell switching routers (CSRs), proposed in [3, 4], are routers that can forward data by cell switching as well as by packet forwarding. Since this function equals that of our PLASMA routers in IP/ATM networks, we introduce CSRs into IP/ATM networks based on PLASMA and add the LIH extension to the CSR's features.
[Figure: the OLU network, a ring of IP routers [R] at Waseda Univ., Osaka Univ., Nara Inst. of Sci. & Tech., Fujitsu, Kobe Univ., Univ. of Tokyo, Kyoto Univ., Univ. of Elec.-Comm., Kyushu Inst. of Tech., NTT, Hiroshima Univ., NEC, Tokyo Inst. of Tech., Nagoya Univ., Tohoku Univ., and Hiroshima City Univ., with additional cut-through links between nonadjacent sites.]
Figure 3: OLU network.
The topology of the OLU network is basically a ring: each node is connected to its two adjacent nodes, and some nodes have extra connections to nonadjacent nodes in the ring, creating cut-through paths. Figure 3 shows only the 15 nodes that run an IP router.
We have been developing three types of PLASMA nodes for IP/ATM: an ATM host, an ATM switch, and a CSR, all of which employ PLASMAP for establishing VCs. Both a PLASMA ATM switch and a PLASMA CSR consist of an ATM switch and an ATM host that controls the switch. Each PLASMA CSR runs the routing daemons gated and mrouted.
PLASMA ATM hosts, switches, and CSRs have been introduced at some of the nodes. The ATM hosts can use IP multicasting as well as IP unicasting without any servers, so all of the current IP services are available in the OLU network. In addition, PLASMA allows autoconfiguration of the LANs, since the ATM hosts do not have to be preconfigured.
In the OLU network, hosts can obtain an independent VC per service using RSVP, whether the traffic is intra-subnet or inter-subnet. The OLU network also demonstrates how to set up VCs automatically in a VP exchange environment. We are also conducting an experiment in which end-to-end MPEG2 video streams of more than 6 Mbps are transmitted via RSVP flows through several CSRs. This will enable QoS-assured video conference systems of higher quality.
We proposed PLASMA, which provides straightforward IP multicasting and autoconfiguration of an IP subnet in an environment where the network is constructed from point-to-point links. PLASMA can easily be applied to IP/ATM networks, draws the best performance from ATM's cell switching fabric, and also assures QoS using RSVP. In particular, for enabling QoS assurance across subnet boundaries, we proposed an RSVP LIH extension method. Finally, we presented ongoing experiments over the OLU network. IP/ATM networks based on PLASMA are suitable for future advanced applications, supporting both IP multicasting and QoS assurance.