MirNET: Linking Russian and American Science using Next Generation Networking Technologies

 

Valerii A. Vasenin (vasenin@msu.ru)
Moscow State University
Moscow, Russia

Steve Goldstein (sgoldste@nsf.gov)
National Science Foundation
USA

Alexei P. Platonov (plat@ripn.net)
Russian Institute of Public Networking
Moscow, Russia

Ms. Natasha Bulashova (natasha@alice.ibpm.serpukhov.su)
Friends and Partners Foundation
Moscow, Russia

Konstantin Scherbatyhk (kent@msu.ru)
Moscow State University
Moscow, Russia

Michael Kulagin (ql@ras.ru)
Russian Academy of Science
Moscow, Russia

Joe Gipson (joe-gipson@utk.edu)
University of Tennessee
USA

Greg Cole (gcole@solar.rtd.utk.edu)
University of Tennessee
USA

 

 


ABSTRACT

MirNET is a new high performance network infrastructure, sponsored by the US National Science Foundation and the Russian Ministry of Science and Technology, to support collaborative applications between the US and Russian scientific and educational communities. MirNET is to begin operation in March 1999 as a 6 Mbps ATM service between the US vBNS (terminating at STAR TAP) and the Russian High Performance Network (terminating in St. Petersburg and Moscow). Telecommunications transport services are provided by Teleglobe and the Russian provider Rascom.

The MirNET initiative is managed by the MirNET Consortium, a group of US and Russian higher education and non-profit organizations dedicated to creating and managing the MirNET infrastructure and services. Consortium leadership on the US side is provided by the University of Tennessee, Knoxville, and on the Russian side by Moscow State University. Key Russian consortium partners include the Russian Institute for Public Networking, the US-Russian Friends and Partners Foundation, and the Russian Academy of Sciences.

This paper describes the development of MirNET, its services, and plans for its continued development. In the course of doing so, it describes the state of Russian Internet development generally, and the emerging high performance network specifically. It provides an overview of the National Science Foundation’s High Performance International Internet Services (HPIIS) program, the Science, Technology and Research Transit Access Point (STAR TAP), and current developments in international high performance networking. The paper also provides a detailed technical description of the MirNET telecommunications link, the management structure for the project, and the services provided by the MirNET team.

 

  I. INTRODUCTION

    MirNET is a joint US-Russian project to provide next generation Internet services to collaborating US and Russian scientists and educators. The project is jointly funded by the US National Science Foundation and the Ministry of Science and Technology of the Russian Federation. A $6.5 million five-year budget initially provides for a terrestrial six (6) Mbps ATM service between Moscow/St. Petersburg and the STAR TAP facility in Chicago. Over the course of the five-year project, additional funding and an expected decrease in the cost of telecommunications capacity will yield greater capacity, providing for a larger and richer set of collaborative possibilities. Applications will include data visualization, remote instrumentation and control, medical imaging, distance learning, telemedicine, and high quality multi-point audio/video conferencing.

    The establishment of MirNET represents an important enhancement of the basic infrastructure supporting mutually beneficial cooperative endeavors between the US and Russian scientific communities. It leverages the enormous investment in the R&E establishments of both nations by more effectively linking the numerous, diverse, and important collaborative projects involving Russian and US researchers, covering almost all areas of scientific pursuit. These areas encompass everything from high-energy physics and the control of nuclear materials to geological studies of the earth’s crust and environmental engineering. Given the high caliber of both scientific programs and the importance of increased cooperation, the establishment of MirNET represents an important goal.

    MirNET’s primary focus will be on the delivery of enhanced international network services for Russia’s high performance networks and on enabling US researchers’ access to them. It will do so by connecting the emerging ATM cloud in the Moscow and St. Petersburg areas to the US high performance network infrastructure via the STAR TAP switch in Chicago. The project’s broad objectives include:

    Supporting Established R&E Collaborations - MirNET will provide essential high performance network services to support significant cooperative projects in research and education between authorized MirNET institutions in Russia and their R&E partners on the U.S. high-performance network infrastructure.

    Fostering New Opportunities for R&E Collaboration - MirNET’s organization will work to identify, support, and help develop new and innovative research and education partnerships between Russia and the U.S., based on the power of MirNET’s leading edge network services and next generation collaborative applications to support such partnership opportunities.

    Facilitating Development of Advanced Services and Applications - MirNET will help facilitate the research, development, and implementation of advanced network services and collaborative applications.

    While the first stage of the project is to provision and test a six (6) Mbps ATM link between the STAR TAP international switch and related facilities in Moscow and St. Petersburg and to ensure its stable operation, the goal of the project is to increase the capacity of the initial link to support high performance networking applications requiring tens of Mbps for a growing community of Russian and US researchers. Success will be measured by the rapid growth in total capacity, the performance and stability of the link, the number of research programs served (including the addition of Russian academic centers outside of Moscow and St. Petersburg), the quality of the research being supported, and the satisfaction of users with the link's ability to meet application needs.

    To ensure realization of its goals, the US-Russian MirNET team has put into place the necessary investigative, engineering, and staff support; the advisory structures for policy development, fund raising, and decision support; the appropriate management systems and procedures to handle troubleshooting, scheduling, and authorization issues; and communications and information services for advising interested parties of MirNET’s status, applications and growth.

    The trans-Atlantic portion of the MirNET link, provided by Teleglobe, Inc., is a terrestrial 6 megabit per second ATM service from STAR TAP through New York to Blaajberg, Denmark. The Russian telecommunications provider Rascom provides service onward to St. Petersburg, where the link connects to sites in the area and also uses an existing ATM OC-3 link to interconnect with the M9 international switching facility in Moscow.

    A permanent virtual path (PVP) is established between the MirNET switch in Chicago (at the Teleglobe Excel POP) and the MirNET ATM switches in St. Petersburg and Moscow. MirNET routers are located at the three MirNET POPs (Chicago, St. Petersburg and Moscow). These policy routers exchange routing information with the gateway routers of other authorized networks, such as vBNS, NASA and ESnet, to ensure that only HPIIS-authorized traffic traverses the link.

    IP connectivity will be provided to all HPIIS-authorized institutions over a PVC totaling 2 Mbps. This 2 Mbps will be shared between St. Petersburg and Moscow. While the remaining part of the link is not in use, this PVC will be able to use the available bandwidth. MirNET will also provide native end-to-end ATM connections by working with the ATM backbone providers to schedule the path. Since many of the sites in Russia will initially have only IP connections to MirNET, IP tunneling will be used to route the traffic within the public networks of Russia. MirNET will also provide multicast routing for the MBone. MirNET will work with current efforts underway to test, understand, develop and implement IPv6, RSVP and QoS.
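    To illustrate the IP tunneling idea mentioned above, the following is a minimal sketch (in Python) of IP-in-IP encapsulation, in which an inner IP datagram is wrapped in an outer IPv4 header (protocol number 4) addressed to the tunnel endpoints. The addresses and the bare inner header are hypothetical and purely illustrative; an operational tunnel would be configured on the routers themselves rather than built by hand.

        import socket
        import struct

        def ip_checksum(header: bytes) -> int:
            # Standard Internet checksum: one's-complement sum of 16-bit words.
            if len(header) % 2:
                header += b"\x00"
            total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
            while total >> 16:
                total = (total & 0xFFFF) + (total >> 16)
            return ~total & 0xFFFF

        def ipv4_header(src: str, dst: str, payload_len: int, proto: int, ttl: int = 64) -> bytes:
            # Build a 20-byte IPv4 header (no options) with a correct checksum.
            ver_ihl = (4 << 4) | 5                   # version 4, header length 5 * 4 = 20 bytes
            fields = [ver_ihl, 0, 20 + payload_len, 0, 0, ttl, proto, 0,
                      socket.inet_aton(src), socket.inet_aton(dst)]
            raw = struct.pack("!BBHHHBBH4s4s", *fields)
            fields[7] = ip_checksum(raw)             # fill in the header checksum
            return struct.pack("!BBHHHBBH4s4s", *fields)

        def ip_in_ip(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
            # Encapsulate a complete inner IP datagram; protocol 4 marks IP-in-IP.
            return ipv4_header(tunnel_src, tunnel_dst, len(inner_packet), proto=4) + inner_packet

        # Hypothetical example: a bare inner header (no payload) carried between
        # two tunnel endpoints across the public networks.
        inner = ipv4_header("10.1.1.1", "10.2.2.2", payload_len=0, proto=17)
        outer = ip_in_ip(inner, tunnel_src="192.0.2.1", tunnel_dst="198.51.100.1")
        print(len(outer), "bytes:", outer.hex())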

    The proposed link is jointly managed by the 24x7 MirNET Network Operations Centers at the University of Tennessee (UT), Moscow State University (MSU) and RIPN, with assistance from Teleglobe. The University of Tennessee and MSU will serve as the points of contact for the users in each country. UT will also take on the responsibility of serving as the contact point for other HPIIS-authorized institutions outside the United States, except those in Russia. All usage, performance and availability data for the proposed link will be published in each country via the WWW and/or LISTSERV discussion groups on a regular basis (daily reports, monthly summaries and yearly evaluations). In addition, UT will leverage existing network research activities at the National Laboratory for Applied Network Research (NLANR) for the engineering, monitoring and performance measurement of the proposed link. Close cooperation with other HPIIS initiatives is helpful in managing a robust network.

    The development of high performance network applications in the US and Russia is still new, as is the experience of US and Russian researchers in utilizing these capabilities for collaborative projects. The establishment of MirNET would not be possible without enormous personal and institutional energy and investment. However, the promise of MirNET, the new and enhanced collaborative efforts between our scientific communities, represents a clear and important goal.

    The key to the project's development thus far lies in the diverse strengths of the MirNET consortium: the high performance networking (ATM) experience of the University of Tennessee and the Oak Ridge National Laboratory; the networking activities (across Russia) of the Russian Institute of Public Networking, Moscow State University and the VUZTelecom Center in St. Petersburg; and the successful U.S.-Russian Friends & Partners initiative, representing over five years of close cooperation fostering and supporting over 100 US-Russian exchange projects and building many important relationships among the academic and scientific communities. This combination provides a solid foundation of networking expertise, organizational strengths and a proven track record of US-Russian cooperative effort. Such a foundation is essential to building and advancing MirNET's advanced high performance network infrastructure and applications.

     

     

  II. HPIIS PROGRAM/STAR TAP

 

Starting in the late 1980s, the National Science Foundation (NSF) NSFNET Program established connections with national research networks (NRNs) of other countries. The international activity was formalized in 1991 with the award of NSF's International Connections Management (ICM) Project to Sprint. Between 1991 and 1997, ICM helped to "wire the world" by assisting about 25 countries with connections to the NSFNET. In 1993 NSF cooperated with the International Science Foundation to implement a general-purpose science and education connection, a 64 kbps link between Moscow and the U.S., including the NSFNET. As another example, a communications satellite teleport that the ICM project set up in Homestead, Florida still serves as a point of entry for several countries in Latin America. NSF decommissioned the NSFNET in 1995 and started the "very High-Performance Backbone Network Service" (vBNS, http://www.vbns.net ) in partnership with MCI.

Since 1995, primary NSF focus has been on high-performance networking, and the international focus has turned to the establishment of a persistent point for high-performance networks from around the world to interconnect. In April, 1997 NSF made an award to the University of Illinois at Chicago for the Science, Technology and Research Transit Access point (STAR TAP, http://www.startap.net ) for that purpose.

 

 

Canada's CA*net had connected to the vBNS through STAR TAP in January, 1997, even before STAR TAP was officially established. Also, NSF issued a solicitation for "High-Performance International Internet Services" (HPIIS) in 1997 to provide cost-sharing with consortia of NRNs for connections to the vBNS and other Next Generation Internet (NGI, http://www.ngi.gov ) networks via the STAR TAP. The first HPIIS award was made in June, 1998 to the University of Tennessee on behalf of the MirNET consortium (http://www.MirNET.org). The second HPIIS award was made in August, 1998 to Indiana University on behalf of TransPAC (http://www.transpac.org), a consortium representing Asia-Pacific Advanced Networks (http://www.apan.net ).

In addition to Canada, Singapore connected its advanced network, SingaREN (http://www.singaren.net.sg/), to the STAR TAP in November, 1997. TransPAC connected in September, 1998, Taiwan's TAnet2 in October, 1998, and we expect the MirNET connection to be made in February, 1999. TransPAC, by the way, includes Korea, Japan, Australia, and a second connection from Singapore. By April or May of this year, if not sooner, we expect NORDUnet (http://www.nordu.net), France's Renater2 (http://www.renater.fr/), the Netherlands' SURFnet (http://www.surfnet.nl/), Israel's new networking initiative, yet to be named, and CERN (http://www.cern.ch) to have connected to STAR TAP. NORDUnet serves the five Nordic countries: Denmark, Iceland, Finland, Norway and Sweden. So, soon we shall have about 20 advanced networks connected at STAR TAP: about 15 international connections plus four NGI networks and the new Internet2/Abilene network (http://www.internet2.edu/abilene/).

STAR TAP is implemented on a commercial ATM switch in downtown Chicago operated by Ameritech Advanced Data Services (AADS, http://nap.aads.net/main.html). Many commercial ISPs also use the same switch. Therefore, any high-performance network that connects to the STAR TAP can also direct "commodity" Internet traffic to the commercial ISP of its choice by means of a separate Permanent Virtual Circuit (PVC) set up in the switch by AADS. In fact, STAR TAP itself is intentionally free of any acceptable use policy (AUP), and all peering between connecting networks is by mutual bi-lateral consent and over a separate PVC between them. This means, for example, that if Israel and Singapore desire to exchange traffic between their advanced networks through the STAR TAP, they make their own agreement to do so, with no need to consult STAR TAP other than for assistance in identifying responsible officials in each network and requesting that the PVC be implemented by AADS.

  

 

  

Both the STAR TAP, from the international point of view, and the Mid-West NGI Exchange Point (NGIX, the term that refers to the use of the AADS switch to connect the NGI networks) from a U.S. domestic point of view serve the vision of the STAR TAP Principal Investigators Tom DeFanti and Maxine Brown for a "persistent infrastructure to facilitate the long-term interconnection and interoperability of advanced networking in support of applications, performance measuring, and technology evaluations."

 

III. US HIGH PERFORMANCE INTERNET INFRASTRUCTURE

There are several high speed networking initiatives currently operating within the United States. Most of them have been funded and managed by the Federal Government. The Department of Energy (DOE) started passing data on its Energy Sciences Network (ESnet) in January 1988. In 1994 DOE implemented an ATM DS-3 backbone. The network currently consists of OC-3 backbone links, with OC-12 links being installed at this time. It is primarily used to interconnect DOE facilities across the nation (www.es.net). MirNET will be able to interconnect with ESnet at STAR TAP; both IP and ATM connections will be possible there.

 

The National Aeronautics and Space Administration (NASA) maintains an ATM network, NREN (www.nren.nasa.gov), that connects its sites around the world. This network is also built around an ATM OC-3 backbone and peers at the STAR TAP NAP via an OC-3 ATM connection. Therefore, MirNET sites will be able to connect to NASA NREN sites via IP or ATM connections.

 

In the mid-1990s the National Science Foundation (NSF) created the very high performance Backbone Network Service (vBNS) in conjunction with MCI. This network was designed for the scientific and research communities and originally provided high speed interconnection among NSF's supercomputing centers and connections to NSF-specified Network Access Points. Today the vBNS connects supercomputing centers, universities and NAPs. It is built on an ATM OC-12 backbone and recently added an OC-48 link. It interconnects with STAR TAP via an ATM connection, which will allow MirNET users to reach vBNS sites with IP or native ATM connections.

All of these networks interconnect at STAR TAP as shown below.

[Figure: ESnet, NASA NREN, and vBNS interconnecting at STAR TAP]

IV. RUSSIAN HIGH PERFORMANCE NETWORK INFRASTRUCTURE

The construction and development of a high performance network infrastructure in Russia has been underway since 1996 within the framework of the interdepartmental (spanning several state agencies) program "Creating the National Computer Telecommunications Network for Science and Higher Education." The initiators and participants of this program were the Ministry of Science and Technology, the Ministry of Education, the Russian Academy of Sciences (RAS), the Russian Foundation for Basic Research and the State Committee on Communications and Information.

One of the aims of this program was to create the backbone infrastructure for The National Scientific and Educational Network. This network acts to consolidate many efforts within Russia and will be the basis for the development of information resources of science and higher education.

Another aim of this program was to construct and develop the high performance network infrastructure itself. This network allows experimentation with, and mastery of, a new generation of telecommunication and information technologies, as well as remote supercomputing applications. The first project in this direction was carried out in 1996-98: an experimental ATM test bed at Moscow State University (MSUnet). MSUnet used ATM to deliver high speed Internet connectivity.

The ATM based network built in 1996 was studied in great detail. Every aspect of the new technology was scrutinized in a heterogeneous network. This network was built with the latest technology from the largest vendors in networking. The vBNS was being designed and built on the same technologies in a similar timeframe. The results of this work, along with similar work around the world, have validated these designs. It has also allowed MSU to be on the cutting edge with these technologies.

The backbone infrastructure of the experimental MSU ATM network is based on a fiber optic cable plant that exceeds 50 km in length. This cable plant links three main communication nodes in different regions of Moscow: the MSU campus on Vorobjovy Hills, M-9 (the Moscow international telephone exchange M9), and the MSU campus on Mokhovaja Street. This backbone network has connections with all of the significant Russian ATM projects: the ATM kernel of the Moscow telecommunication corporation Comcor's network; the ATM segment of the Southern Moscow Backbone (SMB); and the Steklov Mathematics Institute and RAS Presidium. Comcor is the largest SDH network in Moscow. The experimental MSUnet ATM segment is linked to the St. Petersburg ATM network via fiber optics to the Northern Moscow Backbone (NMB) and the RBnet Moscow-St. Petersburg ATM channel. The RBnet infrastructure will also be used for connecting Moscow to the emerging ATM activities in Ekaterinburg, Novosibirsk and Samara.

  

 

The newly built high speed network at MSU, with the ATM testbed as its backbone kernel, has allowed the university, since early 1997, to study, test, and introduce new generation information technologies and the integrated network services built on them into its education and research processes. These include research using real-time audio and video applications (including distributed multicasting), multimedia databases for educational purposes, and telemedicine tasks. Similar studies are conducted in a number of scientific centers of the Russian Academy of Sciences.

  

V. MirNET TECHNICAL INFRASTRUCTURE

Channel level structure

The MirNET connection between STAR TAP and Russia will be provided by Teleglobe. A DS-3 (45 Mbps) runs from STAR TAP to the Teleglobe Excel POP, where it connects to the MirNET ATM switch. That switch in turn connects over a Teleglobe DS-3 to an ATM network reaching New York. From New York the link is carried over three E1s to St. Petersburg, where the signal is inverse-multiplexed (IMUXed) back onto a DS-3 that connects to a Metrocom ATM switch.

The ATM connections of MirNET will be regulated by the MirNET switches in Chicago, St. Petersburg and Moscow. A PVP will be established between Chicago and each of the two Russian switches. The terminating switches will perform outgoing traffic shaping to the 6 Mbps bandwidth to help ensure maximum performance across the intermediate channel.
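The figures behind the channel design can be checked with a short calculation. The sketch below (Python, assuming the standard E1, DS-3 and ATM cell sizes) shows roughly what three inverse-multiplexed E1s yield, what cell rate corresponds to shaping at 6 Mbps, and how much payload throughput remains once the 5-byte ATM cell header is accounted for. The exact numbers depend on framing and AAL5 overhead, so these are estimates rather than engineered values.

    # Rough capacity arithmetic for the MirNET channel, assuming standard rates:
    # E1 = 2.048 Mbps, DS-3 = 44.736 Mbps, ATM cell = 53 bytes (48 payload + 5 header).
    E1_MBPS = 2.048
    DS3_MBPS = 44.736
    CELL_BITS = 53 * 8
    PAYLOAD_BITS = 48 * 8

    # Three E1s inverse-multiplexed between New York and St. Petersburg.
    imux_capacity = 3 * E1_MBPS                   # ~6.14 Mbps, the "6 Mbps" service
    print(f"3 x E1         : {imux_capacity:.2f} Mbps (carried on {DS3_MBPS} Mbps DS-3 tails)")

    # Shaping the virtual path to 6 Mbps corresponds to a peak cell rate (PCR) of:
    shaped_mbps = 6.0
    pcr = shaped_mbps * 1_000_000 / CELL_BITS     # cells per second
    print(f"PCR at 6 Mbps  : {pcr:,.0f} cells/s")

    # Only 48 of each cell's 53 bytes carry payload, so the usable rate is:
    usable_mbps = pcr * PAYLOAD_BITS / 1_000_000  # ~5.4 Mbps before AAL5/IP headers
    print(f"Usable payload : {usable_mbps:.2f} Mbps (AAL5 trailers and IP headers reduce this further)")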

 

IP

The MirNET routers will be co-located with the ATM switches in Chicago, St. Petersburg and Moscow. These routers will enforce MirNET routing policy. These routers will also act to connect sites that currently do not have native ATM connectivity.

 

VI. MirNET ORGANIZATIONAL INFRASTRUCTURE

 

 MirNET Consortium Membership

The MirNET initiative is based on many collaborative ties and relationships between Russian and American partner organizations. The fact that one of the key organizations in this effort is a jointly managed US-Russian organization focused on Internet-related networking has made the task of integrating other consortium members into a productive organization much simpler. The following are the members of the MirNET Consortium:

• The University of Tennessee, Knoxville

• Telecommunications and Network Services (TNS)

• Center for International Networking Initiatives (CINI)

• Moscow State University (MSU)

• Russian Institute for Public Networks (RIPN)

• Friends and Partners Foundation, Moscow

• Russian Academy of Sciences

• VUZTelecom Center of St. Petersburg

As a charter member of the Internet2 Consortium and recipient of an NSF vBNS connection grant, the University of Tennessee, Knoxville, is an active participant in the development and use of advanced network information technologies. The "Friends and Partners" initiative has been active for over five years in using the Internet as a means of developing networking between individuals and organizations in the US and Russia. Its US-Russian activities have been supported by such agencies as the Ford Foundation, the International Science Foundation, the US State Department, NATO, Sun Microsystems, and others. Its base within UT’s Center for International Networking Initiatives and its close ties with Telecommunications and Network Services give the US team within the University of Tennessee both advanced network experience and extensive experience with US-Russian network applications and organizations.

In addition to being one of Europe’s leading educational institutions (ranked #2 in the Gourman Report, National Education Standards, USA), Moscow State University has been one of the original and key players in the development of the Russian Internet. The Russian Institute for Public Networking coordinates Internet networking across Russia, working with 800 organizations in over 100 Russian cities to further academic and public networking. It manages RBnet, a primary Russian academic and scientific network. The Friends and Partners Foundation manages all "Friends and Partners" project activity within Russia and works closely with UT's Center for International Networking Initiatives on using modern Internet-based services to bridge the many barriers (language, cultural, political, commercial) between US and Russian individuals and organizations. The VUZTelecom Center of St. Petersburg is responsible for the operation, maintenance, and development of the Russian federal university network RUNNet. The VUZTelecom Center has been a major player in the development of high performance networking in Russia. The Russian Academy of Sciences is . . .

 

MirNET Consortium Management

The administration and support requirements for MirNET present many challenges, including the overall management and rapid growth of a project between two countries with myriad language and cultural differences, with additional complications from the joint funding arrangement for the project. Others include the research and engineering challenges of developing high performance networking and the "bleeding edge" nature of the technologies and applications being supported, made more difficult by the initial low link capacity (6 Mbps), for which there is pressure to accommodate high demand applications. The MirNET project itself is changing rapidly. MirNET establishes a foundation with the current funding base, with the goal of expanding quickly to increase capacity and infrastructure to support collaborative research efforts requiring total capacity in the tens of Mbps.

The combination of these challenges requires a very determined commitment, heavy institutional investment in addition to the NSF and Ministry of Science funding, and a very capable, compatible, flexible team with a proven ability to work cooperatively across the many barriers presented to any US-Russian activity. An important aspect of the MirNET Consortium is its experience of over five years of working together on network-related US-Russian cooperative initiatives. The Friends and Partners project and organization represent over five years of cooperation in an atmosphere of shared and mutual respect, with a long history of shared decision making and shared management of projects funded by such agencies as NATO, the Ford Foundation, the US State Department, Sun Microsystems, and others. This experience of working together, coupled with the advanced network experience of both teams (and the years of experience managing Internet development in Russia), provides the foundation on which the management of the project is built.

The following diagram illustrates the organizational scheme of the MirNET Consortium.

The following describes the organizational structure.

First, the bottom of the diagram depicts the user applications upon which the MirNET project is built. Network services are to be scheduled for user applications via a scheduling mechanism comprised of a relational database system and a set of rules governing the prioritization of applications requiring classes of network service. The scheduling mechanism is integrated with a user/application registration/approval procedure. While this system, currently being implemented, is also rules based, intervention by MirNET management is required to initially approve institutions and applications for use of MirNET.

The next level up in the diagram is the MirNET network itself.

The top of the diagram illustrates a Senior Advisory Board, currently being established, which will include representatives of various government organizations with an interest in the MirNET link, various scientific associations, higher education representatives, and representatives from the telecommunications and computing industries with an interest in high performance network applications. This is to be a high level advisory group which will make recommendations and provide guidance to the central MirNET management team.

The management team itself is made up of representatives from the primary organizations in the US and Russia. In addition to managing and developing MirNET services, the MirNET team is responsible for the funding provided by the US National Science Foundation and the Russian Ministry of Science and Technology and for related sponsor reporting requirements.

The management team oversees both US and Russian operations, including the Network Operations Centers (NOC), Network Information Centers (NIC), and User Services. Engineering Advisory Boards will be established in Russia and in the US and will provide guidance and support for network operations. While two separate Advisory Boards are to be established, there is to be continual communication between the members of each group.

The management team also provides direction for development efforts, including the HPIIS and STAR TAP information services provided through the Network Information Centers in the US and in Russia, publicity and public relations about the project, federal and corporate development, responsibility for the annual MirNET/HPIIS meeting, and fund raising activities. The MirNET management team is also responsible for establishing policy and for managing the central applications scheduling system. The management team is responsible for developing a sustainable financial plan to maintain MirNET services after the term of the NSF/Russian MinSci project is complete.

 

 VII. MirNET SERVICES

The MirNET organization will provide fundamental network monitoring and troubleshooting. This network monitoring will be performed using HP OpenView, CiscoWorks, Cisco NetFlow and OC3MON. These statistics will be gathered and analyzed by the MirNET team and will be available on a near real-time basis on the MirNET web page.

MirNET will also track troubles with a trouble ticketing package. The Remedy ARS system will be used to track problem reports. The system will be hosted by the University of Tennessee NOC, with complete access available to the MSU NOC and the RBnet NOC. This system will track and categorize all trouble reports. All adds, moves and changes will also be tracked by the Remedy system.

ATM Services

Native ATM service (layer 2) will be provided by predefined PVCs (with UBR service unless required otherwise by the application/researchers) over the 6 Mbps ATM virtual path between STAR TAP and RBnet. ATM PNNI-1 (private network-to-network interface) signaling will be used for topology management initially (only known vBNS or HPIIS authorized ATM routes will be defined). ATM PNNI-2 signaling will be implemented when this service is available from STAR TAP, either by obtaining an ATM prefix from network providers in the US or Russia or from the appropriate ATM addressing authority for Russia. Layer 2 services that cannot be accommodated by predefined PVCs will be scheduled with the MirNET NOCs, which will coordinate with vBNS and/or STAR TAP in allocating the required network resources.

IP and Internet Services

Layer 3 IP connections will be achieved by peering the MirNET policy router in Russia with the vBNS router in Chicago. The PVC used by the peering routers will be defined for UBR service, which means that layer 3 services will be able to use as much bandwidth as possible without the involvement of the MirNET NOC. Through Cisco route-map/policy-based routing mechanisms, packet filtering and physical network connection administration, MirNET will enforce the rule of only allowing traffic between HPIIS-authorized institutions and vBNS-authorized institutions to transit the MirNET network. Policy-based routing will be defined on the policy router at RBnet so that only authorized institutions in Russia are routed to STAR TAP and, vice versa, only traffic destined for authorized institutions is permitted to transit the RBnet policy router. It is expected that initial IPv6 and IP multicast will be implemented via IP-in-IP tunneling. Experiments with these protocols, as well as with RSVP and QoS, will be part of the advanced network services offered on the proposed link.
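As an informal illustration of this policy, the sketch below (Python, using the standard ipaddress module) mimics what a prefix-based route-map or packet filter does: a flow is permitted across the link only if one endpoint falls within an HPIIS-authorized prefix and the other within a vBNS-authorized prefix. The prefixes shown are placeholders, not the actual HPIIS or vBNS address blocks.

    import ipaddress

    # Hypothetical authorized prefixes; the real lists would come from the HPIIS
    # and vBNS registries and be expressed as router prefix-lists / route-maps.
    HPIIS_AUTHORIZED = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/24")]
    VBNS_AUTHORIZED = [ipaddress.ip_network(p) for p in ("203.0.113.0/24",)]

    def in_any(addr: str, networks) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in networks)

    def permit_transit(src: str, dst: str) -> bool:
        # Permit a flow only between an HPIIS-authorized institution and a
        # vBNS-authorized institution, in either direction.
        return (in_any(src, HPIIS_AUTHORIZED) and in_any(dst, VBNS_AUTHORIZED)) or \
               (in_any(src, VBNS_AUTHORIZED) and in_any(dst, HPIIS_AUTHORIZED))

    # Example checks with placeholder addresses:
    print(permit_transit("192.0.2.10", "203.0.113.5"))   # True  - authorized to authorized
    print(permit_transit("192.0.2.10", "8.8.8.8"))       # False - destination not authorized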

Monitoring and Performance Analysis

The implementation of monitoring and performance analysis will largely draw on UT's experience managing a large campus and wide area network. UT is an active participant in the vBNS network as well as one of the founding Internet2 members. As such, MirNET will utilize and install existing tools such as OC3MON, developed by the Measurement and Operations Analysis Team (MOAT) of NLANR, at STAR TAP and on the RBnet network.

Router interface statistics, as well as cell rates on ATM switch ports, will be reported on a regular basis via WWW and email to user groups maintained by MirNET. This will be extremely valuable information for application developers as well as researchers in understanding the behavior of their applications.
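A minimal sketch of how such a periodic report might be produced is shown below: given two readings of the standard MIB-II octet counters (ifInOctets/ifOutOctets) taken a fixed interval apart, it computes the average utilization of the 6 Mbps link in each direction. The counter values here are invented for illustration; the real data would come from the SNMP and OC3MON collection described above.

    # Hypothetical interface counter samples (ifInOctets, ifOutOctets), taken 300 s apart.
    LINK_MBPS = 6.0
    INTERVAL_S = 300

    sample_t0 = {"ifInOctets": 1_250_000_000, "ifOutOctets": 900_000_000}
    sample_t1 = {"ifInOctets": 1_330_000_000, "ifOutOctets": 1_010_000_000}

    def utilization(t0: dict, t1: dict, interval_s: int, link_mbps: float) -> dict:
        # Average bit rate and utilization per direction over the sampling interval.
        report = {}
        for key, label in (("ifInOctets", "inbound"), ("ifOutOctets", "outbound")):
            delta_bits = (t1[key] - t0[key]) * 8          # counters are in octets
            mbps = delta_bits / interval_s / 1_000_000
            report[label] = {"mbps": round(mbps, 2),
                             "percent": round(100 * mbps / link_mbps, 1)}
        return report

    print(utilization(sample_t0, sample_t1, INTERVAL_S, LINK_MBPS))
    # e.g. {'inbound': {'mbps': 2.13, 'percent': 35.6}, 'outbound': {'mbps': 2.93, 'percent': 48.9}}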

Advanced Network Services

MirNET fully supports research into advanced network services by HPIIS- and vBNS-authorized institutions on the proposed link. The following advanced network services are to be offered on the proposed connection:

MirNET will maintain its existing partnership with Cisco Systems, Inc. through its lead US institution (the University of Tennessee) as well as Russian institutions in developing and experimenting with such advanced network services as listed above. Additional partnerships with the National Center for Network Engineering (NCNE) from within NLANR will be sought for network research and implementation.

UT is also evaluating Cisco's WebCache product because it is capable of providing transparent caching (no client reconfiguration necessary), which will ease the implementation of web-caching services.

MirNET recognizes that application developers may require assistance from MirNET engineers in obtaining feedback on application performance and in fine-tuning their applications. Moscow State University (MSU) and UT will provide application support to researchers on MirNET. MirNET will also seek partnerships with the Distributed Application Support Team (DAST) within NLANR to assist with application development needs.

 

VIII. INITIAL APPLICATIONS

The following is a list of projects from Moscow, as well as brief annotations of the projects proposed from St. Petersburg.

1. From Bauman Moscow State University of Technology

"Use of contemporary telecommunication nets for accompanying information Russian-American projects in the field of the newest biomedical technologies".

2. From the Russian Academy of Sciences - the proposal from the N.D. Zelinsky Institute of Organic Chemistry of RAS entitled "The Academic Network of Russia".

From Moscow State University:

3. Physics and Geology Departments -- "University and School Education Telecommunication and Information Resources";

4. Faculty of Medicine -- "Educational component of telemedicine in preparing the physician of the XXI Century";

5. Philology Department -- "Organization of the distance learning system in Philology based on high performance network technologies".

Overview of advanced collaborative projects proposed from St. Petersburg within the framework of the MirNET initiative.

1. Institute for High-Performance Computing and Data Bases (IHPCDB), St. Petersburg.

IHPCDB was founded in St. Petersburg, Russia, in February 1996 as the scientific and research institution of the Ministry of Science and Technical Policy of the Russian Federation.

It was established on the basis of the International Institute for Interphase Interactions (IIII) and its Supercomputer Center (CSA). The IHPCDB is equipped with the most powerful supercomputer cluster in Russia.

All of the Institute's activities address challenging problems that require high-performance computing techniques for their solution, as well as utilization of the computational and informational resources of the IHPCDB. The main directions are:

Among the projects that will benefit from the MirNET connection is the creation of a distributed computational and information resource with NCSA (USA). This will be used for joint research, distributed computing on supercomputers of different architectures, digital libraries, and high performance databases and knowledge bases.

2. A.F.Ioffe Physico-Technical Institute, St. Petersburg

This is one of the largest physics institutes in the Russian Federation. Scientists from the Ioffe Institute take part in many international projects in physics; the number of such joint scientific projects is now close to 50, and approximately half of them are conducted in close cooperation with American scientists.

Among the projects that will benefit from the MirNET connection are the following:

3. Central R&D Institute of Robotics and Technical Cybernetics, St. Petersburg

It is one of the largest State Scientific Centers in Russia. Its activities are focused on research and development in the field of technical cybernetics for space, air and marine devices. The Institute has modern Internet/ATM communication links with Western Europe through a fiber optic link to Helsinki. This link has existing Internet ties with ESA, DASA, and Alenia. In October 1997, at the IAF conference in Torino, Italy, the Institute conducted an Internet-based, two-way video conference with remote command of the Buran robotic arm, and a telescience conference demonstration between the University of Michigan, ESA, and Alenia in Torino.

The MirNET-related projects of the Institute are focused on high-speed networking for space applications, including remote control and monitoring of robotics devices. The Institute has collaborative ties with NASA, USA.

4. St. Petersburg International Center for Preservation.

The Center's activities are devoted to preserving the cultural heritage and to helping improve and strengthen the conservation capabilities of every museum, library, archive and historic building in the region. It provides an infrastructure for training, information exchange, and interdisciplinary research. Its programs ensure that professionals throughout the region have a mechanism for collaboration with their colleagues nationally and internationally to develop, promote, and apply new approaches to conservation problems.

The Center's main American partner is the Getty Conservation Institute, an operating program of the J. Paul Getty Trust in Los Angeles, California. Possible projects targeted for the MirNET connection include distributed multimedia databases and videoconferencing.

5. The Experimental High Energy Physics Department (Professor P.F.Ermolov)

The department has participated for several years in experiments on the world's highest-energy accelerators. In particular, it is a member of the international collaborations D0 (FNAL, USA), E852 (BNL, USA) and SELEX (FNAL, USA). Many universities from the United States (such as Kansas State University, the University of California at Riverside, Michigan State University, Florida State University, Northwestern University, the University of Rochester, the State University of New York at Stony Brook, Indiana University, and Carnegie Mellon University) and several Russian institutions (IHEP, ITEP, JINR) are members of these collaborations as well. INP MSU personnel carry out collaborative research projects with these universities and institutions. The experiments mentioned above will run for several years. In 2000 the Tevatron collider at FNAL will resume running after a significant upgrade; it will be the world's highest-energy hadron collider for the next five years, yielding unique information on microworld phenomena. The analysis of the data taken by detectors at FNAL will be a key task of modern elementary particle physics, and it is expected to be a unique challenge for computing as well. The volume of experimental data that will be transferred to the collaborating institutions (including INP MSU) for processing, reconstruction and analysis is enormous (tens of GBytes per week, with a typical file size of 50 MB). In addition, the INP MSU group produces Monte Carlo event samples for a number of physics tasks that will be used by the whole collaboration, and develops reconstruction and analysis software using both local and remote computers. These activities require fast interactive access from INP MSU to FNAL computers, including the use of the X protocol. The availability of a fast connection will also allow INP MSU personnel to participate in video conferences, which are at present recognized as an efficient way to solve current computing and physics problems.
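To put these data volumes in perspective, the short calculation below (Python, with illustrative figures taken from the paragraph above) estimates how long a typical 50 MB file and a weekly volume of a few tens of GBytes would take at the 2 Mbps shared IP allocation and at the full 6 Mbps link rate, ignoring protocol overhead.

    # Rough transfer-time estimates for the high energy physics data flows.
    FILE_MB = 50            # typical file size quoted above
    WEEKLY_GB = 30          # "tens of GBytes per week" - an illustrative value

    def transfer_time_s(size_bytes: float, rate_mbps: float) -> float:
        return size_bytes * 8 / (rate_mbps * 1_000_000)

    for rate_mbps in (2.0, 6.0):  # shared IP PVC vs. full link rate
        file_min = transfer_time_s(FILE_MB * 1_000_000, rate_mbps) / 60
        week_h = transfer_time_s(WEEKLY_GB * 1_000_000_000, rate_mbps) / 3600
        print(f"{rate_mbps:.0f} Mbps: 50 MB file in ~{file_min:.1f} min, "
              f"{WEEKLY_GB} GB/week needs ~{week_h:.0f} h of continuous transfer")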

 

The following represent projects proposed within the Russian Academy of Sciences.

  1. Central Economics and Mathematics Institute (CEMI RAS) 
  2. Joint research in the field of the Economics of Transition in Russia

    Pennsylvania State University,

    Economics of Electronic Commerce

    Fisher Center for Management and Information Technologies, University of California, Berkeley

    Business Forecasting and Modeling, Mathematical Statistics

    Department of Mathematics - College of Business and Management University of Maryland

    Audio/Video Conferencing

  3. Center for Supercomputer Support of Chemical Research (FreeNet)

    High - Performance Distributed computing in heterogeneous platforms

    Institute for High - Performance Computing and Databases (IHPC DB, Saint Petersburg, Russia) and National Center for Supercomputing Applications

    Collaboratory conferencing

  4. Center for Scientific Telecommunications and Information Technologies

    Creation of Integrated System of Information Resources of Russian Academy of Sciences (RAS) "Science Net".

    This is to provide an integrated environment for access to varied and distributed scientific and administrative data, for analysis and processing in the distributed computer systems of RAS, and for data updating and consistency.

    Establishing remote access via high performance networks to the Joint Supercomputer Center (newly created by the Ministry of Science and Technology, RAS, the Russian Foundation for Basic Research and the Ministry of Education), the supercomputer center of the Keldysh Institute of Applied Mathematics, and other supercomputer centers of RAS

  5. Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences (RAS)

    To provide the chemical community with access to the excellent computing facilities of the Keldysh Institute. This includes the standard semi-empirical molecular dynamics visualization code and the original quantum chemistry code developed in the Laboratory of Quantum Chemistry and Statistical Physics of the Karpov Institute of Physical Chemistry.

  6. Nuclear Physics Branch of RAS

    Telecommunications for Experimental High Energy Physics
  7. The American and The Russian research centers in high energy physics and fundamental nuclear physics

    Experimental data transfer within the framework of the collaboration, and video conferencing

    Moscow State University (RUHEP/Radio-MSU), Fermi National Accelerator Laboratory, Brookhaven National Laboratory, Stanford Linear Accelerator Center, Lawrence Livermore National Laboratory, University of Notre Dame, Northwestern University, University of Massachusetts and Indiana University

  8. Vernadsky Institute of Geochemistry in Moscow

    Petrogenesis of Planetary Crusts and Mantles, researching lunar samples from US Apollo and Russian LUNA missions

    The University of Tennessee, Knoxville - image transfer and remote sensing

  9. Zelinsky Institute of Organic Chemistry (IOC)

    Development and implementation of broadband telecommunication technologies for distributed and distance learning and technology transfer. Development of validated configurations for various classes of CSCW applications based on ATM, as well as a methodological approach to their use for learning and technology transfer;

    CSCW applications, including audio /video conferencing

    University of Missouri.

  10. Siberian Branch of RAS in Novosibirsk.
  11. Diamond Genesis and the Nature of the Earth's Mantle

    Shared visualisation and modelling, remote instrumentation, large data sets and collaboration

    The work is carried out as part of an NSF-sponsored, cooperative US/Russia program, with the participation of the University of Tennessee, Knoxville in the US and the Diamond Treasury of Yakutsk in Russia.

Most of these projects will use the network infrastructure provided by the Moscow Backbone (MB, or "MOS" in Russian transcription). The southern branch of the MB interconnects thirteen nodes with a 100 Mbps FDDI ring. The nodes are built using Cisco Catalyst 1200 switches with eight 10BaseT ports each. This infrastructure provides a 10 Mbps IP connection for most science and education institutes in southwest Moscow and external connections via the M9 Station. In 1998 a 155 Mbps ATM connection was established between Moscow State University, the Presidium RAS Building (where the Joint Supercomputer Center and the Nuclear Physics Branch of RAS are located), the Steklov Mathematics Institute (MIRAS), the Zelinsky Organic Chemistry Institute (IOC) of RAS and the M9 Station. All of these nodes are equipped with Cisco LightStream 1010 or FORE ATM switches. This ATM network uses the spare fibers in the MSB fiber optic cable and is the ATM backbone for the participants of the project (see the figure below).

[Figure: The Moscow ATM backbone interconnecting the project participants]

IX. CONCLUSION

The foundation upon which the MirNET Consortium is being built is that the range and importance of current collaborations between the R&E communities of the United States and Russia, and the enormous potential for development of future collaborations, make it vitally important to implement dedicated infrastructure between the emerging high performance R&E networks of the two countries. The sheer volume of the ongoing cooperative efforts makes the significance of the relationship evident.

Over the last ten years the level of cooperative scientific activity between the research and education communities of Russia and the United States has steadily increased in size, scope, and intensity. That this increase has come despite the barriers of distance, and the corresponding inaccessibility of colleagues and major laboratory facilities for each side, is a testament to the intellectual vitality of these communities and to a shared sense of mutual respect. Yet, the enormous potential for such cooperation has only begun to be tapped; its future growth depends upon the creation of networking and telecommunication tools that can bring distant colleagues into the kind of rich, spontaneous, and fully communicative interaction upon which science thrives.

The current network infrastructure between the two countries, however, provides neither the uncongested capacity nor the range and quality of network services required for the global collaborative research communities of the future. The MirNET Consortium has been conceived to address this precise problem.