Scientists in Osaka, Japan, and La Jolla, California, successfully used international research networks to couple the world's largest and most powerful transmission electron microscope (3 million volts), at the Research Center for Ultra High Voltage Electron Microscopy (UHVEM), Osaka University, to a remote-use computer pavilion set up at the University of California, San Diego (UCSD) School of Medicine's National Center for Microscopy and Imaging Research (NCMIR). This international collaborative long-distance research (10,000 kilometers) marked a milestone in the advance of science and made the network engineering community aware of a real-life application that pushes the limits of today's networking technologies.
The benefits of such telemicroscopy efforts have been realized by many during the past decade. Scientists worldwide can now access the capabilities of scarce, specialized instruments such as the 3 MeV UHVEM and conduct their research without the lost time, disruption, and expense of traveling to locations distant from their own laboratories. Further, remote instrumentation fosters collaboration with other experts, which facilitates training and leads to more efficient research studies. These benefits are expected to ensure that remote instrumentation systems will be a part of our future.
The intent of this paper is to provide related results and experiences that may aid in the pursuit of similar work and also in the design of advanced networking technologies that better support future telescience and collaborative applications. The paper begins with some background information on electron microscopy and the telemicroscopy work done at NCMIR and UHVEM. The paper then describes the general networking requirements for telemicroscopy and presents some results obtained from Trans-Pacific telemicroscopy sessions. Some limitations of today's Internet are listed, along with some lessons learned and future plans.
The powerful beam of high-voltage electron microscopes is capable of substantially higher magnification and resolution, and can produce images of much thicker specimens than can be produced by light microscopes or more conventional, lower-acceleration-voltage electron microscopes. Specialized techniques involved in electron microscopy studies produce accurate measurements and representations that lead to a finer understanding of the sample under investigation. For example, NCMIR neuroscientists have produced a radically revised computer model of the mitochondrion, a cellular organelle, making obsolete the previous, long-standing textbook view.
Electron microscope tomography is a main component of such studies. Tomography involves recording a series of image projections as the specimen is incrementally rotated about the X or Y axis (Z being collinear with the electron beam) to create a "tilt series." These tilt series are then reconstructed into a 3D volume using computational techniques (such as filtered R-weighted back projection). This volume can be visualized using conventional volume rendering or surface extraction techniques. Alternatively, features found in the volume may be extracted by computer-aided tracing of their surfaces and by generating a polygon-based 3D model for visualization, measurement, and analysis.
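The reconstruction step can be illustrated with a deliberately minimal example: an unfiltered back projection over just two tilt angles (0 and 90 degrees), written in plain Python. A real system uses dozens of tilt angles and R-weighted filtering, so this is only a sketch of the underlying idea.

```python
# Minimal illustration of tomographic back projection in 2D.
# Real reconstructions use many tilt angles and R-weighted filtering;
# here only two projections (0 and 90 degrees) are used.

def project_rows(img):
    """Projection along the X axis: one sum per row (0-degree tilt)."""
    return [sum(row) for row in img]

def project_cols(img):
    """Projection along the Y axis: one sum per column (90-degree tilt)."""
    return [sum(col) for col in zip(*img)]

def back_project(row_proj, col_proj):
    """Smear each projection back across the grid and average."""
    n, m = len(row_proj), len(col_proj)
    return [[(row_proj[i] / m + col_proj[j] / n) / 2 for j in range(m)]
            for i in range(n)]

# A 3x3 "specimen" with a single bright feature at the center.
specimen = [[0, 0, 0],
            [0, 9, 0],
            [0, 0, 0]]

recon = back_project(project_rows(specimen), project_cols(specimen))

# The reconstruction is blurred, but its brightest voxel is still
# at (1, 1), where the original feature was.
peak = max((v, i, j) for i, row in enumerate(recon) for j, v in enumerate(row))
print(peak)  # (3.0, 1, 1)
```

With more tilt angles the smearing artifacts average out and the filtering step sharpens the result, which is why real tilt series contain many projections.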
These techniques and capabilities are valuable to many scientific fields. In neuroscience, for example, serial tomography (tomographic reconstruction of two or more serial sections produced separately and subsequently docked together) can produce a very nearly complete representation of a neuron (nerve cell), including much of the structure and substructure within it, at very high resolution. Such tomographic computer models afford a better understanding of the structure and makeup of the neuron and will lead to a better understanding of the function of nerve cells in the brain and, for example, to advances in the study of debilitating disorders such as Parkinson's and Alzheimer's disease. Other scientific fields that make use of electron microscopy include materials science, chemistry, marine biology, astronomy, and the computer chip industry.
The high costs of purchasing and maintaining high-voltage electron microscopes and associated imaging and computer equipment dictate that only a few national laboratories exist to house these instruments, where they are made available to collaborating researchers worldwide. The NCMIR laboratory at UCSD and the Research Center for UHVEM at Osaka are two examples of such resources.
At NCMIR, the primary instrument is an intermediate high-voltage electron microscope (JEOL 4000EX IVEM), one of the few in the United States available to the biological research community. Because the higher accelerating voltage of the IVEM allows examination of relatively thick specimens, this instrument is especially useful for deriving three-dimensional biological structure using electron tomography techniques. The IVEM employs a video-rate CCIR camera and a high-resolution slow-scan CCD camera, as well as a traditional film imaging system. The IVEM has an on-board microprocessor and serial port, which allows it to be commanded through a simple interface from another computer.
Similarly, the UHVEM laboratory possesses the world's most powerful electron microscope, a HITACHI 3000. This microscope is over 40 feet tall, and its electron beam can possess an incident energy of 3 million electron volts. Because of radiation safety considerations, the microscope is operated from an adjacent room using specialized computers, controls, and displays, making it an immediate candidate for remote use. The UHVEM facility is predominantly used in the material sciences, but recently NCMIR scientists have been using it for biological research.
Access to and use of such specialized imaging instruments are impeded by the requirement that researchers must travel to the instrument site to perform their research. In the majority of cases it is necessary for researchers to repeatedly visit the microscope resource to conduct their work. In addition, in most cases researchers or staff must also perform the image processing steps required beyond data acquisition at the NCMIR/UHVEM because of the specialized software required and the large data sets produced. Only a few researchers have computing systems in their laboratories that can support these image-processing tools.
Network-based computing and the increasing availability of higher-speed networks will afford easy remote access to such specialized scientific instruments and high-performance computing for data analysis and visualization. In microscopy, for example, the development of appropriate specimen preparation and analysis methods is an iterative process that is difficult to accomplish in a single visit to the laboratory. By providing remote access to the required instruments and computation, scientists can work interactively with resource staff to optimize specimen preparation, data collection, and image processing and avoid the stress and time pressure of an always too limited visit. Long-term or elaborate studies - which might otherwise require multiple visits to a distant resource - will become more practical when they can be conducted remotely. In fact, networked systems such as telemicroscopy applications make it easy for several investigators at widely dispersed locations to collaborate during the entire process of data collection and analysis.
A "proof of concept" of telemicroscopy was first demonstrated by NCMIR in 1992 at that year's ACM SIGGRAPH conference. Subsequently, through funding by the National Science Foundation (NSF), the system was redesigned and brought to a production prototype stage. The NCMIR telemicroscopy project is currently funded by the NSF, with the development of instrumentation funded by the National Institutes of Health (NIH) National Center for Research Resources. Using the NCMIR system as a rough model, the UHVEM laboratory in Osaka has created a similar telemicroscopy system - funded by the Ministry of Education, Japan - which takes advantage of the UHVEM's unique specifications.
At the beginning of a telemicroscopy study, the remote researcher prepares and ships a specimen to the instrument facility and reserves time on the instrument for a later date. Prior to the scheduled time, local staff inserts the specimen into the microscope and collects a preliminary low-resolution survey - an 8 x 10 mosaic of tiled images taken at low magnification - which is made available to the remote user. At the scheduled time, the researcher starts the telemicroscopy application, connects with the microscope control and session management programs over the network, and takes control of the microscope. A video-rate feed of the specimen under observation allows interactive remote control of magnification, focus, and microscope stage position, just as if the user were at the actual console of the instrument. Typically, the onsite operator and the remote user collaboratively scan the specimen and agree on areas for in-depth study. High-resolution images of these areas can be recorded and transmitted over the Internet to the remote user for examination or later use. During telemicroscopy sessions, the remote researcher communicates with the local microscope staff through network-based teleconferencing or, more commonly, by telephone. Telemicroscopy sessions generally last between two and five hours.
The NCMIR telemicroscopy system provides Web-based access to the JEOL 4000EX IVEM in San Diego . The user interface, called VidCon for Video-based Controller (shown in the picture above), is implemented in the Java 1.1 language and can be run on any Java-capable computer system, typically with a Web browser such as Netscape. At the microscope site, a Silicon Graphics workstation acts as the Web server and the video server. Also, a SUN workstation is used to control and communicate with the microscope and associated image-processing hardware.
The VidCon user interface displays the microscope's optical and stage parameters, the command in progress, and a live video image of the specimen under examination. Buttons and text fields are used to issue commands to the IVEM. Control of the instrument can be traded among users at different sites participating in the session. All participants can view the results of commands and the images acquired, but only the user in control of the instrument is allowed to send commands to it. Shared pointers and annotations are available to promote interaction among the session participants.
The UHVEM telemicroscopy system provides remote access to the HITACHI 3000 UHVEM in Osaka. A single remote researcher employs a custom "knob box" (pictured at left) to control the remote microscope (pictured at right). The knob box communicates with an associated Hitachi personal computer (PC) that delivers the respective commands to the UHVEM over the network and graphically displays the state of several microscope parameters. A high-quality digital video stream is delivered to the remote user and displayed on a standard video monitor. The digital video transport system (DVTS) consists of conventional PCs equipped with DV codecs and associated software running on FreeBSD.
The performance of the underlying network has an obvious significant influence on the effectiveness and usability of telemicroscopy systems. To better support these systems, it is necessary to understand the network requirements involved. This section presents some details of the network flows present during telemicroscopy sessions: video, microscope control, and captured images.
The network must deliver a gray-scale video stream from the microscope site to one or more remote sites. The video streams have constant bit rates and will last for the entire duration of the telemicroscopy session. The NCMIR system uses Motion-JPEG for compression (hardware assisted at the transmitting site only), TCP for transport, and a custom flow control protocol that attempts to maximize frame rates. Each user is able to dynamically tailor his or her own video stream by adjusting frame size (768 x 576, 384 x 288, or 192 x 144 pixels) and JPEG quality factor. A typical configuration of 384 x 288 frames at a nominal JPEG quality of 70 represents approximately 10 Kbytes per frame. The maximum frame rate the video server can deliver is 10 frames/second, but observed frame rates are somewhat lower, ranging from 1 to 4 frames/second. The UHVEM system delivers a constant, unidirectional 36.5 Mbps digital video (30 frames/second) stream over UDP. Frame rates can be reduced by factors of two at the command line.
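The bit rates quoted above can be checked with simple arithmetic. The sketch below uses only numbers from the text (frame sizes, the nominal 10 Kbytes/frame JPEG figure, and the 36.5 Mbps DV rate); actual compressed frame sizes vary with image content.

```python
# Back-of-the-envelope bandwidth estimates for the two video systems.
# All figures are taken from the text; JPEG frame size varies with content.

KBYTE = 1024
jpeg_frame_bytes = 10 * KBYTE  # ~10 Kbytes per 384x288 frame at quality 70

# NCMIR Motion-JPEG stream at observed (1-4 fps) and maximum (10 fps) rates:
for fps in (1, 4, 10):
    mbps = jpeg_frame_bytes * 8 * fps / 1e6
    print(f"{fps} fps -> {mbps:.2f} Mbps")

# UHVEM DV stream: constant 36.5 Mbps at 30 frames/second,
# i.e. roughly 150 Kbytes per frame.
dv_frame_bytes = 36.5e6 / 8 / 30
print(f"DV frame ~ {dv_frame_bytes / KBYTE:.1f} Kbytes")
```

Even at its maximum frame rate, the Motion-JPEG stream stays under 1 Mbps, while the DV stream is roughly forty times larger, which explains why the two systems stress the network so differently.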
Microscope commands are sent to the microscope, and status information is returned to the remote user. In the NCMIR system, for example, the graphical user interface interprets user input and generates a simple ASCII command, which is transmitted to the microscope control software via TCP/IP. Responses are returned in a similar manner. For instance, "Set=Mag(3000)" is a command to set the magnification to 3000 times. "Completed:Set=Mag(3000)" is the response, indicating that the command has been successfully completed. The UHVEM system uses a similar protocol, but the encoding is in binary. In both systems, commands and status replies are small in size (< 20 bytes), and the microscope processes most commands in less than a second. Some automated commands -- such as automatic focus or image acquisition -- can take tens of seconds or more to complete.
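The ASCII convention described above is simple enough to sketch in a few lines. The helper functions below are hypothetical illustrations, not the actual NCMIR control software, and the real wire format may differ in detail beyond the two example strings given in the text.

```python
# Hypothetical sketch of the NCMIR-style ASCII command protocol.
# Command:  "Set=Mag(3000)"            -> set magnification to 3000x
# Response: "Completed:Set=Mag(3000)"  -> command succeeded

import re

def make_command(action, parameter, value):
    """Format a microscope command, e.g. Set=Mag(3000)."""
    return f"{action}={parameter}({value})"

def parse_response(response, command):
    """Return True if the response confirms completion of the command."""
    match = re.fullmatch(r"(Completed|Failed):(.*)", response)
    if match is None:
        raise ValueError(f"unparseable response: {response!r}")
    status, echoed = match.groups()
    return status == "Completed" and echoed == command

cmd = make_command("Set", "Mag", 3000)
print(cmd)                                             # Set=Mag(3000)
print(parse_response("Completed:Set=Mag(3000)", cmd))  # True
```

Because each command and reply is under 20 bytes, a single TCP connection handles the entire control channel with negligible bandwidth; latency, not throughput, is what matters for this flow.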
With the NCMIR system, the remote user can request that a high-resolution image be captured using the slow scan CCD camera (2560 x 1960 pixels at a resolution of 14 bits/pixel) mounted below the viewing chamber of the microscope. The image data are saved in HDF format at full resolution and stored at NCMIR. For viewing, the image is compressed in JPEG format and delivered to the remote user at a specified resolution. By default, the image is initially delivered at 25% resolution (640 x 490 pixels), which represents about 35 Kbytes of data sent over the Internet. The compressed full resolution image runs to about 350 Kbytes. Typically, a few CCD images are acquired in rapid succession after several minutes of interactively examining the specimen.
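The image sizes quoted above follow directly from the camera geometry; the quick check below confirms that the 25% preview scales each dimension by one quarter and that a raw full-resolution capture runs to several megabytes before compression.

```python
# Sanity-check the CCD image sizes quoted in the text.

full_w, full_h, bits_per_pixel = 2560, 1960, 14

# Raw (uncompressed) data volume of one full-resolution capture:
raw_bytes = full_w * full_h * bits_per_pixel // 8
print(f"raw capture: {raw_bytes / 1e6:.1f} MB")  # ~8.8 MB

# The default 25% preview scales each axis by 1/4:
preview_w, preview_h = full_w // 4, full_h // 4
print(preview_w, preview_h)  # 640 490
```

The contrast between the ~8.8 MB raw capture stored locally and the ~35 Kbyte JPEG preview sent over the network shows why compression and reduced-resolution delivery are essential to keeping the session interactive.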
To demonstrate the feasibility of telemicroscopy internationally, the NCMIR and UHVEM laboratories have been pioneering transpacific telemicroscopy experiments. NCMIR neuroscientists in San Diego have remotely operated the UHVEM microscope in Osaka, Japan, and UHVEM material scientists in Osaka have teleoperated the NCMIR microscope in San Diego. For the very first transpacific experiment (June 25, 1998), private satellites were used to transmit broadcast-quality video from the Osaka microscope to San Diego. For the subsequent experiments, the satellite connection was replaced by connection through international high-speed research networks. These higher-speed research networks provided the necessary bandwidth between both sites to conduct telemicroscopy sessions at no direct cost. Without access to these networks, continuation of this transpacific work certainly would not have been possible.
The figure below depicts the network path used during transpacific telemicroscopy sessions (link speeds as of January 2000). The major link from Japan to the United States was provided by TransPAC, an NSF and Japanese Science and Technology Agency (JSTA) project led by Indiana University. TransPAC connects the Asia-Pacific Advanced Network (APAN), including Japan's high-performance research network (JGN), to the NSF-led very high performance Backbone Network Service (vBNS) through the Science, Technology, And Research Transit Access Point (STAR TAP) in Chicago, Illinois. When these transpacific experiments began, bandwidth was limited by a 35 Mbps link between STAR TAP and the APAN Tokyo XP (currently 70 Mbps). For recent experiments, the OC-3 link connecting NCMIR to the vBNS via SDSC has been the limiting connection; however, this link will be upgraded to OC-12 in the near future.
A 5-hour transpacific telemicroscopy session was conducted on April 29, 1999. During the first half of this session, NCMIR scientists were able to scan their biological sample (Purkinje neurons from rat cerebellum) and select areas that were later recorded onto film by the UHVEM staff. During the second half, scientists from UHVEM visiting NCMIR were able to investigate their material science sample (stainless steel) in a similar fashion. Throughout this session, the video quality was excellent and the scientists were pleased with the level of interactivity provided by the system.
To better understand network traffic characteristics during experiments like these, network engineers collected statistics of transpacific network traffic while a digital video stream was sent from Osaka to San Diego. The statistics came from OC3mon passive traffic monitors at STAR TAP (Chicago) and APAN Tokyo XP, which were deployed by the National Laboratory for Advanced Network Research (NLANR) and APAN. Traffic traces were first gathered on March 9, 1999, and then again on April 8, 1999, after the TransPAC link was upgraded. The traces collected at STAR TAP (shown in the figures below) clearly showed the UHVEM video stream as well as about 10 Mbps of background traffic. During the March run, the video stream consumed 96% of the bandwidth available on this link!
Subsequent analysis showed that the sustained UDP traffic dominated available bandwidth by causing TCP traffic to back off. This raises interesting concerns (and motivation) for Quality of Service technologies that must make intelligent decisions about how to partition the available bandwidth among all traffic flows. Also, strange drops in network throughput were observed at constant intervals, but further research into this anomaly is required to identify the cause. It is believed that the observed noisy video blocks were due to excessive delay jitter associated with UDP packets. However, their intermittent nature suggests that there may be a way to avoid them or at least minimize their undesirable effect.
Another interesting discovery was that the unidirectional nature of the UDP flow confused a router at UCSD. This router would initially broadcast incoming packets from a specific source along all destination ports and then forward future packets from that source to the one port that responded to the original packet. Since no such response was ever sent by the digital video system, the router continually broadcast the UDP flow to all ports, thereby flooding the attached networks.
The transpacific networking experiments mentioned here were possible because of the existence of national high-speed research networks, such as the vBNS and JGN, and the international transit access points connecting them, such as APAN/TransPAC. Conventional network infrastructure is not adequate to support these advanced applications, especially on an international scale. Thanks to funding of high-speed networking efforts between the US and Japan, adequate communications existed to experiment with and prove the merits of our approach. Other regions of the world are less fortunate: Some do not have high-speed national networking and others that do may not have a suitable transit access point connecting them to other high-speed networks abroad. For example, telemicroscopy experiments between Sweden and San Diego were made possible when NORDUnet, Sweden's advanced research network, was interconnected to the vBNS via STAR TAP. Similarly, telemicroscopy is being used as motivation to increase bandwidth between the vBNS and RETINA, Argentina's high-speed research network.
Network performance was measured using the netperf and pathchar tools. By introducing significant amounts of network traffic for prolonged durations, netperf produced accurate measures of end-to-end network performance. When end-to-end performance proved inadequate, pathchar was used to estimate performance at each hop along the path. During our initial experiments, for example, netperf measured less than the required 36.5 Mbps of throughput, and pathchar then identified the 35 Mbps TransPAC link as the bottleneck. Simpler tools, such as ping and traceroute, produced incomplete or inaccurate results across such large distances. Once bottlenecks were identified, however, outside help had to be called upon to improve or tune performance.
This was done thanks to the help and guidance of many network engineers worldwide from WIDE, NLANR, and the NSF. Without their assistance, it would have been impossible to modify network paths or establish private virtual circuits in order to achieve the best performance possible. The OC3mon data is another example of the increased understanding that was enabled thanks to the network experts involved. Even so, certain configuration options, such as private virtual circuits, were cumbersome enough for network engineers to implement that they were only available for special occasions.
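A crude end-to-end throughput probe in the spirit of netperf can be sketched in a few lines of Python. The example below runs sender and receiver over the loopback interface for brevity; a real measurement would place the receiver on the remote host and run for minutes, as the text notes.

```python
# Minimal netperf-style throughput probe over a TCP socket.
# Sender and receiver run over loopback here; in practice the receiver
# would run on the remote host and the test would last much longer.

import socket
import threading
import time

PAYLOAD = b"x" * 65536
TOTAL = 8 * 1024 * 1024  # send 8 MB

def receiver(server):
    """Accept one connection and drain it."""
    conn, _ = server.accept()
    while conn.recv(65536):
        pass
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=receiver, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
start = time.monotonic()
sent = 0
while sent < TOTAL:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
elapsed = time.monotonic() - start

mbps = sent * 8 / elapsed / 1e6
print(f"{mbps:.0f} Mbps over {elapsed:.3f} s")
```

A probe like this gives only the end-to-end figure; locating the slow hop, as pathchar did for the 35 Mbps TransPAC link, requires per-hop measurement.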
Today's networking technologies lack adequate support for streaming live video across the wide-area Internet. TCP retransmits every packet lost or received in error, delaying that packet and all those that follow it. A more suitable video transport protocol would use knowledge of the video flow (latency budget, packet type) to determine whether a packet must be retransmitted, whether the errors can be ignored, or whether the packet should simply be dropped. This would yield faster frame rates and bounded latency and delay jitter, while using less network bandwidth. Further, a suitable video protocol would provide feedback to the application, such as the current load on the network, which the software could use to tune bit rates automatically (e.g., by adjusting frame rate or quality).
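The loss-handling policy of such a video-aware transport can be expressed as a small decision function. The field names and thresholds below are invented for illustration; a real protocol would also track sequence numbers and playout deadlines.

```python
# Hypothetical retransmit/ignore/drop decision for a video-aware
# transport protocol. Packet types and thresholds are invented.

def handle_loss(packet_type, age_ms, latency_budget_ms):
    """Decide what to do about a lost or corrupted video packet."""
    if age_ms >= latency_budget_ms:
        return "drop"          # too late to be useful; skip it entirely
    if packet_type == "key":   # reference frame: errors would propagate
        return "retransmit"
    return "ignore"            # delta frame: conceal the error and move on

print(handle_loss("key", age_ms=20, latency_budget_ms=150))    # retransmit
print(handle_loss("delta", age_ms=20, latency_budget_ms=150))  # ignore
print(handle_loss("key", age_ms=200, latency_budget_ms=150))   # drop
```

The contrast with TCP is the point: TCP would retransmit in all three cases and stall every packet behind the loss, whereas a video-aware policy retransmits only when the data is both important and still timely.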
Telemicroscopy and other similar applications would benefit from support for heterogeneous traffic flows in the Internet transport protocols. For example, it would be useful to give a higher relative priority to microscope control information than to the video stream, allowing microscope commands to be delivered in a timely manner even when the network is congested. This would allow for interactive control of the microscope, at the expense of lower quality video frames. In addition, application developers need generally available network diagnostic and tuning tools that can be used to improve communications performance where possible. One example of such a tool is Iperf, which claims to have support for measuring and tuning TCP and UDP connections and has a Java front-end that makes it portable and easy to use.
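On many systems, a relative priority of this kind can already be requested per socket via the IP type-of-service byte (later redefined as DSCP). The sketch below marks a control socket for low delay and a video socket for high throughput; whether routers honor the marking depends entirely on how the network is configured.

```python
# Sketch: mark the microscope-control socket with a higher IP precedence
# than the video socket using the TOS byte. Routers are free to ignore
# this marking unless the network is configured to act on it.

import socket

IPTOS_LOWDELAY   = 0x10  # historic TOS value for interactive traffic
IPTOS_THROUGHPUT = 0x08  # historic TOS value for bulk transfers

control_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
control_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

video_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
video_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)

print(control_sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
control_sock.close()
video_sock.close()
```

This only expresses a preference at the endpoints; delivering microscope commands promptly under congestion still requires the network itself to differentiate the two flows.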
Unpredictable network performance remains a hindrance to general use of telemicroscopy applications. Specifically, communication protocols in use today cannot guarantee any level of service across the wide area. This limitation makes it overly difficult to implement live video streaming systems. Microscope operations such as manual focusing require a high level of interaction with the remote microscope: the user repeatedly adjusts focus and judges from the video signal whether focus has improved. When performing such interactive operations, it is crucial that the control and video streams be delivered with constant latency (i.e., synchronized) so that the user can easily determine when the video stream reflects the results of the most recent adjustment. When latencies fluctuate, making this determination becomes very difficult for the user, and the application does not have the information needed to provide any assistance. The result is reduced interactivity with the instrument -- the user must allow enough time to pass after each adjustment -- which prolongs the duration of the session.
Ultimately, however, the current technology of the Internet will impose a serious impediment to the widespread adoption of telescience and other remote applications. In particular, the limited address space of IPv4 (the current Internet protocol) has serious implications for the long-term viability of the Internet. Increasingly, new sites around the world connect to the Internet only to find that they cannot qualify for the scarce global IPv4 address space they need. They must instead rely on private (i.e., non-globally routable) address space and Network Address Translators (NATs) to dynamically assign a global address to a wide-area Internet connection. As the IPv4 address space limit tightens, such systems will become increasingly difficult to reach. End-to-end security of Internet connections is also compromised, because address translation of packet headers interferes with encrypted traffic.
Next-generation Internet technologies will be investigated as a means to surmount the limitations faced today. One effort will evaluate a new implementation of TCP aimed at supporting constant-bit-rate traffic, such as video streams. IPv6 is an answer to the long-term addressing limitations of IPv4. Work is under way to run these telemicroscopy applications over IPv6, to demonstrate that this next-generation Internet Protocol works and can support a major scientific application. Using the 6TAP (the first native IPv6 exchange point, located at STAR TAP in Chicago) and the ATM trunks of the vBNS and APAN networks linking UC San Diego to Chicago and on to Osaka, Japan, one of the earliest, if not the first, real IPv6-based applications will be possible. This will help demonstrate the validity of this next-generation Internet technology.
Aside from a larger address space for future worldwide Internet connectivity, IPv6 provides mechanisms to support applications with heterogeneous traffic flows. For example, the IPv6 header contains a "flow label" field that can be used to describe a particular network flow. The network infrastructure can use this field to give a different level of service to packets associated with specific flow types. In addition, quality-of-service protocols -- such as RSVP and Diffserv, which are being deployed on IPv6 networks -- will greatly improve telemicroscopy sessions, as both microscope time and network bandwidth can be reserved in advance. End-to-end encryption of the transport links will also be possible because IPsec, the new Internet security specification, is built in from the start as part of the basic IPv6 protocol specification.
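The flow label occupies the low 20 bits of the first 32-bit word of the IPv6 header, after the 4-bit version and 8-bit traffic class. Packing that word is straightforward, as the sketch below shows (header word only; the example flow label value is arbitrary).

```python
# Pack the first 32-bit word of an IPv6 header: 4-bit version (6),
# 8-bit traffic class, 20-bit flow label, per the IPv6 specification.

import struct

def ipv6_first_word(traffic_class, flow_label):
    """Return the leading 4 bytes of an IPv6 header in network order."""
    assert 0 <= traffic_class < 2**8 and 0 <= flow_label < 2**20
    word = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!I", word)

# Example: default traffic class, with an arbitrary flow label 0x12345
# identifying (say) the telemicroscopy video stream.
raw = ipv6_first_word(0, 0x12345)
print(raw.hex())  # 60012345
```

Because the label travels in the fixed header rather than in transport-layer fields, routers can classify a flow without parsing past the IP layer, which is what makes per-flow service differentiation practical at high speeds.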
This paper has described two telemicroscopy systems that exploit the use of international research networks to advance scientific discovery by providing remote, collaborative access to powerful and unique instruments, computer resources, and techniques. The advanced networking requirements and international cooperation exemplified by work in this area are important characteristics of scientific applications of the future. Clearly, it is important to continue building infrastructure to meet the needs of exciting scientific applications such as the telescience projects described here. Hence, it is critical that application developers communicate closely with network engineers to clearly define the needs of their applications so that adequate networking infrastructure, and efficient use thereof, can be realized.
This work was supported in part by grants from the Research for the Future Program of the Japan Society for the Promotion of Science under the Project "Integrated Network Architecture for Advanced Multimedia Application Systems" (JSPS-RFTF97R16301); the Ministry of Education, Japan; the National Institutes of Health Research Resources Division (RR 04050 and RR 08605); and the National Science Foundation (ASC 93-18180 and ACI 9619020, National Partnership for Advanced Computational Infrastructure, and Cooperative Agreement ANI-9807479).