
[INET'98] [ Up ][Prev][Next]

Virtual Emergency Task Force (VETAF)

Norbert SCHIFFNER
Fraunhofer Institute for Computer Graphics


Rapid advances in network and graphics technology, new business models, and global infrastructure developments are transforming the solitary, platform-centered 3-D computing model. With the availability of global information highways, 3-D graphical intercontinental collaboration will become part of our daily work routine. Our research focuses on how computer networks can transform the distributed workplace into a shared environment, allowing real-time interaction among people and processes regardless of their locations.

The Fraunhofer Institute for Computer Graphics (IGD) in Germany and the Fraunhofer Center for Research in Computer Graphics (CRCG) in the United States are preparing for a new age of telecommunication and cooperation by concentrating their research efforts on implementing and using computer graphics technologies over a transcontinental network.

This paper describes the Virtual Emergency Task Force (VETAF) application, which combines the use of 3-D graphics with advanced network technology. With this application, a group of experts located throughout the world can meet to discuss a global crisis in a virtual environment specially designed to support their cooperation.


1. Introduction

Techniques like 3-D graphics, 3-D sound, and 3-D interaction enable a multidimensional form of telecommunication that simultaneously addresses the main human senses of perception. These advanced forms of telecommunication make it possible to create distributed workplaces. New application scenarios can be built to target the particular needs of geographically distributed users. Possible applications range from simple distributed multimedia visualization of scientific data in standard 3-D data formats to technologically demanding distributed virtual reality (VR) environments for teleconferences and simulators [2].

Chapter 2 describes the concept of a virtual environment. Chapter 3 describes one possible scenario for the use of the VETAF application, its components, and its implementation. Chapter 4 presents other possible scenarios in which the VETAF application can be used. Future work, conclusions, and references can be found in chapters 5, 6, and 7.

2. Virtual environments

The fast pace in computing, graphics, and network technologies plus the demands of real-life applications have impelled the development of more realistic multi-user virtual environments (MVEs). An MVE is a distributed application where multiple users are simultaneously present within a simulated 3-D space. We define an MVE as a single environment shared by multiple participants connected from different hosts. All users are interconnected and share a consistent virtual world within the same MVE. Each user is represented in the virtual environment by an avatar which reflects the user's own viewpoint as well as position, orientation, and movements.

Each user can navigate through the world, interact and collaborate with other users in the world, and manipulate objects in the MVE. Changes effected by a user in the MVE are instantaneously visible to all other users.

The components used for the VETAF application are described in the following sections.


Avatars

The main issue addressed by virtual environments is social and workspace awareness. Avatars address this problem by representing users in virtual environments. Avatars are an important metaphor in MVEs: users can determine with whom they are sharing information and which objects the other users are interested in. Positions can express whether users are following the discussion and whether they are looking at the same object. They show whether users want to talk to somebody or are moving restlessly and nervously around. The appearance of an avatar shows the status of a participant in the world: it can express a user's rank in a group, show whether the user is present at work, and express the user's individual taste [3].

Realism in participant representation involves two elements: believable appearance and the capability of movement. This is all the more important in an MVE because the participants' positions are used for communication. For example, the distance between avatars controls the volume of the participants' audio. Each participant's local environment stores the whole scene description and uses the participant's own avatar to move around the scene; rendering takes place from that participant's own viewpoint. The avatar concept serves several crucial functions in an MVE:

  • Perception (see if anyone is around)
  • Localization (see where the other person is)
  • Identification (recognize the other person)
  • Visualization of the other person's focus of interest (see where the person's attention is directed)

Using abstract virtual figures for avatar representation (Figure 1) fulfils these functions. Our avatar uses a live video stream and a business card identifying the user it represents. The video screen indicates the direction in which the user is currently looking. Each user is assigned a unique color; ears and feet of the avatar as well as the selection pointer are in this color. To point to an interesting position or perform an action, the user can use a distributed pointer, located on top of the avatar.
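The coupling between avatar distance and audio volume mentioned above can be sketched as follows. This is a minimal illustration; the actual attenuation law used by VETAF is not specified in the paper, and the inverse-distance model here is an assumption.

```python
import math

def attenuated_gain(listener_pos, source_pos, max_gain=1.0, ref_dist=1.0):
    """Scale a participant's audio gain by the distance between avatars.

    A simple inverse-distance model: full volume within ref_dist,
    falling off as 1/d beyond it. (Illustrative only; VETAF's actual
    attenuation law is not documented in the paper.)
    """
    d = math.dist(listener_pos, source_pos)
    if d <= ref_dist:
        return max_gain
    return max_gain * ref_dist / d

# A nearby avatar is heard at full volume; a distant one is quieter.
near = attenuated_gain((0, 0, 0), (0.5, 0, 0))
far = attenuated_gain((0, 0, 0), (10, 0, 0))
```

As avatars drift apart in the scene, their voices fade, which is how spatial position becomes a communication channel in its own right.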

Figure 1. Avatar

Virtual Worlds

In his article "Networked Virtual Reality and Cooperative Work" [4], Steve Benford describes how virtual reality can support cooperative work and the advantages of the virtual world concept over conventional 2-D conferencing systems.

Benford describes space as a "resource for activity and interaction." In our "real" world, rooms are often built, installed, and furnished for one or several specific activities; e.g., we cook in kitchens and we shower in bathrooms. Furthermore, the tools we need for these activities are stored in those rooms; e.g., dishes, pots, and knives can be found and are used in the kitchen. We assign different functions to spaces, and we are used to associating certain activities with specific spaces.

Two-dimensional graphical user interfaces (GUIs) display data in windows, which contain a limited amount of information. Hence, more than one window is necessary, or windows require menu bars, scroll bars, and skip buttons. In contrast, a 3-D GUI can display more (virtually unlimited) information, navigation is easier, and the spatial structure helps users memorize where data is and find it again. This means that a virtual environment can structure data in a more familiar way.

For instance, supporting a conference system, an MVE uses the metaphor "house" for combining all activities and interactions in this system into one environment. Rooms divide the house into different spaces associated with different functions such as rooms for registration, rooms for surveying running conferences, auditoriums, meeting points, etc. Some meetings may require external applications such as desktop conferencing tools. These applications can be started automatically if a user enters the specific meeting room and be terminated if the user leaves the room.
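The room-bound lifecycle of external applications described above can be sketched as follows. Class, room, and tool names are invented for illustration; the paper does not describe the actual launch mechanism.

```python
class Room:
    """A room that launches its associated external tools when a user
    enters and terminates them when the user leaves (illustrative
    sketch; names are hypothetical, not taken from VETAF)."""

    def __init__(self, name, tools):
        self.name = name
        self.tools = tools          # external applications tied to this room
        self.running = set()

    def enter(self, user):
        for tool in self.tools:     # e.g., start a desktop conferencing tool
            self.running.add(tool)
        return f"{user} entered {self.name}; started {sorted(self.running)}"

    def leave(self, user):
        stopped = sorted(self.running)
        self.running.clear()        # terminate the tools on exit
        return f"{user} left {self.name}; stopped {stopped}"

meeting = Room("meeting room", ["whiteboard", "desktop conference"])
msg_in = meeting.enter("alice")
msg_out = meeting.leave("alice")
```

Binding tools to rooms this way preserves the spatial metaphor: the act of walking into a space is what brings its associated activities to life.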


Agents

In an MVE, the system has to know which objects are in a scene, how they move, and how to represent them. Communication between objects, whether they are under the control of a user or of the underlying program, is missing in most systems. One way to fill this gap in virtual realities is the use of agents. Intelligent agents act on behalf of their users and can enhance interaction between objects in a virtual environment. Like human assistants, agents can automate repetitive tasks (e.g., finding a way through the scenario and locating a room within it), remember facts the user has forgotten, and summarize complex situations in an intelligent way (who is in a room and which action he or she is performing at the moment). Intelligent agents can learn and even make suggestions to the user [5, 6].

The integration of agents into virtual collaborative environments provides an interesting field of research in computer science. Two agents have been developed for VETAF: a door agent (VRDoorAgent), which asks for a certain access code to a room, and a user agent (VRDeputyAgent). The user agent is responsible for fulfilling all access requirements and acts on behalf of its owner. It informs the owner of all payment options, so that the user only has to make the final decision whether to enter the next room under the given conditions. By generating a convenient presentation of all available information -- for example, by limiting it to the information perceived as relevant -- a cost-benefit analysis becomes much easier to perform. Our virtual environment VETAF is used to visualize these agent functions.

3. Virtual Emergency Task Force (VETAF)


The VETAF application embodies the idea of a fast exchange of knowledge and manpower, supported by advanced computer technology, to provide help wherever it is needed. In the case of an emergency, a group of experts located throughout the world meets in a virtual environment that supports their communication and their work.

Although many other scenarios are possible (e.g., forest fire, technical support, entertainment, social welfare), we decided to realize the scenario of an emergency on a space station to show how today's advanced telecommunication techniques will become standard in the coming years.

In the near future, a space station with many inhabitants could be established in orbit or elsewhere in our solar system to explore the universe, do research, and manufacture goods in microgravity. In comparison to a small spaceship, such a large static station faces many hazards because it is very difficult to move and to react to unexpected events. Since it is impossible to shield the space station from all hazards, a good mechanism for handling emergencies immediately is imperative.

Application description

The application setting is a virtual room with a 3-D model of the object of interest suspended in the middle (Figure 2). The 3-D model has different levels of detail and can be edited or replaced by other objects (supported formats include VRML, 3DS, NFF). It is possible to map additional data into the model. The virtual projection walls are used for projections of static data (diagrams, blueprints, video records), integration of standard Microsoft Windows applications (PowerPoint, Word), or as distributed whiteboards. In this particular scenario, blueprints, diagrams, and video recordings from other participants in the virtual environment or of rescue teams on location are projected on the walls.

Users receive a stereoscopic impression of the virtual environment via shutter-glasses and large-scale projections or via a head-mounted display. The level of immersion depends on the application. While shutter-glasses and large-scale projections fully satisfy the requirements for standard office and conference applications, full immersive virtual reality applications (e.g., simulators or walks through virtual buildings) require head-mounted displays.

All participants can talk to each other with full duplex spatial sound. A chat capability makes it possible to connect to other participants over low-bandwidth networks. The VETAF system uses constrained moving modes to make navigation through the scene easier. Participants use a 3-D mouse to move through the environment and to select between multiple moving modes. To select and move an object in the virtual environment, participants can use the mouse to guide their distributed pointer, a light ray, to the target.

Figure 2. VETAF application

Technical realization

The VETAF environment consists of three components (Figure 3): the main VETAF application including the control and status platform, multicast tools for audio and video communication [7], and an agent platform.

The main VETAF application is implemented with WorldToolKit R6 [8], a portable, cross-platform software development system for building high-performance, real-time, integrated 3-D applications. This part of the VETAF application renders the virtual environment and handles the input/output devices (e.g., 3-D SpaceMouse, monitor, shutter-glasses). Every transformation or movement of entities in the environment is sent as a protocol-data-unit (PDU) packet to every participant. The communication between the participants is based on the IP multicast protocol for update messages and on reliable CORBA for status and control messages.
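The PDU-based update mechanism can be sketched as follows. The field layout shown here (an entity id plus position and orientation) is an assumption for illustration; the actual VETAF wire format is not specified in the paper.

```python
import socket
import struct

# Hypothetical PDU layout: network byte order, uint32 entity id,
# then x, y, z position and heading, pitch, roll orientation.
# The real VETAF packet format is not documented in the paper.
PDU_FORMAT = "!I6f"

def pack_pdu(entity_id, pos, orient):
    """Serialize one transformation of an entity into a PDU packet."""
    return struct.pack(PDU_FORMAT, entity_id, *pos, *orient)

def unpack_pdu(data):
    """Decode a PDU packet back into (id, position, orientation)."""
    entity_id, x, y, z, h, p, r = struct.unpack(PDU_FORMAT, data)
    return entity_id, (x, y, z), (h, p, r)

def send_update(sock, group, port, pdu):
    # Each movement is sent once to the multicast group address;
    # every participant subscribed to the group receives it.
    sock.sendto(pdu, (group, port))

pdu = pack_pdu(42, (1.0, 2.0, 3.0), (0.0, 90.0, 0.0))
eid, pos, orient = unpack_pdu(pdu)
```

Sending frequent, loss-tolerant state updates over unreliable multicast while reserving reliable CORBA calls for control messages matches the two traffic classes the paper describes.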

Figure 3. VETAF Architecture

For video communication in the virtual environment we use the standard MBone tools Vic/VicTex [9]. Vic is a multicast video conferencing application developed by the Network Research Group at the Lawrence Berkeley National Laboratory in collaboration with the University of California, Berkeley. Vic takes a signal from the video device and sends it to a designated multicast address. VicTex is a modification of the standard Vic application developed by Fraunhofer CRCG in Providence. Instead of viewing the video data in a window, VicTex writes the video stream into a shared memory area from where the VETAF application can load and use it as a video-texture in the virtual environment.

The spatial sound audio server [10] was developed by Fraunhofer IGD and is based on a local client-server architecture. Different audio sources (e.g., live audio streams, media players) can be attached to objects in the virtual scene. These audio sources are connected as clients to the audio server, which renders the audio signals depending on the position and orientation of each source in the virtual scene. The VETAF application transmits the required position and orientation data via a socket connection to the audio server. The audio server consists of two parts: WauforVetaf and the RSX Sound Server [11]. WauforVetaf manages the audio and network input and output as well as the coding and decoding. It also implements several methods to protect the audio stream against losses, which is especially important for low-bandwidth connections. All the audio streams received by WauforVetaf are combined with their positions and rendered by Intel's RSX (Figure 4).
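The position and orientation updates sent to the audio server over the socket could look like the following. The line-oriented text format, message keyword, and source identifier are purely illustrative assumptions; the real VETAF socket protocol is not documented in the paper.

```python
def position_update(source_id, position, orientation):
    """Serialize a position/orientation update for the audio server.

    Hypothetical line-oriented text format (the actual VETAF socket
    protocol is not described in the paper).
    """
    x, y, z = position
    h, p, r = orientation
    return f"POS {source_id} {x:.3f} {y:.3f} {z:.3f} {h:.3f} {p:.3f} {r:.3f}\n"

def parse_update(line):
    """Decode an update line back into (source_id, position, orientation)."""
    _, sid, *nums = line.split()
    x, y, z, h, p, r = map(float, nums)
    return sid, (x, y, z), (h, p, r)

msg = position_update("avatar7", (1.0, 0.0, -2.5), (180.0, 0.0, 0.0))
sid, pos, orient = parse_update(msg)
```

The audio server only needs these per-source poses to place each stream in 3-D space before mixing; the audio samples themselves travel separately through WauforVetaf.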

Figure 4. Audio rendering system (WauforVetaf)

Software agents are of great importance for use in real-world domains. Their intelligent behavior enables them to automate and delegate cognitive tasks that were not feasible for machines in the past. Agents support each user individually during a session. They act as a representative of their "employer" in the task they have been assigned.

One major issue of agent technology is cooperation between agents. Independent, heterogeneous agents need a common agent communication language (ACL) in order to communicate. ACL messages are well defined and can be processed without necessarily knowing about the embedded object of a message (content). The most common ACL is the Knowledge Query and Manipulation Language (KQML) developed by the ARPA-supported Knowledge Sharing Effort [12].

For agent communication we use "A Simple Agent Platform" (ASAP) developed by the Fraunhofer IGD [13]. ASAP is based on Java and KQML and provides agent templates to enable the programmer to develop software agents easily. Agents use the capabilities of ASAP during runtime: a facilitator, being part of the agent society itself, offers information about services of other agents. Different conditioners inform agents about system-dependent events or changes in the state of the computer. Integrated networking allows communication over different kinds of networks.
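A KQML message of the kind exchanged through ASAP can be sketched as a simple string builder. The parameter keywords (:sender, :receiver, :language, :content) follow common KQML usage; the agent names and the content expression are hypothetical, not the actual ASAP identifiers.

```python
def kqml(performative, **params):
    """Build a KQML message string from a performative and keyword
    parameters. Parameter names mirror standard KQML fields; all
    concrete values below are invented for illustration."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

# A deputy agent asking the facilitator which agent guards a room:
msg = kqml("ask-one",
           sender="VRDeputyAgent",
           receiver="facilitator",
           language="KQML",
           content="(guards ?agent meeting-room)")
```

Because the performative and field structure are standardized, the facilitator can route and process such a message without understanding the embedded content expression, which is exactly the decoupling KQML provides.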

In VETAF, two agents for access control to rooms in virtual collaborative environments were developed: the VRDeputyAgent and the VRDoorAgent. The VRDeputyAgent is responsible for the user's actions. It collects information about the user (UserData) and provides the graphical user interface (GUI). At the request of the door agent, it transfers all relevant data and fulfills the preconditions for entering the desired room.

The general sequence is shown in Figure 5. An avatar, the graphical representation of a user, walks up to a closed door. The door agent (VRDoorAgent) interprets this as a wish to enter and addresses the avatar. The VRDoorAgent then checks its database for information about the user; if the information is missing, the VRDeputyAgent is asked for and provides the required data. Once the required information is present and all entry requirements (e.g., paying a certain amount of money as an entry fee) are fulfilled, the door agent opens the way and releases the avatar.
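The access-control handshake can be condensed into the following sketch. Class and method names are invented, and the direct method calls stand in for the KQML exchanges the real ASAP agents would perform.

```python
class VRDoorAgentSketch:
    """Guards a room; asks the user's deputy agent for missing entry
    requirements. (Illustrative; names are not the real ASAP API.)"""

    def __init__(self, required_fee):
        self.required_fee = required_fee
        self.known_users = {}           # user -> fee already paid

    def request_entry(self, user, deputy):
        paid = self.known_users.get(user, 0)
        if paid < self.required_fee:
            # Missing information: delegate to the deputy agent,
            # which fulfils the requirement on the user's behalf.
            paid = deputy.provide_payment(user, self.required_fee)
            self.known_users[user] = paid
        return paid >= self.required_fee

class VRDeputyAgentSketch:
    def __init__(self, budget):
        self.budget = budget

    def provide_payment(self, user, amount):
        if self.budget >= amount:       # the owner's final "yes" decision
            self.budget -= amount       # is assumed to have been given
            return amount
        return 0

door = VRDoorAgentSketch(required_fee=5)
deputy = VRDeputyAgentSketch(budget=20)
admitted = door.request_entry("alice", deputy)
```

On a repeat visit the door agent finds the user in its database and admits the avatar without involving the deputy again.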

Figure 5. Agent communication

4. Other possible scenarios

The core application of VETAF can be used in various other application areas. Our department is working on new ideas, some of which are described in this chapter.

Virtual showroom

The virtual showroom (Figure 6) is mainly intended to demonstrate how virtual environments can be used in advertising and presentation.

Figure 6. Virtual showroom

Virtual cooperative workplace

A group of architects, mechanics, and technicians sits in a virtual room in front of their virtual worktables. Each member of the group is represented by an avatar similar to the avatars in the VETAF scenario. On top of the virtual worktables are virtual computer models (Figure 7). These models represent the real local desktop computer of each participant, and the virtual monitors show the output of the corresponding real monitors. The virtual room is like a huge office where every workplace can be seen from anywhere. In the room stands a big table on which models of the object of interest are placed.

If the participants in the virtual scene are working with their normal desktop environment (e.g., MS-Windows and MS-Office), their avatars stand in front of their virtual worktables in a noncommunicative state. If they want to communicate, the avatars have to stand up from their virtual workplaces and walk around in the virtual scene. On a wall next to the table there is a projection of a shared application (e.g., a Windows CAD program or a standard whiteboard). Another wall shows a video recording from a mobile participant. Every participant in the scene has an audio connection with 3-D sound for communication; that is, the perception of sound sources depends on the position and orientation of the participants in the virtual scene. Static sound sources in the scene (e.g., audio/video textures on the walls) also use 3-D sound. A mobile participant can be connected to the others via radio transmission for audio and video data. In the case of a low-bandwidth connection, the participants can communicate via a multicast chat tool that is also projected on a wall in the scene.

Figure 7. Virtual Cooperative Workplace

5. Future work

Current research at Fraunhofer IGD and Fraunhofer CRCG includes work on a general architecture for MVE applications. This architecture is intended to support current applications such as VETAF as well as research in distributed simulation, a generic approach to agents in MVEs, and networking for large-scale virtual environments.

The architecture we are working on is based on the observation that different MVE components have very different networking and synchronization requirements. Past attempts at mapping these disparate requirements to a single networking protocol have failed to produce solutions scalable to wide-area, general purpose networks. However, protocols currently exist to implement most, if not all, of these components individually. We have identified five classes of networked data which must be supported in order to provide a general-purpose MVE architecture.

Each component of a virtual collaborative environment (VCE) can participate in any set of these networks. The networks do not form a protocol stack, but rather parallel communication channels with different characteristics.

The different types of network present a shared data abstraction to the components, which is accessed through a well-defined application programming interface (API). Communication between components is permitted only insofar as multiple components can access the same objects in the shared data. The use of an API for inter-component communication allows local as well as remote communication in a transparent manner. When multiple components are operating on the same node in the physical network, they can communicate via shared memory rather than network packets. In the trivial case, a monolithic high-performance virtual environment application running on a single machine can be assembled from reusable, networkable components. In this case, only shared-memory communication takes place, providing a clean separation between components without generating network traffic.

The API presented by each network is based on asynchronous sampling of the states of the objects that occupy the given network. Since components sample the various networks asynchronously, they can operate in a decoupled manner. Decoupled operation implies that components can operate independently of one another. This is an important characteristic of any VE design, because it allows components to operate at a constant frame rate, independently of the current update rate of simulation components. The well-defined inter-component communication mechanism also allows different components to run on different processors (when available). In such a case, parallelization and pipelining of rendering, simulation, agent operation, and device handling are possible.
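The asynchronous-sampling idea behind the proposed architecture can be sketched as follows. The API names here are invented; the Fraunhofer architecture described above is not specified at this level of detail.

```python
class SharedState:
    """Shared-data abstraction that components sample asynchronously
    (a minimal sketch; method names are hypothetical, not from the
    proposed Fraunhofer MVE architecture)."""

    def __init__(self):
        self._objects = {}

    def write(self, name, value):
        # A simulation component publishes the latest state of an object.
        self._objects[name] = value

    def sample(self, name, default=None):
        # Readers sample whatever state is current at their own rate;
        # they never block on the writer, so components stay decoupled
        # and a renderer can hold a constant frame rate regardless of
        # the simulation's update rate.
        return self._objects.get(name, default)

state = SharedState()
state.write("avatar.pos", (0.0, 1.0, 2.0))

# A renderer samples the current state at frame time:
frame_pos = state.sample("avatar.pos")
missing = state.sample("avatar.vel", default=(0.0, 0.0, 0.0))
```

Because readers take whatever value is current rather than waiting for updates, the same access pattern works whether the state lives in local shared memory or is replicated over one of the parallel network channels.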

6. Conclusion

IGD and CRCG have already successfully demonstrated a distributed virtual environment in which participants can collaborate and interact with live voice, video, text, modeling, and imagery. The VETAF application has proven to be an effective environment for collaboration among research partners even when participants are separated by great distances. During the G7 meeting in Bonn in 1997 [14], delegates were able to witness and participate in a demonstration of this application between Bonn and Darmstadt in Germany and Providence in the US. The demonstration was also shown at the ACM '97 Expo in San Jose, California.

3-D graphics for global collaborative work are already being used, but these new technologies still present many challenges. Global telecommunication for businesses is currently limited to the transmission of 2-D image data and high-quality audio signals. However, we believe that the greatest impediment to 3-D communication is a lack of understanding of the requirements and benefits of this advanced technology. Therefore, the computer graphics community must cooperate with developers and users to establish and help introduce these progressive forms of telecommunication.

7. References

  1. J.L. Encarnacao, et al. "TRADE: A Transatlantic Research and Development Environment," International Conference on Virtual Systems and MultiMedia '97 (VSMM '97).
  2. Brutzman, D. Graphics Internetworking: Bottlenecks and Breakthroughs. Digital Illusions, C. Dodsworth, ed., Addison Wesley, Reading, Mass., 1996.
  3. Avatar Overview
  4. Benford, Steve; et al. "Networked Virtual Reality and Cooperative Work." Presence, Vol. 4, No.4, 1995, pp. 364-386, MIT Press.
  5. Nwana, H.S. Software Agents: An Overview. Cambridge University Press. Cambridge, UK 1996.
  6. Wooldridge, M.J., and Jennings, N.R. Intelligent Agents: Theory and Practice. The Knowledge Engineering Review 10 (2). 1995.
  7. Multicast Overview
  8. World Tool Kit R6
  9. McCanne, S., and Jacobson, V. vic: A Flexible Framework for Packet Video. ACM, Multimedia '95, November 1995, San Francisco, CA, pp. 511-522.
  10. Spatial Audio Server
  11. Intel Realistic Sound Experience (3D RSX)
  12. Finin, T., Fritzson, R., McKay, D., and McEntire, R. KQML as an Agent Communication Language. ACM Press, November 1994.
  13. Spriestersbach, A., and Peters, R. ASAP -- A Simple Agent Platform. Technical Report. ICSI, Berkeley, CA; Fraunhofer-IGD, Darmstadt, Germany. 1997.
  14. G7 Pilot Project "Global Marketplace for SMEs," 7-9 April 1997, Bonn, Germany, Results of the First Annual Conference.
