Augmented Books, Knowledge, and Culture

Kim H. VELTMAN <k.veltman@mmi.unimaas.nl>
Maastricht McLuhan Institute
Netherlands

1. Introduction

Some inventions are very important. Things that change the world are usually more than an isolated invention. They bring about systemic changes in society. The invention of printing around the year 805 in Korea was very important. [1] But it was not until Gutenberg in the 1440's had the idea of using that invention for spreading knowledge that something systemic changed. With Gutenberg's approach, printing became one aspect of a new way of organizing knowledge, a new way of communicating practice. But Gutenberg went broke. It took over a century and a half for the vision to manifest itself and yet the world changed.

The invention of the computer has been compared to the invention of printing. This is both true and misleading. It is true because computers entail a systemic change in society, which will change the world even more than the advent of printing. It is misleading because the computer revolution is not about one invention. This paper explores some dimensions of these claims.

Computers are an important invention. Computers connected through the Internet are a very important invention. The systemic changes they are bringing with them are changing the world. One of these changes is miniaturization. A second is the development of mobile equipment. Wireless connections mean (a) that we shall soon be able to access equipment literally anytime, anywhere, and, partly as a consequence thereof, (b) that most of the neat distinctions between different kinds of devices are disappearing. A third and a fourth innovation are Geographic Information Systems (GISs) and Global Positioning Systems (GPSs). A fifth invention is the concept of agents, which means, among other things, that routine tasks of librarians and scholars can increasingly be relegated to software. A sixth change is that optical character recognition (OCR) is reaching a mature state. This means that digital equivalents of the Xerox machine are emerging, with which we can scan anything we wish. It also means, as we shall see, that hypertext takes on a new meaning.

Connected computers in combination with these new technologies are bringing about a systemic change. Most persons expect that this should take the form of some unexpected killer application, which will invariably remove or replace all competition. In our view something more subtle is happening. The systemic change will inspire us to use what we already have in new ways. We shall begin with a brief survey of these new technologies, which at first sight have no connection with one another. This will lead to a review of different meanings of hypertext before exploring how augmented books and augmented knowledge will change our conceptions of knowledge.

2. New technologies

A number of new technologies, which began as independent inventions, are coming together through the phenomenon typically called convergence. These include electronic books, mobile devices, GISs, GPSs, agents, and advances in OCR.

Miniaturization and electronic books

Computers were traditionally mainframes, which took up entire rooms. The rapid development of miniaturization brought computers to the desktop, then to portables, to the laptop, the notebook, and the notepad, and more recently to handheld devices such as the Palm Pilot. Among these new handheld devices is the electronic book. [2] Books are heavy. Important books such as the Bible often weighed ten pounds or more. The complete Oxford English Dictionary in twenty volumes is so heavy that no ordinary individual can carry all of it around. Electronic versions thereof in electronic books are portable.

The process of miniaturization is continuing. Today's transistors measure about 180 nanometers. In November 1999, Lucent's Bell Labs announced "the world's smallest" transistor gates measuring "just 50 nanometers, which is about 2,000 times smaller than a human hair," the company said. Future refinements should shrink transistor gates to less than 30 nanometers. [3]

Within two decades, we are told, an object the size of a fly will provide sufficient space for a computer that is 100 million times more powerful than a contemporary Pentium computer. [4] Technically speaking this means that the electronic equivalent of millions of books can potentially be carried around easily. Wearable computing, a trendy fantasy even five years ago, will probably be an everyday reality within a decade, thanks partly to developments in a new field called molecular electronics or molectronics. [5] A project at the University of Notre Dame, for instance, has as its goal the production of a "logic device less than 50 nm on a side, with associated input and output structures." As Dr. Marya Lieberman explains:

Quantum-dot cellular automata (QCA) are a completely new architecture for computation. Information is transmitted between QCA cells through Coulomb interactions; depending on the arrangement of the cells, bits can be transmitted, inverted, or processed with logic operations (AND, OR). Information processing does not require the flow of current, so QCA has the potential for extremely low power dissipation.

Switching the cell depends not on voltage gating of a current (as transistors do) but on this tunneling process; thus, the device performance is expected to improve the smaller the QCA cells can be made [sic], with the ultimate size limit being molecular QCA cells.

Classical inorganic coordination chemistry is our main tool for the synthetic work. Both ultra-high vacuum scanning tunneling microscopy and X-ray photoelectron spectroscopy will be employed to characterize model compounds and molecular QCA cells on the surface of silicon wafers. [6]

In 1966, the film Fantastic Voyage offered a brilliant example of science fiction in its imaginary trip through the bloodstream of a human being. Given the trends in miniaturization this could become reality within two decades. What possibilities does the future hold for the science fiction of today in novels such as Michael Crichton's Timeline? [7]

Ubiquity and mobiles

Connected with such developments is the rise of ubiquity and mobiles. The late Mark Weiser at Xerox PARC developed the idea of ubiquitous computing: [8] that computing was not just about a gadget on a desk but could potentially involve miniature gadgets spread everywhere in our work and home environments, a vision for which he decided wireless communications were necessary. In parallel, Leonard Kleinrock developed the idea of nomadic computing for the military. [9]

In 1998 a number of new consortia [10] moved discussions of wireless technologies from futuristic scenarios into the front line. This has had two basic consequences. First, there are new prospects of interoperability among a whole range of electronic devices and appliances, beginning in the home and soon spreading elsewhere, especially after the introduction of the Universal Mobile Telecommunications System (UMTS) in 2002, when it will effectively be possible to link computers via satellite from anywhere at any time.

Second, there is a so-called convergence of devices whereby the traditional notion of devices having a specific function is fast disappearing. Traditionally, for example, a telephone was for making telephone calls. A camera was for making photographs. A fax machine was for faxes and a computer was for e-mail, word processing, and Internet connections.

In the past two years, new mobile devices have appeared which perform all of these functions in a single instrument. Some of these instruments have grown out of hand-held Personal Digital Assistants (PDAs) such as the Palm VII. [11] A number of these devices resemble telephones with miniature screen displays. Perhaps the most striking examples are those of Nokia, [12] a company which is consciously working towards what they term "personal information bubbles." [13] Other important players in the field include Ericsson, [14] Kenwood [15] and Motorola [16] (figure 1). Qualcomm has a telephone which incorporates Palm Pilot functionality directly. Some players believe that there may well be a new wave of diversification of devices with one crucial difference: such future telephones or PDAs will be completely interoperable.


Figure 1. Examples of new technologies from Kenwood and Nokia which combine wireless cell phones with Personal Digital Assistants (PDAs) offering fax, e-mail, and Internet capabilities.

Meanwhile, another trend being explored by companies such as Telecom Italia is the idea of virtual interfaces. In this approach an electronic notepad-like device would have a simple screen which could be reconfigured to resemble the interface for a telephone, a fax machine, a computer terminal, etc. Hence a change of interface permits a new functionality.

With respect to our story these developments are of great significance because they mean that access to the Internet, which today still depends largely on a fixed connection, will within five years be possible in many ways at any time from anywhere on earth. The linking of computers to an orbiting satellite will no longer be limited to the equivalents of James Bond. It will become an everyday affair. The price of such devices also continues to decrease. A new Palm OS device, the Handspring Visor, sells for only $179. [17] Once embedded into everyday devices such as telephones, it is quite feasible that simplified versions of computers could be made available for as little as $25. Persons such as John Gage [18] at the frontiers of the field assure us that the actual cost could well sink to less than $10, at which point the Internet Society's motto of "Internet for all" will become more than rhetoric. Universal accessibility, now thinkable only in the most advanced countries, could potentially be extended to all six billion persons of the world's population.

Geographic information systems and global positioning systems

GISs have been developing steadily since the 1970's. They allow a precise linking between maps and enormous amounts of other information, which can be arranged in layers. Hence there can be one layer indicating where electrical lines exist. A second layer can trace the position of all the sewers. A third can show the position of all the railway lines on a map. A city such as Toronto has over sixty such layers in its GIS. In the past decade there has been an increasing trend to link this information with Facilities Management Systems (FMSs), such that one can also trace where all the telephones, computer terminals, printers, and fax machines are in a skyscraper of offices. Such innovations introduce new interface challenges in moving from two-dimensional to three-dimensional information spaces -- a field where the augmented reality work of pioneers such as Steve Feiner is particularly interesting. [19]

In parallel with the rise of GIS has been a development of GPSs. These were initially developed in the military and continue to be developed in this context, where they have achieved an accuracy of less than a centimeter. Civilian versions are much less accurate but can readily determine one's position anywhere on earth, via a satellite link, to within a hundred meters. In cases where there are clearly defined roads and landmarks this accuracy can be increased to within a few feet. Hence the recent rise of navigation aids in automobiles which show us how to reach a destination via a map. If all this sounds futuristic, a combination of cell phone, PDA, and GPS is already commercially available. [20] As we shall note below, a combination of GIS and GPS has considerable consequences for our interests.

Agents

The introduction of object-oriented programming set out from a notion that individual bits of code could be treated as objects, and thus made re-usable. The amount of code potentially contained within such an object soon grew to include simple characteristics, instructions, routines, and the like. [21] From this emerged the idea that individual agents, each linked with specific instructions, could perform simple routines autonomously. This evolved into a more complex vision of a multi-agent system, and subsequently agent societies, whereby hundreds or even thousands of agents cooperate among themselves to perform tasks, which are cumulatively much more complex.

These developments are potentially of the greatest significance for the world of museums, libraries, and archives. Such institutions typically have great numbers of catalogues. The Vatican Library, for instance, had over eighty catalogues for its secret archive alone. As a result, every time a person wished to search for a single title he or she potentially had to consult eighty books. Assuming that all the catalogues are in electronic form, agents can potentially collate all these names and titles such that a future user will need to consult only a single catalogue in searching for a title. [22] More wide-ranging uses for agents will be considered below.
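
By way of illustration, the following is a minimal sketch, in Python, of how such a low-level agent might collate several electronic catalogues into a single searchable union catalogue. The catalogue names, titles, and shelfmarks are hypothetical, and the crude normalization stands in for the proper authority files a real system would require.

    from collections import defaultdict

    def normalize(title):
        """Crude normalization so that variant spellings collate together."""
        return " ".join(title.lower().split())

    def build_union_catalogue(catalogues):
        """Map each normalized title to every (catalogue, shelfmark) holding it."""
        union = defaultdict(list)
        for catalogue_name, entries in catalogues.items():
            for title, shelfmark in entries.items():
                union[normalize(title)].append((catalogue_name, shelfmark))
        return union

    # Three of the (hypothetical) eighty catalogues.
    catalogues = {
        "Catalogue A": {"Underweysung der Messung": "A.12"},
        "Catalogue B": {"Underweysung der  messung": "B.407"},
        "Catalogue C": {"De pictura": "C.55"},
    }

    union = build_union_catalogue(catalogues)
    print(union[normalize("Underweysung der Messung")])
    # [('Catalogue A', 'A.12'), ('Catalogue B', 'B.407')]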

Optical character recognition

The quest to turn printed documents into digital form was outlined clearly in Buckminster Fuller's Education Automation (1962), where he proposed a universally accessible digital library that would enable anyone, anywhere to study, learn, and grow. He believed that this intellectual freedom of the masses would bring humanity's best ideas to reality. This vision led authors such as Tony McKinley to explore OCR devices over the past fifteen years. [23] Throughout the eighties and early nineties OCR devices were fairly accurate but never fully reliable. In the past few years their accuracy has improved to such an extent that they are nearly perfect.

One of the unexpected consequences of such advances has been the advent of new technologies such as the Quicktionary. [24] Using this device, which resembles a fat pen with a small viewing screen, a user can touch any word; the device scans the word and immediately provides a definition. The present limitation of the Quicktionary is that the number of terms is restricted by the internal memory of the pen-like device, which held around 285,000 definitions last year and presently holds 430,000.

A combination of the innovations outlined above introduces considerably more dramatic possibilities. Given wireless connections, a pen-like device can scan in a word, which is conveyed to an electronic book or notepad. As the term enters the electronic book the user is offered options resembling items one to five in figure 4. Let us say the user chooses definitions. They are then offered a choice of standard dictionaries such as the Oxford English Dictionary or Webster's. Their choice of dictionary is then coupled with the word identified through the pen scanner and relayed via the Internet to a Virtual Reference Room, and the corresponding definition is then sent back to the user's electronic book, notepad, or Personal Intelligent Assistant (PIA; by Philips). Whereas the Quicktionary was limited (a) by the memory of the pen device and (b) by screen space to providing a single word or very short definition, our solution will provide as full a definition as desired. This same principle applies to all kinds of reference works including encyclopaedias, book catalogues, and specialized bibliographies.
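
The round trip just described can be sketched in a few lines of Python. Everything specific here is an assumption: the service URL, the query parameters, and the JSON reply merely stand in for whatever protocol an actual virtual reference room might expose.

    import json
    from urllib import parse, request

    # Hypothetical endpoint of a virtual reference room; not a real service.
    REFERENCE_ROOM = "https://reference-room.example.org/define"

    def lookup(word, dictionary="Oxford English Dictionary"):
        """Relay a scanned word and a chosen dictionary; return the definition."""
        query = parse.urlencode({"word": word, "dictionary": dictionary})
        with request.urlopen(REFERENCE_ROOM + "?" + query) as response:
            return json.load(response)["definition"]

    # On the notepad: the pen relays a scanned word, the user picks a source.
    print(lookup("palimpsest", dictionary="Webster's"))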

This can be complemented by a series of elementary add-ons. With voice recognition software, users will have the alternative of making their choices by voice, an obvious convenience except where a user is working in a foreign language whose pronunciation patterns are unfamiliar.

As miniaturization continues apace, an increasing number of standard reference works can simply be downloaded onto the electronic book or PIA. Such regularly used works can thus come under an annual subscription scheme or be part of basic services to a citizen in the way that use of public libraries has traditionally been a free right. Meanwhile, less frequently used works would continue to be used via an Internet connection to on-line virtual reference rooms. Before considering further possibilities a short excursus on hypertext will be helpful.

3. Hypertext

Hypertext is intimately connected with the rise of the Internet as we know it today. The idea was implicitly described by Vannevar Bush in his pioneering article, "As we may think" (1945), [25] and developed by Douglas Engelbart, [26] who coined the term "text link" and linked the concepts of hypertext and multimedia to create the term "hypermedia" and later "open hyperdocument systems." The term "hypertext" [27] was coined by Ted Nelson, who defined it in Literary Machines as "non-sequential writing." [28] Nelson also drew attention to what he termed the framing problem of ordinary hypertext and in his Xanadu system accordingly called for a universal hypermedia environment. As to the origins of the concept Nelson claimed: "The hypertext is a fundamental document of Jewish religion and culture and the Talmudic scholar is one who knows many of its pathways." [29]

In Nelson's vision everything could be linked with everything, though he acknowledged that this could bring problems. On the positive side, it introduces enormous freedom because everything can potentially be linked. On the negative side, it creates a complete spaghetti effect qua connections (figure 2). Everything leads to everything else. One cannot see the big picture or the frameworks for understanding due to the unbelievable complexity of links, which Nelson terms the intertwingledness of things. Particularly attractive in Nelson's vision is the potential for all users to become authors, something which the World Wide Web as subsequently developed by Tim Berners-Lee has effectively made into a reality.



Figure 2. Two examples of the spaghetti effect when everything is connected with everything else, from E. Noik. [30]

Attractive as all this seems, it also introduces some unexpected problems. A good author must be a good organizer of knowledge. Everyone becoming an author does not yet mean that everyone is necessarily a good organizer of knowledge. In practice many individuals with no experience of organizing knowledge are now presenting their materials on the web. This is a major reason why websites and the Internet as a whole pose such problems qua navigation. [31]

Fortunately these problems of a disorganized web are so universally recognized that a number of possible improvements are being explored. Robert Horn has provided a useful brief history of the Internet with respect to hypertext approaches, which is summarized in figure 3.

Date Project Individual
1945 Invention of concept in "As we may think" Vannevar Bush
1962-75 First operational hypertext: Hypermedia Douglas Engelbart
1965 Coined term "hypertext" Ted Nelson
1968 First university instruction Van Dam and Brown
1972 Menu interfaces Zog Group, CMU [32]
1976 Spatial Dataland Negroponte and Bolt
1986 Hypertext for PC and Macintosh Brown and Guide
1987 Vision of the Knowledge Navigator Sculley
1987 First Commercial Hypertext "Hit" Atkinson

Figure 3. Brief history of hypertext according to Robert Horn (1989)

Robert Horn has also developed the idea of information mapping. [33] Horn outlined a method for navigating through structured hypertrails, which he divided into nine kinds: prerequisite, classification, chronological, geographical, project, structural, decision, definition, and example. This offered a first helpful glimpse of a more systematic approach to the potentially infinite connections, a theme to which we shall return below (sections 4-7).

In Horn's hypertrails the chronological approach is one of nine alternatives. In our view the chronological dimension applies potentially to every hypertrail: i.e., maps have a history, structures have a history, definitions have a history as etymology, etc. Similarly the geographical dimension can apply to all the hypertrails. Structures in China are frequently different from structures in the United States. In our view, it is also useful to identify different levels of hypertrails: those involving single terms, as in classifications; and those involving a few terms, as in definitions or examples, which can be seen as a branch of definitions (cf. figure 4).

At WWW7 (Brisbane, 1998), Tim Berners-Lee outlined his vision of a global reasoning web, a theme which he developed at WWW8 (Toronto, 1999), where he spoke of the development of a semantic web. The web, he noted, had begun as a method for individuals to communicate with each other. It is rapidly becoming also an instrument whereby machines can "communicate" with each other without human intervention. For this to become a reality requires separating rhyme from reason: separating those aspects which are subjective expressions of individuals (poetry, theatre, literature) from objective logical claims (logical principles, mathematical formulae, scientific laws, etc.). How could this possibly be achieved? Everyone putting things on the web could effectively be invited to identify the truth standard of their materials. Those making false claims would then be subject to legal challenges. Those wishing to avoid stating their status could by their omission implicitly be suggesting that their claims are not as transparent as they would have us believe.

This vision of a global semantic web is a noble goal and has our full support. Essentially it addresses the challenge of how we deal with new knowledge as it is put on the web. Our concern lies in combining the enormous amounts of enduring knowledge in libraries, museums, and archives with this vision. Again a brief excursus on every-word hypertext is needed.

Every-word hypertext

Jeffrey Harrow, in a recent discussion of e-paper in his electronic bulletin on The Rapidly Changing Face of Computing (RCFoC), drew attention to a new approach to hypertext:

For example, imagine if you could do a special "tap" on a word displayed on your Epaper and immediately be whisked to relevant information about it. "No big deal," you're probably thinking, "that's just another hyperlink, which already drives the Web." But in this case, I'm talking about tapping on words that have NOT been previously hyperlinked. For example, I just ALT-clicked on the word "Cambridge" a few paragraphs above (OK, I had to click, but I'm using a PC, not a book of Epaper) and a window popped up telling me that it was settled in 1630, its first name was New Towne, and it has a population of about 96,000. Where did this come from?

Brought to our attention by RCFoC reader Jack Gormley, a new free service on the Web called GuruNet [34] makes a stab at turning EVERY word on your screen, not just those in a Web browser, into a "live" link to further information. With their software installed, ALT-clicking on any word quickly evokes a response from their server, which has licensed numerous reference works to provide more focused, deeper insight than you might get from a search engine.

I've only just begun to explore GuruNet, and it is in beta, but it's an interesting concept. And hey, while it didn't directly know what to do with "RCFoC," when it drew a blank it offered to feed the mystery word to a search engine, which in this case came back with 487 references. Not too bad.

Now try THAT with an Industrial Age paper book -- which gives us an idea of why the day might really come when we can't imagine a "non-active" book. [35]

4. Augmented books

The approach outlined above by Harrow takes any given term, sends it to an Internet search engine, and finds what corresponding sites are available. Our notion of augmented books uses this every-word hypertext in combination with the new technologies outlined above as a point of departure to arrive at new possibilities.

A great advantage of a traditional book is that it makes handily accessible a certain amount of knowledge within its two covers. A limitation of a book is that the rules for organizing this knowledge will frequently vary from one book to another: namely, conventions for names of authors, subjects, or places. For instance, one book refers to Eric Blair, another refers to George Orwell and yet both are referring to the same individual. The rise of electronic library catalogues and more recently digital library projects has led to increasing attention to authority lists qua authors, names, subjects, places, etc. in library catalogues. This tendency towards standardization needs to be extended throughout the entire reference section of libraries such that the names, subjects, and places in classification systems, dictionaries, encyclopaedias, national book catalogues, abstracts, reviews, etc. are all correlated.

The librarian, information scientist, or classification specialist may agree that this is much to be desired, but will almost invariably dismiss the task as impossible because of difficulties in correlating terms. How can one be sure, they will ask, that one can know precisely what the equivalent for a given word (caption, descriptor, term) in Dewey should or could be in the Library of Congress or some other classification system? Abstractly asked, the question is daunting and sometimes near impossible to answer.

Fortunately a more empirical and pragmatic solution is possible. Instead of musing about possible equivalents, one can study what equivalents practitioners have established. Major books have been classed under most of the major classification schemes at least once. So we could start with a book such as the Bible or Dürer's Instruction in Measurement (Underweysung der Messung), examine under what words it is classed in a Dewey library, under what descriptors it is classed in a Library of Congress library, and so on. [36] At the end of this process one has one title with a series of equivalent words in a number of classification systems. This rather dreary task is sufficiently mechanical that it can quite readily be assigned to a low-level agent. After checking many millions of books against several hundreds of classification systems one will have an empirical list of equivalent terms in the main classification systems.
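
A minimal sketch of this empirical procedure, assuming toy data: the observed assignments of the same titles in different classification systems are inverted to yield candidate equivalences between terms. The titles and class numbers are illustrative only.

    from collections import defaultdict
    from itertools import combinations

    # Observed classings of the same titles in different systems (toy data).
    assignments = {
        "Underweysung der Messung": {"Dewey": "516", "LCC": "QA445"},
        "Elements of Geometry": {"Dewey": "516", "LCC": "QA445"},
    }

    # Count how often a pair of terms from different systems co-occurs on
    # the same title; frequent pairs are candidate equivalents.
    cooccurrence = defaultdict(int)
    for title, terms in assignments.items():
        for (sys_a, term_a), (sys_b, term_b) in combinations(sorted(terms.items()), 2):
            cooccurrence[(sys_a + ":" + term_a, sys_b + ":" + term_b)] += 1

    for pair, count in sorted(cooccurrence.items()):
        print(pair, "seen together on", count, "titles")
    # ('Dewey:516', 'LCC:QA445') seen together on 2 titles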

This same approach can be extended to the subject lists in national book catalogues and earlier bibliographies. Historical bibliographies such as Draud and Murhard frequently listed books under very different subjects than those to which we are accustomed. Agents using our empirical approach will simply use a given title as a point of departure and collect all the subject names under which it has been listed. This is a very long process, which would be overwhelming if it had to be done manually. However, with the help of agents and a serious amount of computing, it is fully feasible. A first result will be a new kind of meta-data whereby any book is linked with a set of subject terms found in classification systems, subject catalogues, and bibliographies, all of which are in turn correlated with a given date. One can then trace the history of the terms with which a book is associated over the course of centuries.

One can use a similar approach for authors. We all know Shakespeare. We check a standard bibliography to determine how many plays Shakespeare wrote and what their titles were. We then have agents check all the earlier bibliographies of Shakespeare in order to trace how the number of plays claimed to be by Shakespeare varies over the centuries, as do the precise names of those plays. That which applies to authors applies equally to painters, architects, engineers, etc. Traditionally in art history, there was a quest to achieve a catalogue raisonné of all the paintings of a given artist. With the help of agents we shall have a dynamic catalogue raisonné whereby we can trace that the number of paintings attributed to Rembrandt in 1700 was different from the number in 1800, in 1900, in 1950 or even today (after the Rembrandt committee has finished its research).

Agents can also play a considerable role with respect to place names. They can help us recognize that Liège (in French), Luik (in Dutch), Lüttich (in German), and Liegi (in Italian) are one and the same city. They can also help us develop a new kind of dynamic cartography. Poland today is very different from what it was seventy years ago, in 1800, 1600, or 1400. This means that one Polish city in 1400 might well have been a Russian or a German city at other periods. It also means that the number of towns in a country varies enormously over time. If I am searching for all the towns in Poland in 1400, the number will be much larger than two centuries later when the geographical expanse of Poland was much smaller.
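
One possible data structure for such records is sketched below: each place carries language-tagged variant names and a rough time series of political affiliations. The dates and polities are simplified illustrations, not authoritative history.

    from dataclasses import dataclass

    @dataclass
    class Place:
        names: dict          # language -> variant name
        affiliations: list   # (year, polity) pairs, sorted by year

        def name_in(self, language):
            return self.names[language]

        def polity_in(self, year):
            """Return the polity governing this place in a given year."""
            current = self.affiliations[0][1]
            for start, polity in self.affiliations:
                if start <= year:
                    current = polity
            return current

    liege = Place(
        names={"fr": "Liège", "nl": "Luik", "de": "Lüttich", "it": "Liegi"},
        affiliations=[(985, "Prince-Bishopric of Liège"), (1795, "France"), (1830, "Belgium")],
    )

    print(liege.name_in("de"), "->", liege.polity_in(1400))
    # Lüttich -> Prince-Bishopric of Liège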

Applied to the whole range of materials available in reference rooms of the major libraries, agents will thus lead to a new level of meta-data which associates persons, subjects, objects, and places not only with a number of variant names -- which can serve as alternative search strategies -- but also with an awareness of cultural and historical shifts in those variant names. For we shall be able to see that a book which a French person classes in one way may be classed quite differently by an Italian, a Russian, an Indian, a Chinese, or an Australian. These differences extend to all human activity, especially in the field of interpretation. Hence an Italian archaeologist's view of the Roman Forum is different from a French or a German version. Philosophers tell us that people have different world views (cf. German Weltanschauung). This new approach will make such differences visible with respect to everyday objects.

What, the impatient reader may ask, has this to do with our theme of augmented books? Everything, if we consider the enormous potential that will be unleashed if access to all this cumulative reference material is made available through the web. Some simple scenarios will prove useful by way of illustration.

Let us assume I am reading a traditional printed book. I encounter the name Leonardo da Vinci. I am aware that he was a Renaissance figure but want to know more. My pen-like object scans the name, transmits it to my notepad which offers me a number of options with a default for a basic introductory bibliographical sketch, which I choose. The notepad then issues a wireless command to an Internet connection, goes to the nearest virtual reference room, acquires the relevant bibliographical sketch and transmits this back to the monitor of my notepad along with a new set of options: Do I want to know more about Leonardo's life, the titles of his paintings, a list of his (primary) publications, or a survey of (secondary) literature concerning him or am I content with this sketch and now want to go back to the book which I was reading?

There are also many new possibilities with respect to place names. I am reading a book and come across the name Timbuctoo. I have a vague notion it is probably in Africa -- or is it one of those unlikely place names in the United States such as Kalamazoo? -- and would like to know where it is. I scan the term with my pen-like object or I simply speak the word into my notepad, which then issues me with a number of options. Do I wish to see it on a regular map, a terrain map, a political map, a population map, etc.? The notepad then conveys my choice via a wireless connection to the nearest virtual reference room on the Internet and then relays back to my notepad an appropriate map with Timbuctoo highlighted.

Or I am travelling in Turkey and have with me a Baedeker or some other travel book. I know I am within 100 miles of Uzuncaburc, a former Greek colony with an impressive temple, but am lost. Using the same basic approach I am shown where Uzuncaburc is on the map, with the difference that a GPS now determines the position of my jeep and also shows this on the map. An automobile navigation system can then compute how I get from where I am to Uzuncaburc, my destination.

At first sight all this is very much like the hypertext idea of a Ted Nelson. There is, however, a basic difference. While acknowledging that all connections are possible, our approach suggests that if I am reading a book and begin searching for names or places, certain sequences make more sense than others. It usually makes sense, for example, to determine the name of an author before asking for the titles of that author's publications, and it is useful to know the titles before choosing which of them one wishes to consult in full text.

The complexity of these sequences will, of course, vary with different users. A child will need a simpler list than a university student. A general practitioner will have other medical requirements than a cardiologist who is the world expert on the left aorta of the heart. In time, middle-level agents will be able to deduce the level of the user and adjust choices and interfaces accordingly.

We refer to our method as an augmented book because we do not foresee a simple replacement of a physical book by an electronic version. The physical book can remain, while wireless technologies give us the privilege of having all the benefits of a major reference room without requiring that we be there, or even that we leave our seat. However, augmented versions of traditional printed books represent only a first step in the revolution which concerns us here.

A next stage of possibilities arises when the physical text we are reading is entirely available in electronic form elsewhere on the web. Let us say that I am reading a physical copy of the Bible, the Koran, or the Mahabharata. I encounter a name such as Solomon and want to know more about this person. As outlined above, a mere scan by my pen-like device can relay the name to my notepad and offer me a number of options which can then be searched for in a virtual reference room and relayed back to my notepad via the Internet. Since the whole of the Bible is on-line, the same network can provide me with additional information such as how many times and in what contexts Solomon is cited in the Old Testament. These instances can in turn be checked against other historical contexts where Solomon is discussed, such that we can see on which episodes of Solomon's life the Bible focussed and which episodes it overlooked. Images of Solomon can also be found: the Judgment of Solomon with the two children, Solomon and the Queen of Sheba -- including Ghiberti's panel on the doors of the Baptistery in Florence. Hence a Biblical story can take me "directly" to Renaissance art.

A further scenario is the following. I am reading a book on the history of physics, which has no index (as is often the case particularly with French books). I come across the name Galileo and would like to know how often this name occurs in the book. The pen-like object scans the name and conveys it to my notepad, which combines it with my request, checks the on-line version of the book, and displays all the places within the book where the name Galileo is cited. This is effectively a first step towards retrospective indexing or indexing on demand.

This principle could readily be developed. From the title of the work, my handheld or desktop computer can determine that it is a book about physics. An agent then goes to the authority list of all names and culls a subset of all names of physicists. This list is then used as a master list against which to check every name mentioned in the book. At the end of this exercise the agent has a subset of this master list which represents an index of all the names of physicists within this given book. A similar approach can be applied to subjects, places, etc. In the case of more famous books which exist in various editions, of which some have indexes, agents can then compare their own lists with these existing indexes in order to arrive at more comprehensive indexing.
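
A minimal sketch of the matching step, with an invented authority subset and a three-line "book": the agent simply intersects the words of the text with the culled names and records where each occurs.

    import re

    # A toy subset of an authority list, culled for physicists.
    physicists = {"Galileo", "Newton", "Kepler", "Huygens"}

    book_text = (
        "Galileo refined the telescope.\n"
        "Newton, building on Kepler, unified mechanics.\n"
        "The trial of Galileo followed."
    )

    def retrospective_index(text, names):
        """Map each authority name to the line numbers on which it occurs."""
        index = {}
        for line_no, line in enumerate(text.splitlines(), start=1):
            for word in re.findall(r"[A-Za-z]+", line):
                if word in names:
                    index.setdefault(word, []).append(line_no)
        return index

    print(retrospective_index(book_text, physicists))
    # {'Galileo': [1, 3], 'Newton': [2], 'Kepler': [2]}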

The professional indexer and library scientist will rightly insist that such techniques are never as profound as the indexing skills of human professionals in the field. On the other hand, they will have to admit that there are never likely to be enough professional indexers retrospectively to index all the tens of millions of existing old books. In such a non-ideal world, even limited methods which produce useful results are considerably better than nothing.

A fundamental claim of professional indexers is that simply making a list of all the words in a text may produce a concordance but not a proper index. The radical form of this claim states that even the author of a text is incapable of indexing properly his or her own work. [37] What is needed, argue these indexers, is a careful analysis of the text to determine the underlying concepts or processes, which are often not directly listed in the text. Say the book is about botany and there is considerable discussion of the Papaveraceae. At other times there is discussion about poppies. Only an expert indexer who is also familiar with botanical classification will know that poppies belong to the Papaveraceae family.

Major reference rooms contain our knowledge about classification systems, thesauri, and other methods of ordering subjects and processes. If these are available digitally through a virtual reference room, other possibilities emerge. I identify the term poppy, which is relayed to the virtual reference room. The word is identified as a botanical term, an appropriate botanical classification is found, and I am shown on my notepad the relation of poppies to the Papaveraceae family. Or conversely I am reading about the Papaveraceae family and wonder what actual plants or flowers belong to this family. By a similar process the virtual reference room is able to show me that poppies are an example of this family.

Hence, I can use classification systems as an intermediary to show me more abstract concepts (through broader terms) or more concrete examples (through narrower terms) of any word encountered in any book that I am reading. This ability to move seamlessly to broader and narrower terms greatly expands both the domains at my disposal and the potential accuracy of my searches. Implicit decisions about what to pre-structure and what to search for on demand are discussed below.
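
Sketched with a toy three-level hierarchy, such broader/narrower navigation needs no more than a map from each term to its parent; the entries below are illustrative and no substitute for a real botanical thesaurus.

    # A toy hierarchy: each term points to its broader term.
    broader = {
        "poppy": "Papaveraceae",
        "Papaveraceae": "Ranunculales",
        "Ranunculales": "flowering plants",
    }

    def broader_chain(term):
        """Walk upward from a term to ever more abstract concepts."""
        chain = []
        while term in broader:
            term = broader[term]
            chain.append(term)
        return chain

    def narrower_terms(term):
        """Invert the map to find the direct narrower terms."""
        return [child for child, parent in broader.items() if parent == term]

    print(broader_chain("poppy"))          # ['Papaveraceae', 'Ranunculales', 'flowering plants']
    print(narrower_terms("Papaveraceae"))  # ['poppy']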

With a traditional book the amount I am able to glean from its pages is limited to my own knowledge and experience. With an augmented book the millions of hours of analytical sorting and organizing of knowledge represented by (virtual) reference rooms are at my fingertips. [38] The augmented book thus augments my brain by linking it with the cumulative memory of mankind, and guiding it through the levels of knowledge (figure 4).

In future, numerous further developments are foreseeable. My notepad has context awareness: it is "aware" that my research is focussed on bio-chemistry. It also "knows" which languages I am able to read. Accordingly, through user-modelling, an agent culls the books in print and periodicals to keep me in touch with what is happening. Assuming a semantic web as envisaged by Tim Berners-Lee, [39] these books and articles will have metadata tags identifying which are by major authors in the field, which are by specialists, and so on. Subsets of these lists can therefore be offered on the basis of whether I want just the latest views of authorities in the field or wish to study new views also.

Pointers
 1. Concepts and their Terms (Classification Systems)
 2. Concepts and their Definitions (Dictionaries)
 3. Concepts and their Explanations (Encyclopaedias)
 4. Titles (Bibliographies)
 5. Partial Contents (Abstracts, Indexes)
Objects
 6. Full Contents (Books, Paintings, etc.)
Interpretations
 7. Internal Analyses
 8. External Analyses
 9. Restorations
 10. Reconstructions

Figure 4. Basic levels of knowledge in the System for Universal Multi-Media Access (SUMMA).


Figure 5. Connections between reference levels (a term, a definition, an explanation, a list of authors, titles, partial contents as in abstracts and reviews) and full text, links between primary and secondary literature and different kinds of content, namely, internal analyses, external analyses, restorations, and reconstructions.

5. New reference potentials

Implicit in the above is a new approach to reference materials as a whole. In the past, reference rooms contained a whole range of sources (levels 1-5 as outlined in figure 4). In a traditional library I would consult a classification scheme to study related terms and use what I find to check another part of the subject catalogue. Or possibly, I might check the definition of a subject term, and then go back to the subject catalogue. In a virtual reference room all these individual levels are linked and I can re-enter my search from any of these levels.

Hence, in a traditional library each time I wished to consult a new source, I had to open some other book, often with its own idiosyncratic methods of organization. Given a virtual reference room where all these works are linked through common authority lists, which reflect historical and cultural dimensions of knowledge, I can go from a term to a definition, an explanation, a list of authors, titles, to partial contents (abstracts and reviews) to full text without leaving my desktop or notepad. While some may contend that all this is merely futuristic dreaming it is useful to look at the Japanese Electronic Dictionary Research Institute's (EDR) Electronic Dictionary as an example of what is happening today (figure 6). [40]

These new reference potentials are by no means limited to the reference sections of libraries. In the past, partial contents of books were available through at least three kinds of materials: (1) tables of contents and indexes, usually at the beginning and end of a book; (2) abstracts and reviews, available in periodicals; and (3) learned monographs and commentaries in secondary literature with their own indexes usually established under different principles than those in the primary literature.

In future, these three kinds of materials will be linked. I am interested in Leonardo da Vinci. A master index provides me with a list of all subjects in his manuscripts. These subjects correspond to a standard classification system, such that I can switch to broader or narrower terms. Say I am interested in optics. A list of all references to optics is displayed on my screen. I can ask for subsets relating to a given date or period (e.g., 1505-1508), and/or a given place (e.g., Milan). I choose a folio in the Codice Atlantico (Milan, Ambrosiana). Low-level agents will have searched all the secondary literature on Leonardo concerning that folio and the subject of optics. This results in a list of all secondary literature relevant to that folio as a whole, to that folio with respect to optics, and to optics elsewhere in the notebooks.
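
The subset queries in this scenario amount to simple filters over tagged references, as in the following minimal sketch; the folio numbers, dates, and places are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Reference:
        folio: str
        subject: str
        year: int
        place: str

    # Invented folios, dates, and places for illustration.
    references = [
        Reference("Codice Atlantico 360r", "optics", 1506, "Milan"),
        Reference("Manuscript C 12v", "optics", 1490, "Milan"),
        Reference("Codice Atlantico 200v", "mechanics", 1495, "Milan"),
    ]

    def query(refs, subject, years=None, place=None):
        """Filter references by subject, optional (start, end) years, optional place."""
        hits = [r for r in refs if r.subject == subject]
        if years is not None:
            start, end = years
            hits = [r for r in hits if start <= r.year <= end]
        if place is not None:
            hits = [r for r in hits if r.place == place]
        return hits

    for r in query(references, "optics", years=(1505, 1508), place="Milan"):
        print(r.folio)
    # Codice Atlantico 360r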

Alternatively, I am reading secondary literature, say the late Dr. Kenneth Keele's basic work, Leonardo da Vinci's Elements of the Science of Man, and encounter a reference to optics in Leonardo's Manuscript C (Paris, Institut de France). The system now allows me to go back to that folio in Leonardo's notebooks and then, for example, to a list of all other references to optics in Leonardo's writings.


Figure 6. Structure of the Electronic Dictionary Research (EDR) Institute's Electronic Dictionary.

In future, these connections will not be limited to traditional library materials. Lectures given by Leonardo scholars and university professors will be online. Agents will search these and create appropriate links. The order in which these are searched will vary. Sometimes it will follow a vertical model (as in figure 5). At other times the sequence may be quite different as in the following example (figure 7). Say we have a folio from the Codex Atlanticus with a preparatory drawing for the Last Supper. Agents will prepare alphabetical and chronological lists of all preparatory drawings for the Last Supper, link these such that one can view the corresponding images accordingly, i.e., alphabetically or chronologically, then go to an image of the Last Supper in Milan, and from there (a) to all copies and versions thereof; (b) to a lecture which compares all these drawings and originals, and/or (c) to a lecture which considers other aspects of the Last Supper.

The full power of the approach lies in the fact that the sequence is open: while viewing the lecture in (b) above, I can choose to return to the original folio from which the preparatory drawing comes, follow the figure it represents through various versions and copies, or study the history of restoration of the original fresco with respect to that figure. Quite simply, the potentials of linking things as foreseen by Bush, Nelson, and others remain, with one fundamental addition: there are lists to guide me through the various choices which present themselves.


Figure 7. Connections between different reference levels and interpretation: a folio from a primary text (level 6); alphabetical and chronological lists (level 4); a list of preparatory drawings (level 7); internal analysis (level 7); and external analysis (level 8) of same.

In the interests of simplicity we have focussed, in the above examples, on new treatments of textual knowledge. Augmented books in electronic form allow much more, of course. If an original, printed book showed an engraving or a black-and-white photograph of the Acropolis, the corresponding augmented book potentially allows us to call up color photographs, QuickTime Virtual Reality (VR) sequences, movie clips, and even live video feeds. In short, it unleashes the whole range of multimedia on any book we may happen to be reading. From an administrator's viewpoint, this approach also implies a virtual reorganization of libraries and museums.

6. Virtual reorganization of libraries

In an earlier section we noted that the convergence of new media will bring about a digitization of all reference works as well as a series of new links between these such that one can go seamlessly between classification systems, dictionaries, encyclopaedias, bibliographies, abstracts, and reviews. It is important to stress that these are only the first steps in a more fundamental change, whereby the contents of these reference works will be transformed, as will be illustrated by some examples.

In the case of classification systems, for instance, (low-level) agents can produce a kind of concordance between all the major classifications indicating which are the corresponding broader and narrower terms in other classifications. Hence, to continue with our Papaveraceae example, the system will tell me which classifications have narrower terms for this concept and then allow me to view these alternatives either singly or together.

The case of dictionaries is similar. Having scanned these in and created links between equivalent words, a next step will be to analyze the different kinds of definitions found therein, following, for instance, Dahlberg's fundamental distinction between ostensive, nominal, and real definitions. [41] This can lead to a re-organization of dictionaries such that every word has accompanying it not only a series of etymological definitions, with full references to the history of the term in various languages, but also a clear indication of the kind of definition entailed. This will provide us with an important new search parameter. Sometimes we may wish to consult only real definitions, in Dahlberg's sense above, and at other times we might accept a more liberal approach and include ostensive and nominal definitions. Meanwhile, authority lists of personal names can be correlated with biographical dictionaries, such that I can see exactly which biographical works (be it the Dictionary of National Biography or Jöcher's Gelehrten Lexikon) are available for any given individual.

With respect to encyclopaedias, agents can determine all the languages in which a given term is treated. This might in turn be linked with classification systems such that one can visualize how one language deals more broadly with one subject area than another, and/or with a greater level of detail than another.

With respect to titles, although library catalogues typically have information concerning the standard title in the original language of writing and/or publication, these same catalogues typically display titles without any regard to this standard. Hence, if one looks for Alberti's Della pittura, one needs also to look under De pictura, On Painting, Traité de la peinture, Traktat der Malerei, etc. Agents can collate all these variants and remind us of the language involved in each case. [42] While this could seem redundant in the case of On Painting for a native English speaker, in the case of O Malarstwie, the Polish equivalent, the language tag can prove very useful. In the case of reviews, agents can do systematic searches beginning with standard reference works such as the Internationale Bibliographie der Rezensionen and present their findings both alphabetically and chronologically.
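
Such a collated record might look like the following minimal sketch, in which any variant maps back to the standard title together with a language tag; the mapping itself is illustrative.

    # An agent-collated authority record: variant title -> (standard title, language).
    variants = {
        "De pictura": ("De pictura", "la"),
        "Della pittura": ("De pictura", "it"),
        "On Painting": ("De pictura", "en"),
        "Traité de la peinture": ("De pictura", "fr"),
        "Traktat der Malerei": ("De pictura", "de"),
        "O Malarstwie": ("De pictura", "pl"),
    }

    def resolve(title):
        """Return the standard title and the language tag of the supplied variant."""
        standard, language = variants[title]
        return standard, language

    print(resolve("O Malarstwie"))  # ('De pictura', 'pl')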

Physically, libraries have traditionally been divided into two fundamental parts: (1) a reference room, which contains catalogues and reference works and (2) stacks which contain the books: i.e., pointers to content and the contents themselves. Academically, scholars have in turn divided those contents into two further categories: primary literature (original sources) and secondary literature (monographs and articles about those original sources). Physically this distinction is somewhat reflected through separate sections for books and periodicals.

Another way of looking at the primary-secondary distinction is to suggest that primary literature is the equivalent to the original (objective) contents, whereas secondary literature constitutes the (subjective) interpretations of those contents. As such, secondary literature then lends itself to four further distinctions. Interpretations (of secondary literature) can entail (a) internal analyses of the object itself, as brought into focus by close reading; (b) external analyses, whereby the object is compared with similar objects of that class; (c) restorations, whereby the original object has built into it the interpretations of the restorer; and finally (d) reconstructions, whereby the built-in interpretations of the reconstructors are that much greater. These reconstructions pertain to given diagrams, claims, ruins, etc. The system will give systematic access to each of these. [43] Taken together with the pointers to knowledge (reference works) and the full contents of the texts themselves, these interpretations are an important part of the ten basic levels of knowledge (figure 4).

This approach to searching at different levels of knowledge is now typically referred to as granularity. Historically, this process was very long in developing. The idea of searching at the level of books goes back to the earliest known libraries in the third millennium before Christ. The idea of searching at the level of articles came only at the end of the nineteenth century when Otlet and La Fontaine founded the Mundaneum (Brussels, now at Mons) and the International Institute of Bibliography (now the FID in The Hague). In a sense, full-text searching began with the introduction of indexes in the later Middle Ages, but practically speaking full-text searching has only been introduced in the course of the past decades with the advent of more powerful computers.


Figure 8. Examples of three basic levels of granularity in searching for materials. An ability to study the full contents of books at level three goes hand in hand with a need to link this with more abstract searching at levels one and two.

Paradoxically, this ability to search potentially at the level of every word has introduced a concurrent need to search at higher levels of abstraction in order to see the wood for the trees. Hence as one has gone from studying an individual word, phrase, or sentence to all uses in a book, there has been an increasing need to study also partial contents and concepts at a higher level in order to understand larger patterns (figure 8). Here, one approach is to re-organize knowledge in terms of different subsumptive relations such that one can go from a subject down to its properties or up to its whole/part and type/kind relations (figure 9). This is a realm where recent developments in scientific visualization offer promising new possibilities. [44]

Material, Subject Relations
Logical relation: Subsumptive
  Type/Kind: Principle/Manifestation; Genus/Species; Species/Individuum
  Whole/Part: Organism/Organ; Composite/Constituent; Matrix/Particles
  Subject/Property: Substance/Accident; Possessor/Possession; Accompanance

Figure 9. Levels of abstraction based on subsumptive relations according to Perreault.

On this problem of linking different levels of abstraction, futurists such as Benking [45] have also intuitively sought to link subjective and objective elements in arriving at a more systematic understanding of the world. Benking foresees using a universal classification system such as the ICC as a means of switching between various classification schemes and moving from levels of abstraction to levels of detailed knowledge. Lacking in such outlines is a clear method of how one moves between various levels of abstraction. Even so, his intuitive understanding of the challenges is extremely stimulating.

Our approach to levels of knowledge is an attempt to create a more coherent framework for systematic study at different levels of granularity. In electronic versions of libraries these levels provide important new approaches to orientation within a field of knowledge, contextualization of knowledge, and ultimately the re-organization of knowledge.

7. Virtual reorganization of museums

That which applies to libraries will apply in similar form to museums, archives, and other memory institutions with collections. A simple example will illustrate how all this will lead to a new contextualization of knowledge. The Last Supper is an event in the life of Christ as reported by the four Evangelists (Matthew, Mark, Luke, and John) in their gospels in the New Testament.

Once agents have searched all references in the secondary literature, we can trace all commentaries on the Last Supper as well as the history of images of this event. Hence, we can trace how the Franciscan idea of making a fresco of the Last Supper at Assisi was taken to Santa Croce, the Franciscan church in Florence; how Castagno painted his version in Sant'Apollonia (c. 1447); and how the theme was adapted by the Dominicans through Ghirlandaio in San Marco (1479-1480) and became the fashion in Florence as Ghirlandaio produced another version in Ognissanti (1480), before Leonardo da Vinci immortalized the theme in Santa Maria delle Grazie (Milan, c. 1495-1498). The copies of Leonardo's version now in Lugano, Tongerlo, and London (Royal Academy, Burlington House) will also be available, as will many other versions as paintings, models, etc.

Internal analyses (level 7 in figures 4 and 5) will allow me to trace all discussions about the contents of a given fresco, including identifications of the various persons in Leonardo's version. External analyses (level 8) will allow me to trace the history of different versions and also different treatments of a given Apostle within a fresco. Restorations (level 9) will make available all information about various interventions concerning Leonardo's painting at Milan. Reconstructions (level 10) will make accessible all attempts to analyze the space within the fresco, ranging from crude drawings to complex perspectival drawings, physical models, and computer simulations.

In the case of paintings, this process of contextualization will provide, to the extent possible, a history of where the object was situated: i.e., a three-dimensional version of the approach in the Getty Provenance Index. As a result I shall be able to see where the painting hangs today along with all the places it hung previously in other galleries including the church or palace for which it was initially commissioned. This same principle can be extended to the near future, such that I am able to see where paintings will travel for special exhibitions in the coming months.

Initially the idea of a virtual museum was seen primarily as a means of helping tourists: providing a guide so that tourists can find more quickly Botticelli's Birth of Venus in the Uffizi, Leonardo's Mona Lisa in the Louvre, Rembrandt's Nightwatch in the Rijksmuseum, etc. Initial examples of this approach such as the Micro-Gallery in London focussed on orienting users qua the public rooms of the National Gallery. A next step would be to make visitors aware that major museums such as the National Gallery have a number, sometimes a great number, of paintings in storage. Classing these by themes and showing how there are (sometimes not so subtle) shifts of what gets shown and what gets stored in the course of decades will provide new insights into the history of taste.

The French concept of an imaginary museum (le musée imaginaire) expands this approach considerably. For now an electronic museum can do things that do not necessarily correspond to the physical world and sometimes would not be possible in the physical world. A dramatic example would be to show electronically all the paintings of Leonardo in a single, fictive space, [46] and combine this with all the restorations and reconstructions of those works, particularly the Last Supper. Such virtual museums can be extended to periods and styles. For example, one could have an imaginary museum of impressionism which would show all the famous extant examples, ranging from the Jeu de Paume in Paris and the Metropolitan in New York to the Barnes Collection in Philadelphia and the Pushkin Museum in Moscow.

Slightly more prosaic but wonderfully useful nonetheless would be a virtual history of exhibitions. Ideally, this would serve as an excellent aide-mémoire in helping us to recall historic, so-called blockbuster, exhibitions. At the same time it can help us to imagine what we missed. Many of us know the experience only too well. There is an exhibition by one of those famous artists such as Rembrandt, Monet, Van Gogh, or Picasso in a city such as Paris or Washington, D.C., and we miss it because we just cannot get away during the few weeks that the exhibition is in town. Or even worse, we get there only to find that the exhibition is sold out for the few days that we are there. Sometimes we manage to get a ticket and find that we have fifteen minutes to see a set of paintings because visitors are being herded through as if they were cattle. Alternatively, we get in and find that the herds are so great that we effectively cannot see the paintings at all. This even occurs with paintings that are on permanent display such as the Mona Lisa.

The advent of the notepad computer and PIAs described earlier offers many new possibilities. Some of these are being explored in the context of the Intelligent Information Interfaces (I3, Icube) [47] program of the European Commission. For instance, the Hyper Interaction within Physical Space (HIPS) [48] project, which is being tested in the Museo Civico of Siena, allows visitors to listen to information using earphones and make notes on a PDA. A limitation of this project is that the information available is limited to what is stored on the PDA itself.

Our approach foresees linking the PDA through the Internet with a much larger array of materials in a virtual reference room as in the following scenario. I am standing in front of a copy of Brueghel's Winter Landscape in the Bonnefanten Museum (Maastricht). My notepad computer is equipped with a small camera, which recognizes the painting via pattern recognition, the title, or simply via the equivalent of a bar code. This information is taken up by a (low-level) agent, which notes that this is a copy of a more famous original now in the Kunsthistorisches Museum (Vienna). The software agent then offers me the possibility of seeing various other copies and versions of the same painting.
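
In software terms such a low-level agent is little more than a lookup pipeline: recognize the painting, resolve it to a canonical record, and return the related versions. A minimal Python sketch, assuming a hypothetical catalogue keyed by painting identifiers (the names and records are illustrative, not a real museum API):

    # Hypothetical catalogue of paintings and their known versions.
    CATALOGUE = {
        "bruegel-winter-landscape": {
            "original": ("Kunsthistorisches Museum", "Vienna"),
            "copies": [("Bonnefanten Museum", "Maastricht")],
        },
    }

    def recognize(image_or_barcode: str) -> str:
        """Stand-in for pattern recognition or a bar-code read:
        here the input is simply treated as the painting identifier."""
        return image_or_barcode

    def related_versions(painting_id: str):
        """Low-level agent: given a recognized painting, report the
        original and all known copies and versions."""
        record = CATALOGUE.get(painting_id)
        if record is None:
            return None
        return {"original": record["original"], "copies": record["copies"]}

    print(related_versions(recognize("bruegel-winter-landscape")))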

While I am still at home and preparing to visit a city such as Florence I could "have a chat" with my PIA. I could point out that I am going to Florence for four days and am interested in seeing museums and churches. The assistant determines which museums and churches are open during those days and displays these as a list. Say I choose the Uffizi and San Marco. The assistant tells me what exhibitions will be on at that time. If I am a specialist, the assistant can give me a list of painters exhibited in the Uffizi and ask me which of these are of particular interest -- assuming that I am a new user and my assistant does not yet know my specialties. The assistant can then list earlier exhibitions about those painters, both in Florence and elsewhere, and more detailed literature, if I choose. Hence, I can effectively do research for my visit before I arrive in Florence and once there I can call up different copies, versions, related preparatory drawings, etc. If I am interested in the history of the painting, I can review various restorations which have been made thereof or study the layers under the surface of the painting as revealed by new techniques such as infrared reflectography. [49] The new media will literally allow me to look into pictures and see things that I could never see as a regular visitor.
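
This dialogue can equally be read as a pair of filters over museum metadata: first filter institutions by opening days, then report exhibitions for the museums chosen. A minimal sketch, in which all opening days and exhibitions are invented:

    # Hypothetical table of opening days and current exhibitions.
    MUSEUMS = {
        "Uffizi":    {"open_days": {"Tue", "Wed", "Thu", "Fri"},
                      "exhibitions": ["Botticelli drawings"]},
        "San Marco": {"open_days": {"Wed", "Thu", "Sat"},
                      "exhibitions": []},
    }

    def open_during(visit_days):
        """List museums open on at least one of the visit days."""
        return [name for name, m in MUSEUMS.items()
                if m["open_days"] & set(visit_days)]

    def exhibitions(selected):
        """Report current exhibitions for the chosen museums."""
        return {name: MUSEUMS[name]["exhibitions"] for name in selected}

    days = ["Wed", "Thu", "Fri", "Sat"]          # a four-day visit
    candidates = open_during(days)
    print(candidates, exhibitions(candidates))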

Such innovations may prove to be as significant for museum professionals as for tourists and regular visitors. There has traditionally been a clear division of labor whereby curators studied the surfaces of paintings as connoisseurs, whereas conservators studied the layers underneath. Often these two groups went about their work as if they were dealing with completely different objects. In fact, detailed knowledge of the layers of paint under the surface studied by the conservator can often provide vital clues for a connoisseur, who is uncertain whether to attribute a given work to the master himself, to a student, to a member of the workshop, or to a follower. Hence, the new media can bring new interactions between conservators and curators, which can ultimately further enrich the viewing experience of visitors.

As the capacities of high-level virtual reality become more generally available, further scenarios are possible. One could have a history of the Louvre in Paris which shows (a) how the complex of buildings grew in the course of the centuries, (b) how the paintings of the collection are configured differently over the centuries; (c) how these changes and changes in frames reflect shifts in the history of taste, and (d) in certain cases even show reconstructions of how paintings have changed in color over the ages. [50]

Such intelligent guides and virtual exhibitions cannot truly replace the experience of the actual objects. Nonetheless, they serve at least four valuable purposes:

  1. In many cases, the simulations will have the effect of encouraging people to look more attentively at the originals.
  2. They can help us to see a given painting in the context of others by that artist and/or their contemporaries. As a result when we next see that painting on its own in its home gallery, it will evoke a much richer set of connotations.
  3. Even those of us who are very familiar with museums such as the Louvre, the Prado, and the Vatican often do not have occasion to see the great collections and monuments of Russia, India, China, and Japan. In fact very few persons are able to see all the great museums. Hence virtual exhibitions can serve to bring understanding of those places which we cannot visit.
  4. Finally, in the case of endangered sites, such as the Caves of Lascaux or the Tomb of Nefertari, which are now closed to the public, such virtual reality reconstructions are a substitute for viewing the original, in the interests of long-term conservation.

8. Reorganization of knowledge

We noted earlier how object-oriented programming led increasingly to a bundling of instructions concerning a given operation. Applied to cultural objects this implies that all the knowledge concerning a given subject such as the Last Supper will be bundled therewith, such that one can trace its development chronologically and spatially throughout a range of media. Since the Renaissance we have used media to separate objects: books went into libraries, paintings into art galleries, drawings into drawing collections (French: cabinets de dessin), engravings into engraving collections (German: Kupferstichkabinett), maps into map rooms, etc. The reorganization of knowledge implies that a theme such as the Last Supper, which entails all of these individual media, will link all these objects which are now spread out through numerous institutions.
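
As a data structure, such bundling amounts to a theme object that aggregates items held by many institutions in many media and can replay them in chronological order. A minimal sketch, with a few illustrative entries:

    from dataclasses import dataclass, field

    @dataclass
    class ThemeObject:
        """Bundle of all items concerning one theme, across media
        and institutions."""
        name: str
        items: list = field(default_factory=list)

        def add(self, year, medium, place, institution):
            self.items.append((year, medium, place, institution))

        def timeline(self):
            """Trace the theme chronologically across media and places."""
            return sorted(self.items)

    last_supper = ThemeObject("Last Supper")
    last_supper.add(1447, "fresco", "Florence", "Sant'Apollonia")
    last_supper.add(1498, "fresco", "Milan", "Santa Maria delle Grazie")
    print(last_supper.timeline())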

This trend is all the more interesting because companies such as Autodesk have already extended the notion of object-oriented programming to the building blocks of the manmade world through what they term Industry Foundation Classes. [51] Hereby a door is now treated as a dynamic object, which contains all the information pertaining to doors in different contexts. Hence, if one chooses a door for a 50-story skyscraper, the door object will automatically acquire certain characteristics, which are very different from a door for a cottage or for a factory warehouse. This is leading to a revolution in architectural practice because it means that those designing buildings will automatically have at their disposal the "appropriate" dimensions and characteristics of the door, window, or other architectural building block which concerns them. There is a danger that this could lead to stereotyped notions of a door, window, etc., a McWorld effect, whereby buildings in one country are effectively copies of those in other countries, and travel loses its attractions because everywhere appears the same.
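
The mechanism can be conveyed in a few lines: a door object that acquires its characteristics from the building context in which it is placed. This is a toy illustration of the principle, not the actual Industry Foundation Classes schema; the rules and dimensions are invented. The cultural extension discussed below would simply add region and period to the context key.

    # Invented context rules; a real schema would be far richer.
    CONTEXT_RULES = {
        "skyscraper": {"fire_rating_min": 90, "width_mm": 1100},
        "cottage":    {"fire_rating_min": 0,  "width_mm": 850},
        "warehouse":  {"fire_rating_min": 60, "width_mm": 2400},
    }

    class Door:
        """A door whose characteristics depend on its building context."""
        def __init__(self, building_type: str):
            rules = CONTEXT_RULES[building_type]
            self.building_type = building_type
            self.fire_rating = rules["fire_rating_min"]   # minutes
            self.width = rules["width_mm"]                # millimetres

    print(vars(Door("skyscraper")))
    print(vars(Door("cottage")))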

Alternatively, one can extend the concept of foundation classes to include cultural and historical dimensions. As a result, an architect in Nepal wishing to build a door, in addition to the universal principles of construction applying to such objects, will be informed about the particular characteristics of Nepalese doors, perhaps even of the distinctions between doors in sections of Kathmandu or near Annapurna. Similarly an Italian restorer will be informed about the particular characteristics of doors in Lucca in the fifteenth century. All this may seem exaggerated. But after some of the key historical houses with elaborate ornamental carvings in Hildesheim were bombed during the Second World War, a small group of carpenters worked for several decades to reconstruct the originals, beam by beam, carving by carving. They did so on the basis of detailed records (photographs, drawings etc.). If this knowledge is included in the cultural object-file of Hildesheim doors, windows, and houses, then rebuilding such historical treasures will be much simpler in future.

At stake is something much more than an electronic memory of cultural artifacts which will serve as a new type of insurance against disaster. The richest cultures are not static. They change with time, gradually transforming their local repertoire, often in combination with motifs from other cultures. The Romanesque churches of Northern Germany adopted lions from Italy for their entrances, which were, in turn, based on lions from the Middle East. The church of San Marco in Venice integrated Byzantine, Greek, and local motifs. The architecture of Palermo created a synthesis of Byzantine, Norman, Arabic, and Jewish motifs. The architects in Toledo and at Las Huelgas near Burgos created their own synthesis of Jewish, Arabic, and Christian motifs. A comprehensive corpus of variants in local heritage can thus lead to much more than a glorification of local eccentricities and provincialism. It can prove an inspiration to multi-culturalism in its deepest sense. It can include basic building methods, styles, ornament, and decoration.

These combinations were successful because they were guided by culture and taste. Combinations per se do not guarantee interesting results. If taste and sensibility are lacking, the results are merely hybrid versions of kitsch. So the technology must not be seen as an answer in itself. It offers a magnificent tool, which needs to be used in combination with awareness of the uniqueness and value of local traditions. These new possibilities of using technology to expand the vocabulary of cultural expression apply not only to the physical, built environment of the manmade world. Potentially, they apply to the whole of knowledge. Thus the re-organization of knowledge leads to a re-combination of elements to produce new expressions and new knowledge. Here we shall outline briefly some scholarly and business dimensions of what is entailed.

Scholarly dimensions

Individuals (who?)

An initial challenge is to achieve dynamic knowledge of individuals: (1) changing historical knowledge about an individual, (2) changing perceptions of an individual, and (3) assessing the authority of literature concerning an individual. First, there is a problem that what is known about the writings or paintings of an individual changes over time. Today there are static lists of complete works or catalogues raisonnés of paintings. Such lists need to be dynamic. As noted earlier, the list of paintings attributed to Rembrandt changes with time and this phenomenon must be integrated into our search strategies such that there will be a time-sensitive answer to the question: What did Rembrandt paint?
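
One way to make such a list dynamic is to record each attribution together with the interval during which it was accepted, so that the same query yields different answers for different dates. A minimal sketch, in which the intervals are invented purely for illustration:

    # (work, attributed_from, attributed_until); None = still accepted.
    ATTRIBUTIONS = [
        ("The Polish Rider", 1900, 1984),   # illustrative dates only
        ("The Night Watch",  1900, None),
    ]

    def attributed_works(year: int):
        """Answer 'What did Rembrandt paint?' as of a given year."""
        return [w for w, start, end in ATTRIBUTIONS
                if start <= year and (end is None or year < end)]

    print(attributed_works(1950))   # both works
    print(attributed_works(2000))   # only The Night Watch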

Second, there is a paradox that persons now famous such as Montaigne or Leonardo were judged very differently throughout the centuries, almost forgotten in some generations, highly praised in others, often for quite different reasons. Most of our present methods of presenting individuals do not take adequate account of such aspects.

Third, with a genius such as Leonardo, thousands of persons feel prompted to write something about the man. The number of persons in any generation who have actually read his notebooks has never been more than a handful. The Internet potentially offers us access to everyone who cites Leonardo, but has almost no mechanisms in place to distinguish between standard works, generally respected works, and non-authoritative lists. In our view, the radical proposal of some to re-introduce censorship is not a reasonable solution. The problem is made the more elusive because the recognized world authority in one decade may well be replaced in another decade.

Needed, therefore, are new kinds of dynamic, weighted bibliographies. These would allow us to create subsets on the basis of field-specific acceptance; to express and record in electronic form the well-established traditions of peer review (which is totally different from attempts at simplistic electronic computations of quality); to arrive at peer review with an historical dimension; and yet still to have access to a wider range of less authoritative, or more precisely less established, sources in a field. In tackling such alternatives between the authority of sources vs. (mere) citations, we would be using technologies in new ways to return to central questions of quality. Having such dynamic bibliographies will permit new approaches to fundamental questions of how we assess quality and importance. The extent to which agents can be developed to address these challenges needs to be studied.
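
In data terms a weighted bibliography is a set of records carrying field-specific authority weights, from which subsets can be drawn; the weights themselves would be revised over time by the peer-review mechanisms just described. A minimal sketch, with invented entries and weights:

    # Invented bibliography records with authority weights per field.
    BIBLIOGRAPHY = [
        {"title": "Standard monograph", "field": "art history", "weight": 0.9},
        {"title": "Personal web page",  "field": "art history", "weight": 0.1},
    ]

    def subset(field_name: str, min_weight: float):
        """Return only the entries a given field currently treats
        as authoritative."""
        return [e for e in BIBLIOGRAPHY
                if e["field"] == field_name and e["weight"] >= min_weight]

    print(subset("art history", 0.5))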

Objects (what?)

Beyond the initial problem of access to a specific object in a museum, library, or archive, or to free-standing monuments is a larger challenge of how we can access all the related objects which have derived from or have been affected by the object: the copies, replicas, versions, imitations, and sometimes pastiches.

Present-day sources typically focus on objects as static entities. The limitations of print frequently lead us to focus on one example as if it were the whole category. Accordingly we all know about the Coliseum in Rome but most of us are unaware of the dozens of coliseums spread throughout the Roman Empire. Using dynamic maps and chronologies, new kinds of cultural maps can be developed, which allow us to trace the spatial-temporal spread of major cultural forms such as Greek theatres or temples, Roman coliseums, or Christian Romanesque churches. This will allow novel approaches to longstanding problems of central inspiration and regional effects, the interplay between center and periphery, in some cases between center and colonies. Such questions pertaining to original and variants (versions, copies, imitations) are again central to the challenges of a European Union which aims to maintain diversity within a larger unity. At another level, this is effectively a global challenge.

Concepts and their relations (what?)

Presently we have many different classification systems and thesauri. Earlier in this paper I outlined a practical approach for mapping between these systems, using existing applications in libraries to backtrack retrospectively and link concepts. Theoretical proposals for mapping among these systems also exist (Williamson, [52] McIlwaine [53] ). Systems such as the Universal Decimal Classification (UDC) and developments in terminology allow more systematic treatment of relations among subjects into classes such as subsumptive, determinative, ordinal, etc. (Perrault). [54] A dynamic system that allows us to switch between classifications in different cultures and historical periods can provide new kinds of filters for perceiving and hopefully appreciating subtleties of historical and cultural diversity. The enormous implications for learning range from philosophy (especially epistemology), where we can trace changing relations of concepts dynamically, to the humanities with courses on culture and civilization (a term which again has very different connotations in French, German, and English). Instead of just citing different monuments, works of art, and literature, we shall be able to explore different connections among ideas in different cultural traditions. For example, with respect to the fine arts, Ranganathan's classification (India) is much weaker than Dewey (United States), yet much more subtle than Dewey with respect to metaphysics and religion.
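
At its simplest, switching between classifications is a crosswalk: a table mapping a concept to its notation in each scheme. The sketch below uses two broad top-level classes by way of illustration; a real mapping would be far larger and rarely one-to-one:

    # Illustrative crosswalk between two classification schemes.
    CROSSWALK = {
        "fine arts": {"DDC": "700", "UDC": "7"},
        "religion":  {"DDC": "200", "UDC": "2"},
    }

    def translate(concept: str, target_scheme: str) -> str:
        """Look up a concept's notation in the requested scheme."""
        return CROSSWALK[concept][target_scheme]

    print(translate("fine arts", "UDC"))   # '7'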

Historical and cultural concepts (what?)

An integration of the methods outlined will lead to new kinds of knowledge maps, which allow us to trace the evolution of concepts both spatially in different countries and temporally in different historical periods. This will lead to dynamic bibliographies. We shall be able to trace how a concept is frequently broached at a conference, then developed in articles and books, and gradually evolves into a recognized discipline, which may subsequently re-combine with another as, for example, the way in which biology and chemistry led to bio-chemistry. Such an approach to the growth of knowledge will allow us to return with new depth to problems broached above of standard/model versus variants/versions, of center versus periphery and the role of continuity in the spread of major forms and styles of expression. [55]

Space (where?)

Current printed maps in atlases are static, yet historically boundaries change. At JRC, the Institute for Systems, Informatics and Safety (ISIS) has already developed very impressive three-dimensional maps. In conjunction with the European Space Agency much work is being done in the area of coordinating GISs and GPSs. The spatial meta-data project would produce dynamically changing atlases and link these with GIS and GPS. This is a prerequisite for visualizing changing political boundaries. Coupled with information concerning cities and events and other historical data, this will permit much more nuanced search strategies. As mentioned earlier, we shall then be able to trace how the boundaries of a country such as Poland shift dramatically over the centuries. The answer to the question Where is Poland? will thus shift with time and adjust accordingly. Applied globally this will furnish us with more than simple instructions of how to get there. It will make visible people's misconceptions of geography at various points of history. It will also reveal how politics affects the boundaries of countries such that, for instance, India's maps of India and Pakistan may well be different from Pakistan's maps of the same two countries. To achieve this, global cooperation is needed.
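
Technically this amounts to indexing boundary geometries by their period of validity, so that a query carries a date as well as a place name. A minimal sketch, with placeholder strings standing in for real GIS polygons:

    # country -> list of (valid_from, valid_until, polygon placeholder)
    BOUNDARIES = {
        "Poland": [
            (1772, 1795, "polygon_before_partitions"),
            (1795, 1918, None),                # partitioned: no Polish state
            (1918, 1939, "polygon_second_republic"),
            (1945, 9999, "polygon_post_1945"),
        ],
    }

    def boundary(country: str, year: int):
        """Return the boundary geometry valid for the given year, if any."""
        for start, end, poly in BOUNDARIES.get(country, []):
            if start <= year < end:
                return poly
        return None

    print(boundary("Poland", 1790))   # pre-partition shape
    print(boundary("Poland", 1900))   # None: no Polish state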

Time (when?)

Connected with this is a challenge of historical temporal meta-data, whereby one has a standard method for correlating the different time scales of various chronological systems and calendars (including the Jewish, Chinese, Indian, Gregorian, Islamic, and Julian calendars). Those of the Hebrew faith had their fifth, not second, millennium problem some time ago. Coupled with historical data this will be a significant step towards studying history from multicultural points of view. For if I am reading an Arabic or Jewish manuscript and come upon the date 380, the system will immediately provide an equivalent in the Christian Gregorian or Julian calendars.
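
For year-level correlation a rough rule already exists for the Islamic calendar, whose lunar year is about three percent shorter than the solar year and whose era begins in 622 CE. The sketch below uses that standard approximation; precise day-level conversion requires a full calendrical algorithm:

    def hijri_to_gregorian_year(h: int) -> int:
        """Approximate Gregorian year for a given Hijri year,
        using the standard year-level approximation."""
        return round(h * 0.970224 + 621.5774)

    print(hijri_to_gregorian_year(380))   # -> about 990 CE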

Narratives (how? why?)

Beyond the initial problem of access to a given subject or episode such as the Crucifixion or Diana and Actaeon, there is the challenge of accessing the context of these subjects, the narratives from which they derive. An integration of the above methods will bring new access to the history of narrative, and thereby new approaches to literature, art, and culture as a whole. A culture such as that of Europe is defined by a relatively small number of major narratives deriving from two traditions: (1) Judeo-Christian (the Bible, Lives of the Saints) and (2) Greco-Roman (Homer, Virgil, Ovid). We belong to the same culture if we know the same narratives, if we have the same stories in common.

Paradoxically, those who have the same stories inevitably develop very different ways of telling those stories. The media differ. For instance, in Italy the lives of the saints most frequently become the great fresco cycles on the walls of churches. In France and the Netherlands, the lives of the saints are more frequently treated in illuminated manuscripts. In Germany, they frequently appear in complex altarpieces. Not only do the media vary but also the ways of telling stories. The Life of Christ in Spain is very different from that in the Balkans or within the Orthodox tradition in Russia. Even so the commonality of themes means that Europeans can feel an affinity towards a Russian Orthodox church, which they cannot readily feel with an Indian temple with stories from the Mahabharata or the Ramayana (unless of course they know these stories as well).

In these transformations of the familiar lie at once the fascination of change through continuity, which inspired the studies of Aby Warburg and his school, and also, implicitly, a series of important lessons about the keys to diversity. The most diverse narratives are precisely about the most familiar stories. Uniqueness cannot simply be generated by trying to be different, by rejecting others and removing them through ethnic cleansing. Uniqueness comes through sharing common stories, which inspire fundamental values (such as life and truth, e.g., through the Ten Commandments), and then expressing them differently. To visualize and make visible the complexities of these historical diversities of expression is our best hope for understanding the challenges of future diversity. Inherent in such questions lie the seeds for understanding changing definitions of our world: dynamic phenomena, processes rather than static definitions.

The attentive reader will be aware that the re-organization of knowledge implicitly leads to two related yet different challenges. One is simply: How do we re-arrange what we have so far in order to make new sense thereof? A second challenge is more elusive: How will the new technologies transform the very ways in which we organize, store, and access information? Will our principles of classification change dramatically? Will our definitions of knowledge be changed completely? In this paper we have offered a series of possible, tentative suggestions in the direction of a new vision which embraces both these challenges.

To achieve this, however, as should by now be abundantly evident, requires much more than a simple scanning in of documents. It requires long-term commitment to meta-data and eventually a complete re-organization of all our knowledge. We may continue to store things in libraries, museums, and archives but the sources of insight will come through virtual reference rooms and repositories beyond the walls of any of those institutions. And these electronic repositories may well grow into systematic networks of links larger than the original sources: meta-data banks of authority lists with all the key pointers concerning a given person, object, event, or process, properly reflecting the complexities of cultural and historical views thereof. Moreover, as will be discussed below, this heritage of the contemplative life (scholarship) needs to be integrated with the insights of the active life (business). Some basic references on knowledge organization are provided in Appendix 1.

Business dimensions

Individuals (who?)

Presently, individuals in a company typically have a curriculum vitae (c.v.) which summarizes their education, employment experience, publications, lectures, etc. In the future c.v.'s may well include dynamic aspects, with brief video clips of lectures or presentations, dynamic maps which show where a person traveled and perhaps even indicate the impact of their products, books, etc.

Companies (what?)

In the past decades there has been an increasing fascination with visualizing change in all domains of business. A quest to see dynamic changes in stocks led Hani Rashid to create a Virtual New York Stock Exchange (3DTFV), [56] which shows stocks rising and falling in real time and thus allows one to see the state of a company at any given moment.

Imagine, however, a historical version of the same, whereby we can trace a company qua its financial growth and the number of its employees; whereby we can see the companies which a given corporation buys out, plans to acquire, and actually takes over; the firms through which it is linked in consortia; and the firms with which it has alliances and associations. Using such a tool we could watch how IBM grew from a family business to one of the great corporations of the world. One could also see at any time the relation of a given firm to others in the field. What, for example, is the size of IBM relative to other computer firms such as Fujitsu, Compaq, Dell, Bull, etc.?

Institutions (what?)

How will these converging new media transform institutions as we now know them? Some radical thinkers assume that within two decades banking will be so completely automated that bank employees will be largely redundant and consequently bank buildings will no longer have their present use. Will the same thing happen to most large companies? Does this mean that the downtown sections of most large cities which are now dominated by office blocks could conceivably be refitted to provide a new kind of inner-city housing? Will the same thing happen in government? If so, why are countries with futuristic visions such as Malaysia building a whole new government city, Putrajaya? Will we look back in twenty years to the idea of automatic banks and government with the same amount of irony with which we now look back to the idea of the paperless office as it was advertised in the sixties and seventies? In any case, as with companies, there is a challenge of how we can arrive at dynamic visualizations of the growth and/or decline of institutions. Some basic references on institutional aspects and knowledge management are provided in Appendix 3. It is instructive to recall that some authors such as Philip C. Murray define knowledge as "information transformed into capabilities for effective action. In effect, knowledge is action," [57] which is far removed indeed from our notion of knowledge organization. In this interpretation knowledge management is nothing more than management with new tricks.

Products (what?)

The question of what will change applies equally in the domain of products. There are many new methods for networked design and even networked manufacturing and production. More objects than ever are being produced. But how can we be certain that they are better?

Processes (how?)

The quest for efficiency in work processes was led in the early twentieth century by Frederick W. Taylor, who became so famous that his approach is still known formally as the Taylorisation of work. [58] It is implicitly also remembered through Charlie Chaplin's classic movie Modern Times. Nearly 40 years ago Everett Rogers wrote his classic work on Diffusion of Innovations. [59] In the past decades these concerns have found expression in Gantt charts, [60] workflow software, business process engineering, and requirements engineering. Nearly two decades ago thinkers began writing of the Taylorisation of intellectual work. [61]

The realm of processes is particularly interesting for it is precisely in this field that indexers claim to have made the greatest progress in the past decades. Ironically there is practically no cooperation between scientific management experts in the tradition of Taylor and professional indexers. How can we combine the intellectual insights of indexers regarding processes with the advances in visualization of processes from other domains? One of the great problems with early solutions in this domain is that they were almost invariably in-house or proprietary, produced by external consultancy firms. A challenge lies in using standards whereby all these disparate systems can become accessible through a common front-end.

In the Mediaeval period simple processes such as mining, smelting, purification of metals such as copper and iron, and the preparation of medicines were transmitted largely by word of mouth. The advent of printing in the West during the Renaissance saw an increasing trend to codify these instructions in book form. These early how-to books led gradually to the instruction and repair manuals of the twentieth century. In the past three decades these manuals have become available in electronic form and increasingly with multimedia aids such as virtual reality. How will these new media transform the nature of technical training, instruction, and repair? As will be mentioned below, some believe that many of these activities will also become automated. Radical visions in this context claim that any process which can be described will become automated within the next two generations. Meanwhile, futurists already speak of self-repairing and self-healing products. [62]

Control (how?)

One could usefully see the growth of control systems and management systems as a subset of processes or at least as a closely related domain, for ultimately all efforts at control and management require a careful understanding of the processes which are being controlled and managed. Here again a new kind of meta-data is implicit because potentially we need to (a) understand the processes as they are intended to function; (b) recognize a series of things which can hinder or disrupt those functions; and (c) know which steps are necessary in order to repair these. All this may seem theoretical and futuristic but actually reflects a trend toward intelligent devices on the Internet:

AT&T Network Services is spearheading an effort to provide managed network services for intelligent devices that would allow remote monitoring and troubleshooting of any piece of equipment that can be linked to a network. AT&T is now testing the service, which it expects to release in early 2000. Managed network services have the potential to significantly alter business operations. For example, companies that send support staff into the field to repair a customer's device would be able to monitor the device's performance remotely and replace parts proactively before a failure occurred. Furthermore, managed network services would allow manufacturers to build only one variation of a product, allowing customers to obtain additional features by downloading firmware onto a chip embedded in the device. Current demands for customization require manufacturers to build many variations of each device. Intelligent device-management software is already available from RapidLogic, which offers RapidControl software to consolidate the different ways of accessing, managing, and controlling intelligent devices. [63]
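
The three kinds of meta-data named in (a)-(c) above can be pictured as a single record per device or process: its intended function, its known failure modes, and the repair steps attached to each. A minimal sketch, with all fields invented for a hypothetical pump:

    # Hypothetical process meta-data for one monitored device.
    PROCESS_METADATA = {
        "intended_function": "pump maintains 2.0 bar line pressure",
        "failure_modes": {
            "pressure_low":  "worn impeller or leaking seal",
            "pressure_zero": "motor failure or power loss",
        },
        "repair_steps": {
            "pressure_low":  ["inspect seal", "replace impeller"],
            "pressure_zero": ["check power supply", "replace motor"],
        },
    }

    def diagnose(symptom: str):
        """Map an observed symptom to a likely cause and a repair plan."""
        return (PROCESS_METADATA["failure_modes"].get(symptom),
                PROCESS_METADATA["repair_steps"].get(symptom, []))

    print(diagnose("pressure_low"))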

Transactions (how?)

One can foresee that there will emerge new meta-levels which trace and make visible larger patterns in the transactions of firms, not just that x dollars were spent this week, but also the directions of money flows. A scenario follows: firm A is based in Colorado and has annual earnings of $10 million. Traditionally, 80 percent of those earnings and expenditures took place within the state, and the remaining 20 percent within the United States. When 5 percent of those earnings are suddenly transferred to a Swiss account, the system is able to produce the financial equivalent of a flow chart in order that managers can visualize and understand exactly what processes are entailed. This approach can be applied at a personal, company, and institutional level and potentially also at the national level. As e-commerce and Internet trade shift from hype to become central aspects of our lives, such methods may prove to be the only hope for governments to maintain an overview of taxable goods. Indeed they may become the only hope for governments to maintain the very idea of taxes. Some basic references on organizational aspects are provided in Appendix 2.
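
The detection step in this scenario is essentially a comparison of current destination shares against the firm's historical pattern. A minimal sketch, reusing the percentages from the text (the threshold is an invented parameter):

    # Historical shares of firm A's money flows by destination.
    HISTORICAL_SHARE = {"Colorado": 0.80, "rest_of_US": 0.20, "Switzerland": 0.00}

    def flag_deviations(this_week: dict, threshold: float = 0.03):
        """Return destinations whose share deviates from the
        historical pattern by more than the threshold."""
        return {dest: share for dest, share in this_week.items()
                if abs(share - HISTORICAL_SHARE.get(dest, 0.0)) > threshold}

    week = {"Colorado": 0.75, "rest_of_US": 0.20, "Switzerland": 0.05}
    print(flag_deviations(week))   # Colorado and Switzerland are flagged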

Learning

Learning is one of the ubiquitous buzzwords in all these developments. Lifelong learning is touted as an emerging phenomenon, as if employees in the past never needed to keep learning. Chris Argyris and Peter Senge [64] have convinced many that there are now Learning Organizations, [65] wherein change is a part of everyday experience. Needed are new methods which will help us to visualize this change. As in so many other realms the re-organization of knowledge lies in no small part in trying to measure what was previously not measurable, to visualize what was previously invisible.

Some see a challenge of understanding these hitherto diverse phenomena as part of the emerging knowledge society. For instance a new International Institute of Infonomics (Heerlen), which will work closely with the Maastricht McLuhan Institute, sees a need to link these various business dimensions, namely, individuals, companies, and institutions, and organizational aspects such as products, processes, control, transactions, and learning, and relate all these to shifts in the theory of knowledge and knowledge transfer. [66] Some basic references on learning are provided in Appendix 4.

Closely linked with these developments are new theories and practices as to how students learn best. We are told that a shift in the role of teachers is under way: "From the sage on the stage to the guide on the side." There is an ever greater emphasis on problem-based, project-based, and even product-based learning. There is a fascination with collaborative learning, with electronic notebooks for courses, and with an assumption that students will do their own annotations. Here developments in multivalent documents offer new possibilities. [67]

Levels of truth

For such an approach to work we also need new methods to determine the truth level of any claim, as Tim Berners-Lee has already envisaged in his concept of a semantic web. Such an approach will allow us to distinguish between personal claims; those accepted by groups, national, and international bodies; those which are logically true; and those which have been experimentally demonstrated.

It is important to recognize, moreover, that this truth level is itself subject to historical changes. During the Middle Ages, for instance, the Donation of Constantine -- whereby that Roman emperor theoretically handed over the rights of his empire to the Catholic Church -- was long believed to be true. During the fifteenth century, the Renaissance scholar, Lorenzo Valla, after careful palaeographical and codicological detective work, discovered that the document had been written after the fact and could therefore demonstrate that it was false.


Figure 10. Schematic view of a European trend to use the Internet for new access to enduring knowledge.

Hence, in future the question Was the Donation of Constantine true? will have a nuanced answer: it was false, but for a given period it was believed to be true. If all this sounds close to sophistry and too abstract, it is useful to recall that these problems of veracity remain very real in contemporary medicine. Almost every day new drugs are developed. National bodies require that these drugs be carefully examined by experts and undergo rigorous testing. Medical experts are therefore consulted, whose cautious opinions are sometimes quoted out of context to give a more favorable impression of a drug's efficacy. Only through access to the full context of these comments can the level of certainty reasonably be determined.
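
In machine-readable form such a nuanced answer requires storing not only a claim's present status but also the interval during which it was generally believed. A minimal sketch, with an approximate interval chosen purely for illustration:

    # A claim whose truth status is itself historical.
    DONATION_OF_CONSTANTINE = {
        "claim": "Constantine granted his empire to the Church",
        "status_now": "false (shown by Valla to be a later forgery)",
        "believed_true": (800, 1440),   # approximate interval, for illustration
    }

    def status(claim: dict, year: int) -> str:
        """Answer 'was this true?' relative to a given year."""
        start, end = claim["believed_true"]
        if start <= year < end:
            return "believed true at the time; " + claim["status_now"]
        return claim["status_now"]

    print(status(DONATION_OF_CONSTANTINE, 1100))
    print(status(DONATION_OF_CONSTANTINE, 2000))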

In the case of scientific laws such as gravity, the level of certainty is, theoretically, beyond discussion -- else it would not be a true law. In many cases, however, there are hypotheses, where there is no single, incontrovertibly accepted answer. In such cases we need access to different views concerning the problem.

As noted earlier, the re-organization of knowledge which is under way is much more than a simple scanning process from analog to digital form. It entails new approaches to all branches of knowledge. It will introduce animated, dynamic aspects into knowledge, both in the sense of science (cf. German Wissenschaft) and know-how (cf. Dutch kennis). Individual facts and claims will be contextualized through links, which Horn would call hypertrails. These hypertrails will, moreover, have their own recognizable structures qua knowledge organization and will have cultural dimensions (e.g., proverbial French logic vs. English reason), historical dimensions (with different chronologies), and geographical dimensions (with different kinds of maps and different boundaries over time).

9. Augmented knowledge

The late Marshall McLuhan spoke of technologies as extensions of man. In the past those technologies were invariably extensions of physical aspects of the human condition. Through a mechanical "extension" of our arms, bulldozers allowed us to push more than we ever could with our bare hands. Similarly cranes allowed us to lift more. Telephones were extensions of our voice, televisions of our eyes. Through web cameras on the Internet we can see news halfway around the world without having to go there. An almost anecdotal example in a recent newsletter gives an idea of the unlikely extensions of technology, which are occurring through these Internet developments:

Connecting your Singer to Cyberspace

Was I shocked when I read this sentence in a sewing discussion board? "Husquavarna also has a new machine coming out in January which changes their software/hardware needs. The new machine will use a regular 3.5" floppy..." All my experiences with Husquavarnas were with chain saws, but beyond that -- sewing machines with USBs, floppies, and their own file format debates? Bernina, Pfaff, and Singer all have developed product lines that connect to the Internet (via a PC) so you can download patterns and embroidery. [68]

The combination of technologies which are the focus of this paper continue this trend but at the same time take it inward. Augmented books and the like are about extending our memory and the capacities of our minds. In the past the amount I knew depended on how much I had learned personally, and then remained largely a function of how good my memory was. If my work involved scholarship and I was fortunate enough to have at my disposal a great library with the resources of a major reference room, then such tools could compensate somewhat for limitations of memory, but I was still limited to what was within reach at any given moment. If I happened to live in Toronto, it was of little help to know that a wonderful reference work stood somewhere on a shelf at the British Library, the Bibliothèque Nationale, or the Vatican. Through augmented books, linked via notepads to virtual reference rooms through wireless connections, all this changes. I can have at my fingertips the cumulative memory concerning enduring knowledge as found in our libraries, museums, and archives.

What is particularly fascinating in these developments is the promise of a new kind of convergence that goes far beyond the technological dimensions, which were the starting point of this essay. Some authors have written about collective [69] intelligence or connected intelligence. [70] Pierre Levy, for instance, claims that we have created three kinds of spaces in the past and that the new technologies and the Internet are introducing a fourth space entailing knowledge and collective intelligence (figure 11). [71] Richard Barbrook notes that Levy's ideas are inspired by Islamic theology, whereby the "collective intelligence" is rather similar to God and that Levy's real interest is in real-time direct democracy. [72]

Kind of Space                           Products
Earth                                   Language, Technology, Religion
Territory                               Agriculture, City, State, Writing
Commodities                             --
Knowledge and Collective Intelligence   Virtual Agora, New Democracy

Figure 11. Four kinds of spaces according to Pierre Levy in his book Intelligence Collective (1996).

Others have gone further. Gregory Stock has written of Metaman: The Merging of Humans and Machines into a Global Superorganism. The Fishers have written of The Distributed Mind: Achieving High Performance Through the Collective Intelligence of Knowledge Work Teams. Peter Russell has gone further to write The Global Brain Awakens: Our Next Evolutionary Leap. [73] Meanwhile visionaries in the business world, such as Allee, speak of Fifth Generation Management: Co-Creating Through Virtual Enterprising, Dynamic Teaming, and Knowledge Networking, Halal et al. speak of The Infinite Resource: Creating and Leading the Knowledge Enterprise. [74] On the Internet there are also a series of Conceptions of the (sic!) Global Superbrain. [75] In other words, something much more than a re-organization of old and new knowledge has begun. Implicitly there is a new convergence between these.

On the one hand, as we have noted, the realm of enduring knowledge has focussed ever more on the potentials of systematic knowledge organization. Whereas the world of new knowledge championed by technology and business would have us believe in natural language searches, whereby every term is equal in a complete democracy of words, the champions of enduring knowledge remind us that there are distinctions among words, such as concepts, terms, descriptors, and captions; that there are a number of coherent relations among these concepts; and that there are distinctions to be made between structures and systems. The world of enduring knowledge relies on the traditions of logic, philosophy, grammar, linguistics, and, more recently, classification and information science.

In the world of education, the notion of Computer Supported Collaborative Work (CSCW) has taken the form of Computer Supported Collaborative Learning (CSCL). Here there is an assumption that young children and students can benefit from collaboration even before they have specialized knowledge. Sharing becomes a virtue in itself even before there is content to be shared. It is striking that mobile instruments play an important role in this vision also. For instance, Stanford University's Learning Lab foresees working with collaborative learning tools, interactive spaces, personal learning strategies, and tools. One such example is their Personal Electronic Notebooks with Sharing (PENS). [76]

On the other hand, from the business side this move towards convergence has seen an increasing attention to visualizing the steps/stages/phases in processes and production, a Taylorisation of the whole spectrum of human activity from physical labor to the creative process itself through the use of simulations, workflow schemes, and enterprise integration. One could argue that Engelbart's concept of augmented knowledge at the Bootstrap Institute is effectively another step in this same direction. And as the mastery of processes becomes ever greater, there is a corresponding emphasis on (business) process re-engineering.

Along with these attempts to visualize processes, there have been two other trends: one towards operation at a distance (through tele-conference, tele-collaboration, tele-control, tele-management, tele-operation, and tele-presence), and another toward virtualization (through virtual communities, virtual corporations, schools, and universities). And concurrent with this shift from scientific management of work in the sense of Taylor to knowledge management is an emphasis on learning organizations, on learning applied to entities, companies, and corporations, independent of the individuals and persons working within them. (Hence, theoretically, no single executive "makes a difference" to a company and can therefore be replaced as mergers and takeovers ensue.) These quests to master new knowledge owe much to systems theory, "chaos theory" (a seemingly contradictory combination of terms), complexity, [77] and developments in neural networks, whereby systematic treatments of apparently random forms bring unexpected patterns of order.

What makes these trends the more significant is that thinkers concerned with the systematization of intellect, such as Guilford, have intuitively sought to link units, classes, relations, systems, etc. with products and operations (figure 12).


Figure 12. J. P. Guilford's Structure of Intellect.

Through an explicit integration of old and new knowledge this is possible. We need to combine the logical rigor of the innovations in knowledge organization by the champions of enduring knowledge in memory institutions with the dynamic methods being developed by those using new knowledge in the worlds of technology, business, and finance. The ensuing augmented knowledge will bring about much more than a re-organization of knowledge. It will generate an unparalleled series of new insights. Indeed, what now seems excessive hype may prove to be understatement in the face of the fundamental transformations in how we deal with knowledge in the century to come. Augmented knowledge is a key to our future.

Seen in terms of the history of the Internet, all this has an added fascination. For it will be recalled that the idea of augmented knowledge was also one of the initial inspirations for the work of Douglas Engelbart, whose laboratory became one of the first nodes of the ARPANET, precursor of the Internet, in 1969. And yet the focus was quite different. Engelbart was concerned with sharing information within a large organization (figure 13) and with how engineers and designers in one large company could share their ideas online in the creation of new knowledge (figure 14). For this reason, Engelbart was fascinated with the potentials of CSCW, which has emerged as a field in itself. [78] Engelbart's vision of Augmented Knowledge, which forms the basis for his Bootstrap Institute, bears some interesting parallels to the emphasis by Frederick Brooks [79] on Augmented Intelligence as opposed to Artificial Intelligence (IA vs. AI).

Meanwhile, there have been curious shifts in the original vision of the Internet as foreseen by Engelbart and his colleagues over the past 30 years. Nearly a half century ago, Engelbart, one of the most brilliant minds of our century, foresaw the possibility of hypermedia e-mail and CSCW for large corporations and ultimately for everyone. The United States has continued to develop the notion of the Internet as a tool for creating new knowledge. The rhetoric of being collaborative remains, but the realities of the dominant competitive ethic of the United States have led to an ever greater emphasis on e-commerce, whereby short-term commercial gain dominates over long-term intellectual advantage.

Collaborative work tools have become very big business. In 1996, Boeing, Ford, Kodak, MacNeal-Schwendler [80] , and Structural Dynamics founded the Technology for Enterprisewide Engineering Consortium. [81] Recently Boeing has begun licensing its collaborative software. [84] New companies such as Alibre [85] and Aristasoft [86] are joining the ranks of more established firms in the field such as SherpaWorks, which produces Enterprisewide Product Data Management, [87] and Nexprise, [88] which is used by Lockheed Martin.


Figure 13. From Douglas Engelbart: Each functional domain is a candidate for working interchange with all others. [82]


Figure 14. From Douglas Engelbart: Close cooperation between large organizations puts new demands on knowledge-work interchange. [83]


Figure 15. Vision of a new approach to augmented knowledge, which combines enduring knowledge (through libraries, museums, and archives via virtual reference rooms) and new knowledge (through collaborative environments) with filters for various cultural viewpoints.

Some idea of the scale of this enterprise emerges if one looks to Lockheed Martin, a firm which is known to many as a missiles and space company. The Lockheed Martin Palo Alto Lab is working with the University of Southern California on Virtual Environments for Training (VET) [89] . It also has a division called Lockheed Martin Integrated Business Solutions which describes itself as "the world's preeminent systems engineering and integration [90] company."

For instance, we enabled one of the country's largest residential home lenders to realize its potential for financing over $1 trillion in mortgages. We delivered a new multitiered data warehouse and client/server architecture to one of the nation's largest telecommunications companies, enabling it to maximize network services and decrease the threat of telephone fraud. [91]

Such figures make the predictions that e-commerce will reach a few billion dollars in the next few years sound trivial. Meanwhile, as Europe becomes more aware of the Internet, it has focussed increasingly on its potentials as a means of gaining new access to enduring knowledge [92] and has seen this also as a way of augmenting knowledge.

It is not a question here of determining whether the American goal is better or worse than the European. What interests us is that both are concerned with augmenting knowledge and both are complementary. Having access to the cumulative memory of civilization is a wonderful thing. Having access to this as we work together collaboratively is even more powerful. What we need to do is combine the American approach qua new knowledge with the European approach qua enduring knowledge to arrive at fundamental new levels of insight for the whole of humanity.

A next step will be to include in this vision a South American, an African, and an Asian point of view, as is emerging in Asia through pioneering work in Japan, Malaysia, Singapore, Hong Kong, and, increasingly, China. [93] This approach with respect to continents will then need to be fine-tuned such that it can reflect accurately the diversity of individual countries, provinces, cities, and ultimately individuals (figure 15).

Explicit Knowledge                            Implicit Knowledge
Codified                                      Tacit
Information, Theories, Formulae, Procedures   Experience, Abilities, Attitude
Handbooks, Manuals, Drawings, Schemas         Being able and willing
Transfer by Instruction                       Sharing through Demonstration
Obtainable by Study                           Copying and Imitation
Explicit Knowledge is Power                   Implicit Knowledge can be Power

Figure 16. Basic differences between explicit and implicit knowledge.

The domain of enduring knowledge is focussed primarily on what can be termed explicit knowledge. On the other hand, there is also a challenge of capturing implicit knowledge (figure 16), which Polanyi termed personal or tacit knowledge [94] and which is closely related to the challenge of filters for cultural dimensions. How one captures the essential characteristics is more elusive. Here virtual reality environments such as VET can prove useful in the future, but this cannot solve the problem of retrospective treatment of these themes. Meanwhile there are other more immediate challenges.

10. Challenges

Various stumbling blocks stand in the way of such a re-organization of knowledge as outlined above. Two major challenges are metadata and the privatization of knowledge. Others include questions of knowledge management: the extent to which indexes should be pre-structured or made available on-demand, the potentials of cooperation, and the role of filtering.

Metadata

The scenarios outlined above concerning virtual reference rooms set out from a simple, but elusive assumption, namely, that the names, subjects, places, and chronologies in the various reference works are all standardized such that one can move seamlessly from one list to another. This is the domain of metadata, which establishes rules for information about information. Many important advances in metadata have been and are being made, notably through the Dublin Core [95] and the Resource Description Format (RDF). [96] These pragmatic solutions are important because they promise universal access to at least some basic aspects of collections. [97] Nonetheless, a profound danger remains that the ability to perform deeper queries of sources could subsequently disappear.
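
The Dublin Core approach can be conveyed in a few lines: a small, fixed set of elements (title, creator, subject, date, and so on) that any collection can supply. The sketch below builds such a record and serializes it with the standard Dublin Core element namespace; the record itself is an invented example:

    # An invented record using standard Dublin Core element names.
    record = {
        "title":      "Last Supper",
        "creator":    "Leonardo da Vinci",
        "subject":    "Life of Christ",
        "date":       "1495/1498",
        "type":       "fresco",
        "coverage":   "Milan, Santa Maria delle Grazie",
        "identifier": "example-id-001",   # hypothetical identifier
    }

    def dc_xml(rec: dict) -> str:
        """Serialize the record as simple Dublin Core elements."""
        ns = 'xmlns:dc="http://purl.org/dc/elements/1.1/"'
        body = "\n".join(f"  <dc:{k}>{v}</dc:{k}>" for k, v in rec.items())
        return f"<metadata {ns}>\n{body}\n</metadata>"

    print(dc_xml(record))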

Moreover, such solutions focus entirely on static knowledge, on contemporary knowledge, without attention to cultural or historical changes in knowledge. As noted earlier, new methods are needed which enable access to these complexities of historical and cultural diversity, such that we can develop new kinds of dynamic knowledge maps, some of which are evolutionary. To this end, it has been recommended that the European Commission, in the context of its long-term research program, promote the creation of a new kind of metadata. This could then become a starting point for further collaboration with other countries around the world.

Because such a long-term approach to metadata will entail very high level and large-scale technologies in areas where industry will reasonably not see immediate gain, it has also been recommended that the underlying technologies be developed by the Joint Research Centre (JRC), probably in conjunction with national research institutes in countries such as Germany (GMD), France (INRIA), and Italy (CNR). Second, given the complexity of the challenge, a modular approach has been suggested.

Privatization of knowledge

Trends toward privatization of knowledge have come mainly from publishers and multi-national corporations, although recently universities have become more active in this domain.

Publishers

Already during the Renaissance various city-states and cities created city libraries for their citizens. In some university towns (e.g., Bologna, Padua, Heidelberg, Oxford) the university library informally served a public function. Such examples evolved into the concept of public libraries paid for by taxpayers and readily accessible to all citizens.

During the Renaissance and until recently publishers played an essential role in publishing the works of authors, which were then bought by libraries and thus made available to members of the public who could not afford to have their own collections of books. While this basic pattern has continued, other trends in the past 50 years have begun to undermine this long, positive tradition of publishers.

With the enormous rise of universities during the 1960's and 1970's, spawned partly by the "baby boom" after the Second World War and also by a new philosophy which held that potentially everyone had a right to a university education, publishers realized that there was a new market in providing reference books and classics for these new institutions. This inspired the rise of a reprint industry. The good news was that new universities now had their own copies of standard reference works. The bad news was that these reprint companies increasingly assumed the copyright of works which they had merely reprinted. As these materials became increasingly connected with electronic databases through companies such as Dialog, new deals between reprint companies and those offering electronic services (e.g., Reed and Elsevier, later Reed-Elsevier) meant that reference materials, which had largely begun in the public domain, were now in the hands of private companies, themselves subsidiaries of multi-national conglomerates.

These reference materials entail two things: the actual contents and the names or subjects which those contents describe. If those names and subjects become viewed as proprietary, then the hope of combining these names with others to make universally accessible authority lists becomes difficult if not impossible to realize. This is not to say, of course, that publishers no longer have a role to play, only that with respect to the metadata which they use, qua names, subjects, places, etc., more cooperation with public institutions is needed. Else the concept of universal access will no longer be feasible.

Industry

A second stumbling block toward this vision comes from an unexpected quarter: industry and business. [98] The business world assures us that the knowledge society is the emerging paradigm for the twenty-first century. One might have expected that business would do everything in its power to facilitate this. To understand why this is not the case, some history is needed. Traditionally, a single institution was responsible for coordinating the whole of known knowledge. In the West, from about the fourth to the eleventh century, this task was almost exclusively in the hands of the Catholic Church. The eleventh and twelfth centuries saw the rise of universities, and although certain orders of the church, notably the Franciscans, the Dominicans, and later the Jesuits, continued to play a considerable role, the universities, as they gradually became secularized, effectively became the central repositories of learning until the nineteenth century. Since then six major changes have occurred:

  1. There has been the rise of polytechnics (in France the Ecole Polytechnique) and technical colleges (in Germany the Technische Hochschule, as at Braunschweig, Berlin, Darmstadt, and Munich, and the Eidgenössische Technische Hochschule in Switzerland). As a result, technical knowledge increasingly came within the purview of these new institutions.
  2. The rise of nation states led to the idea of national libraries. Here Panizzi's vision for the British Museum soon became a model for national libraries in Europe and all around the world. By the end of the nineteenth century, the enormous public support of these national libraries made them the new central repositories of knowledge.
  3. As the idea of the nation state flowered, the notion of national research laboratories became an increasingly natural prospect. One had to do specialized research in order to protect the competitive advantage of one's country. In countries such as Italy, the national research council (Consiglio Nazionale delle Ricerche) typically maintained close contact with the great universities (Rome, Bologna, Pisa), but at the same time produced major bodies of knowledge not readily accessible to those within the university. In the United States, these institutes grew into a vast network, often wrapped in secrecy. For instance, the Department of Energy alone has 20 major laboratories including Ames, Argonne, Lawrence Livermore, and Los Alamos. [99] It is significant that the names of these institutions are familiar while their activities are not.
  4. In other countries such as Britain, another development emerged in the 1960's and 1970's. Under a rhetoric that government must focus on its "core business" there was a trend to privatize government holdings, with the result that major research laboratories were sold, often to private enterprise.
  5. The major corporations found it useful to develop their own research labs. These grew steadily in scale. Here a few examples will suffice to give an indication of the scope of the phenomenon. IBM, for instance, has major laboratories such as Watson and Almaden, plus an Advanced Semiconductor Research and Development Center (in East Fishkill, NY), [100] as well as laboratories in Winchester (England), Zürich (Switzerland), and Naples (Italy). Similarly, the Japanese firm NEC has a number of laboratories in Japan, namely Kawasaki, Osaka, Ohtsu, Sagamihara, and Tsukuba, as well as others in Europe (Bonn and Berlin) and North America (Princeton [101] and San Jose). Philips has its main laboratory in Eindhoven with others in England (Redhill), France (Limeil-Brévannes), Germany (Aachen), and the United States (Briarcliff Manor). In 1997, Hitachi had 17,000 researchers in 35 labs with an annual budget of $4.9 billion. The University of Bologna, one of the oldest universities of the world and also one of the largest with 103,000 students, has an annual budget approaching $1 billion, of which a large part goes to day-to-day teaching and administration rather than to research.
  6. There has been a rise of private universities, often connected with a specific corporation. There are over 1000 corporate universities in the United States alone. Some of these corporate universities are larger than most public universities. For instance, the campus of Motorola University has 100,000 students. The new campus of the British Telecom University will have 125,000 students.

The net result of these six changes over the past two centuries is an implicit crisis for any vision of true sharing of knowledge. The mediaeval aim of the university as representing the universe of studies (universitas studiorum) is mainly wishful thinking today. While some universities continue to pursue their quest for the public good, large domains of new knowledge are now in the hands of national institutes, which continue to work on their interpretation of the public good, and corporate research institutes, which are concerned only with the private gains of the companies they represent. Meanwhile, other universities, conscious of these dangers, are now trying to privatize the intellectual property of their professors with a view to retaining at least some stake in an ever more divided vision of the universe. [102]

Some members of that corporate world spend their lives trying to convince us that the traditional distinctions between public and private good are now outdated, and that we need to devote all our energies to private interests. In Britain, there is even national support for the creation of a University for Industry, a title which implies that public and private interests are now one and the same.

Yet they clearly are not. The public interest assumes that we share knowledge in reaching new insight. For this reason, entry to the greatest libraries of the world in London, Paris, and Rome has always been granted on the basis of who is most academically qualified, rather than who pays the largest entry fee. Private interest claims that each corporation must hoard as much knowledge as possible through patents, copyrights, and other protection measures to prevent others from having access to this knowledge.

Paradoxically, we have a corporate structure which calls on us, on the one hand, to think internationally and indeed to become global in our outlook, while on the other hand leading to an increasing privatization and segmentation of knowledge that stands in the way of the very global vision it preaches. Some businesspeople urge that we must develop new (private) business models for the knowledge society, and they are right. But something even more important is required. We must also develop new public models for sharing knowledge, or else we shall increasingly find that, notwithstanding fashionable trends toward knowledge management, all of us, including private laboratories, remain unaware of developments elsewhere and billions are wasted re-inventing the proverbial wheel.

Pre-structure vs. on-demand

We have noted earlier the need for metadata which, in conjunction with authority lists, will enable us to search systematically through the titles of works and their full texts. In the case of major authors such as Aristotle and Leonardo da Vinci, complete basic indexes will exist online. This is a task of pre-structuring knowledge so that members of the public and scholars will find things without difficulty.

Yet if every possible connection were made with everything else, the size of the concordance-like databases would far exceed that of the original sources in full text: with n indexed terms there are already on the order of n² potential pairwise links. Moreover, as new terms emerge, new possible connections will come to light, so the list would have to be continually updated.

In the United States, there is a tendency to treat such problems purely in terms of demand. This is the idea underlying Frequently Asked Questions (FAQs), and it may prove interesting to explore the idea of historical FAQs. On the other hand, while popularity is a reasonable determining characteristic in the case of television programs, where ratings often rule the day, it does not always make sense with respect to scholarship. The most popular books do not always have much to do with the most important books. In great libraries, a book may well lie untouched for years and then, in the hands of an insightful scholar, suddenly generate enormous insight.

Needed, therefore, is a combination of pre-structured links available in databases and on-demand searching for new links generated on the fly. In many cases a scholar is likely to have specific questions concerning a relatively small corpus of a few dozen or a few hundred books. Having set these parameters, scholars can then have agents go to work as they are about to leave the office and find the processed results when they return the next morning.
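To make this concrete, here is a minimal sketch of such an overnight agent in Python. It is purely illustrative: the function name run_overnight_search, the directory layout, and the report format are assumptions made for the example, not a description of any existing system. The agent scans a scholar-defined corpus of plain-text files for a list of terms and leaves a processed report to be read the next morning.

    import os
    import re
    import json

    def run_overnight_search(corpus_dir, terms, report_path):
        # Compile a whole-word, case-insensitive pattern for each term.
        patterns = {t: re.compile(r"\b" + re.escape(t) + r"\b", re.IGNORECASE)
                    for t in terms}
        hits = []
        for name in sorted(os.listdir(corpus_dir)):
            if not name.endswith(".txt"):
                continue
            with open(os.path.join(corpus_dir, name), encoding="utf-8",
                      errors="ignore") as text:
                for lineno, line in enumerate(text, start=1):
                    for term, pattern in patterns.items():
                        if pattern.search(line):
                            hits.append({"term": term, "file": name,
                                         "line": lineno, "text": line.strip()})
        # Write the processed results for the next morning.
        with open(report_path, "w", encoding="utf-8") as report:
            json.dump(hits, report, indent=2)
        return len(hits)

    if __name__ == "__main__":
        found = run_overnight_search("corpus", ["perspectiva", "camera obscura"],
                                     "report.json")
        print(found, "matches written to report.json")

Launched as the scholar leaves the office, such a script would by morning have recorded every passage in the corpus mentioning the chosen terms; a real agent would add ranking, collation, and links into the pre-structured databases described above.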

Cooperation

Industry has demonstrated that cooperation through CSCW is effective, particularly in mega-projects such as the design of the Euro-Fighter. When the Hubble Space Telescope broke down, a team of 10,000 scientists worked together to repair it. A simple example of where this could lead is mentioned in a recent edition of the Chronicle:

A climatologist at the Rutherford Appleton Laboratory in Chilton, England, is calling for volunteers to assist him in evaluating the accuracy of computer models of the earth's climate. Myles Allen, head of the Casino-21 project, says by harnessing the power of at least a million computer desktop machines, he could compare the numerous variables included in computer models with actual data on weather patterns from 1950 to 2000. He could then extrapolate from those results predictions for each year from 2000 to 2050. [103]

In the humanities the extent to which cooperation is feasible and practical is more of an open question. It is true that major reference works such as the Oxford English Dictionary, the Grove Dictionary of Art (in 34 volumes), and the Encyclopaedia Britannica are the product of cooperation by a large team of authors. Yet most major monographs are still the product of a single individual, the argument being that one mind is needed to synthesize the information and create a coherent line of argument.

Here some visionaries are more optimistic. Baron Professor ir. Jaap van Till, in a recent communication, "Just Imagine," [104] drew an analogy with a lens in optics. One can cover part of a lens and still see the image, although it is somewhat lessened in strength. Therefore, he suggests, one could build the equivalent of a super-resolution lens by founding a European Observatory or its equivalent, in which thousands of persons would look at one thing at a time, their observations then being combined to arrive at a bigger picture -- which brings us back to the concept of augmented knowledge outlined above.

Filtering

If all the above materials are truly accessible, however, then there will be new challenges of filtering the materials in keeping with the level of the user. This is a domain where higher-level agents are likely to come into their own. Given sufficient knowledge of the background, education, tastes, preferences, and approaches of a user, high-level agents can theoretically take over many of the mundane tasks of everyday searching and other activities. To what extent the choices of these "autonomous" agents will, could, or might encroach upon individuals' perceptions of their own freedom is an open question which lies beyond the scope of this paper. [105]
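As a sketch of what such filtering might look like, consider the following Python fragment. The profile fields (level, interests) and the scoring weights are invented for illustration; an actual agent would draw on a far richer model of the user.

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        level: str                       # e.g. "school", "university", "scholar"
        interests: set = field(default_factory=set)

    def score(document, profile):
        # Reward topical overlap with the user's interests, plus a match
        # between the document's level and the user's level.
        overlap = len(profile.interests & set(document["topics"]))
        level_match = 1 if document["level"] == profile.level else 0
        return 2 * overlap + level_match

    def filter_for_user(documents, profile, top_n=5):
        # Return the top_n documents best suited to this user.
        return sorted(documents, key=lambda d: score(d, profile),
                      reverse=True)[:top_n]

    if __name__ == "__main__":
        documents = [
            {"title": "Perspective for beginners",
             "topics": ["perspective"], "level": "school"},
            {"title": "Alberti and Renaissance optics",
             "topics": ["perspective", "optics"], "level": "scholar"},
        ]
        profile = UserProfile(level="scholar",
                              interests={"perspective", "optics"})
        for document in filter_for_user(documents, profile):
            print(document["title"])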

11. A new public good

In the nineteenth century the nation state became the guardian of the public good. This continued throughout the twentieth century until its last decades, which saw an increasing rhetoric telling us that government regulation, dismissed as meddling intervention, is no longer necessary. We must return, we are told, to the laissez-faire approach originally foreseen by Adam Smith. That the father of capitalism also spoke very clearly about the need to share resources with poorer nations is often omitted by such individuals.

The domain in which these arguments are most vividly discussed is telecommunications. Government regulation, we were assured, was the only stumbling block to progress. For a few years the regulatory bodies earnestly examined whether their role was still relevant. They have since discovered that their presence is more necessary than ever and that more cooperation is needed if they are to be effective at the global level.

There is also recognition of the limitations of privatization in other domains. A decade ago, for instance, there were enthusiastic movements toward at least the partial privatization of railways in Britain and elsewhere in Europe, under the rhetoric that increased competition would lead to both lower prices and better service. The actual results have been quite different. In many cases the prices are higher, the service is worse and, much more disturbing, there is increasing evidence that simple greed has compromised traditional safety standards, resulting in disastrous train accidents, the most recent of which occurred at the beginning of October 1999 in London, England.

In its national government, the United States relies on a concept of checks and balances to ensure that the relative powers of the president and congress are kept in equilibrium. Internationally we need a corresponding set of checks and balances between public and private interests, between the intrinsic, implicit rights of all citizens, and the legitimate, explicit ambitions of global business. For instance, the whole area of Internet governance which, through ICANN, is still dominated by a U.S. viewpoint, needs to reflect adequately the rights of all citizens, including the majority of the world's population, which at present lacks access to the Internet, telephones, and television. [106]

In terms of private interests we have many global corporations. These corporations are guided in the first instance by a short-term profit motive. The public efforts of individual governments are not enough. We need new spokespersons for the public good to coordinate these efforts: locally, regionally, nationally, and internationally. There are many international organizations working in the direction of cooperation, often unaware of each other's efforts. [107] Multi-national organizations such as the European Union and ASEAN (Association of Southeast Asian Nations) can play a significant interim role in bringing us closer to true international cooperation in this sphere. At present, we have only one global organization devoted to the public domain, namely UNESCO (United Nations Educational, Scientific & Cultural Organization), a body whose annual budget, as Philippe Quéau has observed, is equal to seven minutes of the annual budget of the U.S. military. And it is a body, moreover, to which the richest country in the world is not contributing.

Recently there was a meeting in Florence (October 4-7, 1999) which brought together for the first time leading members from the World Bank, UNESCO, and the realms of culture (including ministers of culture from 40 countries). The President of the World Bank, James Wolfensohn, explained that the bank's interest in culture was ultimately determined by enlightened self-interest. In the past, ignorance of cultural dimensions often led to the failure of considerable investments. [108]

A similar insight is needed in the realm of knowledge as a whole. Without international sharing, without a global vision of access to both enduring and new knowledge, we cannot make sufficient progress. Persons, institutes, memory institutions, universities, companies, corporations, and governments need to work together in building something greater than any single body could ever hope to achieve on its own. The once fiercely competitive world of business has recently learned the power of combining competition with cooperation in the form of "coopetition." [109] An awareness along these lines is required in order to arrive at a new kind of public good at a global level.

12. Conclusions

This paper has argued that connected computers in combination with miniaturization, mobility, GIS, GPS, agents, and OCR are bringing about a systemic change. A transformation of hypertext can be seen as a first dimension of this change. More significantly, this systemic change is leading to augmented books, whereby traditional printed carriers of knowledge become enhanced in their power to convey knowledge through new electronic links with virtual reference rooms.

Augmented books lead to augmented minds. Traditionally, the amount of knowledge a book could convey was a simple combination of the book's content plus the reader's prior knowledge. A fundamental consequence of the new technologies is that this amount of knowledge can now be greatly enhanced by calling on the collective memory of mankind as recorded in the great reference rooms of the world. These vast repositories, once the instruments of a few thousand scholars with access to the reading rooms of national libraries and a few great historical libraries, can now be made available to everyone.

We have shown that making virtual reference rooms accessible leads to much more than a mere translation from printed knowledge to electronic knowledge. Because they entail new indexing and systematic hypertextual cross-referencing, augmented books in combination with virtual reference rooms offer many new possibilities for reference. They imply a virtual reorganization of libraries and ultimately a re-organization of knowledge itself.

Thus connected computers, in combination with a series of interconnected new technologies, are bringing about a concept of augmented knowledge through new access to traditional knowledge in libraries, museums, and archives. This is all the more intriguing because the initial vision of the Internet as envisaged by Douglas Engelbart foresaw augmented knowledge with respect to new knowledge through CSCW. Our suggestion is that the true potential of the Internet lies in harnessing a combination of these two visions: using systematic access to the enduring knowledge of libraries, museums, and archives as a key to new collaborative insight and discovery. We noted two large and three minor stumbling blocks to this vision, and suggested that a new definition of the public good at a global level is required to address these. A challenge for pioneering groups such as the Internet Society is to ensure that these enormous potentials of augmented books and augmented knowledge are unleashed for the benefit of humanity. Thereby the vision of "Internet for everyone" can become reality.

Appendix 1: Institutions and groups concerned with knowledge organization

(For Knowledge Management see Appendix 3 below.)

International

Consortium of European Taxonomic Associations  (CETA)

Cybernetics and Systems Theory [110]

Global Group

International Network for Terminology  (TermNet) [111]

International Information Centre for Terminology  (Infoterm)

Gesellschaft für Terminologie und Wissenstransfer (GTW) [112]

International Standards Organisation (ISO TC 37)

International Institute for Terminology and Research  (IITF) 

Dr. Budin & Dr. Picht (Copenhagen)

International Federation for Information and Documentation  (FID) [113]

The Hague

International Society for Knowledge Organization  (ISKO) [114]

Journal: Knowledge Organization

Amsterdam

Cf. ISKO Germany [115]

Society for Information Management  (SIM) [116]

Chicago

Themes

Complexity, Complex Systems, Chaos Theory [117]

Description Logics [118]   (DL)

Terminology web sites [119]

National

Belgium

Ms. Cerisier

Mundaneum [120]

Rue des Passages 15

B-7000 Mons

Belgique

Tel. 32 65 31 53 43

Fax. 32 65 31 66 63

P. Otlet and H. La Fontaine, succeeded by G. Lorphèvre

Canada

Calgary

Knowledge Science Institute   (KSI) [121]

Brian R. Gaines

Toronto

Knowledge Media Design Institute  (KMDI) [122]

Process Interchange Format  (PIF) [123]

Faculty of Information Studies  (FIS) [124]

Denmark

Danish Information Applications Group  (DIAG)

Finland

Helsinki

Center for Knowledge and Innovation Research  (CKIR)

Knowledge Media Laboratory

Timo Saari

Nordterm Net [125]

Nordic Terminological Reference Format (NTRF)

Germany

Karlsruhe [126]

Institut für Angewandte Informatik und Formale Beschreibungsverfahren (AIFB)

Ontobroker

ACM Classification [127]

Gesellschaft für Klassifikation  (GfK) [128]

Cf. H.-H. Bock Information Systems and Data Analysis

Saarbrücken

Max Planck Institut für Informatik, Saarbrücken [129]

Algorithms and Complexity Group

Meta-Logics and Logical Frameworks

Programming Logics Group

India

Centre for the Development of Telematics [130]

Sam Pitroda

Italy

Trento

Knowledge Representation and Reasoning Group   (KRR)

A research group at IRST.

I-38050 Povo TN, Trento, ITALY

Phone: +39 (461) 314-517 - Fax: 302-040

Research interests include Knowledge Representation and Reasoning in Artificial Intelligence, Description Logics, Computational Logics, Natural Language Formal Semantics, Formal Ontology, Conceptual Modeling, and Flexible Access to Information.

Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98) [131]

Trento, 2-5 June 1998

Japan

Japan Institute of Chief Information Officers  (JICIO)

Netherlands

Amsterdam

Expert Centre for Taxonomic Identification [132]   (ETI)

University of Amsterdam

Heerlen

International Institute of Infonomics  (III)

Maastricht McLuhan Institute  (MMI) [133]

Cf. Infonomics.net [134]

Twente

Centre for Telematics and Informatics [135]

Zeist

Neural Networks Resources [136]

Neural Network World [137]

VSP, PO Box 346

3700 AH Zeist

South Africa

University of Capetown

Cladistics [138]

Cladistics e-journal of the Willi Hennig Society [139]

Switzerland

St. Gallen

Net Academy on Knowledge Media [140]

Bibliography [141]

United Kingdom

England

London

University of London

Atlas of Cyberspace [142]

Milton Keynes

Open University

Knowledge Media Institute (KMI) [143]

Bayesian Knowledge Discovery Project [144]

Digital Document Discourse Environment  (D3E) [145]

Resource Organization and Discovery in Subject Based Services  (ROADS)

Social Science Special Interest Group  (SOSIG)

Organization Theory [146]

United States

Cambridge, Mass.

MIT Center for Coordination Science  (CCS) [147]

Thomas W. Malone

David Tennenhouse [148]

Los Angeles

University of Southern California

Information Science Institute [149]   (ISI)

Santa Fe

Santa Fe Institute [150]

College Park

University of Maryland

Human-Computer Interaction Laboratory

Schenectady, NY 12501

General Electric Corporate Research and Development

Classification projects

Aristotle: Automated Categorization of Internet Resources [151]

AutoClass Project [152]

Automatic Text Classification

Mortimer Technology [153]

Electronic Dewey [154]

Graphical Elements for Information Browsing Systems [155]

David Fox

Knowledge Mapping

Ted Kesik [156]

Library of Congress Classification in Electronic Form [157]

Personal Taxonomies [158]

Taxonomic Resources and Expertise Directory [159]

Taxonomy of Information Visualization User Interfaces [160]

Chris North

Thesauri Building and Editing tools [161]

Visual Browsing in Web and non-Web Databases [162]

"An Algorithmic Approach to Automatic Thesaurus Generation" [163]

Classification in specific fields

Biology

International Council of Scientific Unions   (ICSU)

Committee on Data for Science and Technology   (CODATA) [164]

International Union for Biological Sciences  (IUBS)

Taxonomic Database Working Group  (TDWG)

International Working Group on Taxonomic Databases [165]

Botany

Integrated Taxonomic Information System  (ITIS) [166]

Zoology

Taxonomical Databases [167]

Taxonomy and Systematics at Glasgow [168]

Individuals

Grefenstette, Gregory [169]

Jenkins, Charlotte [170]

Levy, Pierre

Miller, Paul

Sigel, Alexander

Turner, Mark [171]

Companies

Banxia [172]

Decision Explorer

Cycorp

CYC

Austin, Texas

Context-Space

Doug Lenat [173]

Fritz Lehmann

Xerox Parc

Quantitative Content Analysis Area   (QCA)

Automatic Hypertext Indexing

Question Answering

Automatic Thesaurus Term Generation

Conferences

International Study Conference on Classification Research (ISCCR-1)

International Federation of Classification Societies   (IFCS)

Fourth International Workshop on Computer Aided Systems Technology  (CAST94)

Information Society 98 (IS'98)

International Society for Knowledge Organization Conference  (ISKO)

ISKO 2000 will be held in Toronto

Books

M. Mast, E. Nöth, H. Niemann, and E.G. Schukat-Talamazzini, "Automatic Classification of Speech Acts with Semantic Classification Trees and Polygrams," International Joint Conference on Artificial Intelligence 95, Workshop "New Approaches to Learning for Natural Language Processing," Montreal, 1995, pp. 71-78. [174]

Boyd Rayward, W., The Universe of Information. The Work of Paul Otlet for the Documentation and International Organisation, The Hague: FID (printed in Moscow), 1975.

Guilford, J. P., The Nature of Human Intelligence, New York: McGraw Hill, 1967.

Structure of Intellect. [175]

Cf. the work of Heiner Benking.

Grefenstette, G., Explorations in Automatic Thesaurus Discovery, Boston, MA: Kluwer Academic Publishers, 1994.

Harper, Richard H. R., Inside the IMF: An Ethnography of Documents, Technology and Organisational Action (Computers and People Series)

Horn, Robert E., Information Mapping. Mapping Hypertext, Lexington Institute Press, 1996. [176]

Litofsky, Barry, SMART Collection: cisi-1419. Utility of Automatic Classification Systems for Information Storage and Retrieval. [177]

Smith, John B., Collective Intelligence in Computer Based Collaboration, Hillsdale, NJ: Erlbaum, 1994. [178]

Zeng, Marcia Lei, "Searching for New Ordering Systems for the Internet Resources: A study of the approaches to organizing information in the World Wide Web Virtual Libraries from 1995 to 1997," School of Library and Information Science, Kent State University, Kent, OH 44242-0001, USA. mzeng@kentvm.kent.edu [179]

Articles

Ingetraut Dahlberg, "Library catalogs on the Internet: Switching for Future Subject Access," Advances in Knowledge Organization, vol. 5., 1996, pp. 155-164.

R. Dolin, D. Agrawal, A. El Abbadi, J. Pearlman, "Using Automated Classification for Summarizing and Selecting Heterogeneous Information Sources," D-Lib, January 1998. [180]

Louis Hoebel, William Lorensen, Ken Martin, "Visualize Temporal Constraints," SIGART Bulletin, Winter 1999, pp. 19-25.

Cf. Kitware [181]

Stephen Jay Kline, "Powers and Limitations of Reductionism and Synopticism," Program in Science, Technology and Society, Stanford University Report, CFI, February 1996.

Sougata Mukherjea, James D. Foley, Scott Hudson, [182] "Visualizing Complex Hypermedia through Complex Hierarchical Views," CHI 95.

Amos O. Olagunju, "Nonparametric methods for automatic classification of documents and transactions" (abstract) in: CSC '90. Proceedings of the 1990 ACM eighteenth annual computer science conference on Cooperation, p. 448. [183]

Dmitri Roussinov, "A Neural Network Approach to Automatic Thesaurus Generation" [184]

Gerda Ruge, "Combining Corpus Linguistics and Human Memory Models for Automatic Term Association"

Gerda Ruge, "Automatic Detection of Thesaurus Relations for Information Retrieval Applications" in Christian Freksa, Matthias Jantzen, Rüdiger Valk (eds.): Foundations of Computer Science. Springer: Berlin, 1997, 499-506

Appendix 2: Institutions and groups concerned with organizational aspects

Products

Design

Tele-Design (Architecture)

Marketing Research

Concept

Design and Development

Prototype

Test and Validate

Planning

Simulation and Modelling

Lockheed Martin

Simulation Based Design [185]

Manufacturing Design Processes and Products [186]

Design Management [187]

Technology for Enterprisewide Engineering Consortium

Boeing, Ford, Kodak, MacNeal-Schwendler, and Structural Dynamics Research (1996)

Manufacture

International Standards

Global Engineering Network (GEN) [188]

International Alliance for Interoperability (IAI)

Industry Foundation Classes [189]

Autodesk et al.

National Information Infrastructure (NII) for Industry with Special Attention to Manufacturing [190]

Multidisciplinary Analysis and Design Industrial Consortium (MADIC)

NASA

Georgia Tech

Rice

NPAC

Affordable Systems Optimization Process (ASOP)

Products (OII)

Universal Access to Engineering Documents [191]

National Center for Manufacturing Sciences [192]

Consortia

Automotive Network eXchange (ANX) [193]

Sandia Labs

Integrated Manufacturing [194]

Predictive Maintenance [195]

Purdue University

Center for Collaborative Manufacturing [196]

Companies

ESI Technologies

Enterprise Management Information Systems (EMIS) [197]

General Electric

Manufacturing Technology Library

Computer Aids to Manufacturing Network (ARPA/CAMnet) [198]

Production

Workflow

Workflow Management Coalition (WfMC) [199]

Workflow Software.com [200]

MIT Center for Coordination Science (CCS) [201]

MIT Process Handbook [202]

Workflow and Reengineering International Association (WARIA) [203]

Enterprise Integration Clearinghouse [204]

Enterprise Integration Technologies

Work

Virtual Work [205]

Ergonomics Standards [206]

Tele-observation (Police)

Law

Tele-operation (Medicine) (see below Health)

Tele-Mail

Messaging and E-Mail

GOSS

Tele-Conferencing (Business)

Virtual Auditorium

Pavel Curtis [207]

Tele-Collaboration

University of Saskatchewan

Dimensions of Collaborative Learning [208]

Collaborative Software Resource ClearingHouse [209]

Computer Supported Cooperative Work (CSCW) [210]

ISABEL

Links to go [211]

Computer Supported Collaborative Learning Resources [212]

Second European Conference on Computer Supported Cooperative Work

Collaborative Virtual Environments (CVE) [213]

Manchester 17-19 June 1998

ACTS AC 082

Design Implementation and Operation of a Distributed Annotation Environment (DIANE)

ACTS AC 017

Collaborative Integrated Communications for Construction (CICC) [214]

David Leevers

DavidLeevers@compuserve.com

This project envisages a cycle of cognition:

Map

Landscape

Room

Table

Theatre

Home

National

Germany

GMD  Cooperative Work

i-land [215]

United States

DARPA

Intelligent Collaboration and Visualization Community [216]

1. Develop Collaboration Middleware

2. Tools for Sharing Meaning

3. Tools for Sharing Views

4. Prototype and Evaluate Collaborative Applications

Managing Shared Data in a Design Environment [217]

Ohio Supercomputer Center

Remote Consultation

SDSC

Molecular Interactive Collaborative Environment (MICE) [218]

Major Companies

Novell GroupWise 5

Oracle Interoffice

Lotus Notes

Attachmate

ICL Teamware

Microsoft Exchange

Tele-Presence

International

ACTS-CHAINS

Distributed Virtual Environments and Telepresence [219]

ACTS-DOMAINS

1. Multimedia Content Manipulation and Management

Cluster 2  3D/VR and Telepresence Developments through the 3-D Cluster and Telepresence and Distributed Virtual Environments Chain

GII Testbed 7

Remote Engineering using Cave to Cave Communications

Gesellschaft für Mathematik und Datenverarbeitung (GMD)

Collaborative Wall

Cf. National Center for Supercomputer Applications (NCSA)

Infinity Wall

Distributed Interactive Virtual Environment (DIVE)

CAVE Automatic Virtual Environment (CAVE)

National

Armstrong Lab

Human Sensory Feedback (HSF) for Telepresence [220]

Forecast

Global Emergencies

International

G7 Pilot Project

Global Trends

Insurance

Investment

Patterns

Control

Companies

International Control Systems [221]

Transactions

@brink.com

E-Commerce Sites [222]

Books

Brad Cox, Superdistribution: Objects as Property on the Electronic Frontier, Wokingham: Addison Wesley Publishing Company, 1996. [223]

Mik Lamming and William Newman, Interactive System Design, Cambridge, Mass: Addison Wesley, 1995. [224]

Jalal Ashayeri, William Sullivan, Flexible Automation and Intelligent Manufacturing: Proceedings of the Ninth International FAIM Conference, Center for Economic Research, New York: Begell House, 1999.

Paolo Brandimarte, A. Villa, ed., Modeling Manufacturing Systems: From Aggregate Planning to Real-Time Control, Heidelberg: Springer Verlag, 1999.

A. Sen, A.I. Sivakumar, R. Gay, eds., Computer Integrated Manufacturing, Heidelberg: Springer Verlag, 1999.

Roger Hannam, Computer Integrated Manufacturing: From Concepts to Realisation, Reading, MA: Addison-Wesley Pub Co, 1998.

Tien-Chien Chang, Richard A. Wysk, Hsu-Pin Wang, Computer-Aided Manufacturing, Upper Saddle River: Prentice Hall, 1997. 2nd edition. (Prentice Hall International Series in Industrial and Systems Engineering).

S. Kant Vajpayee, Principles of Computer Integrated Manufacturing, Upper Saddle River: Prentice Hall, 1998.

Information Control in Manufacturing 1998 (INCOM '98): Advances in Industrial Engineering. A Proceedings Volume from the 9th IFAC Symposium on Information Control in Manufacturing (Nancy, 1998), Pergamon Press, 1998.

John M. Usher, Uptal Roy, H. R. Parsaei, eds., Integrated Product and Process Development: Methods, Tools, and Technologies, New York: John Wiley and Sons, 1998. (Wiley Series in Engineering Design and Automation). 

Spyros G. Tzafestas, ed., Computer-Assisted Management and Control of Manufacturing Systems Heidelberg: Springer Verlag, 1997. (Advanced Manufacturing).

Arthur L. Foston, Carolena L. Smith, Tony Au, Fundamentals of Computer Integrated Manufacturing, Upper Saddle River: Prentice Hall, 1997.

Articles

Caren D. Potter, "Digital Mock-Up Tools Add Value to Assembling," CGW Magazine, November 1996. [225]

Brad Cox, "Planning the Software Industrial Revolution," IEEE Software Magazine, Special issue: Software Technologies of the 1990's, November 1990. [226]

Appendix 3. Institutions and Groups concerned with Institutional Aspects and Knowledge Management

Business Process Re-Engineering (BPR) [227]

Bibliography [228]

BPR Sites [229]

Brint

Business Process Redesign (BPR) and Process Innovation [230]

Directory of Process Modeling and Business Process Reengineering Resources [231]

University of Toronto

Business Process Re-engineering Advisory Group [232]

Business Process Reengineering Online Learning Center [233]

Reengineering Resource Center [234]

Knowledge Management

Knowledge Management Consortium (KMC) [235]

International Society of Knowledge Management Professionals (KMCI)

Gaithersburg, Maryland

Ireland

Knowledge Management [236]

Practical Discovery of Knowledge Management and Agents [237]

Knowledge Management Resources [238]

WWW Virtual Library on Knowledge Management [239]

Fora

Knowledge Management Think Tank [240]

Knowledge Management Community

@Brink Network Multiforums [241]

Business, Technology and Knowledge Management Community

Conferences

Intranets for Knowledge Management Conference [242]

San Francisco 3-5 November 1999

Individuals

Tennenhouse, David, Chief Scientist, MIT, Center for Co-ordination Science (CCS)

Weick, Karl, Rensis Likert Collegiate Professor of Organizational Behavior and Psychology, University of Michigan

Virtual Communities

Psychology of [243]

Virtual Communities in the UK [244]

Virtual Learning Communities [245]

And Electronic Commerce [246]

Conferences

First International Conference on Virtual Communities [247]

1998 Bath

1999 ??

2000 London [248] 19-20 Sept.

VirComm [249]

Global Business Network

Scenario Bibliography [250]

GBN Book Club [251]

Association for Global Strategic Information

Infonortics [252]

Policy

@brint.com [253]

Books

Raymond Grenier, George Metes, Going Virtual: Moving Your Organization Into the 21st Century, Upper Saddle River: Prentice Hall, 1995.

Bob Norton, Cathy Smith, Understanding the Virtual Organization, New York: Barron's Educational Series 1998. (Barron's Business Success Guide);

Kimball Fisher, et al., The Distributed Mind: Achieving High Performance Through the Collective Intelligence of Knowledge Work Teams,

Bo Hedberg, Goran Dahlgren, Jorgen Hansson, Nils-Goran Olve, Virtual Organizations and Beyond: Discover Imaginary Systems, New York: John Wiley and Sons, 1997. (Wiley Series in Practical Strategy).

Jessica Lipnack, Jeffrey Stamps, Virtual Teams: Reaching Across Space, Time, and Organizations With Technology, New York: John Wiley & Sons, 1997.

Jane E. Henry, Meg Hartzler, Tools for Virtual Teams: A Team Fitness Companion, Milwaukee: American Society for Quality, 1997.

Paul J. Jackson, Jos Van Der Wielen, ed., Teleworking: International Perspectives: From Telecommuting to the Virtual Organisation, London: Routledge, 1998. (The Management of Technology and Innovation).

Jack M. Nilles, Managing Telework: Strategies for Managing the Virtual Workforce, New York: John Wiley & Sons, 1998. (Wiley/Upside Series)

Magid Igbaria, Margaret Tan, ed., The Virtual Workplace, Hershey, PA: Idea Group Publishing; 1998. (Series in Information Technology Management).

Francis Fukuyama, Abram N. Shulsky, United States Army, The 'Virtual Corporation' and Army Organization, Santa Monica: Rand Corporation, 1997.

Appendix 4: Learning

Learning Organizations [254]

Term Coined by Chris Argyris

Bibliography [255]

European Learning Network

European Network for Organizational Learning Development (ENFOLD) [256]

Stanford Learning Organization Web (SLOW) [257]

Stanford Learning Lab [258]

Maastricht McLuhan Institute Learning Lab

@brint.com. The Biz Tech Network

Knowledge Management and Organizational Learning [259]

Virtual Corporations and Outsourcing [260]

Organizational Learning [261]

Learning Organizations in Development [262]

Edwin C. Nevis, Anthony J. DiBella, Janet M. Gould, Understanding Organizations as Learning Systems [263]

Society for Organizational Learning [264] (SOL)

Virtual Organizations Research Network (VoNet) [265]

Electronic Journal of Organizational Virtualness (eJove)

Learning Organization Resources [266]

Books

Peter M. Senge, et al., eds., The Fifth Discipline Fieldbook : Strategies and Tools for Building a Learning Organization, New York: Currency/Doubleday, 1994.

Ikujiro Nonaka, Hirotaka Takeuchi, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, New York: Oxford University Press, 1995.

Virtual Universities

UNESCO

Virtual Universities: International Projects [267]

American Distance Education Consortium (ADEC)

Virtual Universities [268]

Canadian Association of Distance Education  (CADE) [269]

Virtual (Online) Universities [270]

Contact Consortium

Virtual University Projects [271]

Virtual University [272]

Virtual University

Professor Dr. Hans Jörg Bullinger

Fraunhoferinstitut für Arbeitswissenschaft und Organisation

Stuttgart

Louisiana State University

Center for Virtual Organization and Commerce [273] (CVOC)

Lecture re: new virtual universities [274]

Virtual Schools [275]

Virtual School [276]

Cox, Brad J,

Virtual Schoolhouse [277]

America-centric list

Virtual Schools: Television Program [278]

Education and Research

Brint.com [279]

Learning Theories

Constructivism

Constructivism [280]

Constructivist Learning Index Page

George Gagnon [281]

Social Construction of Reality [282]

Books

Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Lawrence Erlbaum and Associates, 1991.

Landow, George. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1992.

Latour, Bruno. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge: Harvard UP, 1987.

Intertextuality

Piaget, Jean, Genetic Epistemology, transl. by Eleanor Duckworth, New York: Columbia University Press, 1970. (Woodbridge lectures series; 8).

Ravitz, Jason [283]

A developmental model for distributed learning environments [284]

Articles

Piaget, Jean. "The constructivist approach: recent studies in genetic epistemology," In: Construction and validation of scientific theories. Genève: Fondation Archives Jean Piaget, 1980, pp. 1-7.

Acknowledgments

The ideas in this paper have grown out of many years of discussions with a circle of friends including Father John Orme Mills, Professor André Corboz, Eric Dobbs, and Rakesh Jethwa and, more recently, Heiner Benking. I thank my colleagues Johan van de Walle and Professor John Mackenzie Owen for reading the text and providing helpful suggestions and references. Professor Baron Jaap van Till stimulated new ideas as acknowledged in note 93. I am grateful to Eddy Odijk (Philips) and John Gage (Sun) for discussions of new technologies. For specific news items re: technological developments I am grateful to Dr. Anne Tyrie, editor of the CITO-Link Line, and Jeffrey Harrow, editor of The Rapidly Changing Face of Computing.

I am deeply grateful to Dr. Ingetraut Dahlberg, who has long been a source of inspiration, who generously read the text and offered a number of helpful suggestions. I am very grateful also to my doctoral student, Nik Baerten, for reading the paper, making constructive suggestions, and helping with the illustrations. 

Notes

[1] Michael Giesecke, Der Buchdruck in der frühen Neuzeit. Eine historische Fallstudie über die Durchsetzung neuer Informations- und Kommunikationstechnologien, Frankfurt: Suhrkamp Verlag, 1991.

[2] There are a number of these electronic book devices. 

SoftBook Press has developed the SoftBook System. See: http://www.softbook.com/softbook_sys/index.html. Nuvo Media has produced the RocketBook, which is supported by Barnes and Noble and Bertelsmann. See: http://www.nuvomedia.com/html/productindex.html. Meanwhile Every Book Inc. has produced the Dedicated Reader. See: http://www.everybk.com/. Related to this are new devices such as the Cyberglass developed by Sony, Paris. Cf. Chisato Numaoka, "Cyberglass: Vision-Based VRML2 Navigator," Virtual Worlds, ed. Jean-Claude Heudin, Berlin: Springer Verlag, 1998, pp. 81-87.

Related to the electronic book is the electronic newspaper such as the NewsPAD. See http://www.ictnet.es/newspad/newspad.html. Acorn Risc Technologies (Cambridge, UK) has developed the NewsPAD (see: http://www.byte.com/art/9702/sec17/art2.htm), linked with research at Edinburgh on the NewsPAD (see: http://omni.bus.ed.ac.uk/ems/research/r_newspad.htm).

[3] See http://www.wired.com/news/technology/0,1282,32545,00.html

[4] Jeffrey R. Harrow, "The Rapidly Changing Face of Computing," November 1, 1999,  The Lesson at: http://www.compaq.com/rcfoc. This relates to the new field of moletronics or molecular electronics described by John Markoff, "Tiniest Circuits Hold Prospect of Explosive Computer Speeds," New York Times, New York, July 16, 1999. Cf. next note.

[5] See http://www.nd.edu/~mlieberm/qca.html

[6]  See http://www.nd.edu/~mlieberm/

[7] Michael Crichton, Timeline, New York: Alfred Knopf, 1999.

[8]  See http://www.ubiq.com/hypertext/weiser/UbiHome.html.

[9]  See http://fmg-www.cs.ucla.edu/travler98/intro.html. More recently this has led to the CoSMoS project (Self-Configuring Survivable Multi-Networks for Information Systems Survivability). See: http://www.lis.pitt.edu/~survive/darpa/quad.html.

[10] One of the most important of these is a consortium of eight leading consumer electronics firms headed by Philips called Home Audio Visual Interoperability (HAVI). See: http://www.havi.org. Philips has linked with Wi-LAN (Calgary) using IEEE 1394 (FireWire) wireless transmission at 2.4 GHz. See: http://www.eetimes.com/story/OEG19990827S0032. Using a variant of IEEE 802.11, this solution employs Wide-band Orthogonal Frequency Division Multiplexing (W-OFDM) to achieve 46 Mbit/s of raw data and 24 Mbit/s in practice. This technology also uses MPEG2. See: http://www.wi-lan.com/graphics/iwill/w-ofdmoverview.pdf.

Also very important is the Open Services Gateway Initiative (OSGI), which includes IBM, Sun, Motorola, Lucent, Alcatel, Cable & Wireless, Enron, Ericsson, Network Computer, Nortel, Oracle, Royal Philips Electronics, Sybase, and Toshiba. See: http://www.osgi.org.

Meanwhile the Home Radio Frequency Working Group (HomeRF) includes Compaq, Ericsson, Hewlett-Packard, IBM, Intel, Microsoft, Motorola, and Philips Consumer Communications. National Semiconductor and Rockwell Semiconductor are among the supporting companies. This employs a frequency-hopping technology. It transmits data at 1.6 Mbit/s between home PCs and peripherals and supports up to four separate voice channels. They are working on a Shared Wireless Access Protocol  (SWAP). See: http://www.wired.com/news/news/technology/story/10711.html.

Competition was announced by Microsoft-3Com (11 March 1999) and by MIT-Motorola (15 March 1999).

More important is the Wireless Application Protocol Forum (WAP Forum) which has produced the Wireless Application Protocol (WAP). See http://www.wapforum.org/. This initiative includes a number of technologies:

Global System for Mobile Communications (GSM 900, 1800 & 1900 MHz)

Code Division Multiple Access (CDMA IS-95/IS-707)

Time Division Multiple Access (TDMA IS-136)

Personal Digital Cellular (PDC)

Personal Handyphone System MOU Group (PHS)

Mobitex: Ericsson subsidiary Eritel's cellular land-radio-based packet-switched data communication system. See http://www.ora.com/reference/dictionary/terms/M/Mobitex.htm

DataTAC

CDPD

DECT

iDEN (ESMR)

(Iridium, TETRA)

The WAP Forum is in turn linked with ETSI's Mobile Execution Environment (MEXE). See: http://www.oasis-open.org/cover/wap111198.html.

At a lower speed but also important is the Bluetooth Consortium, which includes IBM, Toshiba, Ericsson, Nokia, and Puma Technology. It entails a single synchronization protocol to address end-user problems arising from the proliferation of mobile devices -- including smart phones, smart pagers, handheld PCs, and notebooks -- that need to keep data consistent from one device to another. It targets short-distance links between cell phones and laptops with a 1-Mbit/s network that connects devices up to 10 meters apart. The frequency-hopping technology operates in the 2.4-GHz ISM band. See: http://www.bluetooth.com/v2/default.asp

and http://www.infoworld.com:80/cgi-bin/displayStory.pl?980413.ehbluetooth.htm.

Also significant is Wireless Ethernet (IEEE 802.11) incorporated into ISO/IEC 8802-11: 1999, which was designed for wireless transmission of data only. Isochronous information is excluded. The approach uses the 2.4-GHz band and offers a raw data rate of up to 11 Mbits/s. See: http://grouper.ieee.org/groups/802/802info.html.

The W3C Mobile Access Interest Group is working on Composite Capability/Preference Profiles (CC/PP). See: http://www.w3.org/TR/Note-CCPP. This initiative, led by Ora Lassila, includes HTTP Access Headers, Salutation, IETF CONNEG, MIME, and P3P. See: http://www.concentric.net/~Olassila and ora.lassila@research.nokia.com.

There is also the Global Mobile Commerce Forum (GMCF). See: http://www.gmcforum.com/got.html.

For Infrared Remote Control by Warner Brothers Online, see: http://www.wb.com/frame_moz3_day.html. Cf. Data General Network Utility Box (NUB). See: http://www.wired.com/news/news/technology/story/10036.html.

Meanwhile a number of initiatives in the direction of integration are coming from the telephone world. Perhaps the most important is the Voice eXtensible Markup Language Forum (VXML Forum). See: http://www.vxmlforum.com/industry_talk.html. Formed by AT&T, Lucent Technologies, Motorola, and 17 other companies, the VXML Forum is working on a standard for voice- and phone-enabled Internet access, says David Unger, an AT&T product strategy and development division manager.

Cf. Open Settlement Protocol (OSP), whereby Cisco, 3Com Corporation, GRIC Communications, iPass and TransNexus have teamed up to promote inter-domain authentication, authorization and accounting standards for IP telephony. See: http://www.cisco.com/warp/public/146/september98/4.html

Cf. Nortel, Microsoft, Intel, and Hewlett-Packard, which are working on corporate networking equipment that can handle data, voice, and video communications. Cf. Home Phone Network Alliance (Home PNA). See: http://www.homepna.org/.


On mobile developments generally, see: http://www.ispo.cec.be/infosoc/legreg/docs/greenmob.html

At the multi-national level there are the activities of the former DGXIIIb ACTS domains:

Mobility, Personal and Wireless Communications

Future Public Land Mobile Telecommunications Systems (FPLMTS). See http://www.itu.ch/imt-2000

Digital Mobile System (DSM)

Global System for Mobile Communications (GSM)

Digital European Cordless Telecommunications (DECT)

Pan European Paging System (ERMES)

Cf. IETF Mobile IP Working Group (RFC 2002). See: http://www.telematix.com/library/assoc-org/index.html

Charles Perkins (Sun). See http://www.surloc.org/~charliep; Cperkins@eng.sun.com

At the national level there is also:

Virtual Environment Dialogue Architecture (VEDA). See: http://www.cs.ucl.ac.uk/research/live/papers/A.Steed.html

[11] The Palm Pilot has become all the more significant in October and November 1999 through deals with Nokia and Sony to share the technology.

[12] See: http://www.nokia.com/press/photo/future.html

[13] Steve Silberman, "Just say Nokia," Wired, September 1999, pp.137-149, 202. Cf.: http://www.wired.com/wired/archive/7.09/nokia.html.

[14] See: http://www.ericsson.se/pressroom/photolibrary/deliver.jpg and http://www.ericsson.se/pressroom/phli_pcoco.shtml.

[15]  See http://www.kenwood.net/products/index.cfm?AMA=open&ama_hheld=open&radio=VC-H1&selection=Amateur.

[16]  See http://www.mot.com/

[17]  See: http://www.pencomputing.com/palm/Reviews/visor1.html.

[18] Chief Scientist, Sun Microsystems. 

[19]  See: http://www.cs.columbia.edu/~feiner

[20] Cited from Jeffrey R. Harrow, The Rapidly Changing Face of Computing, Oct. 18, 1999:

That's exactly what Benefon has introduced at the recent Telecom 99 show.  

Brought to our attention by RCFoC reader Sean Wachob, this pocket GSM marvel will always know where it's at, and it will display moving maps so you know, too. It also provides 14.4 kilobits/second Internet access, and it offers 10 days of standby time (although I haven't seen what using the GPS receiver does to battery life). See:

http://www.benefon.com/pressinvestors/pressroom/1999/escgbrelease.html.

[21] The objects were arranged in classes governed by parent-child relationships defined as inheritance. As programmers introduced further characteristics such as behaviors and autonomy, inheritance became a problem. Active objects provided an interim solution, until one turned to the concept of agents.   
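By way of illustration only (this sketch is mine, not drawn from the literature cited), the contrast can be put in a few lines of Python: a child class obtains its behavior through a fixed parent-child relationship, whereas an agent-like object owns its behavior and a measure of autonomy.

    class Shape:                       # parent class
        def draw(self):
            print("drawing a generic shape")

    class Circle(Shape):               # child: behavior fixed by inheritance
        pass

    class AgentObject:
        # An "active" object: it carries its own goal and decides what
        # to do when stepped, rather than merely inheriting behavior.
        def __init__(self, goal):
            self.goal = goal

        def step(self):
            print("pursuing goal:", self.goal)

    Circle().draw()                          # behavior via inheritance
    AgentObject("index new texts").step()    # behavior owned by the object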

[22]  In terms of multiple libraries this process is presently being carried out through collation programs in the context of international projects such as the Research Libraries Information Network (RLIN), the Online Computer Library Center (OCLC), the On-Line Public Access Network for Europe (ONE) and the Gateway to European National Libraries (GABRIEL). In future, collation of the combined holdings of these international databanks of library holdings could be performed by agents.
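A toy sketch of such collation (illustrative only; the projects named above use elaborate cataloguing rules and authority files) might merge records from several catalogues on a normalized author/title key:

    def normalize(author, title):
        # Crude collation key: strip spaces and case from the author,
        # collapse runs of whitespace in the title.
        return ("".join(author.lower().split()),
                " ".join(title.lower().split()))

    def collate(catalogues):
        # catalogues: dict mapping catalogue name -> list of (author, title).
        merged = {}
        for source, records in catalogues.items():
            for author, title in records:
                merged.setdefault(normalize(author, title), set()).add(source)
        return merged

    holdings = {
        "RLIN": [("Alberti, L.B.", "De pictura")],
        "OCLC": [("Alberti, L. B.", "De  pictura")],
    }
    for (author, title), sources in collate(holdings).items():
        print(author, "/", title, "->", sorted(sources))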

[23]  These devices include Compuscan, Hendrix and Dest, the Kurzweil Intelligent Scanning System and the Calera Compound Document Processor. See: http://onix.com/tonymck/ocrlab.htm.

[24]  See http://www.incomm.ch/quick/.

[25]  See http://www.isg.sfu.ca/~duchier/misc/vbush/.

[26]  See http://www.bootstrap.org/. A basic bibliography for Douglas C. Engelbart (hereafter DCE) includes:

"A Conceptual Framework for the Augmentation of  Man's Intellect," Vistas in Information Handling. Ed. P.D. Howerton and D.C. Weeks. Washington, D.C.: Spartan Books, 1963;

DCE and William K. English. "A Research Center for Augmenting Human Intellect." Proceedings AFIPS Conference, 1968 Joint Computer Conference. December 9-11, 1968, San Francisco. Montvale, NJ: AFIPS Press, 1968;

"Coordinated Information Services for a  Discipline or Mission-Oriented Community." Proceedings Second  Annual Computer Communications Conference, January 24, 1973, San Jose, CA;

DCE, Richard W. Watson and James C. Norton. "The Augmented Knowledge Workshop." Proceedings AFIPS Conference, 1973 National Computer Conference and Exposition. June 4-8, 1973, New York. Montvale, NJ: AFIPS Press, 1973;

"Toward Integrated, Evolutionary Office Automation Systems." Proceedings Joint Engineering Management Conference. October 16-18, 1978, Denver, CO.

"Toward High-Performance Knowledge Workers," Proceedings AFIPS Office Automation Conference. April 5-7, 1982, San  Francisco, CA;

"Authorship Provisions in AUGMENT." Proceedings COMPCON Conference. February 21-March 1, 1984, San Francisco;

"Collaborative Support Provisions in AUGMENT," Proceedings COMPCON Conference. February 21-March 1, 1984, San Francisco, CA.;

"Workstation History and the Augmented Knowledge Workshop." Proceedings ACM Conference on the History of Personal Workstations. January 9-10, 1986, Palo Alto, CA.

DCE and Harvey Lehtman, "Working Together," BYTE, December 1988, pp. 245-252;

"Knowledge-Domain Interoperability and an Open Hyperdocument System," Proceedings of the Conference on Computer-Supported Cooperative Work, Los Angeles, CA, Oct 7-10, pp. 143-156. (AUGMENT, 132082). Also republished in Hypertext/Hypermedia Handbook, E. Berk and J. Devlin [Ed.], McGraw-Hill, 1991.

"Toward High-Performance Organizations: A Strategic Role for Groupware,"  Groupware '92, Proceedings of the groupWare '92 Conference, San Jose, CA, Aug 3-5, 1992, Morgan Kaufmann Publishers, pp. 77-100.

[27] Standard references on the theme of hypertext include the following:

The Society of Text. Hypertext, Hypermedia and the Social Construction of Information, ed. Edward Barrett, Cambridge, Mass.: MIT Press, 1989.

Jay David Bolter, Writing Space: The Computer, Hypertext, and the History of  Writing. Hillsdale, NJ: Lawrence Erlbaum and Associates, 1991.

George Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1992. New edition: 1997. This work makes important distinctions between a number of linking materials, pp. 12-14:

Lexia to Lexia Unidirectional

Lexia to Lexia Bidirectional

String (word or phrase) to Lexia

String to String

One to Many

Many to One.
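These distinctions lend themselves to a simple data model. The following sketch is an illustration of Landow's categories, not code from his book: each link joins one or more anchors, where an anchor is either a whole lexia or a string within one, in a given direction.

    from dataclasses import dataclass
    from enum import Enum, auto

    class AnchorKind(Enum):
        LEXIA = auto()     # a whole node or document
        STRING = auto()    # a word or phrase within a lexia

    class Direction(Enum):
        UNIDIRECTIONAL = auto()
        BIDIRECTIONAL = auto()

    @dataclass
    class Anchor:
        lexia_id: str
        kind: AnchorKind
        text: str = ""     # the anchored string, when kind is STRING

    @dataclass
    class Link:
        sources: list      # several sources, one target: "many to one"
        targets: list      # one source, several targets: "one to many"
        direction: Direction

    # e.g. a bidirectional string-to-lexia link
    link = Link(sources=[Anchor("lexia-1", AnchorKind.STRING, "commentary")],
                targets=[Anchor("lexia-2", AnchorKind.LEXIA)],
                direction=Direction.BIDIRECTIONAL)
    print(link.direction.name)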

N. Streitz, J. Hannemann,  J. Lemke, et al., "SEPIA: A Cooperative Hypermedia Authoring Environment," Proceedings of the ACM Conference on Hypertext, ECHT-2, Milan, 1992, 11-22.

Jakob Nielsen, Multimedia and Hypertext: The Internet and Beyond, New York: Academic Press, 1995. German translation: Multimedia, Hypertext und Internet. Grundlagen des elektronischen Publizierens, Braunschweig: Vieweg, 1996.

For German views on hypertext see: Hans-Peter Kolb, Multimedia- Einsatzmöglichkeiten, Marktchancen und gesellschaftliche Implikationen, Frankfurt: Peter Lang, 1999.

Cf. the dissertations of:

Uwe Schreiweis, Hypertextstrukturen als Grundlage für integrierte Wissensaquisitionssysteme, Aachen: Verlag Shaker, 1994; Martin Richartz, Generik und Dynamik in Hypertexten, Aachen: Shaker Verlag, 1996.

There are also numerous sites on hypertext on the Internet. A good introduction is found at: http://www.hcc.hawaii.edu/guide/www.guide.html#t2. Some history and a basic bibliography is found at http://cheiron.humanities.mcmaster.ca/~htp/. For a more thorough bibliography focussing on literary hypertext see: http://www.eastgate.com/Hypertext.html.

On hypertext and hypermedia see: http://www.gwu.edu/~gelman/train/hyperbib.htm. On links with cyberspace and critical theory: http://landow.stg.brown.edu/cpace/theory/theoryov.html.

On hypertext fiction see http://www.duke.edu/~mshumate/hyperfic.html.

For an instructive interpretation of the future of narrative by an American author see:

Janet H. Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Cambridge, Mass.: MIT Press, 1997. Literary implications of hypertext are also emphasized in:

Michael Joyce, Of Two Minds: Hypertext Pedagogy and Poetics. University of Michigan Press, 1994.

Espen Aarseth, Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins Press, 1997.

On links between hypertext and intertextuality see: http://calliope.jhu.edu/press/books/landow/intertext.html. On specific links with Derrida see: http://home.earthlink.net/~outlyr/hypertext/body/07_0.html. Cf. Jacques Derrida, "The Supplement of Copula: Philosophy before Linguistics," Textual Strategies: Perspectives in Post-Structuralist Criticism, ed. Josué V. Harari, London: Methuen, 1979, pp. 28-120.

On hypertext, hypermedia and learning see: http://www.stemnet.nfca/~elmurphy/hyper.html. For the RMIT hypertext project see:

http://hypertext.rmit.edu.au/publications/mia/mia_figure_one.html. On the use of hypertext and the WWW see: http://www.psychologie.uni-bonn.de/allgm/mitarbei/privat/gerdes_h/hyper/Bookm.htm. On connections with ubiquitous computing see: http://www.ubiq.com/hypertext/weiser/IbiHome.html. For a review of hypertext see: http://www.isg.sfu.ca/~duchier/misc/hypertext_review/.

Concerning hypertext interfaces to library information systems in the EU project HYPERLIB see: http://www2.echo.lu/libraries/en/projects/hyperlib.html. For an information science definition of hypertext see: http://web.uvic.ca/~ckeep/hfl0122.html:

The term hypertext is used to refer to full-text indexing schemes which are more elaborate and, ideally, more useful in creating "links" or connections between related subjects or terms. [...] Hypertext allows the user to move from within one section of text to another, related section of text, without having to exit the current document (or section of document) and re-enter a new document. (O'Connor, "Markup, SGML, and Hypertext for Full-Text Databases--Part III" 130).

[28] Ted Nelson, Literary Machines, South Bend, Indiana: self-published, 1981, p. 0/2. The subtitle of this work was:

Project Xanadu, an initiative toward an instantaneous electronic literature; the most audacious and specific plan for knowledge, freedom and a better world yet to come out of computerdom; the original (and perhaps the ultimate) Hypertext System.

[29] Ibid., p. 1/16. Cf. David Small's Talmud Project (MIT Media Lab, Small Design Firm).

  Cf. Tim Guay, Simon Fraser University:

            The concept has been used in ancient literature, such as the Talmud, with its commentary on commentary on the main text, its annotations, and its references to other passages within the Talmud and outside in the Torah and Tenach. It is a very biological form of presenting information that models how our minds process, organize, and retrieve information. It creates a very organic information space, as opposed to the artificial linear format imposed by the print paradigm.

Conceptually, hypertext forms associations called links between chunks of information called nodes. The resulting structure is commonly referred to as a web, hence the name World Wide Web for the CERN project. These basic characteristics, coupled with hypertext's other features, allow the production of extremely rich, flexible documents and metadocuments, especially when combined with multimedia to form the fusion referred to as hypermedia.

   See: http://hoshi.cic.sfu.ca/~guay/Paradigm/Paradigm.html and

           http://hoshi.cic.sfu.ca/~guay/Paradigm/Hypertext.html.
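Guay's description of nodes, links and the resulting web can be made concrete in a few lines. In the following minimal sketch (an illustration only; the node names are invented), a web is simply a mapping from nodes to the nodes they link to, and a reader's possible paths are found by following links outward:

    from collections import deque

    # Nodes (chunks of information) and their links, forming a web.
    web = {
        "Genesis 1":       ["Rashi on Gen 1", "Midrash Rabbah"],
        "Rashi on Gen 1":  ["Genesis 1", "Talmud Chagigah"],
        "Midrash Rabbah":  ["Genesis 1"],
        "Talmud Chagigah": [],
    }

    def reachable(start: str) -> set[str]:
        """Follow links breadth-first: every node a reader could arrive at."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in web[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    print(reachable("Genesis 1"))  # all four nodes are connected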

[30] Emanuel G. Noik, "Exploring large hyperdocuments: fisheye views of nested networks," Proceedings of the Fifth ACM Conference on Hypertext (HYPERTEXT '93), Seattle, WA, November 14-18, 1993, pp. 192-205.

[31] Further contributing to this confusion are conflicting interests. The owners of search engines such as Yahoo and AltaVista earn money from advertising, which is calculated on the basis of the number of hits. The interests of firms such as AltaVista thus favor as many detours as possible on the way to finding what we really want. By contrast, a user wants as few distractions as possible in arriving at their goal. The advertising interests of the search companies thus actually work against the most efficient search strategies. Yet another problem is that a number of websites are designed by marketing experts who are interested only in the effects of presentation (rhetoric) rather than in the structure (grammar) or logic (dialectic) of the materials. As a result, the substance of the sites is masked by their form.

[32]  Carnegie Mellon University.

[33] Robert E. Horn, Mapping Hypertext, Waltham, Mass.: Lexington Institute, 1989, 1996. See: http://www.eastgate.com/catalog/MappingHypertext.html. Horn identified seven information types: procedure, process, structure, concept, fact, classification and principle.

[34]  See http://www.gurunet.com/index.html

[35] Jeffrey Harrow, The Rapidly Changing Face of Computing, 13 September 1999 at: http://www.compaq.com/rcfoc.

[36] On the theme of switching between systems see: Ingetraut Dahlberg, "Library Catalogues on the Internet: Switching for Future Subject Access," Knowledge Organization and Change, Frankfurt am Main: Indeks Verlag, 1996, pp. 155-164 (Advances in Knowledge Organization, Volume 5).

[37] For an important statement of this position see Robert Fugmann, Subject Analysis and Indexing. Theoretical Foundation and Practical Advice, Frankfurt: Indeks Verlag, 1993 (Textbooks for Knowledge Organization, vol. 1). 

[38] While proponents of broadband connections will welcome such scenarios, systems analysts might warn that such an approach would totally overload the network and render it useless. Again some balance is required. We need to recall that the storage space of a typical local computer has grown from 20 megabytes to several gigabytes in the past decade and will expand even more dramatically in the next two decades. Botanists working on a specific domain will simply have the standard reference books of that field on their local system. Subsets thereof will be available on notepads as they go into the field. New combinations can also be foreseen. Environmentalists, whose work requires them to drive constantly through remote territory, can have a large memory capacity in their jeep, consulted wirelessly from their notepads as they walk around, so that routine work places no direct burden on the Internet. If, on the other hand, they come across something new to them, they can use a satellite connection to check whether information concerning this finding exists in central repositories such as Kew Gardens.
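The tiered consultation described here is easy to picture in code. A minimal sketch (the store names and data are invented placeholders) in which a query falls through from the notepad to the vehicle's store and only then to a central repository:

    # Each tier is simply a mapping from species name to description.
    def lookup(species: str, notepad: dict, vehicle: dict, repository: dict) -> str:
        """Try the cheapest store first; the satellite link is the last resort."""
        for tier, store in (("notepad", notepad),
                            ("vehicle, by wireless", vehicle),
                            ("central repository, by satellite", repository)):
            if species in store:
                return f"{species}: found in {tier}"
        return f"{species}: not found; a candidate new finding"

    notepad = {"Quercus robur": "common oak"}
    vehicle = {"Quercus robur": "common oak", "Salix alba": "white willow"}
    kew     = {"Quercus robur": "common oak", "Salix alba": "white willow",
               "Welwitschia mirabilis": "welwitschia"}

    print(lookup("Salix alba", notepad, vehicle, kew))     # resolved in the jeep
    print(lookup("Ginkgo biloba", notepad, vehicle, kew))  # not even in the repository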

[39] See Tim Berners-Lee's lectures at WWW7 and 8 at http://www.w3.org/Talks/. For a discussion of some implications of these developments for knowledge organization see the author's: "Conceptual Navigation in Multimedia Knowledge Spaces," TKE-9, Terminology and Knowledge Engineering, Vienna: Termnet, 1999, pp. 1-27.

[40]  See http://www.gip.jipdec.or.jp/english/project-e/project20-e.html.

[41] See the author's "Conceptual Navigation," as in note 39 above, figure 1.

[42] For another discussion of this problem see the author's  "Past Imprecision for Future Standards: Computers and New Roads to Knowledge", Computers and the History of Art, London, vol. 4.1, (1993), pp. 17-32.

[43] For a more detailed discussion concerning these categories see the author's: "A Databank on Perspective: The Concept of Knowledge Packages", Metodologia della ricerca: orientamenti attuali. Congresso internazionale in onore di Eugenio Battisti, Milan, 1991, Arte Lombarda, Milan, 1994, n. 3-4, parte seconda, pp. 166-170.

Skeptics will invariably say that all this is very attractive in theory but completely unrealistic in practice. Scanning in all the texts is already a monumental task in itself. Creating all these links would generate at least as much information again and be too much. This seeming plea for efficiency qua space has limited validity. Thanks to a lack of proper indexing, scholars are constantly re-searching material, unaware that it has already been studied by numerous others. The existence of comprehensive indexes would thus prevent duplication and at the same time help fight problems of plagiarism.

At a practical level one would invariably begin with standard works and only use basic subject headings. These would gradually be extended to include all subject headings and then be applied in turn to ever greater ranges of books.

[44] These developments are discussed in the author's recent book: Frontiers in Conceptual Navigation for Cultural Heritage, Toronto: Ontario Library Association, 1999. 

[45]  See http://www.newciv.org/cob/members/benking/benking.html.

[46]  Leonardo's Last Supper would, for example, never be allowed to go on a travelling exhibition. Nor is it likely that the Louvre would ever consider lending all its Leonardos at once.

[47]  See http://www.i3net.org/i3projects/

[48]   See http://zeus.gmd.de/projects/hips.htm

[49] For a further discussion of such possibilities see the author's "Frontiers in Electronic Media," Interactions Journal of the ACM, New York, July-August 1997, pp. 32-64.

[50] Very interesting attempts at such retrospective color conversion have, for instance, been carried out by Hitachi with respect to the famous engravings of Hokusai, such that one can effectively see how the engraving fades with time.

[51] This is being coordinated by the International Alliance for Interoperability. See: http://cic.cstb.fr/ILC/html/iai.htm; http://byggeri.dti.dk/bps/IAI-WEB/iai.htm and http://www.interoperability.com/.

[52] Nancy Williamson, "An Interdisciplinary World and Discipline Based Classification," Structures and Relations in Knowledge Organization, Würzburg: Ergon Verlag, 1998, pp. 116-124. (Advances in Knowledge Organization, vol. 6).

[53] C. McIlwaine, "Knowledge Classifications, Bibliographic Classifications and the Internet," Structures and Relations in Knowledge Organization, Würzburg: Ergon Verlag, 1998, pp. 97-105. (Advances in Knowledge Organization, vol. 6).

[54] These possibilities are further outlined in the author's "Conceptual Navigation in Multimedia Knowledge Spaces," as in note 39 above.

[55] These problems are further explored in the author's recent book Frontiers in Conceptual Navigation for Cultural Heritage, as in note 44 above. 

[56] See: http://www.asymptote.net/proj.htm. Cf. http://www.kubos.org/web/vrml/workshop/w1.html.

[57]  See http://www.ktic.com/topic6/13_TERM2.HTM

[58] Frederick W. Taylor, Principles of Scientific Management (1911), New York: Norton, 1967. Cf. http://eldred.ne.mediaone.net/fwt/taylor.html.

[59]  Everett M. Rogers, Diffusion of Innovations, New York: Free Press, 1962. Fourth edition 1995.

[60] For a recent survey of Gantt and other leading management experts see: http://www.worthingbrighton.com/expert.html.

[61] Mike Cooley, "The Taylorisation of Intellectual Work," in: Les Levidow and Bob Young, eds., Science, Technology and the Labour Process: Marxist Studies, London: CSE Books, vol. 1, 1981, pp. 46-65. (Reissued by Free Association Books, 1983).

[62] An early article on the self-healing PC appeared on 23 September 1996 at: http://informationweek.com/598/98iuds2.htm. On 11 July 1997, it was announced that "MCI Eyes Self-Healing Network" at: http://www.commweek.com/cwi/netnews/070797/news0711-2.html. In 1998, this idea was taken up anew:

Sun plans "self-healing" software.

The company calls its policy management applications "self-healing" because they can respond to conditions and events automatically, taking corrective action without human intervention.

See: http://abcnews.go.com/sections/tech/CNET/cnet_sunsoft0624.html.

Cf. Los Alamos, which has developed "Real-Time, Puncture-Detecting, Self-Healing Materials". See: http://www.lanl.gov/external/science/99r&D.html.

Not all definitions of self-healing are quite as dramatic as their hype might at first suggest. Support.com, for example, claims that its Self-Healing System "enables help desk personnel to automatically diagnose any software problem and remotely fix it. The solution is based on Support.com's breakthrough DNA Probe technology, which makes all software self-fixing by automatically determining the complete working state -- all components and dependencies -- of any Windows software, anytime, anywhere. The system can be rapidly deployed for immediate benefits -- unlike alternative solutions, which require costly development and maintenance of diagnostic scripts and application descriptions." See: http://adapt.remedy.com/ppp/partners/pptr75_info.htm.
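Stripped of the marketing language, the common core of these systems is a simple loop: monitor conditions and take corrective action without human intervention. A minimal sketch (my own, not any vendor's product; the single policy shown is a placeholder):

    import shutil, tempfile, time

    def disk_nearly_full(threshold: float = 0.9) -> bool:
        """Condition: more than 90 per cent of the disk is in use."""
        usage = shutil.disk_usage("/")
        return usage.used / usage.total > threshold

    def purge_temp_files() -> None:
        """Corrective action (stub): a real policy would clear expendable files."""
        print("healing: would purge", tempfile.gettempdir())

    POLICIES = [(disk_nearly_full, purge_temp_files)]  # condition -> action

    def healing_loop(poll_seconds: float = 60.0) -> None:
        """Respond to conditions automatically; no operator in the loop."""
        while True:
            for condition, action in POLICIES:
                if condition():
                    action()
            time.sleep(poll_seconds)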

[63] InfoWorld Electric, 08/10/99, cited in the CITO Link-line, 18 October 1999. This idea has been in the making for some time. For instance, on 2 March 1993, AT&T announced self-healing wireless network technology. See: http://www.att.com/press/0393/930302.nsa.html. 3M has Mill Rolls which it describes as "resilient, gouge-resistant and self-healing" at: http://www.massasoit.com/3MNonwovenRolls.html.

[64] Peter Senge, The Fifth Discipline, The Art and Practice of the Learning Organization, New York: Currency Doubleday, 1990. For a review see: http://www.rtis.com/nat/user/jfullerton/review/learning.htm.

[65] See the Learning Organizations' homepage at: http://www.albany.edu/~kl7686/learnorg.html

[66] The Institute's formal definition of Infonomics is:

Infonomics is the new interdisciplinary science investigating the digitisation of society. More specifically it investigates the impact of new possibilities of restructuring, manipulating and exchanging data and information in real time and across space on: individual and collective behaviour; organisational and economic structure and performance; legal systems; knowledge accumulation and diffusion; communication modes and learning; culture and politics. It does so in an interactive, interdisciplinary fashion looking both at impacts as well as feedbacks.

Meanwhile, some see infonomics much more narrowly, as little more than a Taylorisation of white-collar workers. See, for instance, Mark Heyer, Heyertech Inc., The Institute for Infonomic Research:

Infonomics is the study of the relationship between people and information. Infonomics focuses on the interaction between people and information systems and how to maximize the effectiveness of these transactions. Infonomics can be extended to the study of organizational information dynamics as the sum of many personal interactions. It provides the basis for the study of large scale cultural information phenomena. (Cf. http://www.heyertech.com/html/iir_main.html)

[67] See: The Multivalent Document Home Page: http.cs.berkeley.edu/~wilensky/MVD.html. Cf. Thomas Arthur Phelps, "Multivalent Documents: Anytime, Anywhere, Any Type, Every Way User-Improvable Digital Documents and Systems," Ph.D. Dissertation, University of California, Berkeley, 1998: www.cs.berkeley.edu/~phelps/papers/dissertation-abstract.html.

[68]  Cited by: Dr Anne Tyrie, CITO Link-line, November 1, 1999: http://www.ontcentex.org/main_intro.html.

[69] Pierre Levy, L'Intelligence collective, pour une anthropologie du cyberspace, Paris: La Découverte, 1994. Pierre Levy, who has also warned of the dangers of a second flood of information, has at the same time worked with others to develop the idea of trees of knowledge (skills), in the sense of competencies: Michel Authier and Pierre Levy, Les arbres de connaissances, Paris: La Découverte, 1993 (Collection Essais). This analysis of persons' competencies has taken commercial form in a knowledge-management company, Trivium. See: http://www.trivium.fr/htm/trivium/fftriv.htm. Cf. Christophe d'Iribarne, "Etude de l'Arbre de Connaissances d'un point de vue mathématique"; Michel Authier, "Arbres de Connaissances, Controverses, Expériences"; Ibid., "Eclaircissement sur quelques fondamentaux des Arbres de Connaissances," Revue Documents du Cereq, Numéro 136, juin 1998 (Numéro spécial).

[70] Derrick de Kerckhove, Connected Intelligence: The Arrival of the Web Society, Toronto: Somerville House Books, 1999. Here the author examines hypertextuality mainly in terms of news, books and museums. His focus is access to existing knowledge rather than the transformation of knowledge leading to new insights and new knowledge. Professor de Kerckhove's techno-optimism leaves little room for reflection on the dangers and pitfalls of the new technologies.

[71] Pierre Levy, as in note 69 above. Cf. the English translation: Collective Intelligence: Mankind's Emerging World in Cyberspace, Dordrecht: Plenum Press, 1997. See the review by Pierre Drouin in Le Monde (as cited at http://www.alapage.com/cgi-bin/1/affiche_livre.cgi?l_isbn=2707126934), here translated:

The Earth was the first great space of meaning opened up by our species, in which Homo sapiens invented language, technology and religion. A second space, the Territory, was constructed from the Neolithic onward, with agriculture, the city, the State and writing. Then, in the sixteenth century, the space of commodities was born. We are moving toward a new space, that "of collective knowledge and intelligence", which will command the earlier spaces without making them disappear. No doubt knowledge has always been at the heart of social functioning, but the novelty is threefold: the speed at which knowledge evolves, the mass of people called upon to produce and learn it, and the appearance of new tools to make information "navigable".

Toward a virtual agora

As a result, the social bond can be reinvented "around reciprocal learning, the synergy of competencies, imagination and collective intelligence". For Pierre Lévy, "the snowball effect" is assured: after organic groups (families, clans and tribes) and organized groups (States, Churches, large companies), "self-organized" groups will realize "the ideal of direct democracy in very large communities in a situation of mutation and deterritorialization".

[72]  See Richard Barbrook in his review of Pierre Levy's Collective Intelligence in the New Scientist, 13th December 1997 (as cited at http://ma.hrc.wmin.ac.uk/ma.theory.2.4.db):

Once we all have access to cyberspace, we will be able to determine our own destiny through a real-time direct democracy: the "virtual agora" ... According to Levy, cyberspace therefore is the online version of a hippie commune.

[73]  See Gregory Stock, Metaman: The Merging of Humans and Machines into a Global Superorganism, Toronto: Doubleday Canada, 1993; Kimball Fisher, Maureen Duncan Fisher, The Distributed Mind: Achieving High Performance Through the Collective Intelligence of Knowledge Work Teams, New York: Amacom, 1997; Peter Russell, The Global Brain Awakens: Our Next Evolutionary Leap, Element, February 2000.

[74] Charles M. Savage, Fifth Generation Management: Co-Creating Through Virtual Enterprising, Dynamic Teaming, and Knowledge Networking, Oxford: Butterworth-Heinemann, 1996; Verna Allee, The Knowledge Evolution: Expanding Organizational Intelligence, Oxford: Butterworth-Heinemann, 1997; James L. Creighton, James W. R. Adams, Cybermeeting: How to Link People and Technology in Your Organization, New York: Amacom, 1997; William E. Halal et al., ed., The Infinite Resource: Creating and Leading the Knowledge Enterprise, San Francisco: Jossey-Bass, 1998 (Jossey-Bass Business and Management Series).

[75]  See http://www.cybercom.net/~rbjones/rbjpub/cs/ai014.htm

[76]  Cf. the lecture by Chris Thomsen, Maastricht University, 27 October 1999. These topics will be further discussed at an upcoming conference, Computer Supported Collaborative Learning (CSCL '99): Connecting Learning Communities Globally, 11-15 December 1999, sponsored by the Stanford Learning Lab, at which Douglas Engelbart will give one of the keynote lectures. See: http://learninglab.stanford.edu/CSCL99.

[77] Cf. Eric Bonabeau, Marco Dorigo, Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, New York: Oxford University Press, 1999.

[78] CSCW as envisaged by Engelbart entails highly educated persons sharing their specialized knowledge. Is this effectively the same as Computer Supported Collaborative Learning (CSCL)? Cf., for instance, http://www.cica.indiana.edu/cscl95/ and http://sll-6.stanford.edu/CSCL99/.

[79] University of North Carolina, Chapel Hill.

[80]  See http://www.macsch.com/

[81]  Cf. the International Council on Systems Engineering (INCOSE). See: http://www.incose.org.

[82] See: http://www.bootstrap.org/augment-132082.htm.

[83] See: http://www.bootstrap.org/augment-132082.htm. Looking at Engelbart's fascinating diagrams, which are reminiscent of Buckminster Fuller's tensegrity diagrams, one is struck that the dangers discussed earlier with regard to Nelson apply here also. If everyone made all the possible links, it would be very easy to lose one's way amid the connections. Hence, once again, the need for a metaphorical equivalent of a compass such as SUMMA (cf. figure 2) to provide bearings remains paramount.

[84]  See http://www.idg.net/go.cgi?id=182825

[85]  See http://www.alibre.com/

[86]  See http://www.aristasoft.com/

[87]  See http://www.inso.com/sherpaworks/sherpads.htm

[88]  See http://www.nexprise.com/

[89]  See http://vet.parl.com/~vet/.

[90]  See http://www.lockheedmartin.com/mission99/MSv2n4.pdf

[91] See http://canada.careermosaic.com/cm/lockheed/lmibs1.html

[92] Americans frequently interpret the European trend as a recognition that content is king, but this characterization is too simplistic. The European emphasis on libraries, museums and archives is much more than a quest for access to content. It is a search for new ways of deepening our awareness of cultural diversity and difference as new keys to tolerance.

[93] Cf. the book by R.G.H. Siu, The Tao of Science. An Essay on Western Knowledge and Eastern Wisdom, Cambridge, MA: MIT Press, 1957.

[94]  Michael Polanyi, The Tacit Dimension, London: Routledge & Kegan Paul, 1966. Cf. http://www.cordis.lu/cybercafe/src/greylit.htm#TACIT.

[95]  See http://www.clpgh.org/clp/Libraries/dublincore.html

[96]  See http://www.w3.org/RDF/

[97]  Professor John Owen Mackenzie has kindly brought to my attention a 'special topic issue' of the Journal of the American Society for Information Science (JASIS), volume 50 (1999), issue 13, on metadata (more precisely: 'integrating multiple overlapping metadata standards', guest editor Zorana Ercegovac).

[98] Some persons are more pessimistic about these developments. See, for instance, David C. Korten, When Corporations Rule the World, West Hartford and San Francisco: Kumarian Press and Berrett-Koehler Publishers, 1996; Richard J. Barnet, John Cavanagh, Global Dreams: Imperial Corporations and the New World Order, New York: Touchstone Books, 1995. Cf. Hans Peter Martin, Harald Schumann, Die Globalisierungsfalle. Der Angriff auf Demokratie und Wohlstand, Reinbek: Rowohlt, 1996. This book offers a more dire interpretation of global corporations and discusses the "tittytainment" idea of Zbigniew Brzezinski.

[99]  See http://www.doe.gov/people/peopnl.htm

[100]  See http://www.toshiba.com/tai.press/motorola.htm

[101]  See http://www.neci.nj.nec.com/

[102] On this problem of the privatization of knowledge see a recent book by Seth Shulman, Owning the Future, Boston: Houghton Mifflin, 1999. For a plea to keep universities out of the copyright domain see Claire Polster, "The University Has No Business in the Intellectual Property Business," CAUT Bulletin ACPPU, Ottawa, September 1999, p. 36.

[103]  See http://chronicle.com/

[104] Personal communication from Professor Baron Jaap van Till, 16 July 1999, here translated from the Dutch:

The line of reasoning is as follows.

1. I was very impressed by the demonstration of the different ways in which the French painter looked at that bridge and how he gradually came to paint it differently (with different accents <orthogonal coordinates>). A TRAJECTORY, as it were. Likewise the different (ways of) looking at the Roman city centre, each time with different CONTEXTS around it.

2. I have seen similarly astonishingly information-rich sequences before: of Rembrandt's self-portraits, of Rembrandt's use of light and shadow, of the developmental story of Mondrian, of the developmental story of Picasso. In themselves those paintings indeed say almost nothing! Except that, even if you see only a fragment of a painting, they all carry their own kind of fingerprint, a scaleless fractal, from which you can tell at once that it was made by that painter. Conclusion: there is thus one overlying and one or more underlying scales of visual coherence with respect to a single painting!! Question: how could you "capture" these and bring them together??

3. A first hint of how that might be done is contained in my "Telescope Metaphor"; see the short piece on my homepage (the eyes of the world). The gist is that many observers, looking from different angles and communicating together through a network, can form a much better picture of pieces of reality: the resolving power of Internet users together can yield a better picture for each of them. There lies the incentive for cooperation via the Internet. Everyone becomes wiser by sharing knowledge, and that knowledge is at once everywhere and nowhere, like a hologram. Fractally replicated, without a centre. This resembles optics very closely. You can cover part of a lens and still see (albeit less sharply and clearly). It runs through our eyes: two watery spheres; only in the brain is an image formed, and one with depth! and perspective! at that. Question: HOW does an image arise from all those snippets and patches of light?? (And where? Answer: anywhere you can place a "lens".) Our brains can apparently do this effortlessly: form an image from many observations and impressions. So our head works as a system of lenses. Brilliant of those neurons, is it not?

4. A second hint: how I think a "knowledge lens" could work. How can you see a (for example 3-D) image, or part of one, better from a series of images? In other words: how do you focus, how do you "concentrate" (a higher degree of consciousness (awareness of awareness))?? What is the mechanism for point 3? I think it works through some form of "super-resolution".

Explanation (there is also a short piece about this, with links, on my homepage). In radar engineering a (formerly highly secret) method was developed to obtain an image with better resolving power from several very noisy radar recordings of, for example, an aircraft or group of aircraft. The idea is that in each picture echo information lies smeared throughout the whole image, and that it should be possible to gather this together. Later the method was applied to multiple satellite photographs in sequences of time and place (see the URL on my homepage).

The images are transformed into the frequency and phase domain (via Fourier transforms, which is what lenses do too!). The spectra are cleverly combined and then, here it comes: extrapolated. According to a book (Andrews?) that I read on this 35 years ago, this is done with so-called Prolate Spheroidal Wave functions, which you will know (eigenfunctions of the FFT??) :-)).

After transforming back to the image domain, a picture appears in which information from the whole "trajectory" has been gathered. My professor refused to believe this story at the time, because he said that you cannot make information grow. True, and there is a price. You can only see one small piece of a whole image "sharply" at a time, just as you are doing now while reading. You can concentrate (attention) on only one piece of thinking at a time. Or, in the case of a series of images, you can extract information into one image or piece of an image.

So: you CAN make the Babel-like fragmentation, spreading, smearing and blurring converge again with a lens, BUT not by yourself alone at all places in the image and at all times simultaneously. If you were to build such a lens with "SuperResolution" aimed at the Internet, then it can be done! But you will only be able to look at one thing at a time.

Observatorium Europeanum

Information spreads itself and lies woven into or reflected in everything around us. Echoes of everything that has ever happened are present, in fractions, wherever we look. Let us build a network that works as a system of lenses. The CIA and other such services do this too, through very laborious human and computational passes; CNN already does it much better and faster. Why then should we not be able to build something similar for art historians, culture and knowledge? A piece of global brain for the imagination. Just imagine!

Jaap van Till.
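The super-resolution idea sketched in this letter can be illustrated, in a very reduced form, in a few lines. The following toy example (my own illustration, not Van Till's actual method) combines the Fourier spectra of many noisy observations of the same scene by simple averaging; structure invisible in any single frame emerges in the combination. Since the Fourier transform is linear, averaging spectra is equivalent to averaging the frames themselves; genuine super-resolution additionally extrapolates the spectrum beyond the recorded band, which this sketch omits:

    import numpy as np

    rng = np.random.default_rng(0)

    # A simple "scene": a bright square on a dark background.
    scene = np.zeros((64, 64))
    scene[24:40, 24:40] = 1.0

    # Many independent, very noisy observations of the same scene.
    frames = [scene + rng.normal(0.0, 0.5, scene.shape) for _ in range(32)]

    # Transform each frame to the frequency domain, combine the spectra
    # (here by averaging), and transform back to the image domain.
    mean_spectrum = np.mean([np.fft.fft2(f) for f in frames], axis=0)
    combined = np.real(np.fft.ifft2(mean_spectrum))

    # Averaging N observations reduces the noise level by a factor sqrt(N).
    print("single-frame error:", np.std(frames[0] - scene))  # about 0.5
    print("combined error:    ", np.std(combined - scene))   # about 0.5/sqrt(32)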

[105] My doctoral student, Mr. Nik Baerten, is preparing a dissertation on the role of agents in the cultural domain, ranging from low-level agents in the form of knowbots to high-level autonomous agents. For an introduction to his ideas see: Nik Baerten, Peter J. Braspenning, "Insight into the Inside," Proceedings of the Second Workshop on Intelligent Virtual Agents '99, ed. Daniel Ballin, Manchester: University of Salford, 1999, pp. 127-130.

[106] Internet governance is an important new field of study. For a recent American statement see: Susan Drucker and Gary Gumpert, Real Law at Virtual Space: Communication Regulation in Cyberspace, New York: Hampton Press, 1999 (The Hampton Press Communication Series: Communication and Law). Cf. The Governance of Cyberspace: Politics, Technology and Global Restructuring, ed. Brian D. Loader, London: Routledge, 1997; Cyberdemocracy: Technology, Cities and Civic Networks, ed. Roza Tsagarousianou, Damian Tambini and Cathy Bryan, London: Routledge, 1998. Cf. Intellectual Property in the Age of Universal Access, ed. Pamela Samuelson, Peter Neumann, New York: ACM Press, 1999.

[107] For an interesting history of such movements see Akira Iriye, Cultural Internationalism and World Order, Baltimore: Johns Hopkins University Press, 1997. The author sees great value in the common understanding derived from sharing cultural experiences. He does not deal sufficiently with the potential of these international efforts to increase awareness of local cultural expression.

[108] Culture Counts: Financing Resources and the Economics of Culture in Sustainable Development, conference sponsored by the Government of Italy, the World Bank and UNESCO, Florence, Fortezza da Basso, October 1999.

[109] Adam M. Brandenburger, Barry J. Nalebuff, Co-opetition: 1. A Revolutionary Mindset That Redefines Competition and Cooperation; 2. The Game Theory Strategy That's Changing the Game of Business, New York: Doubleday, 1996.

[110]  See http://pespmc1.vub.ac.be/cybsysth.html

[111]  See http://www.termnet.at/

[112]  See http://gtw-org.uibk.ac.at/itinerat.html

[113]  See http://fid.conicyt.cl:8000/

[114]  See http://index.bonn.iz-soz.de/~sigel/ISKO/

http://www.isko.org

http://www.fh-hannover.de/ik/Infoscience/ISKO.html; www.hud.ac.uk/schools/cedar/isko.html

[115]  See http://www.bonn.iz-soz.de/wiss-org/Hhprog.htm

[116]  See http://www.simnet.org/

[117]  See http://www.brint.com/Systems.htm

[118]  See http://www.ida.liu.se/labs/iislab/people/patla/DL/index.html

[119]  See http://reimari.uwasa.fi/%7Eatn/research/

[120] See http://www.pastel.be/mundaneum/

[121]  See http://ksi.cpsc.ucalgary.ca:80/KSI/

[122]  See http://www.kmdi.org/index.htm

[123]  See http://www.eil.utoronto.ca/PIF/pif.html

[124]  See http://choo.fis.utoronto.ca/fis/OrgCog/

[125]  See http://terminology.uwasa.fi/nordterm/

[126]  See http://jucs.aifb.uni-karlsruhe.de/WBS/webpages.html

[127]  See http://www.aifb.uni-karlsruhe.de/WBS/broker/KA2.html

[128]  See http://www.gfkl.de/publikat.html

[129]  See http://www.mpi-sb.mpg.de/

[130]  See http://www.cdot.com/

[131]  See http://www.kr.org/kr/kr98/

[132]  See http://wwweti.eti.bio.uva.nl/

[133]  See http://www.mmi.unimaas.nl

[134]  See http://www.infonomics.org/

[135]  See http://www.ctit.utwente.nl/Docs/

[136]  See http://www.ce.unipr.it/internal.pages/neuro-page.html

[137]  See http://www.lib.cas.cz/knav/journals/eng/Neural_Network_World.htm

[138]  See http://limnatis.ummz.lsa.umich.edu/UCTwkshp/

[139]  See http://www.williams.edu/library/ejournals/descriptions/clad.html

[140]  See http://www.knowledgemedia.org/

[141]  See http://www.mediamanagement.org/netacademy/publications.nsf/mediamanagement_title

[142]  See http://www.geog.ucl.ac.uk/casa/martin/atlas/atlas.html

[143]  See http://kmi.open.ac.uk/

[144]  See http://www.csi.uottawa.ca/ifip.wg12.2/ramoni.html

[145]  See http://d3e.open.ac.uk/index.html

[146]  See http://sosig.ac.uk/roads/subject-listing/World-cat/orgtheo.html

[147]  See http://ccs.mit.edu/research.html

[148]  See http://www.sds.lcs.mit.edu/~dlt/

[149]  See http://www.isi.edu/

[150]  See http://www.santafe.edu/

[151]  See http://www.public.iastate.edu/~CYBERSTACKS/Aristotle.htm

[152]  See http://www.ic.arc.nasa.gov/ic/projects/bayes-group/group/autoclass/autoclass-intro.html

[153]  See http://www.seyboldseminars.com/Events/sf96/present/h_kest12/sld006.htm

[154]  See http://reference.cd-rom-directory.com/cdrom-2.cdprod1/002/761.Electronic.Dewey.shtml

[155]  See http://galt.cs.nyu.edu/students/fox/area.html#The Perspective Wall

[156]  See http://www.clr.utoronto.ca/KMAP/km.html

[157]  See http://www.geocities.com/Athens/8959/lc.html

[158]  See http://www.msms.doe.k12.ms.us/~jcarter/plearntx.html

[159]  See http://tred.cr.usgs.gov/cgi-bin/tred_taxon.cgi

[160]  See http://www.cs.umd.edu/users/north/infoviz.html

[161]  See http://www.willpower.demon.co.uk/thessoft.htm

[162]  See http://www.public.iastate.edu/~CYBERSTACKS/BigPic.htm

[163]  See http://ai.bpa.arizona.edu/papers/worm94/section3_5.html

[164]  See http://www.bdt.org.br/bin21/ws92/codata.html

[165]  See http://rndhouse.nrcs.usda.gov/plantproj/npdc/standrds.html

[166]  See http://www.itis.uisda.gov/itis/sources.html

[167]  See http://www.sun.ac.za/local/academic/natural/botany/idb/subject/botflor.html

[168] See: http://taxonomy.zoology.gla.ac.uk/

[169]  See http://www.xrce.xerox.com/people/grefenstette/grefenstette.html

[170] On work at Wolverhampton on automatic RDF generation using Dewey see: http://www.scit.wlv.ac.uk/~ex/253/metadata.html. Contact: c.jenkins@scitsc.wlv.ac.uk.

[171] See http://www.wam.umd.edu/~mturn/, on concepts of metaphor and blending. See also: http://www.wam.umd.edu/~mturn/WWW/blending.html.

[172]  See http://www.banxia.co.uk/

[173]  See http://www.cyc.com

[174]  See http://www5.informatik.uni-erlangen.de/HTML/Literatur/abs-tex-dir/1995/Mast95:ACO/Mast95:ACO.html

[175]  See http://www.lincoln.ac.nz/educ/tip/55.htm

[176]  See http://www.eastgate.com/catalog/MappingHypertext.html

[177]  See http://www.psrg.lcs.mit.edu/~bvelez/std-colls/cisi/cisi-1419.html

[178]  see http://www.erlbaum.com/254.htm

[179]  See http://www.personal.kent.edu/~slis/zeng/fidcr97/first.htm

[180]  See http://www.dlib.org/dlib/january98/dolin/01dolin.html

[181]  See http://www.kitware.com

[182]  See http://www.cc.gatech.edu/gvu/people/Phd/sougata/chi95/sm_bdy.htm

[183]  See http://www.acm.org/pubs/citations/proceedings/csc/100348/p448-olagunju/

[184]  See http://bpaosf.bpa.arizona.edu/~dmitri/CSOM2/node8.html

[185]  See http://sbdhost.part.com

[186]  See http://www.ccic.gov/pubs/blue97/mfg/

[187]  See http://www.sei.cmu.edu/technology/edcs/CLUSTERS/IM/PC_lists.html

[188]  See http://gen.net/index.htm

[189]  See http://cic.cstb.fr/ILC/html/iai.htm.

[190]  See http://www.npac.sgr.edu/users/gcf/asopmasterB/foilsephtmldir/001HTML.html

              http://www.npac.sgr.edu/users/gcf/asopmasterB/fullhtml.html                                      

[191]  See http://tuovi.cern.ch/TuoviWDM

[192]  See http://www.ncms.org/

[193]  See http://www.anxo.com/

[194]  See http://java.ca.sandia.gov/imtl/Tour/sl23828a.html

[195]  See http://www.sandia.gov/mfgtech/NMTP/projects.html

[196]  See http://www.ccm.ecn.purdue.edu/

[197]  See http://www.interex.org/hpuxvsr/jan95/new.html#RTFToC33

[198]  See http://ce-toolkit.crd.ge.com

[199]  See http://www.aiim.org/wfmc/mainframe.htm

[200]  See http://www.workflowsoftware.com/

[201]  See http://ccs.mit.edu/

[202] See http://process.mit.edu/handbook.html

[203]  See http://www.waria.com/

[204]  See http://www.cit.gu.edu.au/research/ei95/clearinghouse.html

[205]  See http://www.cs.ucl.ac.uk/staff/A.Steed/work.html

[206]  See http://atwww.hhi.de/draft/USINACTSnorm/standardsframe1.htm

[207]  See http://www.placeware.com

[208]  See http://www.cs.usask.ca/grads/vsk719/academic/890/project2/node4.html

       Cf http://www.cs.usask.ca/grads/vsk719/academic/890/project2/node3.html

[209]  See http://www.ics.hawaii.edu/~jl/CSRC.html

[210]  See  http://www.w3.org/pub/www/collaboration/overview.html

[211]  See http://www.links2go.com/topic/Computer-Supported_Collaborative_Work

[212]  See http://mytilus.kenyon.edu/collab/collab.htm

[213]  See http://www.crg.cs.nott.ac.uk/~dns/conf/vr/cve98/

[214]  See http://www.hhdc.bicc.com/people/dleevers/papers/cycleof.htm

[215]  See http://www.darmstadt.gmd.de/ambiente/i-land.html

[216]  See http://www.cs.unc.edu/~dewan/prt/cb/icv_proj.text

[217]  See http://www.leeds.ac.uk/civil/research/cae/eime/Cec97.htm

[218]  See http://mice.sdsc.edu/introduction.html

[219]  See http://www.uk.infowin.org/ACTS/ANALYSYS/CONCERTATION/chains/si/desc_sid.htm

[220]  See http://www.al.wpafb.mil/cfb/hsf.htm

[221]  See http://www.control-systems.com/

[222]  See http://www.brint.com/Elecomm.htm

[223]  See http://www.virtualschool.edu/mon/TTEF.html

[224]  See http://www.rxrc.xerox.com/public/isd/us-overheads.htm

[225]  See http://www.cgw.com/cgw/Archives/1996/11/11story2.html

[226]  See http://www.virtualschool.edu/cox/CoxPSIR.html

[227]  See http://www.brint.com/BPR.htm

[228]  See http://mijuno.larc.nasa.gov/dfc/biblio/bpreB.html

[229]  See http://bprc.warwick.ac.uk/bp-site.html#SEC4

[230]  See http://www.brint.com/BPR.htm

[231]  See http://www.infogoal.com/dmc/dmcprc.htm

[232]  See http://www.eil.utoronto.ca/tool/BPR.html

[233]  See http://www.prosci.com/default.htm

[234]  See http://www.reengineering.com/

[235]  See http://www.km.org/

[236]  See http://www.ul.ie/~boylanj/knowlsp6.html

[237]  See http://207.82.250.251/cgi-bin/linkrd?hm___action=http%3a%2f%2fwww%2epractical%2dapplications%2eco%2euk

[238]  See http://www.isb.csiro.au/cis/lib/km.htm

[239]  See http://www.brint.com/km/

[240]  See http://www.brint.com/wwwboard/wwwboard.html.

[241]  See http://www.brint.com/cgi-bin/ubbcgi/Ultimate.cgi

[242]  See http://www.firstconf.com/ikm-usa/

[243]  See http://www.concentric.net/~Astorm/

[244]  See http://www.virtual-community.com/

[245]  See http://www.cudenver.edu/~mryder/itc_data/vlc.html

[246]  See http://www.cyber-designs.net/andres/isds/isds7920/gvt/

[247]  See http://lilikoi.com/vc/

[248]  See http://www.infonortics.com/vc/

[249]  See http://www.onlineinc.com/vircomm/

[250]  See http://www.gbn.org/bib.html

[251]  See http://www.gbn.org/bookclub/sortByTitle.html

[252]  See http://www.infonortics.com/infonortics/infodesc.html

[253]  See http://www.brint.com/NII.htm

[254]  See http://www.sgzz.ch/links/stp/lo/lo.htm.

       Cf. http://www.albany.edu/~kl7686/learnorg.html

[255]  See http://www.actionscience.com/argbib.htm

      Cf. http://ads-sun2.ucsc.edu/learnorg.html

[256]  See http://www.orglearn.nl/

[257]  See http://www.stanford.edu/group/SLOW/books1.html.

[258]  See http://sll-6.stanford.edu/whowhat.html

[259]  See http://www.brint.com/OrgLrng.htm

[260]  See http://www.brint.com/EmergOrg.htm.

[261]  See http://www.albany.edu/faculty/pm157/teaching/topics/orglearn.html

[262]  See http://www.snafu.de/~h.nauheimer/forum.htm

[263]  See http://learning.mit.edu/res/wp/learning_sys.html

[264]  See http://learning.mit.edu/

[265]  See http://www.virtual-organization.net/

[266]  See http://home.t-online.de/home/gerald.lembke/links.htm#anfang

      Cf . http://www.gpsi.com/lo.html

      Cf. http://www.syre.com/

[267]  See http://www.unesco.org/ngo/iau/tfit_international.html

[268] See http://www.adec.edu/virtual.html

[269]  See http://cade98.athabascau.ca/cade/cadehome.nsf/pages/englishhome

[270]  See http://www.links2go.com/topic/Virtual_Universities

        cf. http://www.soc.titech.ac.jp/japa/vpl/v-univ.html

        cf. http://starform.infj.ulst.ac.uk/billsweb/ACCESS/virtual.html

        cf. http://www.unh.edu/NIS/VU/

        cf. http://www.mgdolence.com/tours/virtual1.htm

        cf. http://www.online.uillinois.edu/links/universities.html

[271]  See http://www.ccon.org/hotlinks/vedu.html#top

[272]  See http://www.vu.org/index.html

[273]  See http://isds.bus.lsu.edu/cvs/index.html

[274]  See http://www.ryerson.ca/dmp/soc/sld020.htm

[275]  See http://www.co-nect.com/Schools/QA/HnSchools/may98resources.html

[276]  See http://www.virtualschool.edu/

[277]  See http://metalab.unc.edu/cisco/schools/elementary.html

[278]  See http://www.knight-moore.com/html/virtclass.html

[279]  See http://www.brint.com/Research.htm

[280]  See http://www.cudenver.edu/~mryder/itc_data/constructivism.html

[281]  See http://www.prainbow.com/cld/

[282]  See http://www.virtualschool.edu/mon/SocialConstruction/index.html#Sokal

[283]  See http://www.npac.syr.edu/users/jravitz/home_backup.html

[284]  See http://www.npac.syr.edu/users/jravitz/IDE_Model_long.html