I want to explore some epistemological consequences of our living in cyberspaces, which I take to be the product of our ability to digitize the material world, as well as to compose with the resulting digits, pixels, sound bites, magnetic markers, etc. Digitizing has been going on since Gutenberg’s invention of movable type. It has fueled industrial production. It has allowed the mass media to emerge. Now, it is accelerating technological developments towards a new kind of society that is providing us with new spaces.
Cyberspaces, so conceived, are furnished by data, computations, interface software, and networks.
To develop this contention, I am inviting you to join me on a journey along three intertwined paths that our leading digital technologies pave for us. All three paths direct us to different aspects of a new paradigm, a paradigm that puts us, humans, into the driver's seat of our own understanding. Along the first path I would like to carry computation to its technological conclusion and proceed from computing to interfacing and to a second-order kind of understanding.
Although long-range technological forecasts have been notoriously unreliable, one of the most consistent trends in computing derives from our efforts to increase its efficiency. In this pursuit, computers have become faster, smaller, cheaper, able to handle larger volumes of data, and able to host greater numbers of computational algorithms, particularly software. This trend can be observed everywhere and has created machines so small that their shape is no longer determined by their function but by how their users operate them. The desk calculators of the 60’s were the first to lose their bodies as they were gradually reduced to wristwatch size. The same trend is evidenced in the transformation of mainframe computers into desktop workstations and into laptops, consisting of not much more than what users need to see and handle -- but with capacities exceeding most earlier mainframes. Electronic cash, data gloves, virtual reality applications and software attest to even more radical disembodiments of technology. Miniaturizations, if not virtualizations, of this kind have given birth to a totally new kind of artifact: the interface.
Interfaces relegate computational technology to the background of a user culture, and this shifts design considerations into an entirely new domain.
To be clear, interfaces are not mere points at which humans and machines can be separated -- as the German word "Schnittstelle" might suggest. Interfaces are not mere arrangements of icons on a computer screen, graphically representing what a computer is doing -- as in Star Trek fictions, or as the operator control boards of 60’s mainframes were designed to do. Nor should interfaces be confused with the software by which users direct the computations inside a machine.
In fact, interfaces arise in interaction. They are dynamic and virtual entities that bring the rich repertoires of users’ conceptual models into interaction with computational algorithms that no single individual, least of all their users, can fully understand, each reaching as deeply as necessary into the other’s operational domain. Interfaces can be likened to virtual versions of the cybernetic organisms or "cyborgs" we spoke of in the 70’s. The Janus-faced character of interfaces defies mathematical description from either side of the divide: computational logic on one, cognitive theory on the other.
Interface designers had to accept that the users of technology are not machine-like -- an assumption that had fueled ergonomic considerations -- but that they are knowledgeable, although in ways different from those of designers. Users’ understanding, creativity and intelligence enter interfaces as defining elements. Thus, in trying to understand interfaces, we cannot be concerned with technology as such, but need to understand the relations between users’ understanding of a technology and designers’ understanding of that same technology. These understandings undoubtedly differ from each other and from ours, without either being superior to the other on a priori grounds. Yet, the possibility of different ways of knowing existing side by side, and the reflexivity this entails, is incompatible with our dominant (positivist and mechanistic) scientific tradition, including the artificial intelligence approach to cognition, which aims at the construction of a coherent uni-verse and can privilege only one articulation as the legitimate one. Interfaces simply escape that tradition, a fact that signals the need for a radical shift in our thinking. Let me be clear in which ways this is so:
For one, any understanding is always someone’s understanding. It is always embodied in unarticulated background phenomena that are not part of consciousness. Mathematics, by contrast, is not intended as being bound to a particular body, nor is scientific theory.
For another, and this is more difficult to grasp, our understanding of someone else’s understanding (of technology), which I maintain is indispensable to the design of interfaces, recursively embeds another understanding into ours and thus becomes an understanding of understanding, or a second-order understanding (analogous to Heinz von Foerster’s notion of a second-order cybernetics). This second-order understanding is altogether different from the first-order understanding that we grew up with and that natural scientists have taught us to pursue -- an understanding, as spectators, of things outside of ourselves. Natural scientists construct their universe to be observed and understood but unable to have any understanding of its own. This Cartesianism may be convenient for the scientist and even justified when one wants to explain observations in causal or mechanistic terms. But, when one is concerned with intelligent uses of technology, one cannot start from the assumption that people are incapable of understanding their own actions. Nor can one start from the assumption that normal human beings are rational machines, act alike in like situations (are stimulus-driven), and live in the same disembodied universe that scientists construct. In practice, second-order understanding means acknowledging not only where understanding is embodied (for example, whose theory it is) but also who else populates that understanding (whose understanding is being theorized).
Finally, the fact that a second-order understanding is inconceivable from a first-order point of view accounts for much of our present blindness in not seeing what the new technologies actually do in our world.
But before I look into this blindness, let me conclude my first path by recommending that second-order understanding should be a goal of the emerging paradigm. As I have shown, its need arises from the tendency of computational technologies to remove themselves from everyday understanding and view. I will now enter the second path toward what we seem unable to see and I will start with our current understanding of what networks are doing.
Until recently, computers gave us relatively self-contained spaces to work with. Networking them has radically altered the form of present cyberspaces. Today, networks occupy an extraordinary amount of space, albeit space measured in gigabits rather than meters. They span great distances, transmit at lightning speed, and touch nearly everyone on the globe through one medium or another.
However, what I find far more remarkable than the staggering growth of cyberspaces, which we praise almost universally, is that we justify the new technologies, the furniture of cyberspaces, mainly in terms of information. This started in 1948 when Claude E. Shannon published his Mathematical Theory of Communication which, against his lifelong protest, was popularly christened "information theory." In the 60’s IBM started to call computers information processors. In the 70’s we began to design information systems. Now we have information products, information super-highways, national information infrastructures; and we justify all of these technologies in terms of the information they provide, ideally to everyone, everywhere, anytime. What amazes me most about these justifications is that they seem to echo all the values we have inherited from the 17th Century, suggesting a nearly seamless transition from the Age of Enlightenment to an Age of Information -- as if nothing had changed since, except that information is no longer a scarce commodity.
In the 17th Century, knowledge became a noun that signified tangible objects we could acquire, manipulate and consume. Now we talk of hard information which merely adds mobility -- from any place to any other place -- to the 17th Century idea of knowledge, and justifies trading it as a commodity. In the 17th Century we started to believe that true or objective knowledge would liberate us from the superstitions, prejudices, and irrational beliefs of the Middle Ages. Now, we define information as an objective and self-evidently valid entity outside of us -- meaning is no longer an issue -- and we believe that the more information we acquire the better decisions we make and the better our lives are going to be. In the 17th Century we developed ideas of equality, universal education, and equal chance for everyone. Now we value the free flow of information, freedom of expression, universal access, uniform standards, and common languages.
Above all, in the 17th Century we started to cherish knowledge for its own sake, founded the institution of science on the premise that knowledge could be politically neutral, and celebrated mathematical representations of this knowledge, thereby obliterating all needs to address the diverse ways different cultures may come to know. The prophets of an Information Age seem to believe, much as the prophets of the Enlightenment did, that "being informed" is all we need to be concerned with. What people do with the information is another matter. This separation has been haunting us for 300 years. Yet, we continue to use the library as a model for information systems. Even the World Wide Web is nothing more than a digital bulletin board on which everyone can post information for everyone else to see -- much as Martin Luther posted his 95 theses -- except that we now think on a global scale.
I believe we have gone seriously wrong in describing the digital revolution in terms of this notion of information. We are no longer the 17th Century Cartesian beings who were conceived of as busily accumulating disinterested knowledge by mapping their outside world into the interiors of their minds. We live today. We interface with sophisticated technologies of our own making. We continually language new worlds into being, and we participate in the worlds of others. For something to be information, it must make a difference in our lives. I hope to show that this difference does not lie between the inputs and outputs of information systems or media but in the co-ordination they initiate and sustain.
For example, networks might show us the best airplane connection presently available. But buying a ticket on that "information" triggers the co-ordination of an unimaginable number of individual, institutional and physical activities that finally get us from here to there. This duality of (the individualist notion of) information and (a cultural and organizational notion of) co-ordination can be observed in nearly every instance of network use, whether a network shows alternative routes around traffic jams, advises when to buy or sell stocks, enables surgeons to examine a human body in virtual reality, or keeps track of prices, profits and the politics within a corporation. I suggest that even the good old telephone is foremost a medium of co-ordination: We "stay in touch" with others, aligning our own lives to theirs. We arrange meetings, make and confirm promises, negotiate deals, reach consensus, etc. -- all of these activities initiate, confirm or maintain alignments, synchronicities, complementarities, coalitions; that is, they co-ordinate people’s behavior. Co-ordination is rarely limited to the immediate participants in one network. Other networks, especially social ones, may become affected as well, drawing larger numbers of people into a co-ordination.
Designing networks in "information-technological" terms not only inscribes information notions into such technologies and encourages their use in these terms, it also renders the co-ordination they facilitate into an externality, into a latent or unarticulated background phenomenon, into something unattended. Since cyberspaces take on global dimensions, the unattended co-ordinations that networks do facilitate also become more difficult to recognize by conventional means, rendering co-ordination increasingly invisible. There have been reasonable speculations that many areas of the Western World -- air traffic control for one, banking for another, and, not least, large administrations -- would have collapsed and pulled the whole capitalist world down were it not for the use of computers. I suggest this to be due to the unattended co-ordinations they facilitate. Let me mention three conceptual shifts networking encourages.
Networking is questioning the hierarchical forms of subordination and control that have ruled many areas of our lives through the industrial era. Interactive networks have reduced the attractiveness of one-way mass media. They have decentralized management, created horizontal corporate structures, and, on a personal level, they enable us to work from any convenient location, to carry our "office" with us wherever we go -- to name just a few simple examples. The downfall of the Soviet Union can be traced to an attempt to hold on to hierarchical models that digital technologies could no longer support. The technology-induced shift from hierarchical control to heterarchical co-ordination is growing far faster than our understanding of it.
Networking generates totally new social forms: bulletin boards, chat rooms, news groups, collaboratoria, virtual universities, businesses that are constituted entirely within nets, etc. Such virtual communities arise spontaneously within nets, synchronize the lives of their members, and sustain themselves as long as people see virtues in participating in them. Notwithstanding common belief that communication technologies foster a global community with common knowledge and shared values (see Marshall McLuhan’s thesis), networking has enabled different political cultures to become even more distinct from each other. Conflicts of interest, knowledge gaps, cultural diversities and national identities are increasing, rather than declining -- although such differences may be acted out differently within nets than without. The massive expansion of network uses throughout the world is co-ordinating coalitions on a scale that leaves traditional methodologies groping while their participants actually feel quite on top of them. This apparent paradox too makes the case for a new paradigm a pressing one. It also calls for a new sociology of networking.
Finally, networking directs us to a new image of what humans essentially are. Since the Enlightenment, we have been celebrating our individualism (our self-containedness). Combined with the notion of knowledge as a representation of objective facts (the correspondence theory of truth) and aided by linear technologies -- machines that amplify power as well as those that amplify vision -- this has caused us to believe that we are supreme observers and rulers of our world. In networks, however, we are not only observing each other, we are also co-creating our realities relative to each other. In networks, the human mind reveals itself as an essentially distributed phenomenon, in circulation, and social in nature, much as Gregory Bateson already proposed. Memories are not in the brain alone but extended to texts, recalled in interaction with others, and reconstructed from access to multiple channels of communication. In nets, human understanding is essentially incomplete, dialogical, and always open to participation by other fellow beings. Co-ordination can be facilitated but not caused. We enter a net as agents with concerns. We adjust to what we hear, read, or experience, and when we are not on a net, we nevertheless act in accordance with our construction of others’ understanding. We can shift our participation from one net to another, much as we are able to shift from one discourse to another, assuming different personalities in each, and giving different accounts of who we are and what we do. Second-order understanding is both a prerequisite and an outcome of networking. In networks, human identities never stay quite the same.
In the exposé for Expo 2000, I read with interest the proposal of a pilot fish, one for each visitor, to provide guidance by answering individual questions. This is a marvelous opportunity for demonstrating the new technologies. However, if this fish merely gives authoritative answers to visitors’ questions, then it would remain stuck in outdated information notions. In view of the co-ordinations I claim these technologies do facilitate, why not allow these fish to "talk among each other" and to respond by taking into account what other visitors have told their fish to say. They could then offer advice not only on matters pertaining to the exhibition but also on how other visitors are seeing that exhibition, how to meet like-minded people, how to locate a friend, how to avoid long lines, or how to locate places where one would be left alone. This would resemble, microcosmically, the kind of co-ordination that internetting introduces worldwide.
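The difference between a fish that merely answers and fish that "talk among each other" can be sketched in a few lines of Python. This is a minimal illustration only; every class and method name below (ExhibitionNet, PilotFish, and so on) is a hypothetical stand-in, not a specification of any proposed Expo system.

```python
# A toy sketch of pilot-fish agents that co-ordinate rather than merely
# inform: each fish answers by pooling what other visitors told their fish.

class ExhibitionNet:
    """A shared pool of visitor remarks, keyed by topic."""

    def __init__(self):
        self.remarks = {}  # topic -> list of (visitor, remark)

    def post(self, visitor, topic, remark):
        self.remarks.setdefault(topic, []).append((visitor, remark))

    def heard_about(self, topic):
        return self.remarks.get(topic, [])


class PilotFish:
    """One guide per visitor; it advises from the pooled remarks of
    other visitors, not from a fixed database of answers."""

    def __init__(self, visitor, net):
        self.visitor = visitor
        self.net = net

    def tell(self, topic, remark):
        self.net.post(self.visitor, topic, remark)

    def advise(self, topic):
        # Exclude the visitor's own remarks: advice comes from others.
        others = [(v, r) for v, r in self.net.heard_about(topic)
                  if v != self.visitor]
        if not others:
            return f"No other visitor has remarked on '{topic}' yet."
        return "; ".join(f"{v} says: {r}" for v, r in others)


net = ExhibitionNet()
ada, bob = PilotFish("Ada", net), PilotFish("Bob", net)
ada.tell("robotics hall", "long lines before noon")
print(bob.advise("robotics hall"))  # Ada's remark reaches Bob via his fish
```

The point of the sketch is that the "answer" Bob receives was never stored as authoritative information; it emerges from the co-ordination of what visitors tell their fish.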
As a second goal of the emerging paradigm, I am recommending that we look to co-ordination as the defining consequence of digital technologies, especially of networking, and reconsider who we are as "networked beings," even if this challenges our traditional self-understanding. Dropping the prefix "information-" in favor of "co-ordination-" creates a gestalt switch of considerable consequence for how we see the new technologies.
On my last path I would like to examine what blinds us to co-ordination and how we might overcome this inability to see. I will start this exploration where we left the interfaces -- which are what remains in cyberspace after data, algorithms, and networks have taken up their spaces.
The design of interfaces has taught us that the most natural way of handling the complexities of cyberspaces is by enabling interfaces to be language-like, utilizing metaphors, metonymies, and icons that are self-evident, require little instruction, generate internal motivations for use, and are easily supported by user-cultures. Metaphors are an essential ingredient of language. But it is important to note that one cannot appreciate the role of metaphors from a perspective that takes language as a system of representations (which belongs to the Cartesian view of the world). Instead, I am taking language as a process, a process of interaction between knowledgeable speakers of a language, as languaging. From this perspective, metaphors, metonymies, and icons are no longer about things; they are being used, or languaged, by particular people, in particular contexts, and with envisioned results. In a nutshell, one can say that metaphors transfer experiences from familiar to less familiar domains in need of perceptual reorganization and behavioral guidance. Metonymies economize on our conceptions and perceptions, and icons evoke images of particular experiences.
Languaging also is a social process, not merely a use of tropes, as rhetoricians would have us believe. I would even say it is the fundamental social process, for it requires the co-presence of other speakers and it arises out of a long history of interactions within a community. Like word uses, computer uses have their histories as well. Bill Gates reminds us: Every generation of technology grows out of a previous generation and, in the transition from one to the next, it is easy for us to get stuck by missing the turn a new generation of technology is taking. A language without a community of speakers is a dead language (of which scholars might know just enough to write about). This is also true for technology. A technology without a social network of stakeholders (producers, promoters, users, etc.) is a dead technology (whose artifacts would be lucky to enter a museum and stay there on life-support). The point is that the human use of technology, languaging, and community are intricately connected through individual perceptions.
With this in mind, I want to revisit the lesson learned along our second path and say now that "information" is part of a metaphorical complex we are languaging. Metaphors are important for their entailments. A metaphor in use is not mere talk but also shapes our perceptions and consequent actions. It is the use of the metaphor of human information processing (on which much of the cognitive sciences depend), the use of the metaphor of an information superhighway (which Al Gore, US Vice President, introduced to direct the US public towards the new technologies), the use of the container metaphor of communication (which entails that information is carried by signals from a sender to a receiver), and the use of many related metaphors that lead us to perceive information as a thing-like entity that exists apart from our understanding. In the absence of alternative metaphors, the use of this dominant metaphor prevents us from seeing how networking technologies could facilitate co-ordination. We confuse our perceptions with what is outside of us when we fail to realize the metaphorical ground of how we see. We are indeed languaging information technologies into a mutually (inter-subjectively) verifiable existence by calling them by these names, by translating our descriptions of them into their design, and by using them in the very terms we reserve for them. To avoid misunderstanding: languaging alone may not change what these technologies ultimately do, but languaging surely affects our perceptions, how we build and interface with the technological world, and what becomes real to us. As George Lakoff has shown us, without an awareness of the tremendous constraints metaphors impose on our cognition, we are governed by them. Without an awareness of our languaging we are, as Heinz von Foerster noted, doubly blind: We do not see (certain things that other metaphors could bring forth) and we do not see our not seeing this.
The cure for our 300-year-old blindness lies in our consciously deviating from established linguistic practices, for example, by inventing a new vocabulary (as Richard Rorty suggests), by introducing new metaphors (as George Lakoff would recommend here), or by creating different communities to language with. The reason that the Cartesian paradigm and the associated notion of information still dominate our thinking is that so many people continue to language them into being. Several powerful institutions even thrive on this paradigm and depend on its continuance. Moreover, when the terms of this paradigm become inscribed into a particular technology, as I have argued they are, this technology is likely to constitute the material support for the very conception that gave rise to it and thus add to its resilience. Linguistic practices possibly are the easiest to alter, although probably the most difficult to become aware of. I see my role here and today as trying to bring the latent consequences of the new technology into the foreground of our own languaging. This symposium and Expo 2000 could make a difference in how we continue to speak of and exhibit our digital world and thus prepare us for the 21st Century.
We might want to take the above-mentioned pilot fish not as an idea but as a metaphor and talk of its entailments. A pilot fish is fun to have, of course, but as a metaphor, it entails that visitors are helpless without guidance, are in anticipated need of information. Moreover, since fish say little to each other, even in crowded tanks, visitors may not get the idea of communicating with each other and about each other through their pilot fish. I cannot offer a compelling metaphor yet, but would look for one that entails the ability of visitors to access how other visitors present themselves to their fish and to engage each other in mediated conversations pertaining to the exhibition. Perhaps the metaphor of a party host might accomplish this better than that of a fish. A good host remembers everyone and graciously facilitates the very conversations that guests hope to engage in. But, whether it is a fish or a party host, any metaphor that enters processes of communication among systems designers will drive the development of the software as well as suggest its subsequent use. And when it has done so, it will have demonstrated how the use of this technology was languaged into being, and that the interfaces that arise are primarily linguistic artifacts and only secondarily determined by technological considerations. Wherever this is so, we will have to replace our traditionally mechanistic models of reality with linguistic ones. Thus, it turns out that cyberspaces are not only vast but also underdetermined by data, computations, interface software and networks. Languaging determines the remainder.
Having reached the end of the journey I had promised you, I am proposing that we take languaging as a community’s way of bringing forth interfaces with artifacts its members can understand and operate. This conceives of language -- not as a means of conveying information, but, in Humberto Maturana’s words -- as a "co-ordination of co-ordination" (he adds "of actions"). Thanks to our new digital technologies, languaging, the co-ordination of co-ordination, is now vastly extended. It is beyond our understanding in Cartesian observer terms while quite manageable by its participants and through their co-ordinating practices. Languaging becomes the third leg on which our new paradigm can stand. It offers us a window to our understanding of others’ understanding. It directs our perceptions of technology to the co-ordinations it effects, and it allows us to reflexively alter many social practices we consider deficient.
Second-order understanding, co-ordination, and languaging/community are neither right nor wrong but a way of speaking (understanding, acting, and being) that encourages the use of linguistic models of artifacts instead of the mechanical ones, which we have used since the 17th Century and in ignorance of human agency and responsibility. Above all, linguistic models promise to keep our inquiries (into technology and media) human-centered and our interfaces understandable, useful, compelling, and alive.
The shift in understanding that the digital technologies urge us to consider is truly mind-opening. I invite you to walk through some of the implications of the emerging paradigm and discover on your own the possibilities it makes available to us all.