The most engaging interactive narrative relies upon flow; that is, uninterrupted participation in the unfolding action. Poor interaction design can interrupt flow and degrade the experience.
Interaction can be described as many things. Catchwords abound: "Engaging," "Immersive," "Participatory," "Responsive," and "Reactive."
Interactivity is a continuing increase in participation. It's a bidirectional communication conduit. It's a response to a response. It's "full-duplex." Interaction is a relationship. It's good sex. It's bad conversation. It's indeterminate behavior, and it's redundant result. It's many things, none of which can be done alone. Interaction is a process that dictates communication. It can also be a communication that dictates process. It provides options, necessitates a change in pace, changes you as you change it.
1.4.1: Interactivity Isn't a Feature of a Medium
This is why, like smoke and fire, communication is implied wherever there is interactivity.
Interaction operates on something. It is a form of dealing with pre-existing material. It's modification, not generation. This means the role of the author, or the person generating material, is both more difficult and more important than it was before digital interaction, because increased attention has to be paid to what is being generated. But it's also formalistic and, because it operates on something, it is governed by rule sets.
Interactivity requires rule sets and constraints in order to function smoothly. Consider the rules of driving: traffic lights, street signs, sidewalks, dotted lines and speed limits. We interact, ultimately, with one another while driving and it's always a curious lesson to see who is flipping off whom for what reasons.*
In the Middle East drivers generally honk to say "I am here" and in North America drivers generally honk to say "You shouldn't be there." These are unspoken rule sets of interaction, just as shaking right hands or, alternately, kissing on the cheek are rule sets for introduction. They govern our interactions.
As Nathan Shedroff, design consultant, founder of Vivid Design Studios, and author of the recent publication Experience Design puts it,
"... interactivity (so far) can really only occur between two people, whether or not they use a device between them to aid in the experience. This, then, is the key: interaction is what people can (and get to) do. It's not about things moving on screen. It's not about a particular technology..."
With any good interaction the rule sets are iterative and often unconscious, providing a framework for minimizing damage and maximizing meaning. In the case of traffic lights and the interaction of driving, the meaning is getting from point A to point B. The stoplights don't tell you where to go; they just tell you when. The constraint of the grid of the street isn't there as a means of dictating generalities, but specifics. If you want to get from the northwestern corner of town to the southeastern, it's perfectly possible; you just have to take lots of little 90-degree turns.
The fact that you can conduct general decisions within the framework of specific guidelines is a key trait in good interaction design. Interaction isn't a feature of a medium. It's a process of communication that, like any form of communication, follows a set of rules and guidelines.
1.4.2: Three Principles of Interaction
These may also be named "Interaction Design Constraints."
Interaction, like any other form of communication art, can be informed by a set of principles. These principles guide the quality and depth of the interaction. If the principles are considered in the process of development, the quality of the design can be improved.
Three principles of interaction are:
input / output
inside / outside
open / closed
The First Principle, the principle of Input / Output: Input should create output and the output should create input. It's the interaction cycle's ability to add information that defines the interaction's quality.
First, the response time between the input and the output should be quick enough for users to have a clear sense of what change they are effecting in the system. In the early days of the web, Stanford, Microsoft, and Xerox PARC all spent many hours showing that a person won't wait more than 20 seconds for a page to download. After 20 seconds he or she clicks to another page or stops the download. This is because there was a need to know that some change was being effected in the system within 20 seconds.
Second, the ability to control the input should be present. If you push a button next to a door you expect someone to answer. The input should facilitate more input. And the input should provide the user with a new capability. As this happens the line between stimulus and response thins. And as the line between stimulus and response thins, the depth of immersion increases. This is why you can't do something else if you're immersed. This is why, if it's really interactive, it's consuming.
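The first principle can be sketched in code. The following is a hypothetical illustration (the function name and the budget constant are mine, not the author's): output must arrive within a known budget, or the user is assumed to have abandoned the cycle.

```python
import time

# Threshold drawn from the 20-second studies cited above.
RESPONSE_BUDGET_SECONDS = 20.0

def respond(handler, *args):
    """Run a handler and report whether the user saw change in time.

    If the output arrives within the budget, the cycle can continue
    (output invites more input); past it, the user is assumed to
    have abandoned the interaction.
    """
    start = time.monotonic()
    output = handler(*args)
    elapsed = time.monotonic() - start
    return output, elapsed <= RESPONSE_BUDGET_SECONDS

# A fast handler keeps the interaction cycle alive.
result, in_time = respond(str.upper, "knock knock")
```

The point of the sketch is only that latency is part of the design: the system must confirm change quickly enough that input can create output, and output can create more input.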
The Second Principle, the principle of Inside / Outside: A dialogue should be created between the internal and external worlds.
"Inside-outside" refers to the relationship of two sorts of interaction. I also call this the difference between "inside the skull" and "outside the skull" interactivity.
Inside-the-skull interactivity is a process of extending what the user already knows. It is the world of the reader's imagination. Take, for example, reading. It works with existing iconography (the alphabet) and metaphor (Little Red Riding Hood) and relies on the reader's interior understandings to build a visualized and emotional suspension of suspicion. "Inside-the-skull" is the world of meaning. As William James puts it, "Fantasy or Imagination, are the names given to the faculty of reproducing copies of originals once felt." Inside-the-skull is the art, metaphor, and subtle cues that build things like dreams.
Outside-the-skull interactivity is based on what we are experiencing on an empirical, or experiential, level. The framerate of the video game, the haptic feedback of the joystick, the hues of the colors, or the 32-bit stereo depth are all elements of craft, not art. These timed and physical elements are the components of interactivity that many authors think are the only pieces worth paying attention to. This is a mistake because technology is not only an extension of ourselves, it is a reformation of the world around us. Authors of interactivity that are not paying attention to both the subject and the object of an interaction (the subjective and objective perspectives) are missing one of the key values of interactivity. This key value is the proportion of inside-the-skull and outside-the-skull information that makes the art of interactivity interesting.
The writers of the video games Tomb Raider and Final Fantasy have done a marvelous job of capitalizing on this by increasing their outside-the-skull narration. They released movies. They increased their audience's understanding of the "backstory" (the implied narration of the video game), and, by making Lara Croft a living character on the screen in a photorealistic environment (and, uh, topology), they increased the visual depth of the video game for game players.
In 1992, Paul Sermon, an installation artist who has worked at the edges of digital technology for more than a decade, was commissioned to produce a project he named "Telematic Dreaming" in which there were two rooms. Each room contained a bed, a camera above the bed, and a projector next to the camera. One museum visitor was in each room, lying on each bed. The result: two people were seen lying next to each other, but, of course, only one was physically there. The other was a projected image from the other room.
The success of the project was based on its ability to work with both inside-the-skull and outside-the-skull interaction. If you were lying on the bed your external world was being immediately informed that there was an image of a person there next to you. The power of this project was that the sight of the projected image of the person overlapped with the intimate meaning of the bed (the internal, inside-the-skull, world).
The Third Principle, Open / Closed: The system should get better the more it's used.
Closed systems are boring. Open systems have something to give back. This can be tested by going outside and kicking two things. First kick a brick. As soon as you kick it, the brick will move. It's a response that is expected (and potentially painful). Next, go kick a person.
The reason why the interaction with the human is more intense than the interaction with the brick is that the interaction with the human gives something back that is unpredictable. The human is independent and unpredictable. The human is an open system. The brick just rolls over; it's a closed system.
The real indeterminacy is in how the person will respond, not whether they know they have been kicked. This introduces second-order effects because the person might just jump away or will try to kick you back.
As many software programmers have learned, indeterminacy is the characteristic of a system that gives the system its independence. If you have a system that has a kind of dynamic equilibrium it will be more robust, more capable of handling change, and, therefore, more interactive and participatory. These are characteristics of its independence.
Open systems are more complicated, less predictable, and more interesting than closed systems. Algorithmically generated geometry, such as 3D fly-through landscapes, is a good example. Algorithmically generated personalities, such as high-end artificial intelligence systems, are an even better example. But what remains the most unpredictable, independent, and captivating of all interactions is other people. There is no predicting their behavior with certainty, but there is almost always a context that defines the response.
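The brick-versus-person distinction can be caricatured in code. This is a hypothetical sketch (the class names and responses are mine): a closed system maps the same input to the same output forever, while an open system's response also depends on internal state that the interaction itself keeps changing.

```python
import random

class Brick:
    """A closed system: the response never varies."""
    def kick(self):
        return "rolls over"

class Person:
    """An open system: the response depends on state that the
    interaction itself keeps changing (second-order effects)."""
    def __init__(self, seed=None):
        self.patience = 3
        self._rng = random.Random(seed)

    def kick(self):
        self.patience -= 1  # the interaction changes the system
        if self.patience <= 0:
            return "kicks you back"
        return self._rng.choice(["jumps away", "glares", "shouts"])

brick = Brick()
assert brick.kick() == brick.kick()  # closed: fully predictable

person = Person(seed=42)
responses = [person.kick() for _ in range(3)]
# open: indeterminate at first, then the second-order effect arrives
```

The seeded random choice stands in for indeterminacy within a context: you cannot predict the exact response, but the set of plausible responses is bounded, which is the "context that defines the response" above.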
This is the reason why multiuser gaming environments (an example of Interactive Narrative) such as the online component of the movie A.I., Cloudmakers [2.5.2], or multiuser games such as Ultima Online [3.5.5] have started to take such a share of the time we spend with interactive systems.
These principles of interaction, input / output, inside / outside, and open / closed, can be used to guide authors as they develop narratives that use interaction.
1.4.3: Four Steps of Interaction
Interaction is composed of steps that, like dance choreography, music notation, rhetoric, or any other form of communication art, can be outlined to better understand its basic process. These steps guide the form and shape of the interaction. If the steps are understood prior to designing an interactive system, the quality of the design can be increased. To clarify, these are steps, not (as in 1.4.2) principles. These are actions that a reader follows. These steps are intended to complement and work with the principles listed previously.
The principles are a means of guiding development. The steps are a means of evaluating the result of that development.
Interactivity is, like plot, based on fascination and captivation. It is how people get pulled into a process that continues to draw them deeper and deeper. Interaction can be broken down into four steps which, if the interaction design is done well, generate an increased interest in further interaction. The steps go like this:
Note how each of these steps drives the following.
Observation: The reader makes an assessment.
Exploration: The reader does something.
Modification: The reader changes the system.
Reciprocal Change: The system tries to change the reader.
In any system, a simple level of familiarity is necessary to act. And, before any action takes place, a kind of awareness of first-level options is necessary. First-level options might include the identification of things like buttons or levers or stairs. The reader might ask, Do I move or does the environment move around me? Do images or text represent some kind of code or set of codes? What is possible? In Myst, an interactive narrative published by Broderbund and authored by the Miller brothers, Rand and Robyn, readers experience this very effect. They are dropped into an environment in which they need to use their skills of observation to determine their abilities in the environment.
After first-level options are discovered, the reader moves to a second level in which capabilities are explored. The reader finds out what she can and can't do and, effectively, stretches out her hand and finds that she can make a change. But it's a process of unintentional discovery, not conscious change.
If a reader has made an assessment and done something based on that context, the reader will change the interactive system. The reader bridges context to decision. This is the leap from unintentional discovery to conscious change. At this point, the reader knows at least some of his or her abilities and uses them with intent to modify the system. The modification was created for the user by the author, and because it was allowed (and sufficiently motivated) the level of interaction in the system is increased.
And if it's interactive and the reader is engaged, the system changes the reader's actions. The fact that there is reciprocal change is one of the defining steps of high-level interaction. Without reciprocal change the system might as well be a brick or a doorbell rather than a person who has the ability to be somewhat indeterminate and interactive.
Repeat: The reader makes another assessment.
By this time the system is rolling and by going back to step 1, the process deepens and the interaction increases. If all goes well, the system then begins to improve for the person, the inside-the-skull and outside-the-skull worlds start to mix, and input creates more output.
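The four steps, plus the repeat, form a loop that can be sketched schematically. The class and method names below are hypothetical, invented only to make the cycle concrete; they are not the author's notation.

```python
class System:
    """A toy interactive system the reader acts upon."""
    def __init__(self):
        self.state = []
    def observe(self):
        return ["button", "lever"]   # first-level options on offer
    def modify(self, action):
        self.state.append(action)    # the reader changes the system
    def feedback(self):
        return len(self.state)       # what flows back to the reader

class Reader:
    """A toy reader whose behavior the system can alter."""
    def __init__(self):
        self.skill = 0
    def explore(self, options):
        return options[self.skill % len(options)]
    def adjust(self, feedback):
        self.skill = feedback        # the system changes the reader

def interaction_cycle(system, reader, rounds=3):
    """Observation, exploration, modification, reciprocal change; repeat."""
    for _ in range(rounds):
        options = system.observe()         # 1. observation: assessment
        action = reader.explore(options)   # 2. exploration: doing something
        system.modify(action)              # 3. modification: changing the system
        reader.adjust(system.feedback())   # 4. reciprocal change: being changed
    return system.state

trace = interaction_cycle(System(), Reader())
# trace records the reader's deepening path through the system
```

The design point is in step 4: the reader's next choice is a function of what the system handed back, which is exactly the reciprocal change the text describes.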
Let's go back: interactivity is an increase in a reader's participation. It's a bidirectional communication conduit. It's a response. Interaction is a relationship. It's mutually executed change. It's indeterminate behavior, and the redundant result. As far as narrative is concerned, it amounts to providing the reader with the ability to alter specifics in the plot.
1.4.4: Designing Information for Interactivity
Redundancy and Context: Cues of Interaction
You open your eyes and it's completely black. One of those dark situations where you almost feel the blackness pressing on your head... completely numb and silent. Ahead of you there's a point of light. You have a piece of information because you have a piece of difference. Information is difference.
Next imagine that the point of light (this small, yellow pinprick in the fabric of the darkness) begins to get taller. There's change, therefore more difference, therefore more information. The line breaks off another to its side, and another, spilling out so that it looks like this:
The width of the lines (some seem fat and some narrow) and the spaces between the lines give us more information.
Two narrow, two fat. Space. Three narrow, two fat. Space. Then it repeats.
It might look familiar to you but I doubt that you can read this without a barcode scanner. Here's the same information in a different context (in this example, a different iconography or alphabet):
The repetition begins to generate a pattern. How we interpret pattern, however, is another issue. We can recognize this more clearly because we might read it as Greco-Roman if not English. It's the same information, but in a different, more familiar (and therefore more informative) format. The ability to predict the pattern is based first on its redundancy and second on its context. In this case its context seems to live in the Greco-Roman alphabet, so it's probably a Romance language, but beyond that we don't have much more information (because these damn things are just floating in space).
A clue: 2 3 4 5 6 7
And the same pattern in its most easily recognized, and most redundant, state:
This would have been easier to recognize had the Greco-Roman alphabet been transcribed in a slightly more familiar (and, in this case, more iconographic) manner. Because the letters have a context, they become more iconographic. An icon is a contextualized image.
These precepts should be intuitive to any interface designer, storyteller, graphic designer, or interaction designer worth their picture. The precepts run like so:
Difference provides information
Repetition provides redundancy
Redundant information (a.k.a. "repetition with variation") provides context
Context allows prediction
Prediction allows participation
Participation is the cornerstone of interaction
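The middle links of this chain, redundancy giving context and context allowing prediction, can be made concrete with a toy sketch. None of this is from the text; the bar-code vocabulary below is only an echo of the earlier example.

```python
def find_period(seq):
    """Return the length of the shortest repeating unit in seq.

    Repetition is what makes the sequence redundant; the period
    is the context that redundancy provides."""
    for p in range(1, len(seq) + 1):
        if all(seq[i] == seq[i % p] for i in range(len(seq))):
            return p
    return len(seq)

def predict_next(seq):
    """Context allows prediction: extrapolate from the period."""
    return seq[len(seq) % find_period(seq)]

# A redundant pattern, like the repeating bars of the earlier example:
bars = ["narrow", "narrow", "fat", "fat", "space"] * 3
period = find_period(bars)       # the context: a unit of five
nxt = predict_next(bars)         # the prediction: the pattern starts over
```

Once the period is known, the reader (or the program) can participate: every future element is predictable, which is exactly the excitement the next paragraph describes.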
I've found that people get very excited when they learn they can predict things because this allows them to participate. They suddenly have a grasp on time that they didn't before, the world seems more manageable, their role in it comprehensible.
But finally, these are all simply means of building metaphor's launch pad. A metaphor is a super-set of symbology. It's a meta-message that allows for very complicated forms of communication. We rely on it whenever we tell a story.
"Meta-messages," as Bateson calls them, come in all shapes and sizes. Their key characteristic, however, is to convey meaning that reaches beyond their information.* Adages, metaphors, and fables all do this and so they serve as strong guides for ways to develop interface, narrative, and to design elements of interactivity that help readers better understand what they are able to do, what the effects will be, and how they can do it.
The adage "information is not knowledge" is one way of representing this idea. But in the world of interaction design this adage can be pushed further. Really, knowledge isn't worth much more than information if it doesn't allow for action. In the world of interaction design, action becomes the reason for information.
The key is relying on the inside-the-skull world of the reader.
1.4.5: Designing Time for Interactivity
The Spectra of Permanent and Temporary Times
Writing on cave walls, sending letters, scrawling in books, pecking at a keyboard, and scribbling up a diary are all methods we use to make time and the stories of our lives permanent on parchment, paper, and monitor. Writing is an effort to escape death, perhaps, or a recognition that time is the stuff of life, but despite writing's best efforts, it still generally lacks the luster and the shine of the moment it describes. It's still a description.
"Epiphany" comes from the ancient Greek επιφανος, or "epiphanos," which loosely means "to make manifest." Epiphany is a term that James Joyce coined to express the moment when the reader understands the entire arc of the story as a single thought. It's a telescoping of events into a single menu, of sorts. This is a foreshortening of the story and a compression of information that, according to Joyce, is an act of authorship.
There are strange moments in most reading when we realize that a chunk of time is missing, is repeated, or is looping back on itself. Literature has an arsenal of tools to facilitate this process. "Foreshadowing" and "Epiphany" are two of them.
The act of writing, narrative's corpus primus, has always beenat least on some levelan attempt to escape time.
We don't always notice it but some experiences seem more susceptible to time. Dreams quickly fade in the morning, we keep mementos as physical kinds of memory (hence the word), and inscriptions seem barely solid enough to weather the winds of change. So we make efforts to write in a way that is permanent. The tablet of Isis, a kind of topology of narrative built into stone, has lasted for thousands of years. Pioneer 11, launched in 1973, continues to float through cold space as you read, and will probably continue to do so until it's either read by extraterrestrial eyes or gets sucked into the nuclear center of some unmapped sun. A plaque on the space probe contains a compact narrative outlining our position in the solar system, who we are, and when Pioneer 11 was launched. The engravings, and the stories they hold, are intended to outlast the perspectives of the writers and the readers of the story.
Some narrative has only a temporary existence. The ancient Greeks had huge tracts of narrative they would repeat in a lyrical voice, each epic continuing for many nights of around-the-fire singing. In fact it's assumed that Homer never actually wrote the Iliad or the Odyssey but that he, like most schooled Greeks of the day, simply recited the epics from heart. Homer was just the guy who finally took the time to distill these cultural songs into a format that made them more durable for listeners from a culture only a few degrees different from our own. And he did it with a guitar.*
Homer, really, didn't use a guitar. The guitar was invented some 1,500 years after Homer's day, following the Moorish invasions of Spain. The guitar is a child of the tar, which, depending on the number of strings and shape, might be (and sometimes is) named tar, dotar, ektar, setar, etc. But Homer probably played the lyra or kithara.
Bob Dylan pulled the same stunt in covering the old blues tunes of the American South. The Irish are alleged to have kept their stories, called "finger alphabets," in the tips of their fingers. They memorized them by assigning different syllables or letters to different parts of their fingers.
These forms of narrative are as lost as the breath of the dead that sang them. Like the Ouroboros that chews its own tail, these stories that used time were eaten by it, and their death forced a kind of change in the way the stories were told.
The Spectra of Slow and Fast Times
We don't always notice it, but time, like a flock of birds, can move in radical and unpredictable ways. Time flies quickly when you're having fun, it drags an hour before the workday ends, and sleep washes it all away.
Narrative has some sharply honed tools for articulating the elements of time that go slowly or move too quickly for us to consciously map. In most narratives, sleeping isn't generally detailed (though video artist Bill Viola has done a fine job of investigating this strange space of time), nor are the long moments of silent solitude in which our protagonist might be staring off, bored and thoughtless, at the toe of his shoe, waiting for the train that will carry him to the battlefield.
If you've ever noticed how your brain accelerates with a healthy shot of adrenaline it will make some sense as to why narrative increases detail at moments of greatest conflict. Events that require solutions warp the weft of time. Narrative moves slower or faster, depending on the kind of problem being addressed, and the speed at which time appears often has a relationship to the place we are when it happens.
Our subjective experience of time is often dilated at moments of intense choice. When we are on a stage in front of an audience, involved in improvisation, performance, reading, or speaking, the "length" of time seems alternately compressed and severely protracted. Suspense, for example, is just that: a suspension between two events that are further apart than you'd anticipated. Expectation and surprise, likewise, rely on the relationships of points in time. Suspense is just one of the aspects of time that is commonly used in narration.
But the use of time in narrative is a complicated thing, and when we introduce information display (and specifically interactive information display), nonlinear choices need to be allowed within the context of the linear story.
Narrative has traditionally followed a linear path. Because narration comes from a verbal tradition, linguistic communication was necessarily bound in linear time, making linearity the strongest influence on how we've told stories.
This is changing because more and more stories are told visually. Consider Chris Ware's comics [2.5.5], a glance at a newspaper's photos, or the links that are highlighted on a web page. None of these forms of storytelling are necessarily linear, but they all tell, from a perspective, what happened.
The job of writing has been far easier for historical authors than for authors of science fiction, or for fiction authors like James Joyce who have made the brave choice to toggle back and forth between events at different points in the story. People like Joyce were pioneers in considering how to use time in other than a linear fashion, and they might have unintentionally invented hypertext.
Because interaction includes decision-making, and because decision-making is not necessarily linear, we need to learn how to tell stories that facilitate this approach. Like Joyce, we need to invent new modes of thinking about time.
Time, as a tool that a writer uses, can move in strange ways. And digital media, with things like back buttons and the ability to accelerate, decelerate, link, and close, changes how time is used in narration.
Events in narrative generally follow what came before, and can often precede them that same way. If this seems confusing to you, it's because emerging forms of literature are encouraging us to think differently about time.
I walk up to a door and push a button. A doorbell sounds inside. I push it again. The door opens and an old man answers the door.
Time does not need to be ordered in a sequential fashion. But that's our perception of it. I push the button, the bell rings, and someone answers the door. We're inclined to say the bell rang because I pushed the button, but we might never know whether I pushed the button because the bell rang. Maybe the bell rang because the man was about to come to the door. The cause doesn't need to precede the effect.
Time does not need to be universal, but we perceive it as such. Please imagine everything has already happened at once. In the preceding fable, an old man answered the door because I pushed the button. Maybe in another time he wasn't there at all. Or maybe he was and didn't feel like answering. Let's consider that at least three different tracks of time exist; we just happen to be experiencing only one of them. Imagine all of the possible tracks of all the possible events running smoothly along simultaneously; you just happen to be riding on one of them.
Time does not need to be ordered in a linear fashion. Assuming the above is plausible, why wouldn't it be possible for a single cause to split into several effects? Or, vice versa, we can mix the first two examples with this forked approach, in which the old man's cause might become my effect as well as the door's effect (both of which preceded the cause). In other words, maybe I rang the doorbell and someone answered, both because the bell rang. Multiple effects might have a single cause, or multiple causes might have a single effect.
Time might be considered as a volume. Imagine yourself as someone who has no sense of space as a surrounding entity. Your first perception of space is as a camera that's mounted on someone's shoulder. You would see space moving toward you as the person moved; then it might slow down and stop altogether, slide sideways, then suddenly move toward you again. It's possible that we understand time this way (in an apparently linear way) not because it is time's nature to behave this way, but because it is our nature to perceive it like that.
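One way to hold these nonsequential, forked times in mind at once is to model time as a graph of events rather than a line. The following is a minimal sketch with hypothetical event names drawn from the doorbell fable; it is an illustration, not a method from the text.

```python
# Events point to the events they can lead to; nothing forces a
# single line through them. One cause can fan out to several
# effects, and several causes can converge on one effect.
story = {
    "press button": ["bell rings"],
    "man nears door": ["bell rings", "door opens"],  # one cause, two effects
    "bell rings": ["door opens"],                    # two causes, one effect
    "door opens": [],
}

def paths(graph, start, end):
    """Enumerate every route from start to end.

    Each route is one 'track of time' running alongside the others."""
    if start == end:
        return [[end]]
    return [[start] + rest
            for nxt in graph.get(start, [])
            for rest in paths(graph, nxt, end)]

tracks = paths(story, "man nears door", "door opens")
# two simultaneous tracks: via the bell, or directly
```

The reader of an interactive narrative ends up riding one of these tracks, but the author has to design the whole graph.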
So much for that; there are a lot of different possibilities. Forms of interactivity need to take this into account at least as much as forms of narrative have over the past century. It's an issue of plot as much as an issue of use-case scenario. Our processes of considering and deciding impact one another, as do reaction and action, within the framework of cause and effect. These are principles that inform any structure of interactivity, but because narrative and modern literature use time in ways that are not always as we perceive (or even understand) them, sequence and reaction can become unorthodox, if not incomprehensible.
This is one of the primary arteries of interactivity: it's about understanding new methods of articulating time and human decisions within that framework and how we can relate to them both conceptually and spatially.
1.4.6: Designing Decisions for Interactivity
Let's return to our use-case scenario, still considering it as an interactive kind of plot. This is an expression of events that takes place over a period of time. That period of time is determined not as much by the author as by the reader. Readers carry their own sense of time into the interaction model. They control the pacing, and it's up to the interaction designer to see that they are still conducted to the appropriate step at the appropriate time: when they want.
Figure 1.5 Use Case Scenario example #2.
Notice that time is represented as a spatial arrangement of decisions and that it can be moved back and forth along the line of the main flow. This is surprisingly similar to the Freytag Triangle. But the author of this system has a great deal of influence over how the reader reacts to it.
The Tyranny of Interaction Design
There are differences of opinions among designers of interactive narrative. Some authors I've spoken with rant about the influence that a designer can have over the interaction of a game. In some cases readers have said that they are not playing a game, but rather jumping through the hoops that the game designers have put in their way. This "hoop jumping" is an example of the kind of design that is implicit in many interactive systems. The design of the interaction, and the influences the designer has, can become excessive and force readers to spend time doing things they may not otherwise choose.
Consider the video game that has levels: in order to get to the next level, you have to perform some simple function, or repeat a single function within a prescribed margin of error. This might be jumping Donkey Kong or Super Mario from a barrel into a hole, for example.
But it becomes frustrating if, after several tries, the goal hasn't been reached and the player is still there, sweating over the split second that will either allow them to go forward or put them back where they were five minutes earlier.
This is a form of tyranny and poor interaction design. It should be avoided when possible.
In most cases the goal of an interactive narrative is not to author the narrative, but to provide a context and an environment in which the narrative can be discovered or built by the readers of that story. In this way designers and authors of interactive narrative are far more like architects than they are like writers. The author considers the interactions and movements of the story's readers and works to accommodate a reading that can happen from many different sides.
As Doug Church, one of the most respected game designers in the United States, puts it:
"Those of us doing 'immersive simulation' strive to make the game the player's, not the designer's. While we, as designers, are clearly creating the environment and rules, we hope to allow the player to act, plan, and decide. Working on a talk several years ago, I was talking to a co-worker (Marc LeBlanc) at Looking Glass about this and defined it as "getting the designer off of the stage, and pulling the player onto it". He described that as 'abdicating authorship', not feeling like we have to be 'in control' of everything. It is important to realize this doesn't mean 'abdicating responsibility,' for creating the rules and procedures of the world is an act of authorship that defines the space. But at the same time, a carefully authored environment can abdicate the specific control to the player, who can then make and fulfill their own plans and decisions. Done well, this leads to more investment from the player, as they realize that the world is about them, and that they matter."
1.4.7: The Emerging Forms of Interactivity in Communication
As we've seen, most entertainment and communication toolsets have adapted quite handily to the digital medium. Video, audio, photographs, text, and most communication technologies that originally relied on the airwaves of the analog have found comfortable homes in the wires of the digital.
Electronic technologies, and specifically the emergence of color television, have brought us closer and closer to the hallucinogens of imagination and storytelling. Television in particular has helped each viewer to participate in that comfortable space of collective awareness and distributed narrative that happens when millions of people are all hearing the same sound, watching the same image, and dreaming the same dream. It's a powerful thing. It was electronic technology, and the fact that this technology could now live with us in our homes, that first introduced us to a new narrative space of technologic interaction.
Before electronic media was distilled into its digital form, television channels and radio stations were some of our first opportunities to choose the evening's entertainment in our own living rooms. My first impressions of Robert Louis Stevenson were through a battery-powered radio in the northern wintertips of Maine. It was 1977 and anachronous, to be sure. We would huddle around the radio and listen to Treasure Island through the tinny, rattling speaker and a massive ethereal cornucopia of words and images would spill deserted beaches of white sand, huge clipper ships with full-thrown sails, sharpened sabres dripping with hot blood, and the occasional mocking parrot spreading its wings in the bright sun into our remote, wintertime cabin.
There was, strictly speaking, some choice: we could listen to Treasure Island or we could hear about the coming snowstorms.
Then, maybe six or seven years later, I spent time watching how people watched television. Not having grown up with a television, I've always found it to have a severely anesthetic and hypnotic impact on me. Consequently, I've always been a keen observer of viewing behavior. After trying to watch Lee Majors defeat Sasquatch, I would become infuriated with Billy Reidel as he changed the channel in the middle of the battle royale. For him, as someone who had grown up with television, the channel switching was part of the program experience. "Surfing" (as web usage has lately, and ineptly, been dubbed) seemed to involve a persistent entering and exiting of the material being watched. The dwell-time and form of attention for traditional network television was very different from radio, partly because of the speed with which channels could be accessed.
The remote control had something to do with this. It was the convenience of pressing a single button as opposed to getting off the sofa and twiddling a dial that, at least partly, facilitated this change in concentration and attention. But other things facilitated interruption and mode-shifts. The volume could be turned down, the box was small and could be looked away from, the signal could get interrupted. But even more than the remote control, the presence of broadcast commercials (full-volume mini-narrative advertising interruptions of the larger narrative) at high-tension points in the story, caused us to restructure the way we considered stories and the attention span we brought to them.
The remote control and the ever-intruding advertisement facilitated a different kind of attention in viewers of television: a nesting of narrative and a kind of attention that was very facile with mode-switches, context-swapping, and interruptibility. This intertwined and entangled attention span of the television viewer quickly revealed itself in the graphic design of television content. Camera cuts, character introduction, music pacing, color contrast, volume, and even story structures themselves were built to grapple with the viewer's need to flip over to something faster and more hypnotic. Watch any music video on VH1 or MTV; compare "Weakest Link" to poor old Vanna White; or watch a batch of contemporary Saturday morning cartoons and you'll soon see that the drum these shows march to is increasing its beat.
The tradition of the interruption that binds together a larger narrative has been inherited by digital media. Early BBSs (the greenhouse nurseries of MUDs, MOOs, and MUCKs) and electronic mail systems were built, from the ground up, to be a thing that you could enter, use, be interrupted in the middle of, use again, and leave. Unlike a radio-based narrative, the participation wasn't dictated by a set amount of time. It was left to you, as when reading a book, to decide how long and how much. This was already implicit in most electronic media, but digital media in particular held nonlinear participation, interruption, and resumption as part of its assumed capabilities from the start.
This idea of interrupt-and-resume is deeply embedded in the command line. Konrad Zuse*, a German researcher and engineer, completed a prototypical programmable calculator he named the Z1. It was automatic, it was mechanical, and it was digital. More importantly, it was the first binary machine based on Boolean algebra. The command-line input used something we might recognize as a keyboard, and the output was displayed on electric lamps that hung overhead.
Konrad Zuse also developed a basic programming system known as "Plankalkül," with which he designed a chess-playing program. A copy of his first digital binary computer is on display in the Museum für Verkehr und Technik in Berlin.
By this time IBM (then awkwardly named the "Computing-Tabulating-Recording Company") had manufactured almost 1,500 punch-card machines. Seeing what Zuse had done, they were quick to adopt this interface innovation. By 1940 Bell Labs had teletypes running with multiple remote input keyboards chained to a single machine. Only one keyboard could be used at a time, and while it was, the output was displayed at the same location. Only nine months later, at a mathematics conference, a teletype keyboard in Hanover, New Hampshire was connected to that same machine in New York. Conference-goers were able to use the machine remotely.
These innovations in interaction happened because the computer, unlike the television, is always waiting for you to tell it what to do. Its time is determined by your presence (at least for now).
The interactive capability of the command line is massive because it was developed as a means of providing users with remote-controlled actions such as "Run," "Print," and "Copy." The command line is far from dead, and its implications are still being explored today. Mode-switching is implied with a command line: at a primitive level, the computer presented the idea of switching channels of concentration (of switching modes) with the command line. But there were other ideas in there as well.
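The waiting-and-responding rhythm described above can be sketched as a small read-and-dispatch loop. This is a hypothetical illustration, not any historical system; the command names ("run," "print," "copy") echo the remote-controlled actions mentioned in the text, and the handlers are invented stand-ins.

```python
def make_dispatcher():
    """Map command words to actions: the rule set of this interaction."""
    def run(arg):
        return f"running {arg}"
    def print_(arg):
        return f"printing {arg}"
    def copy(arg):
        return f"copying {arg}"
    return {"run": run, "print": print_, "copy": copy}

def handle(line, commands):
    """Parse one line of input and produce the machine's response."""
    verb, _, arg = line.strip().partition(" ")
    action = commands.get(verb.lower())
    if action is None:
        return f"unknown command: {verb}"
    return action(arg or "(nothing)")

def repl():
    """The computer idles until the user speaks; its time is determined
    by our presence. The loop can be interrupted and resumed at will."""
    commands = make_dispatcher()
    while True:
        try:
            line = input("> ")
        except EOFError:
            break  # the user leaves; the session can resume later
        if line.strip() == "quit":
            break
        print(handle(line, commands))

if __name__ == "__main__":
    repl()
```

The point of the sketch is structural: nothing happens until the user acts, each exchange is a complete turn, and leaving mid-session costs nothing, which is the interrupt-and-resume quality the chapter attributes to the command line.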
The integration of the graphical frontend with the computational backend is a more recent development in interactivity, one that introduced the realm of possibilities we see in contemporary interfaces such as the Macintosh and Windows operating systems. Xerox PARC and several other research institutes were working on graphical pointing systems during the 1970s, and the invention caught on as soon as the cognitive leap from "Data" to "Image" was made. The idea of graphical computing didn't initially include a mouse; one was added because the designers sensed a need for a way to interact with the space other than the keyboard (which hardly considers space at all). It wasn't until Apple integrated this new interface convention (this hardware-based form of interaction named the mouse) that it discovered any commercial success. The leap from command line to image is not a difficult jump for us to make as we look back, but it was certainly a little hard to see if you were there as it was happening.
Command lines are fascinating because, like tropes, they require cognitive participation. The graphical interface is probably an improvement in human-computer interaction because we don't need to remember as much to get the same task done, but I wonder if this shift from command line to graphical user interface is a bit like the shift from radio to television. The radio was a trickle of static interspersed with words. The task of listening to the radio was more focused, demanded more of the listener, and its smaller flow of information forced listeners to pay more attention over a longer time. It was an inside-the-skull interaction mode. The command line is like this. There are always help systems to remind you of what's there, but you only begin to work with those systems of interaction once you've remembered them. And the act of remembering can be difficult.
The graphical user interface, or GUI, came along and we could see where on the screen a particular thing (be it an action or an object, a verb or a noun) lived. When using a GUI, you might remember that, in the menu at the top of the screen, one slot over and two slots down is the copy command. Or, as your keyboard skills increased, you might remember that a combination of two keys is the copy command. The web, the graphical version of the Internet, simply took this basic idea and extended the metaphor from the command of an individual computer to the command of multiple computers.
Web publishing and chat rooms have largely defined our understanding of interactivity. Chronologically and commercially, the Internet followed the Internet prototypes known as CD-ROMs, and CD-ROMs were the commercial leveraging of data-storage devices such as the hard drive or floppy disk. But the transition from data storage to CD-ROM to Internet has been one of access to more data in a faster and more convenient way. The curve is simple, really: it's entirely quantitative at this point. And, as I sit at my desk and wait for data to arrive onscreen, it's easy to see that this trend will continue. Now that the Internet has gone through the same suspiciously similar curve of high acceleration accompanied by tremendous collapse that the CD-ROM publishing industry went through in the early '90s, we may begin to understand that this is part of the cycle of these technologies. We can expect mobile technologies and, later, ubiquitous computing to follow the same path.
Enhanced Television, too, has followed this trend of relying on data storage to increase its commercial heft. The trend, again, will be predictable in the coming decade: more is better. It's the whole idea behind video on demand. Store more data so customers have more choices. The only real challenge most enhanced television manufacturers face is how to make that data accessible to the customer when they want it.
These trends of quantitative increase show us which features of interactivity are inherently digital. Issues of access, mass storage, and transport are not the inherent issues of interaction. The interactive is not contained within the digital; it's the other way around. We're just learning how to make the digital medium more interactive.
But trying to provide access while viewers are switching modes, stopping, starting, speeding up, slowing down, leaving, entering, getting bored, excited, confused, and progressively poorer with each tap on the button: these are interaction design problems of a different sort, and they will continue to evolve far past the coming decade.