The architecture of the mouse

The mouse is a potent prosthetic. When it is placed in front of our desktop, we do not even have to think consciously about reaching for it. Mark Wigley’s eulogy to this seemingly humble but transformative technology highlights the power that such a discreet device can have on the human ecosystem, providing a seamless interface between body and brain that is still only to be dreamt of in architecture.

I reach out for it, touching its compact form ever so lightly to wake up my monitor. Yet I never really see the mouse. Even when reaching for it, my eye is already on the screen, drawn towards the imminent glow and heading immediately into the image that appears, pulling the rest of my body into the chair.

Without realising it, my fingers have wrapped themselves around the plastic object. It quietly nestles inside the hand, its smooth contours politely echoing the soft interior of the palm. It starts to move, busily but inconspicuously darting back and forth across a small space on the table until I am done. Unseen and unfelt, the mouse has to disappear in order to work. It has to be both part of my body and part of the computer, binding two organisms into one, allowing the electrical signals in the nervous system to stimulate and be stimulated by the electrical signals in the computer. The role of the mouse is simply to attach a thin wire to the hand, linking our organic and inorganic circuits. Its relentless smoothness of shape and frictionless movement across the table close the gap between human and machine. The wire reaching out from between the fingers becomes a crucial part of our biology.

The unassuming yet ever present mouse is a remarkable prosthesis, radically extending the capacity of the body. More precisely, it sustains a new body able to move in new ways, in new spaces, starting with the sense that one is moving through the seemingly virtual space of the computer. This transformative power of the mouse is tied to the simple logic of the generic graphic user interface, the set of icons on the screen that suggest that bodily movements on the desktop are actually movements in a virtual ‘desktop’, with its ‘documents’, ‘folders’ and ‘trash cans’ manipulated by bodily gestures of ‘cutting’, ‘pasting’, ‘dragging’ and ‘dropping’. This first mimetic step from the horizontal desktop in your room to the vertical desktop in your computer supports the wider multidimensional ability to move through other rooms, cities, social networks and data sets. Movement across a few square inches of desktop is amplified into mobility across whole worlds. To reach for the mouse is to reach into an exponentially expanding space. The unnoticed dance of the mouse just beyond the corner of the eye becomes the basis of a radical transformation of the species.

Yet who is it that reaches for the mighty mouse in the morning? It is not a temporarily incapacitated creature of the digital world, completing itself each day by wiring itself in. Nor is it a pre-computer mind and body transformed into something digital each time it connects to the computer. The seemingly simple gesture of connecting is even more radical. The way of thinking and acting of the person who unconsciously reaches to touch the mouse has already been changed by it, as have most of the surrounding people and objects. The very form of our environment, discourse, relationships and actions is now dependent on the fact that there are so many mice in the world, with a single manufacturer able to celebrate the birth of its billionth mouse at the end of 2008. Even those without computers are profoundly touched by them. The enigma posed by all prosthetics is that their transformative extension goes way beyond the literal extension of particular bodies at particular times. You can be affected by a prosthetic before using it, after using it, or without ever using it. The prosthetic effect lives on without the prosthetic itself.

Indeed, the ultimate effect of the mouse is that the mouse itself can become redundant. The idea of the computer as a discrete object with a mysterious interior now gives way to massively distributed systems accessed through the lightest possible local interfaces. The object becomes nothing but interface, a portable device suspended in a cloud environment of programs and data sets. The processor, input and output are increasingly compacted into the single plane of a sensitive touchscreen as computer, television and phone converge into a single platform. It has become hard to look in any direction without seeing such screens on walls, on the back of airline or taxi seats, on the front of appliances, in your lap or on your wrist. The interface moves ever closer to the body. The thin plane of the ‘handheld’ is ever present in the pocket, literally warmed by the body until cradled within the palm of one hand and cupped against the face when talking or protected by the other hand that softly strokes its surface to connect, with the body literally completing the electronic circuit through capacitive touchscreens that sense variations in the electrostatic fields within the outer layers of our skin. One no longer needs to move towards an interface or even to extend an arm. The interface is already well inside our reach, touching the body before we touch it. Or, more precisely, the act of reaching out has become even more compact, even intimate, with the sliding of the fingers across a screen. This intimacy intensifies the prosthetic amplification. The shrinking device has an ever expanding reach. It is as if we can literally touch any distant cloud by hardly moving. The endless repetitive movements required just to stay in one spot, breathing and heartbeat, are now much greater than those required to reach out to the furthest points of the world.

A history of 20th-century prosthetics can be written in terms of the ever smaller movements of the fingers that have ever greater effects over ever larger domains. The 19th-century pull on a lever gave way to the flip of a switch, to the push of the button, to the click of the mouse, to the tap on a touchpad, to the lightest stroke of a screen. The inch or so of movement in the light switch at the turn of the 20th century has been steadily reduced to the barest flicker of body electricity in the skin. The most delicate of signals can act massively across ever greater distances around and beyond the planet. This trajectory towards increasingly powerful micro movements is also a story of domestication. To reach out to the world is simultaneously to pull the world inside.

The mouse at once connects us to the digital landscape and brings the digital in. It is not by chance that the mouse was a crucial component of the first ‘personal’ computers in the early 1980s. It was the mouse itself that made the computer personal, literally domesticating the digital environment by bringing it inside the home. It tamed the digital in the way the light switch domesticated electricity. The simple switch allowed electricity to be brought into the home or, more precisely, it allowed people to literally live inside an ever expanding electric circuit. The ability to flip between on and off became the ability to enter or leave.

The circuitry originally mirrored the basic architecture of the house, with each room having its own light switch, then each doorway, as the walls started to be packed with wires. But the circuitry soon became more complex than the rooms. The architecture was multiplied and complicated as the house started to steadily fill with buttons, which exponentially multiplied in number until even the simplest domestic spaces now have hundreds of buttons, increasingly gathered together in dense clusters on remote controls, keyboards and keypads. The size and deflection of each button gets ever smaller as their effects increase. Architecture is unthinkable outside this relentless yet discreet layer of micro-switches. Everyday life involves pressing countless buttons. These buttons define the spaces we occupy more than the walls. In a sense, they have become the walls, perforating the physical structure with new kinds of opening and new kinds of closure. An ever increasing number of surfaces in the room respond to an ever more intimate touch, and a ghostly galaxy of tiny glowing pilot lights marks the lurking electronic intelligence constantly surrounding us, awaiting our caress.

The not so humble mouse played a key role in this architectural evolution, systematically reconfiguring our relationship to signals, to circuitry in general, irreversibly expanding the human ecosystem out into the digital environment and simultaneously bringing the digital inside the house, the personal space and even the body itself.

As the mouse gives way to the touchscreen, the architectural metaphor of the desktop remains. Indeed, it becomes ever more detailed, with increasingly precise textures, shadows, colours, reflections, animations and sounds. If the graphic user interface was our first familiar stepping stone into the mysterious depths of digital space, it has not been left behind. On the contrary, the more the input device collapses into the flatness of the screen itself, the more the desktop image seems to gain a three-dimensional density. The deeper the dive into the digital, the more realistic the platform has to seem, as if to reassure us or to train us to see the digital in physical terms, which is precisely what makes it virtual. The daily dive into the computer is not a leap from analogue to digital or from real to simulation, but a choreographed blurring of the two, a smoothing over to activate a continuous interactive circuit.

After all, the desktop in the graphic user interface is not simply a reassuring image of a physical desktop. The physical desktop is itself already an iconic image that tries to stabilise the indeterminacy of thinking, writing, reading, storing and communicating. The traditional desktop is an architecture in the sense that it is a reassuring image of order within an indeterminate space of exchange. The 14th-century idea of the desk as a portable angled box to read and write on gives way to the idea of the desktop as a flat space of organisation, an abstract organisational plane, as exemplified in the floating abstract rectangle of the 20th-century office desk with its associated file cabinets. Not by chance does the image of the desktop as a two-dimensional plane become the generic interface at the exact moment when the computer becomes small enough to be used with an existing desk. Desktop computing simultaneously places the computer on the desk and the image of the desk inside the computer. The visual logic of the horizontal desktop is mirrored in the vertical screen, with the body of the user literally inserted into the space in between the two images and the mouse acting as the hinge. The user can even see the actual reflection of the desktop in the glass of the monitor, superimposed on the iconic representation of the desktop. In the moment that the mouse connects the circuitry of the body and the circuitry of the computer, the architecture in the room is hinged to the architecture in the screen.

The role of the mouse is therefore first and foremost architectural. Indeed, the contemporary experience of space is unthinkable outside an object that is designed to be overlooked. The spaces we occupy and the way we occupy them turn on an inconspicuous prosthetic whose own disappearance, losing its wheel, then its ball and then its umbilical wire before slipping away, is the final proof of its transformative effect. The massive force of the humble mouse only becomes evident as it leaves, reinforcing Marshall McLuhan’s central argument from the early 1960s that the prosthetic effect of each new technology is so shockingly intense that we only see technologies for what they are in the moment they are superseded. Or, to say it the other way around, each transformative technology makes the previous technology visible for the first time. The new regime of the lightest touch reveals the mouse in the very moment of making it redundant.

As the mouse starts to leave the room with the successful completion of its almost half-century campaign to quietly re-engineer our species, we can re-examine the prosthetic logic in architecture. This central logic includes the architectural effect of prosthetics, the effect of prosthetics on architects, the effect of the prosthetic argument itself on architectural discourse, and the role of architecture in the evolution of prosthetics. A specific discourse about prosthetics played an important role in 20th-century architecture and a discourse about architecture played an equally critical role in the development of 20th-century prosthetics. To see how the mouse was born at this exact intersection of prosthetics and architecture in the early 1960s opens up the possibility to see the organism of 20th-century architecture in a different light. Prosthetics are always at once technological and biological.

More precisely, the prosthetic is the moment that technology becomes part of biology. As the technological extension that reaches out to the environment becomes part of the animal that is reaching out, both the species and its environment evolve. Ultimately, to think of the intimacy between architecture and prosthetics is to see architecture in radically ecological terms, not just in the traditional sense of the circulation of organic and inorganic material in the slow dance of organisms and environment, but in terms of the ecology of images and ideas. Finally, it is to understand ideas themselves as technologies, to see thought in material terms, as became literal in the evolution of the computer.

The first mouse was carved in wood in 1964 and migrated between research labs before heading out into the world as a crucial component of the first personal computer in 1982. It was invented at the Augmentation Research Center that had been set up a few years earlier at Stanford Research Institute in Menlo Park, California, by the electrical engineer Douglas Engelbart to develop time-sharing collaborative digital environments.

Engelbart argued that it was necessary for humans and computers to respond to each other interactively in real time to ‘amplify’ human intelligence in the face of the massive scale, speed and complexity of the problems facing humanity. He insisted that our brains needed to be linked to computers and thereby to each other in such a way that both man and machine would start to ‘co-evolve’.

A key reference point for this sense of co-evolution was the 1960 paper ‘Man-Computer Symbiosis’ by the psychologist JCR Licklider, which called for a radical blending of human and machine. Licklider had been active in the postwar cybernetic circles that treated machines as organisms and organisms as machines, but he wanted to go beyond the model of the human organism that is prosthetically extended by technology towards the idea of human-aided machines. The human would become the prosthetic attachment to the machine organism before a final seamless blending of the two: ‘It seems likely that the contributions of human operators and equipment will blend together so completely in many operations that it will be difficult to separate them neatly in analysis.’ Such a blurring of user and machine was accomplished by the mouse that emerged out of Engelbart’s systematic attempt to reduce psychological and physical friction between human and computer. Before settling on the mouse, his team tested every possible input device, including hand, head, back and foot, and a knee-activated lever that was actually the most responsive and led to the more radical proposal to directly attach accelerometers to the body and use its movements to control the electronics, thereby allowing the body to literally move inside the computer program.

Engelbart repeatedly referred to Licklider’s argument when calling for ‘augmentation’ devices that would enable people to work together in new ways. ‘Augmented Man, and a Search for Perspective’, his December 1960 abstract submitted for a 1961 computer science conference on ‘Extending Man’s Intellect’, treats the computer as a crucial ‘symbol manipulation artefact’ that can extend intellect as one of a number of prosthetics in a wider ‘augmentation system’. It radicalised Licklider’s recently published flipping of the machine-aided human into the human-aided machine by saying that what is most human in this ‘ever closer working relationship’ is the desire to keep helping machines to expand intelligence even when the intelligence now belongs to the machine: ‘But the computer, as a demand-accessible artifact in man’s local environment, must be viewed as but one component in the system of techniques, procedures, and artifacts that our culture can provide to augment the minds of its complex problem solvers. As we imagine the development of an ever closer working relationship between the individual and a computer, we can foresee an ever increasing range of exciting possibilities for redesigning the rest of the augmentation system to take fuller advantage of the computer.
These possibilities promise marked increases in the effectiveness with which an individual can apply his basic mental capabilities to his role in the solution of society’s complex problems whose solutions, we must recognize, depend now and for some time to come upon such individual effectiveness. And when the day comes that intelligent machines begin to usurp his role, our individual would hardly still be human if he didn’t want to continue developing his augmentation system to extend to the limit his ability to pursue comprehension in the wake of the more intelligent machines.’

What is human in the end is the evolution of the machine. Engelbart repeated the argument in the same month when proposing a one-year ‘Augmented Human Intellect Study’ to develop the conceptual framework for a new kind of research programme devoted to the ‘long evolution’ of information technology from the book to the pencil, to the desk, then typewriter, telephone, duplicating equipment and beyond. He argues that the key to upgrading problem-solving ability is to improve the match between the inherent capabilities of the central nervous system and its outer environment via the ‘peripheral sensing systems’. He points to the ability of language to externalise our thoughts and of graphical systems to allow those thoughts to be recorded and manipulated in front of us, but argues that the conventional division between internal and external manipulation will soon blur as new external capabilities transform the internal ones. The result of the study, which was partially supported by the US Air Force Office of Scientific Research that had provided Engelbart’s first funding in 1959, was the October 1962 proposal for an Augmentation Research Center to test the ways that human and computer can go beyond ‘cooperation’ towards ‘co-evolution’. The proposal calls for an intellectual expansion equivalent to the physical expansion of mobility since the horse and the sailboat. The centre would aim to initiate such a transformation by combining and developing all four of the means of augmentation that humans have evolved: artefacts, language, methodology and training.

Particular attention would be paid to processes that belong neither to the internal world of the user nor to the external world of the artefact, but to a new shared world between them. Licklider helped fund the first years of the intellect augmentation centre a few months later when he became head of the Information Processing Techniques Office of the Defense Department’s Advanced Research Projects Agency (ARPA), but the funding for the tests of input devices that resulted in the mouse came from NASA in 1964, and the technical report on the success of the mouse in being ‘natural’ was not completed until July 1965: ‘A user soon finds it very easy to keep his eyes on the screen and cause the bug [cursor] to move about upon it as quickly and naturally as if he were pointing his finger (but with less fatigue).’ The winning wooden device was enclosed in moulded plastic in 1967 and was refined until the first public demonstration in 1968, along with the matching system of multiple ‘moveable windows’ on the computer screen. For the demonstration, Engelbart had already worked with Herman Miller to redesign the keyboard, mouse and swivel chair combination that was envisaged for the office of the future. But such a move from lab to office and then to home would take another 14 years. The transformative combination of mouse and windows first moved to the Xerox laboratory at Palo Alto with key members of Engelbart’s team, including Bill English, the engineer who had done the input device tests and built the first mouse. The graphical interface got smoother there with the development of the desktop metaphor in 1970, and the mouse itself became smoother when English replaced the two wheels with a ball in 1972.
This relentless smoothing of mouse and graphic interface continued as Apple appropriated the idea in the early 1980s and Microsoft immediately appropriated the idea from Apple, with the quest for smoothness in the man–machine interface still ongoing today with the ever more responsive multitouch screens.

A basic concept of drawing underlies this evolution. Engelbart symptomatically began his 1968 demonstration of the interface by describing the screen as a blank piece of paper. In a key reversal of the convention of computer monitors, his screen was white and the type was black. The attempt to move the logic of the office into the machine and the machine into the office starts by having the computer simulate paper. Later in the demonstration, Engelbart shows the cursor moving across the screen in response to the movements of the mouse. A live video feed of the hand grasping the mouse is superimposed on the screen to show the pointer echoing its every move, as if the hand is simply doing a drawing. It is the freedom of the drawing hand to move with any speed in any path to any point in the space that transforms the interface. A number of drawings actually appear in the presentation to exemplify the system, with each line acting as a link to layers of stored information. The drawings condense access to information, and information is used to construct drawings. Even the presentation itself is treated as a drawing within the presentation. Ultimately, the promise of the system is to turn complex data into drawings that can be manipulated either consciously by the user or automatically by the machine. The mouse gains its power by allowing the user to draw in the space of information. Such a sense of drawing was already embedded in ‘As We May Think’, the key article by Vannevar Bush from near the end of the Second World War that was cited by Engelbart in his original proposal for an augmentation research programme.

Having been the director of the military efforts of all US scientists, Bush argued that such an effort to extend man’s physical power through weapons should now be redirected towards extending mental power. Since the growing amount of information exceeds our ability to digest it, Bush speculates that there could be a new piece of furniture called the Memex, a desk with a translucent horizontal surface for entering, viewing and manipulating data that would be stored in its legs and indexed through the multidimensional associations of ‘links’ as distinct from the normal linear filing systems. In addition to the conventional keyboard, notes could be added to the images projected onto the underside of the translucent surface by drawing on the surface with a new kind of stylus, ‘just as though he had the physical page in front of him’. Information and body would meet at a drawing surface. Engelbart’s accomplishment is to establish this ‘just as though’ sense that the user is drawing on a simple piece of paper resting on a desktop. The eventual addition of the image of a virtual desktop to form the generic graphical user interface was almost an inevitable consequence of this underlying idea of thinking through manipulating form in almost unconscious acts of drawing.

This interface architecture has had infinitely more effect on our species than the work of any architect or architectural school. Yet it was not by chance that Engelbart used the ‘augmented architect at work’ as his introductory example in the pivotal 1962 framework document for the Augmentation Research Center. He described the future architect trying out various designs for a building on a large screen, seeing images of the building in its site from different angles, taking measurements from the image by using the ‘pointer’, making changes, giving generic specifications for floors, doing interior fixtures, calculating sun angles, modelling traffic within the building, locating the greatest drain on utilities, adding notes and storing this ‘thought structure’ so that it could be worked on by a different architect. It is not the architecture of the physical object being manipulated that counts for Engelbart, but the fact that the architect manipulates a structure of information, a ‘thought structure’.

The 1962 document ends by describing the augmented computer user 10 years in the future working directly on nearly horizontal display surfaces ‘like the surfaces of a drafting table … as intently as a draftsman works on his drawings’ to construct, modify, detail and embellish a logical ‘structure’. The user is able to zoom in and move through the structure through rapid movements of the hand on the horizontal surface ‘so that your feel for the whole structure and where you are in it can stay with you’. Information is literally treated as an architecture. In the 1968 presentation, Engelbart insisted on the need to work with the ‘complex information structures’ that are not normally able to be visualised and manipulated by humans. The ability to access and reshape multidimensional structure is the main point of the whole augmentation research lab. Architecture is the lead example because of its inherent visualisation and manipulation of multidimensional data sets. The real architecture in the example is not the one being manipulated in the screen but the architecture of the interface itself.

The human ecosystem so obviously includes layer upon layer of electronic systems, and those systems are no longer simply outside and around the limits of the body but deep inside the body, moving and multiplying its limits. The fundamentally surgical mission of the architect has necessarily evolved. After all, architects never simply design for a given human body. They actively redesign the body. Each project imagines the body differently, constructing new possibilities for our flesh. Architecture itself is a prosthetic art and has always been so. Yet it is almost by definition or normative commission out of touch with the radicality of everyday life. Architecture acts as a shock absorber by continuously redrawing a line between organism and environment, a line of defence against the speed and complexity of our own evolution. This line might be the only gift of the architect and might only exist through architecture. Architects will keep defining themselves as architects by redrawing it, yet it could not be more fragile. Even the professional responsibility to continuously redraw the line will require new skills, starting with new histories. It is finally time to reconnect our field with the radical body and brain made possible decades ago by the discreetly revolutionary architecture of the self-effacing mouse. As the mouse retires, architecture might wake up to the radical past, recalibrate and reboot.
