Joseph E. Davis is Research
Assistant Professor of Sociology at the University of Virginia, Program
Director of the Institute for Advanced Studies in Culture, and Co-Director of
the Center on Religion and Democracy.
N. Katherine Hayles. How We Became Posthuman: Virtual Bodies in
Cybernetics, Literature, and Informatics. Chicago: The University of
Chicago Press, 1999.
In Steps to an Ecology of Mind, the anthropologist and scientist
Gregory Bateson repeatedly uses a simple example to challenge taken-for-granted
assumptions about the body and the self. Consider, he says, a blind man with a
stick. "Where," Bateson asks, "does the blind man's self begin?
At the tip of the stick? At the handle of the stick? Or at some point halfway
up the stick?" 1 If you answer the handle, if you assume
that the man is defined by his physical boundaries, then, according to Bateson,
you are wrong. In fact, for Bateson, "these questions are nonsense"
because, in the cybernetic viewpoint he is advocating, individuals are pathways for
information; they are part of communicational systems "whose boundaries do
not at all coincide with the boundaries either of the body or of what is
popularly called the 'self' or 'consciousness.'" 2 The
stick is an important pathway of information for locomotion, and so no boundary
line between the stick and the man can be relevant to the communicational
network of which both are part. In cybernetic theory, neither the boundaries of
the body nor those of the self are stable; informational feedback loops can be both internal and external to the subject. Both humans and machines, in this view,
are essentially information patterns, and so the distinction between them is
blurred and even erased. Although the pioneers of cybernetics sometimes shrank
from the implications of their theory for understanding human beings, others
were not so shy, including science fiction writers and later theorists in the
tradition.
Gregory Bateson was an
important figure in the early stages of cybernetics, and his
blind-man-with-stick example nicely captures something of the radical
implications of the theory for thinking about subjectivity. More recently, the
cybernetic tradition has merged into new research programs in artificial life,
cognitive science, and virtual technologies. In these fields, the underlying
assumptions about human beings continue to evolve and be transformed. Indeed,
according to N. Katherine Hayles and other scholars, the new assumptions
represent a break with the rational and unified subject of post-Cartesian
philosophy (the "human") sufficient to warrant the term
"posthuman." In this sense, the "post" in posthuman refers
to the superseding of one definition of the human with another. However, there
is an alternative meaning of posthuman that takes the radical implications of
the cybernetic tradition much further. In this more common meaning, the
"post" in posthuman refers to the overcoming of human biological
limitations by technological transformation. Humans will, in large part, become
machines. In some visions of this "postbiological" future,
intelligent machines even displace the human race entirely.
If talk of a
postbiological future sounds like something right out of science fiction, well,
it is. The cybernetic tradition has never been limited to scientists; it has
also grown and been nurtured within popular culture through science fiction,
and the cultural implications of human-machine interfaces have been most
clearly drawn out there. Accordingly, it seems fitting to begin with a work of
science fiction. The posthuman, however, is also far more than science fiction,
and N. Katherine Hayles, in her important and demanding book, How We Became Posthuman, sketches a
story of how technological visions in real science are spurring a fundamental
re-thinking of what it means to be human.
Arthur C. Clarke's
first novel, Against the Fall of Night,
was a disappointment to him, so he decided to rework it. A long sea voyage from
England to Australia in 1954 and 1955 gave him the opportunity, and The City and the Stars was published in
1956. In the preface, Clarke notes that the progress of science had made some
of the ideas expressed in his first book, begun in 1937, seem naïve, while
opening up "vistas and possibilities quite unimagined" earlier.
"In particular," he writes, "certain developments in information
theory suggested revolutions in the human way of life even more profound than
those which atomic energy is already introducing." 3
Some of these developments, according to Ed Regis in his delightfully engaging
and highly informative Great Mambo
Chicken and the Transhuman Condition, were contained in the work of Claude
Shannon, whom Clarke met in 1952, and whose mathematical theory of
communication suggested, in Regis' words, "a deep link, never before
noted, between man and machine." 4 In The City and the Stars, Clarke
endeavored to work out some implications of these ideas.
For a billion years
humans have lived a tranquil existence in the vast city of Diaspar. Under a
canopy of eternal daylight, the citizens go about their lives free of troubles
and worries. There is no shortage of anything they need. Inhabitants spend
their time pursuing the arts, intellectual interests, and a myriad of virtual
recreational possibilities. Having conquered matter, they have no need to work.
They simply frame the appropriate thought, and whatever is necessary or desired
materializes. Having engineered the body to perfection, all are beautiful and
free of "those ills to which the flesh was once heir." 5
In fact, they have "virtual immortality." At some point, eons
earlier, the ancestors of Diaspar learned how to store themselves on computers
(as a pattern of electric charges). "We do not know," according to
one of the novel's characters, "how long the task took. A million years,
perhaps." 6 But the secret, this character tells us, was
in recognizing that "A human being, like any other object, is defined by
its structure—its pattern. The pattern of a man, and still more the pattern
which specifies a man's mind, is incredibly complex." 7
Eventually humans "learned how to analyze and store the information that
would define any specific human being—and to use that information to recreate
the original." 8 Further, the character argues that the
"way in which information is stored is of no importance; all that matters
is the information itself. It may be in the form of written words on paper, of
varying magnetic fields, or patterns of electric charge. Men have used all
these methods of storage, and many others." 9 Because
humans are "disembodied patterns" of information, in other words, the
material substrates they might occupy are interchangeable.
Not only death but also
birth has been abolished in Diaspar. The number of inhabitants is therefore
fixed. At any given moment, however, only a fraction of the total population is
actually living and walking the streets. The rest are enjoying an
"interval of nonexistence." Each person lives a thousand years. At
the end of this allotted span, people return to the Hall of Creation. Their
bodies cease to exist, and their minds are transferred to the Memory Banks of
the Central Computer, where they will be stored for some "apparently
random" length of time. Then one day, they will awaken in new bodies and
leave the Hall of Creation to begin a new "cycle of existence,"
carrying forward those memories from previous cycles they had decided to save.
In this way, the population of the city remains constant, and each person gets
a fresh start with new friends and different interests.
In addition to storing
matter and minds and regulating the population level, the Central Computer is
also the final authority over the city. Although Diaspar has a ruling body, the
Council, it seldom needs to meet and is itself subordinate to the Central Computer,
into which the original designers of the city have programmed all that is
needed for an eternal existence. As humans merge with machine in the Hall of
Creation, so the machine merges with the human. "It was difficult,"
according to the novel's narrator, "not to think of the Central Computer
as a living entity, localized in a single spot, though actually it was the sum
total of all the machines in Diaspar." 10 Still, even if
"not alive in the biological sense, it certainly possessed at least as
much awareness and self-consciousness as a human being." 11
In a new introduction
to The City and the Stars written in
2000, Clarke notes that more than once he has felt "involved in a
self-fulfilling prophecy." 12 We do not yet have
computers with self-consciousness, nor can human minds be stored on computers.
But while these futuristic speculations were science fiction in the 1950s, it
did not take long before the computer revolution and the same developments in
information theory that inspired Clarke led others to conclude that
such man-machine fusions were in fact scientifically possible.
Regis, in Great Mambo Chicken, traces some of the
proposals, which began as early as 1964. Each author insists in his or her own
way that the mind is the essence of the person and the physical body
dispensable; that it is possible, to quote an IBM researcher, to
"implement the human being in alternative hardware" 13
; and that with the mind stored on a computer immortality would be achieved.
Regis ends his survey with the then most influential of such proposals, that of
Hans Moravec in his book Mind Children.
Moravec, then director of the Mobile Robot Lab at Carnegie Mellon University,
restates the basic premises of the earlier schemes, emphasizing the separation
of the "message" (information conveyed) from the medium on which it
is encoded. The "essence of a person, say myself," he writes, is
"the pattern and the process going on in my head and body,
not the machinery supporting that process. If the process is preserved, I am
preserved. The rest is mere jelly." 14
To Moravec, then, the
mind—the pattern—might be separated from the brain—the machinery—without any
loss of self. He views this separation as a technological possibility and
suggests several possible methods for transferring the mind to computer
databanks (a process now commonly called "uploading"), where it could
be stored and transmitted. He also views this separation as desirable and
necessary if humans are not to be left out of the "magical world to
come" by the superintelligent robots who will otherwise displace us. With
a "mind transfer" Moravec writes, "many of your old limitations
melt away." 15 You can "communicate, react, and
think a thousand times faster" 16; you can travel over
information channels to anywhere such channels go; you can make backup copies
of yourself; you can selectively merge another person's memories with your own;
you can inhabit a marvelous robot body; and so on. In the coming
"postbiological world" the possibilities are endless. The citizens of
Diaspar had nothing on the mind children.
In a 1998 review of
Moravec's follow-up book Robot: Mere
Machine to Transcendent Mind, 17 novelist Charles Platt
argues that Mind Children secured
Moravec's status as a "truly radical techno-visionary." 18
For Platt, who writes science fiction, this was a compliment. Others were less
impressed. The reviewer for The New York
Times, M. Mitchell Waldrop, a science journalist, wrote that "'Mind
Children' comes perilously close to the kind of uncritical gee-whiz that gives
technological optimism a bad name." 19 The Washington Post reviewer, Noel
Perrin, a Dartmouth professor, said he "would guess it to be the most
lurid book ever published by Harvard University Press," and that "it
may seem easy to dismiss Moravec as yet another mad scientist." 20
According to Platt, "Joseph Weizenbaum, professor emeritus at MIT's
Artificial Intelligence Lab, disliked Moravec's interest in 'perfecting' human
beings and warned that Mind Children
was as dangerous as Mein Kampf."
21 It was Moravec's Robot,
which carries his predictions about the social impact of intelligent robots
even further, that in part prompted Bill Joy's famous Wired magazine article "Why the Future Doesn't Need Us." 22
The other influence on Joy was inventor Ray Kurzweil's bestseller, The Age of Spiritual Machines. Like
Moravec, Kurzweil foresees a future in which humans "will be software, not
hardware," 23 as we increasingly "port"
ourselves to a new computing substrate and leave our "old slow
carbon-based neural-computing machinery behind." 24 Joy,
cofounder and chief scientist of Sun Microsystems, found these blithe visions
of the end of the human race as we know it to be deeply dystopian and alarming
enough for him to suggest that we move to relinquish, through government ban,
some of the technologies that might bring them about.
The critical reactions
to Moravec, Kurzweil, and others of their persuasion tend to focus on the
sensational claims about a soon-to-be-realized fusion of human and machine. Of
course, the incorporation of machines into human bodies happens all the time,
be it pacemakers, artificial joints, cochlear implants, or other prosthetic
devices. As much as ten percent of the U.S. population may be
"cyborgs" (for cybernetic organism) in this sense, but unlike the
science fiction image of the cyborg—the vastly enhanced Six Million Dollar Man,
for example—the overwhelming majority of such interventions are simply to
compensate for deficiencies in normal functioning. Kevin Warwick, the
cybernetics professor at the University of Reading, who in 1998 began
implanting chips with transmitters in order to monitor and affect his
biological responses, is a rare bird. 25 The capacity of
medical prosthetics to disrupt traditional categories or destabilize the
human/machine boundary seems limited. The prospect of intelligent machines
becoming our evolutionary heirs and people uploading themselves into computers
(to become "ex-humans" in the language of Moravec's Robot), however, is a different matter.
Now we are talking about a radical violation of boundaries, a cyborg vision of
complete fusion, and even the end of Homo
sapiens. If we grant such prospects a status beyond science fiction, then
the reactions of a Weizenbaum or Joy become more understandable.
What critics typically
do not contest is the underlying view of the human person that informs such
extreme posthuman visions. The machine metaphor for thinking and talking about
the human body has a long history. Anthropomorphism, the attributing of human
characteristics to animals and inanimate things, is also a very common
tendency. Pet owners, for instance, do it all the time. But in Moravec, Kurzweil,
and others, metaphor has been effectively replaced with equivalence, an
equivalence achieved not so much by giving human characteristics to cybernetic
machines as by redefining what being human means. This redefinition has
implications much wider than dreams of uploading, and it is not limited to the
popular science writings of "radical techno-visionaries." Rather, it
informs a nascent yet clearly discernible shift toward a new model of
subjectivity. In How We Became Posthuman,
N. Katherine Hayles traces how this model arose and the forms it is beginning
to take. Inspired by Mind Children
and its "nightmare" uploading scenario, Hayles wanted to know how
mind came to be conceived as utterly disconnected from embodiment. Pursuing
this question, she was "led into a maze of developments that turned into a
six-year odyssey." 26 She soon found that in his
assumptions Moravec was "far from alone" (1).
The essays that
comprise How We Became Posthuman tell
three interrelated stories that span the years from 1945 to the present. The
first, and most central, is how "information
lost its body" (2), that is, how it came to be conceptualized as a
kind of "immaterial fluid" that can flow unchanged across various
material substrates and around the globe. The second story is how "the cyborg was created as a technological
artifact and cultural icon" (2) in the postwar years. And the third
and still unfolding story, deeply interwoven with the first two, is how "a
historically specific construction called the
human is giving way to a different construction called the posthuman"
(2). To tell these stories, Hayles goes back to the theories, researchers, and
artifacts in the cybernetic tradition to explore those moments when choices
were made for disembodiment and alternative interpretations were rejected.
Along the same historical trajectory, she also examines, in parallel with the
scientific texts, important contemporaneous works of science fiction, such as
Bernard Wolfe's Limbo of the 1950s
and Philip K. Dick's novels of the 1960s, that were influenced by cybernetics
and share many of its assumptions. These texts are important, she argues,
because they touch on issues "that the scientific texts only fitfully
illuminate," such as "the ethical and cultural implications of
cybernetic technologies" (21). In addition, the literary texts also
"actively shape what the technologies mean and what the scientific
theories signify in cultural contexts" (21).
In telling these
stories, Hayles has two principal goals. The first is to contest the
"systematic devaluation of materiality and embodiment" (48) that runs
throughout the cybernetic tradition and now informs the cultural perception of
virtual technologies. In her deft analysis, there was nothing inevitable or
technologically determined about the process that led to the separation of
information from materiality and to the equating of humans and computers. It
was the result of historically specific negotiations in historically specific
cultural contexts. Hayles demystifies those negotiations, showing, for example, how decisions about conceptualizing information that were appropriate in an engineering context got extrapolated into wider contexts, where they led to unwarranted conclusions. She also shows how theory building often proceeded by first inferring simplified abstractions from the particularity and complexity of the world and then identifying those abstractions as the general form from which the particularity and complexity derive (a move Hayles calls the "Platonic backhand").
At the same time, her
history is a "rememory," showing how voices arguing for the
importance of embodiment were present throughout the tradition and had to be
overcome to arrive at abstractions like bodiless information and dreams of
uploading. These alternative voices are resources for and give hope to her
second goal: to "recover a sense of the virtual that fully recognizes the
importance of the embodied processes constituting the lifeworld of human beings"
(20). Hayles wants to embrace the possibilities of information technologies and
recognizes that doing so will leave our experience and understanding of
subjectivity changed. For many, the posthuman means "envisioning humans as
information-processing machines with fundamental similarities to other kinds of
information-processing machines, especially intelligent computers" (246).
This is the basic version of the posthuman as a model of subjectivity that
Hayles challenges. But this is not, she argues, the only view, and she suggests
another possibility, one where mind and body are a unity.
The posthuman signals
important shifts in underlying assumptions away from the model of the liberal
humanist subject of Enlightenment thought. In order to elucidate these shifts,
Hayles uses C. B. Macpherson's classic text, The Political Theory of Possessive Individualism: Hobbes to Locke,
to identify the qualities of the "human" in that tradition. We are
human in the liberal humanist view because we possess ourselves. "The
human essence," Macpherson writes, "is freedom from the wills of others, and freedom is a function of
possession" (as quoted in Hayles 3). This freedom presumes an autonomous
self with a free will and agency that can be clearly distinguished from the
"wills of others." Further, the liberal subject is identified with
the rational mind and with consciousness as the seat of identity. Although
still a nascent concept and complicated by the multiple contexts in which it
is arising, the posthuman challenges these presumptions. Rather than an
autonomous "natural" self, the "posthuman subject is an amalgam…of
heterogeneous components, a material-informational entity" whose
boundaries are not stable but shifting (3). This "collective heterogeneous
quality," in turn, undercuts individual agency because it "implies a
distributed cognition located in disparate parts that may be in only tenuous
communication with one another" (3-4). Finally, the posthuman complicates
the liberal humanist notion of self-will "because there is no a priori way
to identify a self-will that can be clearly distinguished from an
other-will" (4). In one important respect, however, the posthuman
continues the liberal tradition. The emphasis remains on cognition, not
embodiment, though not in their wildest dreams would Hobbes or Locke have
thought of the human body in posthuman terms: as an "accident of history
rather than an inevitability of life" (2) or as the "original
prosthesis" that could be extended or replaced with "other
prostheses" (3).
Hayles does not seek to
recuperate the liberal humanist subject—far from it. She is concerned with
overcoming the mind/body split and is in broad agreement with criticisms coming
from perspectives, such as those of feminism and postcolonial theory, that see
the liberal humanist construction of subjectivity as deeply implicated in
efforts to dominate and oppress. In fact, for her, the greatest mistake would
be to graft "the posthuman onto a liberal humanist view of the self"
(286-7), which is what she thinks Moravec is trying to do. Hayles sees the
posthuman as an opportunity to get "out of some of the old boxes and
[open] up new ways of thinking about what being human means" (285). And
now is the time. The liberal humanist subject is being dismantled, and no
successor has yet clearly emerged. Many parties are contesting, but what
"trains of thought" will constitute the posthuman have not "been
laid down so firmly that it would take dynamite to change them" (291).
Granted, Hayles says, some "current versions of the posthuman" are
deeply problematic, even pointing "toward the anti-human and the apocalyptic,"
but not to worry, "we can craft others" (291).
Toward the posthuman,
Hayles is both unafraid and optimistic. She is unafraid because human being is
embodied and the body is a "resistant materiality" that cannot be
left behind. Moravec and his ilk may deny it, but others, such as researchers
in evolutionary biology, affirm that the complexities of embodiment
"affect human behaviors at every level of thought and action" (284).
Thus Hayles concludes that human embodiment itself establishes a clear
"limit to how seamlessly humans can be articulated with intelligent
machines" (284). Further, she writes in another place: "Human mind
without human body is not human mind. More to the point, it doesn't exist"
(246). The interface between humans and intelligent machines does undermine the
old liberal humanist subject, she maintains, but we need not fear apocalyptic
scenarios.
Hayles is optimistic
about the possibility of crafting an embodied version of the posthuman because
her analysis of the history of cybernetics shows that the posthuman does not
require the emphasis on disembodiment it has acquired. She is also optimistic
because she believes that the common assumption that "pattern"
(information) and "presence" (physicality) are opposites and exist in
an antagonistic relationship is not required either. With information
technologies, she argues, pattern tends to dominate presence, making pattern
seem the essential reality. Hence, for example, "money is increasingly
experienced as informational patterns stored in computer banks rather than as
the presence of cash" (27). This dominance, however, does not make the
physical world disappear; "information in fact derives its efficacy from
the material infrastructures it appears to obscure" (28). There is an
"illusion of erasure" (28) here but we don't have to be tricked by
it. Rather, she argues, we might better see "pattern and presence as
complementary rather than antagonistic" (49). Doing so defeats false
polarities and suggests new avenues for rethinking the human-machine interface.
How, then, in Hayles'
view, should we envision the posthuman? Recall that Hayles wants to get
"out of some of the old boxes." In the old boxes, the "self is
envisioned as grounded in presence, identified with originary guarantees and
teleological trajectories, [and] associated with solid foundations and logical
coherence" (286). This is the account, in her view, that underwrites
projects of domination and must be abandoned lest we repeat the mistakes of the
past. Indeed, in many contexts, this account is mostly dead already (hence, one
meaning of the past tense in the book's title). While not proffering a complete
alternative, she argues that the posthuman provides resources for the
construction of one. In this conception of the human, "emergence replaces teleology; reflexive epistemology replaces objectivism; distributed cognition replaces autonomous will; embodiment replaces a body seen as a support system for the mind; and a dynamic partnership between humans and intelligent machines replaces the liberal humanist subject's manifest destiny to dominate and control nature" (288). Hayles allows that "this is not necessarily what the posthuman will mean" (288, original emphasis), but only what it might mean.
But what, in the end,
does it mean? What, as a practical matter, are the implications of emergence
replacing teleology or distributed cognition replacing autonomous will? Hayles
doesn't provide much in the way of real world examples, and the ones she does
use aren't particularly illuminating. Early in the book she says that she now
finds herself saying things like "Well, my sleep agent wants to rest, but
my food agent says I should go to the store" (6). This is certainly an odd
way to talk, and Hayles draws significant conclusions from it. "Each
person," she claims, "who thinks this way begins to envision herself
or himself as a posthuman collectivity, an 'I' transformed into the 'we' of
autonomous agents operating together to make a self" (6). A better
example, which Hayles doesn't use, comes from Mary Catherine Bateson's book Composing a Life. Reflecting on her
life, Bateson, the daughter of Gregory Bateson and a minor figure in Hayles'
book, was frustrated by her struggles to bring coherence to her disparate
experiences. Moving away from the "stubborn struggle toward a single
goal," she adopted a more fluid, protean approach to life as "an
improvisatory art." 27 A roughly similar view can be
found in some of the psychological literature, popular and professional, which
calls for dropping the older notion of life stages and even for celebrating
"multiple selves." In the multiple personality disorder literature,
for example, some multiples celebrate their ability to dissociate creatively
and so reject therapy that seeks to integrate their alters. People are
definitely talking about themselves in new ways. But what does it mean for how
they think about themselves, say, as moral agents? Hayles thinks that
"serious consideration needs to be given to how certain characteristics
associated with the liberal subject, especially agency and choice, can be
articulated within a posthuman context" (5), but then leaves it at that.
Without specificity about the implications of her alternative, it is hard to
judge what is gained and what is lost in her vision.
Hayles' lack of
specificity, however, should not be allowed to obscure the important work of How We Became Posthuman. Her history of
how "information lost its body" is just the sort of
"rememory" that we need if we have any hope of resisting a
disembodied model of subjectivity. Despite her optimism, she documents
convincingly the cultural perception that information is distinct from and more
essential than materiality, that "pattern" does not depend on any
particular embodiment. Moravec, as Hayles says, is "far from alone,"
and it is easy to add to the list: Norbert Wiener, father of cybernetics,
proposing as early as the 1950s that it was "theoretically possible to
telegraph a human being" (1); molecular biology treating "information
as the essential code the body expresses" (1); artificial life
researchers, along with figures in nanotechnology, virtual reality, artificial
intelligence, and cognitive science, treating humans as information processing
machines; science fiction from Arthur C. Clarke to the contemporary cyberpunk
literature, as well as popular science writers like Ray Kurzweil, defining
humans as patterns that could be immortal with the right hook-up; and the
technologies of everyday life, from ATMs to the Internet, reinforcing the
impression that pattern predominates over presence. There's even the
conservative writer Tom Wolfe telling the 2002 graduating class of Duke
University that we mustn't "kid ourselves." The "bottom line of
neuroscience," he avers, is that: "We're all concatenations of molecules
containing DNA, hard wired into a chemical analog computer known as the human
brain, which as software has a certain genetic code." 28
All these and others contribute to a powerful reductionistic illusion about
human being, an illusion that Hayles helps us to unmask.
Further, although
Hayles does not draw out the comparison, in her insightful mapping of key
assumptions of the emerging posthuman, we can see parallels with the more
familiar, though no less multifarious, concept of the postmodern. The posthuman
and the postmodern proceed along different lines, but both reach strikingly
similar conclusions about human subjectivity. Like the posthuman, the
postmodern challenges the liberal humanist subject in fundamental ways. The
postmodern, too, emphasizes a reflexive epistemology, the disruption of
boundaries, and the rejection of teleology. Like distributed cognition, the
postmodern emphasizes a "dispersed subjectivity," and it views the self as multiple rather than stable and centered, much like the posthuman notion of the self as an amalgam of heterogeneous components. Moreover, in the postmodern the body is also devalued; its materiality, to quote Hayles, "is secondary to the logical or semiotic structures it encodes" (192). What makes this parallelism noteworthy is the sharply different sources
of these two accounts. The postmodern has its roots in the analysis of
discourse in the humanities and has often been vigorously attacked by
scientists or dismissed as an intellectual fad. The posthuman, by contrast, is
arising from cutting-edge science itself. That both are imagining human
subjectivity in similar ways is significant because it suggests deeper cultural
shifts are at work, shifts that remain unexplored.
Compared with the
postmodern, the posthuman may prove to be the more consequential carrier of
these shifts. It is coming from science, which still retains immense authority.
Even more importantly, it is arising hand-in-hand with powerful new
technologies. "Given market forces already at work," Hayles foresees
that it is all but "certain that we will increasingly live, work, and play
in environments that construct us as embodied virtualities" (48). Ray
Kurzweil, meanwhile, predicts that "the primary political and
philosophical issue of [this] century will be the definition of who we
are." 29 Both are right, and Hayles' signal contribution
is to show us that we need not be passive bystanders, that we can actively and
constructively intervene.
1 Gregory Bateson, Steps to an Ecology of Mind (Chicago: The University of Chicago Press, 2000) 318.
2 Bateson 319.
3 Arthur C. Clarke, The City and the Stars & The Sands of Mars (New York: Warner, 2001), no pagination in introductory material.
4 Ed Regis, Great Mambo Chicken and the Transhuman Condition: Science Slightly Over the Edge (Reading: Addison-Wesley, 1990) 149.
5 Clarke 25.
6 Clarke 18.
7 Clarke 18.
8 Clarke 18.
9 Clarke 18.
10 Clarke 69.
11 Clarke 69.
12 Clarke, no pagination in introductory material.
13 Regis 153.
14 Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA: Harvard University Press, 1988) 117, original emphasis.
15 Moravec 112.
16 Moravec 112.
17 Hans Moravec, Robot: Mere Machine to Transcendent Mind (New York: Oxford University Press, 1999).
18 Charles Platt, "Who Can Replace a Man?" The Washington Post Book World (8 November 1998): 9.
19 M. Mitchell Waldrop, "The Souls of the New Machines," The New York Times (1 January 1989): 10.
20 Noel Perrin, "Anything We Can Do They Can Do Better," The Washington Post Book World (23 October 1988): 8.
21 Platt 9.
22 Bill Joy, "Why the Future Doesn't Need Us," Wired (April 2000): 238-62.
23 Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Penguin, 1999) 129.
24 Kurzweil 126.
25 See Kevin Warwick, "Cyborg 1.0," Wired (February 2000): 145-51.
26 N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: The University of Chicago Press, 1999) 2. Subsequent references to this work will be made parenthetically in the text of this essay.
27 Mary Catherine Bateson, Composing a Life (New York: The Atlantic Monthly Press, 1989) 4, 3.
28 As quoted in Jacques Steinberg, "Commencement Speeches; Along With Best Wishes, 9/11 Is a Familiar Graduation Theme," The New York Times (2 June 2002): 38.
29 Kurzweil 2.