The Future is Already Here:
Is There a Place for
Science Fiction in the
Twenty-First Century?
by Robert J. Sawyer
A speech presented November 10, 1999, at the Library of Congress,
Washington, D.C., and December 1, 1999, at the Universitat Politècnica
de Catalunya's Premio UPC de Ciencia Ficción Awards Ceremony,
Barcelona, Spain
Published as the cover article in the Autumn 2000 edition of
Foundation: The International Review of Science Fiction.
Copyright © 1999
by Robert J. Sawyer
All Rights Reserved
There are countless definitions for that amorphous entity we call
science fiction, but one of the most succinct is that employed by
Kim Stanley Robinson, author of the famed Mars trilogy:
"Science fiction stories are stories set in the future." And, of
course, for decades now, we've thought of the 21st century, the
dawn of the third millennium, as the very embodiment of the
future.
But now, the future is here. We're right on the doorstep of the
21st century, and, indeed, the year 2001, with all the resonances
that magic figure has had for us since the film of the same name
debuted thirty-odd years ago, will soon be a historical date.
If the future is already here, what role does science fiction
have in it? Was SF a literature of the 20th century, the way
gothic romances were a literature of the 19th? Or is there a place, a societal role, for science fiction in the new millennium?
To answer that question, it's necessary, of course, to define the
current societal role of science fiction, and that role, I firmly
believe, comes out of the central message of most of the
memorable, ambitious stories in the genre.
Now, of course, there are those who think that fiction is not the
place for messages: "If you want to send a message, call Western Union" (the old American telegram company) used to be standard advice given in creative-writing classes. Still,
whether the authors are consciously aware of it or not, all
fiction does convey messages or fundamental moral statements.
Before I delve into what the central message of science fiction is, let's set the stage by first looking at another genre closely allied with science fiction, another category with its own publishing imprints and dedicated magazines. I'm talking about mystery fiction.
What is the fundamental message present in every mystery story?
There's one that, in fact, is virtually required; without it, the story falls completely apart. The central moral statement of
all mystery fiction is this: "Don't commit murder, because you
won't get away with it." In just about every mystery novel, a
character tries to take the life of another human being. And in
just about every one, despite clever planning on the part of the
murderer, the killer is brought to justice.
Now, let's assess how successful the writers of mystery fiction
have been at convincing the general public of the truth of their fundamental assertion: "Don't commit murder, because you won't get away with it." Do we still have murder? Yes. Are murder rates decreasing? No. Despite hundreds of thousands of iterations on this theme in mystery stories, from Edgar Allan Poe through Agatha Christie to Sara Paretsky (a theme which, put another way, is often stated as "There's no such thing as a perfect crime"), there has been no societal change. Murder is rampant.
And that's good news for the mystery-fiction writers of the
world. It means they have job security. It means they still
have work to do. It means their message still needs to be heard.
But what about me and my colleagues? What of the SF writers of
the world? How good have we been at communicating our central
message? And, indeed, what is the central message of SF?
To my way of thinking, the central message of science fiction is
this: "Look with a skeptical eye at new technologies." Or, as
William Gibson has put it, "the job of the science-fiction writer
is to be profoundly ambivalent about changes in technology."
Now, certainly, there are science-fiction writers who use
the genre for pure scientific boosterism: science can do no
wrong; only the weak quail in the face of new knowledge. Jerry
Pournelle, for instance, has rarely, if ever, looked at the
downsides of progress. But most of us, I firmly believe, do take
the Gibsonian view: we are not techie cheerleaders, we aren't
flacks for big business or entrepreneurism, we don't trade in
utopias.
Neither, of course, are we Luddites.
Michael Crichton writes of
the future, too, but he's not really a science-fiction writer; if
anything, he's an anti-science-fiction writer.
Indeed, both Gregory Benford and I have discussed with our shared
agent, Ralph Vicinanza, why it is that Crichton outsells us. And
Ralph explained that he could get deals at least approaching those Crichton gets if (and this was an unacceptable "if" to both me and Greg) we were willing to promulgate the same fundamental message Crichton does, namely, that science always goes wrong.
When Michael Crichton makes robots, as he did in Westworld, they run amuck and people die. When he clones dinosaurs, as he did in Jurassic Park, they run amuck and people die. When he finds extraterrestrial life, as he did in The Andromeda Strain, people die.
Crichton isn't a prophet; rather, he panders to the fear of technology so rampant in our society, a society, of course, which ironically would not exist without technology. His mantra
is clearly the old B-movie one that "there are some things man
was not meant to know."
The writers of real SF refuse to sink to fear-mongering, but neither do we overindulge in boosterism; both are equally mindless activities.
Still, we do have an essential societal role, one being fulfilled
by no one else. Actual scientists are constrained in what they can say: even with tenure, which supposedly ensures the right to pursue any line of inquiry, scientists are in fact muzzled at the most fundamental, economic level. They cannot speculate
openly about the potential downsides of their work, because they
rely on government grants or private-sector consulting contracts.
Well, the government is answerable to an often irrational public.
If a scientist is dependent on government grants, those grants
can easily disappear. And if he or she is employed in the private sector, well, then certainly Motorola doesn't want its researchers to say that cellular phones might cause brain cancer; Dow Chemical didn't
want anyone to say that silicone implants might cause autoimmune
problems; Philip Morris doesn't want anyone to say that nicotine
might be addictive.
Granted, not all those potential dangers turned out to be real,
but even considering them, putting them on the table for
discussion, was not part of the game plan; indeed, suppressing
possible negatives is key to how all businesses, including those
built on science and technology, work.
There are moments, increasingly frequent moments, during which the media reports that "science fiction has become science fact." Certainly one of the most dramatic recent examples was made public in February 1997. Ian Wilmut at the Roslin Institute in
Edinburgh had succeeded in taking an adult mammalian cell and
producing an exact genetic duplicate: the cloning of the sheep
named Dolly.
Dr. Wilmut was interviewed all over the world, and, of course,
every reporter asked him about the significance of his work, the
ramifications, the effects it would have on family life. And his
response was doggedly the same, time and again: cloning, he
said, had narrow applications in the field of animal husbandry.
That was all he could say. He couldn't answer the question directly. He couldn't tell reporters that it was now technically possible for a 35-year-old man who had been drinking too much, and smoking, and never exercising, a man who had been warned by his doctor that his heart and lungs and liver would all give out by the time he was in his early fifties, to order up an exact genetic duplicate of himself, a duplicate that, by the time he needed all those replacement parts, would be sixteen or seventeen years old, with pristine, youthful versions of the very organs that needed replacing, replacements that could be transplanted with zero chance of tissue rejection.
Why, the man who needed these organs wouldn't even have to go to any particular expense: just have the clone of himself created, put the clone up for adoption (possibly even an illegal adoption, in which the adopting parents pay money for the child, a common enough if unsavory practice, letting the man recover the costs of the cloning procedure). Then, let the adoptive parents raise the child with their money, and when it is time to harvest the organs, just track down the teenager, kidnap him, and, well, you get the picture. Just another newspaper report of a missing kid.
Far-fetched? Not that I can see; indeed, there may be adopted
children out there right now who, unbeknownst to them or their
guardians, are clones of the wunderkinds of Silicon Valley or the
lions of Wall Street. But the man who cloned Dolly couldn't
speculate on this possibility, or any of the dozens of other
scenarios that immediately come to mind. He couldn't speculate
because if he did, he'd be putting his future funding at risk.
His continued ability to do research depended directly on his keeping his mouth shut.
The same mindset was driven home for me quite recently. I am co-hosting a two-hour documentary called "Inventing the Future: 2000 Years of Discovery" for the Canadian version of The Discovery Channel, and in November 1999 I went to Princeton University to interview Joe Tsien, who created the "Doogie Mice," mice that were born more intelligent than normal mice and retained their smarts longer.
While my producer and the camera operator fussed with the lighting, Dr. Tsien and I chatted animatedly about the ramifications of his research, and there was no doubt that he and his colleagues understood how far-reaching those ramifications would be. Indeed, by the door to Dr. Tsien's lab, in an area not normally seen by the public, is a cartoon of a giant rodent labeled "Doogie" sitting in front of a computer. In Doogie's right hand is his computer's pointing device, a little human figure labeled "Joe": the super-smart mouse using its human creator as a computer mouse.
Finally, the camera operator was ready, and we started taping.
"So, Dr. Tsien," I said, beginning the interview, "how did you
come to create these super-intelligent mice?"
And Tsien made a "cut" motion with his hand, and stepped forward,
telling the camera operator to stop. "I don't want to use the
word 'intelligent,'" he said. "We can talk about the mice having
better memories, but not about them being smarter. The public
will be all over me if they think we're making animals more
intelligent."
"But you are making them more intelligent," said my
producer. Indeed, Tsien had used the word "intelligent"
repeatedly while we'd been chatting.
"Yes, yes," he said. "But I can't say that for public
consumption."
The muzzle was clearly on. We soldiered ahead with the interview but never really got what we wanted. I don't know whether Tsien was a science-fiction fan, and he had no idea that I was also a science-fiction writer, but many SF fans have wondered why he didn't name his super-smart mice "Algernons," after the experimental rodent in Daniel Keyes's Flowers for Algernon.
Tsien might have been aware of the reference, but chose the much more palatable "Doogie," a tip of the hat to the old TV show Doogie Howser, M.D., about a boy genius who becomes a medical doctor while still a teenager. In Flowers for Algernon, after all, the leap is made directly from the work on mice to the mind-expanding possibilities for humans, and Tsien was clearly trying to restrain, not encourage, such leaps.
So, we're back to where we started: someone needs to openly do the speculation, to weigh the consequences, to consider the ramifications, someone who is immune to economic pressures. And that someone is the science-fiction writer.
And, of course, we do precisely that and have done so from the
outset. Brian Aldiss and many other critics contend that the first science-fiction novel was Mary Shelley's Frankenstein, and I think they're right. In that novel, Victor Frankenstein is a scientist, and he's learned about reanimating dead matter by studying the process of decay that occurs after death.
Take out his scientific training, and his scientific research,
and his scientific theory, and, for the first time in the history
of fiction, there's no story left. Like so much of the science
fiction that followed, Frankenstein, first published in
1818, is a cautionary tale, depicting the things that can go
wrong, in this case, with the notion of biological engineering.
Science-fiction writers have considered the pluses and minuses of other new technologies, too, of course. We were among the first to weigh in on the dangers of nuclear power (memorably, for instance, with Judith Merril's 1948 short story "That Only a Mother") and, although there are still SF writers (often, it should be noted, holding university or industry positions directly or indirectly involved in the defense industry) who have always sung the praises of nuclear energy, it's a fact that all over the world, governments are turning away from it.
The October 18, 1999, edition of Newsweek carried an
article which said, "In most parts of the world, the chance of
nuclear power plant accidents is now seen as too great. Reactor
orders and start-ups have declined markedly since the 1980s.
Some countries, including Germany and Sweden, plan to shut down
their plants altogether ... Nuclear-reactor orders and start-ups
ranged from 20 to 40 per year in the 1980s; in 1997 there were
just two new orders, and five start-ups worldwide. Last year
[1998] construction began on only four new nuclear reactors."
Why the sharp decline? Because the cautionary scenarios about
nuclear accidents in science fiction have, time and again, become
science fact. The International Atomic Energy Agency reports
that there were 508 nuclear "incidents" between 1993 and 1998, an
average of more than one for each of the world's 434 operating
nuclear power plants.
It certainly wasn't out of the scientific community that the
warnings were first heard. I vividly recall being at a party
about fifteen years ago at which I ran into an old friend from
high school. She introduced me to her new husband, a nuclear
engineer for Ontario Hydro, the company that operates the nuclear
power plants near my home city of Toronto. I asked him what
plans were in place in case something went wrong with one of the
reactors (this was before the Chernobyl accident in 1986, but
after Three Mile Island in 1979). He replied that nothing could
go wrong; the system was foolproof. Although we were both early
in our careers then, we were precisely fulfilling our respective
societal roles. As an engineer employed by the nuclear industry,
he had to say the plants were absolutely safe. As a
science-fiction writer, I had to be highly skeptical of any such
statements.
Science fiction has weighed in on ecology, overpopulation, racism, and the abortion debate (which is also fundamentally a technological issue: the ability to terminate a fetus without harming the mother is a scientific breakthrough whose moral ramifications must be weighed), and, indeed, science fiction has increasingly been considering what I think may be the greatest threat of all: the downsides of creating artificial intelligence.
From William Gibson's Hugo-winning 1984 Neuromancer, in which an organization known as "Turing" exists to prevent the emergence of true AI, to my own Hugo-nominated 1998 Factoring Humanity, in which the one and only radio message Earth receives from another star is a warning against the creation of AI, a last gasp from biologicals being utterly supplanted by what they themselves had created without sufficient forethought.
Which brings us back to the central message of SF: "Look with a
skeptical eye at new technologies." Has that message gotten
through to the general public? Has society at large embraced it
in a way that it never embraced "Don't commit murder, because you won't get away with it"?
And the answer, I think, is absolutely yes. Society has co-opted
the science-fictional worldview wholly and completely. Do we now
build a new dam just because we can? Not without an
environmental-impact study. Do we put high-voltage power lines
near public schools? Not anymore. Did we all rush out to start
eating potato chips made with Olestra, the fake fat that robs the
body of nutrients and causes abdominal cramping and loose stools?
No.
And what about the example I started with: cloning? Indeed, what about the whole area of genetic research?
Well, when the first Cro-Magnon produced the first stone-tipped
wooden spear, none of his hirsute brethren stopped to think about
the fact that whole species would be driven to extinction by
human hunting. When the United States undertook the Manhattan
Project, not one cent was budgeted for considering the societal
ramifications of the creation of nuclear weapons, despite the
fact that their existence, more than any other single thing,
shaped the mindset of the rest of the century.
But for the Human Genome Project, fully five percent of the total budget is set aside for that thing SF writers love to do the most: just plain old noodling, thinking about the consequences, the impacts, that genetic research will have on society.
That money is allocated because the world now realizes that such thinking is indispensable. Of course, the general public doesn't think of it as science fiction; to them, thanks to George "I can't be bothered to look up the meaning of the word parsec" Lucas, SF is the ultimate in escapism, irrelevant to the real world; it's fantasy, stories that happened a long time ago, in a galaxy far, far away.
I'm not alone in this view. Joe Haldeman has observed that
Star Wars was the worst thing that ever happened to
science fiction, because the general public now equates SF with
escapism. According to The American Heritage Dictionary, escapism is "the avoidance of reality through
fantasy or other forms of diversion." I do not read SF
for escapism, although I do read it for entertainment (which is
the same reason I do a lot of my non-fiction reading). But I,
and most readers of SF, have no interest in avoiding reality.
And yet, SF is seen as having nothing to do with the real world.
At a family reunion in 1998, a great aunt of mine asked me what
I'd been doing lately, and I said I'd spent the last several
months conducting research for my next science-fiction novel.
Well, my aunt, an intelligent, educated woman, screwed up her
face, and said, "What possible research could you do for a
science-fiction book?" SF to her, as to most of the world, is
utterly divorced from reality; it's just crazy stuff we make up
as we go along. And so the bioethicists, the demographers, the futurists, and the analysts may not think of themselves as using the tools of science fiction, but they are.
Our mindset, the mindset honed in the pages of Astounding, the legacy of John Brunner and Isaac Asimov, of Judy Merril and Philip K. Dick, is now central to human thought. Science-fiction writers succeeded beyond their wildest dreams: they changed the way humanity looks at the world.
Years ago, Sam Moskowitz quipped that anyone could have predicted
the automobile, but it would take a science-fiction writer to
predict the traffic jam. In the 1960s, my fellow Canadian,
Marshall McLuhan, made much the same point,
saying that, contrary to the designers' intentions, every new
technology starts out as a boon and ends up as an irritant.
But now, everyone is a science-fiction writer, even if
they never spend any time at a keyboard. When a new technology
comes along, we all look at it not with the wide eyes of a kid on
Christmas morning, but with skepticism. The days when you could
tell the public that a microwave oven would replace the
traditional stove are long gone; we all know that new
technologies aren't going to live up to the hype. About the only really interesting thing the microwave did was create the microwave-popcorn industry; and, of course, microwave popcorn, fast and convenient, is also loaded down with fatty oils to aid the popping, taking away the health benefits normally associated with that food. The upside, the downside: popcorn, the science-fictional snack.
And what I'm talking about is a science-fictional, not a
scientific, perspective. As Dr. David Stephenson, formerly with
the National Research Council of Canada and a frequent science
guest at SF conventions, has observed, scientists are taught from
day one to write in the third-person passive voice: they
distance themselves from their prose, removing from the
discussion both the doer of the action and the person who is
feeling the effects of the action.
But SF writers do what the scientists must not. We long ago left
behind the essentially characterless storytelling practiced by
such early writers as George O. Smith. We now strive for
characterization as sophisticated as that in the best mainstream
literature. Or, to put it another way, science fiction has
evolved beyond being what its founding editor, Hugo Gernsback,
said it should be: merely fiction about science. Indeed, even
Isaac Asimov, known for a rather perfunctory approach to
characterization, knew full well that SF was about the impact
progress has on real people. His definition of science fiction
was "that branch of literature that deals with the responses of
human beings to changes in science and technology."
And those responses, of course, are often irrational, based on
fear and ignorance. But they are responses that cannot be
ignored: we science-fiction readers and writers do
share this planet with the ninety percent of human beings who
believe in angels, who believe in a literal heaven and hell, who
reject evolution. As much as I admire Arthur C. Clarke (and I do, enormously), the most unrealistic thing about his fiction is how darn reasonable everyone is.
On May 31, 1999, CBC television had me appear on its
current-affairs program Midday to discuss whether or not
the space program was a waste of money; I was debating a woman who worked in social services who thought all money, including the tiny, tiny fraction of its gross domestic product that Canada (or even the U.S., for that matter) spends on space, should be used to address problems here on Earth.
And her clincher argument was this (I swear to God, I'm not making this up): "We should be careful about devoting too much time to science. The people who lived in Atlantis were obsessed with science, and that led to their downfall."
My response was to tell her that perhaps if she spent a little
more time reading about science, she'd know that Atlantis was a
myth, and she wouldn't make an ass out of herself on national
television. But the point here, one that I will come back to, is this: she already understood the central 20th-century science-fictional premise of looking carefully at the ramifications of new technologies, such as space travel. But she was unable to look at them rationally, because of her faulty worldview, a worldview that rendered her incapable of separating myth from reality, fact from fiction.
If the central message of science fiction has indeed been co-opted by the public at large, if, as I think is true, Frank Herbert's Dune did as much to raise consciousness about ecology as did Rachel Carson's Silent Spring, then what role is there for science-fiction writers in the new century?
I always say that whenever a discussion at a science-fiction convention brings in Star Trek as an example, we've hit rock bottom; you can't imagine Ruth Rendell turning to Scott Turow at a mystery-fiction conference and saying, "You know, that reminds me of that episode of Murder, She Wrote, in which ..." But I am going to invoke Star Trek here as an example of how quaint and embarrassing SF ends up looking when it continues to push an old message long after society has gotten the point.
In the original Star Trek, we saw women and black people
in important positions. Uhura, the mini-skirted bridge officer,
was hardly the most significant example; much more important were the facts that Kirk's boss, as seen in the episode "Court-Martial," was a black man, played with quiet dignity by Percy Rodriguez, and that the ship's computers, as seen in "The Ultimate Computer," were designed by a Nobel Prize-winning black cyberneticist, played with equal dignity by William Marshall.
During the era of Martin Luther King and the Watts riots, it was
a powerful, important statement to have the white captain of the
Enterprise deferring to black people; as Marshall observed
thirty years later, the single most significant thing about his
guest-starring role was that he, an African-American, was
referred to as "Sir" throughout the episode.
But time passed. In 1993, Paramount made much of the fact that we were going to see a black man as the leader on Star Trek: Deep Space Nine, despite the fact that, by this point,
blacks had been elected to prominent political positions
throughout the United States, and even in South Africa, a bastion
of racism in the 1960s, a black man, Nelson Mandela, was about to
become president. But, somehow, Star Trek thought it was
making a profound statement.
And then, just as embarrassingly, two years later, we were supposed to be stunned by the fact that on Star Trek: Voyager a woman was the captain of a starship; this despite the fact that countries from Great Britain to India to Canada had already had female prime ministers, and that women had risen to prominence in all walks of life.
My colleagues and I have long tried to reflect reality in our
fiction, and so, naturally, we have diverse casts in our stories.
Kurt Vonnegut's famous statement that the most unrealistic thing about science fiction is the preponderance of Americans (practically nobody, he correctly observed, is an American) was no longer news to anybody. And, by all means, in a
Star Trek of the 1990s we should indeed have seen women and
non-whites in prominent roles. But to make it the
message, to try to pass it off as a gutsy thing to do,
looked ridiculous.
Indeed, David Gerrold famously quit working on Star Trek: The Next Generation back in 1987 in part because of that series' failure to acknowledge, in its depiction of the future, the reality that a lot of people are gay; Star Trek had become irrelevant, because the only messages it was comfortable sending out were ones already fully received by the audience.
And, I firmly believe, SF as a whole is now in danger of being
perceived as just as quaint, just as dated, just as irrelevant,
as the current Star Trek is.
In our search for a new role, should we fall back on the one the media has so often cast us in, that of predictors of the future? I don't think so. Many SF writers, myself included, are content to occasionally call themselves "futurists," if that helps get us TV or radio interviews, but we aren't really futurists. Indeed, I'm not sure that anybody is, in the modern sense of the term: someone who claims to be able to predict future trends. Bill Gates is the world's current technological leader, a futurist if ever there was one, and he, of course, is the same man who once said that no one would ever need a computer with more than 640K of memory.
No, when what we science-fiction writers have written about comes
to pass, it usually means society has screwed up. The last thing
George Orwell wanted was for the real year 1984 to turn out
anything like the vision portrayed in his novel.
Orwell, of course, wrote his book in 1948; he simply reversed the last two digits to make it clear that he was really writing about his present day. Science fiction is indeed very much a
literature of its time, and should, of course, be read in
historical context.
Still, anyone who needs further convincing that science fiction
isn't a predictive medium need only look at the events of the
last few decades. Numerous science-fiction writers predicted
that the first humans would set foot upon the Moon in the 1960s, but none of us predicted that we would abandon the Moon (indeed, all manned travel beyond Earth orbit) just three years later. Exactly twelve human beings have walked on the Moon, all white, all male, all American (hardly a representative sampling, but, then again, all of this occurred back when the original Star Trek's message of an interracial future hadn't yet been fully received), and there is no sign that that number will increase in the next couple of decades.
We science-fiction writers also utterly missed the fall of the Soviet Union, something that now, in retrospect, seems inevitable; indeed, it was amazing it lasted as long as it did. But we were writing books like Norman Spinrad's Russian Spring right up to the day of the collapse.
And, perhaps most significant of all, we completely missed the
rise of the Internet and the World Wide Web. The genre that gave
us Isaac Asimov's Multivac, Arthur C. Clarke's HAL 9000,
Robert A. Heinlein's Mycroft Holmes, and even William Gibson's
Wintermute completely failed to predict how the computer
revolution was really going to unfold.
Of course, when something new comes along, such as the terrible plague of AIDS, we're quick to weigh in with speculations. But we're usually so far off the mark that the results end up seeming laughable. Poor Norman Spinrad again: his vision of a world of people having sex with machines (instead of, of course, simply wearing condoms) because of the threat of AIDS, as outlined in his 1988 story "Journals of the Plague Years," seems absolutely ridiculous and alarmist when we look at it now, a scant decade later.
Some science-fiction writers still gamely try to set stories in the far future: a hundred, two hundred, a thousand years down the road. But the predictive horizon is moving ever closer. No
one can make a prediction about what the world will be like even
fifty years from now with any degree of confidence. What will be
the fruits of the Human Genome Project? Will nanotechnology
really work? Will true artificial intelligence emerge? Will
cold fusion, or another clean, unlimited energy source, be
developed? Will humans upload their consciousnesses into
machines? And what wild cards, things we haven't even thought of yet, will appear?
As Bruce Sterling has observed, people in the future won't even
eat; as Nancy Kress has postulated in Beggars in Spain, they may not even sleep. What likely predictions could we
possibly make about such beings?
In May 1967, Arthur C. Clarke revealed his now-famous
"Third Law" during a speech to the American Association of Architects:
"Any sufficiently advanced technology is indistinguishable from
magic." The question, of course, is how far ahead of us "sufficiently advanced" lies, and the answer, I believe, is fifty years; the world of 2050 is utterly beyond our predictive abilities. With the accelerating rate of change, any year-2000 guess as to what 2050 will be like is almost certainly going to be as far off base as a guess Christopher Columbus might have made about what 2000 would be like.
The pressure for SF to change has been building for a long time.
In North America, the sales of science-fiction books that aren't related to Star Trek, Star Wars, or other media properties are the worst they've ever been. Sales are down about fifty percent across the board from 1990, and the readerships of the principal SF magazines, Analog and Asimov's, have been cut in half. There is no doubt that
the reading public is turning away from SF in droves.
The prime cause of the decline in SF readership is that today's young people are finding all the things that have always attracted young people to SF (big ideas, a sense of wonder, action, wish-fulfillment fantasies, stunning visual imagery, nifty aliens, engaging characters) more readily in movies, TV, role-playing games, computer games, and the Internet than in the pages of printed works.
There's no doubt that we've been outclassed in terms of visual
imagery by the wizards at Industrial Light and Magic. Any space battle or alien vista we might care to describe, they can realize more vibrantly in pictures than we can with words. To put it
crudely: in the past, many of our finest SF writers, including
Robert Silverberg and Mike Resnick, supplemented their income by
writing pornographic novels. But there's almost no market left
for porno fiction: what's now shown on videotape is much more
vivid and real than anything the reader can imagine. Well, as
went novels with titles like Nurses in Need, so, too, will
go the space opera that was once a staple of printed SF.
SF will have to change if it is to survive. The public
wants something other than what we've been giving them. One
change we'll likely see is a move away from the far future as a
setting for stories. I don't even think we need to invoke Kim
Stanley Robinson's criterion that SF stories must be set in the
future; I took great pleasure in setting my novel
Frameshift, for instance, entirely in the present day, and
suspect we'll see it become much more common for serious SF
novels to have contemporary settings.
Indeed, if science fiction is going to have relevance in the next century, it must assert itself to be part of real life, not far-off tales of escapism. And that brings me back to where we started. We need a new message for the new millennium. Far be it from me to try to impose an agenda on SF, but I think the agenda is already there, implicit in many of our texts and, indeed, explicit in the actual name of our genre: science fiction.
One of the great intellectual embarrassments of the 20th century
is that nearly five hundred years after Copernicus deposed Earth from the centre of the universe, virtually every newspaper carries a daily astrology column (the horoscopes), but astronomy gets, at best, a column once a week, and in many papers not even that.
It's likewise embarrassing that a hundred and forty years after
the publication of The Origin of Species, ignorant people
are still succeeding in outlawing the teaching of the fact of
evolution.
And it's mortifying that while the SF section of bookstores shrinks like a puddle under the noonday sun, the "New Age" section, full of fabricated stories penned by charlatans, grows like a cancer.
If there is a message science fiction can promulgate for the 21st century, a message that the world needs to hear, it is this: the rational, scientific worldview is the only perspective that effectively deals with reality.
And, at the risk of repeating myself, let me emphasize again that
reality is indeed what science fiction is all about. I cringe
with embarrassment every time I see that stupid t-shirt not quite
concealing a massive belly at a science-fiction convention:
"Reality is just a crutch for people who can't handle science
fiction." What a ridiculous, offensive statement! Science fiction, in its probing of the deep questions, in its abiding concern with moral issues, in its unrelenting quest to expose truth and speculate on consequences, even in its most mind-bending explorations of the quantum nature of the universe, is, more than any other form of entertainment, absolutely about reality.
And reality is the totality of everything; not to invoke
Star Trek again, but in the movie Star Trek IV it is
revealed that Kiri-kin-tha's First Law of Metaphysics is that
"nothing unreal exists," a statement no less profound than
Descartes's "I think, therefore I am."
The scientific method is the single greatest tool of
understanding ever devised by humanity. Observe phenomena.
Propose an explanation for why the phenomena are as they have
been seen to be. Devise an experiment to test whether your
explanation is correct. And, if that experiment fails (and this is the powerful part; this is where the beauty comes in), discard the explanation and start over again.
There will be those who argue that there are other ways of gaining insight into the nature of reality: mystic experiences,
contemplation in the absence of experimentation, divine insight,
consulting ancient texts. Such methods are demonstrably inferior
to the scientific method, for only the scientific method welcomes
the detection of error; only the scientific method allows for
independent verification and replication.
Now, some will say, well, that's the western view, and, after
all, to paraphrase Damon Knight, hardly anyone is a westerner.
Maybe so, but it must be recognized that science fiction is, in fact, a western genre. Fantasy, perhaps, can trace roots all over the world, but science fiction, born of Mary Shelley, nurtured by Jules Verne and H. G. Wells, grew out of the industrial revolution. It is inextricably tied up with western thought.
And the scientific method is the crowning glory of western thought: the glory that allowed us not simply to declare, as the United States's founders did (while they still held slaves), that it is "self-evident that all men are created equal," but rather to prove, through genetic studies showing that genetic variation within races is greater than the average deviation between races, and through psychological and anatomical studies showing that the sexes are equally endowed intellectually, that racism and sexism have in fact no rational basis.
Stephen Jay Gould recently wrote a book called Rocks of Ages: Science and Religion in the Fullness of Life, in which he argues that the spiritual and the rational should have a "loving concordat" but are in fact "nonoverlapping magisteria": utterly separate fields, with some questions solely appropriate to the former and others exclusively the province of the latter.
I reject that: I don't think there's any question, including the most basic philosophical conundrums (where did we come from? why are we here? what does it all mean? and, indeed, the biggest of them all, is there a God?), that cannot be most effectively addressed through the application of the scientific method, especially with its absolute requirement that if an idea, such as the superstition of astrology, is disproven, then it must be willingly discarded.
How can science have anything meaningful to say about whether there is a God? Easily. If the universe had an intelligent designer, it should show signs of intelligent design. Some argue that it clearly does: the relative strengths of the four fundamental forces that drive our universe (gravitation, electromagnetism, the strong nuclear force, and the weak nuclear force) do seem to have been chosen with great care, since any substantial deviation from the present ratios would have resulted in a universe devoid of stars or even atoms.
Likewise, the remarkable thermal properties of water (most notably, that it expands as it freezes and that it has a higher surface tension than any other fluid except liquid selenium) seem specifically jiggered to make life possible.
Do these facts prove whether or not God exists? No, not yet. But the best response to those who say science doesn't hold all the answers is to say, on the contrary, science does indeed hold all the answers; we just don't have all the science yet.
My favourite review of my own work was a recent one for
FlashForward
by Henry Mietkiewicz in The Toronto
Star, who said, "Sawyer compels us to think rationally about
questions we normally consider too metaphysical to grapple with."
But I'm hardly alone in this. Works of science fiction right back to Arthur C. Clarke's short story "The Star" and James Blish's A Case of Conscience, through Carl Sagan's Contact and, more recently, Mary Doria Russell's The Sparrow, and, if I may, my own Nebula-winning The Terminal Experiment and Calculating God, show that SF, because it embraces the scientific method, is the most effective tool for exploring the deepest of all questions.
So, does science fiction have a role in the 21st century?
Absolutely. If we can help shape the Zeitgeist, help inculcate
the belief that rational thought, that discarding superstition,
that subjecting all beliefs to the test of the scientific method,
is the most reasonable approach to any question, then not only
will science fiction have a key role to play in the intellectual
development of the new century, but it will also, finally and at
last, help humanity shuck off the last vestiges of the
supernatural, the irrational, the spurious, the fake, and allow
us to embrace, to quote poet Archibald Lampman, "the wide awe and wonder of the night," but with our eyes wide open and our minds fully engaged. Then, finally, some 40,000 years after consciousness first flickered into being on this world, we will at last truly deserve the name we bestowed upon ourselves: Homo sapiens, Man of Wisdom.
Robert J. Sawyer, "the dean of Canadian science fiction" according to The Ottawa Citizen, won the 1995 Best Novel Nebula Award for The Terminal Experiment; that book and his novels Starplex, Frameshift, and Factoring Humanity have all been finalists for the Hugo Award. Rob's twelfth novel, Calculating God, was published by Tor in June 2000. Visit his extensive web site (called "the largest genre writer's home page in existence" by Interzone) at www.sfwriter.com.