STORY NOTES




“THE MERCHANT AND THE ALCHEMIST’S GATE”

Back in the mid-1990s the physicist Kip Thorne was on a book tour, and I heard him give a talk in which he described how you could—in theory—create a time machine that obeyed Einstein’s theory of relativity. I found it absolutely fascinating. Movies and television have encouraged us to think of time machines as vehicles you ride in, or else some kind of teleporter that beams you to a different era. But what Thorne described was more like a pair of doors, where anything that goes in or comes out of one door will come out or go into the other door a fixed period of time later. Several questions raised by vehicular or transporter-style time machines—what about the movement of the Earth, why haven’t we seen visitors from the future yet—were answered by this type of time machine. Even more interesting was the fact that Thorne had performed some mathematical analysis indicating that you couldn’t change the past with this time machine, and that only a single, self-consistent timeline was possible.

Most time-travel stories assume that it’s possible to change the past, and the ones in which it’s not possible are often tragic. While we can all understand the desire to change things in our past, I wanted to try writing a time-travel story where the inability to do so wasn’t necessarily a cause for sadness. I thought that a Muslim setting might work, because acceptance of fate is one of the basic articles of faith in Islam. Then it occurred to me that the recursive nature of time-travel stories might mesh well with the “Arabian Nights” convention of tales within tales, and that sounded like an interesting experiment.


“EXHALATION”

This story has two very different inspirations. The first was a short story by Philip K. Dick called “The Electric Ant,” which I read as a teenager. In it the protagonist goes to a doctor for a routine visit and is told, to his utter surprise, that he’s actually a robot. Later on, he opens up his own chest and sees a spool of punch tape that’s slowly unwinding to produce his subjective experience. That image of a person literally looking at his own mind has always stayed with me.

The second was the chapter in Roger Penrose’s book The Emperor’s New Mind in which he discusses entropy. He points out that there’s a sense in which it’s incorrect to say we eat food because we need the energy it contains. The conservation of energy means that it is neither created nor destroyed; we are radiating energy constantly, at pretty much the same rate that we absorb it. The difference is that the heat energy we radiate is a high-entropy form of energy, meaning it’s disordered. The chemical energy we absorb is a low-entropy form of energy, meaning it’s ordered. In effect, we are consuming order and generating disorder; we live by increasing the disorder of the universe. It’s only because the universe started in a highly ordered state that we are able to exist at all.
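For readers who want the arithmetic behind that claim, here is a rough back-of-the-envelope version using the Earth-and-Sun example Penrose also discusses; the temperatures below are round approximations of my own, not figures from the story. Heat delivered at temperature T carries entropy of roughly Q/T (up to a numerical factor close to one), so for the same quantity of energy passing through:

\[
  S \approx \frac{Q}{T},
  \qquad
  \frac{S_{\text{out}}}{S_{\text{in}}}
  \approx \frac{Q/T_{\text{Earth}}}{Q/T_{\text{Sun}}}
  = \frac{T_{\text{Sun}}}{T_{\text{Earth}}}
  \approx \frac{5800\ \text{K}}{290\ \text{K}}
  \approx 20.
\]

The same energy arrives as sunlight effectively emitted at about 5,800 K and leaves as infrared at about 290 K, so it departs carrying roughly twenty times the entropy it brought in. Energy is conserved; what living things use up is the orderliness of that energy.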

The idea is simple enough, but I had never seen it expressed that way until I read Penrose’s explanation. I wanted to see if I could convey that idea in fictional form.


“WHAT’S EXPECTED OF US”

There’s a sketch by Monty Python about a joke that’s so funny that anyone who hears or reads it dies laughing. It’s an example of an old trope that has acquired the name “the motif of harmful sensation”: the idea that you could die simply by hearing or seeing something. Or, depending on the version, by understanding something; in the Monty Python sketch, English speakers could safely recite the German version of the joke as long as they didn’t understand what they were saying.

Most versions of this trope involve some element of the supernatural; for example, horror fiction often features cursed books that drive people mad. I was wondering if a nonsupernatural version of this might be possible, and it occurred to me that a truly convincing argument that life was pointless might qualify. It’s not something that would work instantaneously; the argument would take time to fully sink in, but that just means it would spread further as people repeated it to others while they mulled it over.

The safeguard against this, of course, is that even an airtight argument won’t convince everyone who hears it. Arguments are simply too abstract to sway most people. A physical demonstration, on the other hand, would be much more effective.


“THE LIFECYCLE OF SOFTWARE OBJECTS”

Science fiction is filled with artificial beings who, like Athena out of the head of Zeus, spring forth fully formed, but I don’t believe consciousness actually works that way. Based on our experience with human minds, it takes at least twenty years of steady effort to produce a useful person, and I see no reason that teaching an artificial being would go any faster. I wanted to write a story about what might happen during those twenty years.

I was also interested in the idea of emotional relationships between humans and AIs, and I don’t mean humans becoming infatuated with sex robots. Sex isn’t what makes a relationship real; the willingness to expend effort maintaining it is. Some lovers break up with each other the first time they have a big argument; some parents do as little for their children as they can get away with; some pet owners ignore their pets whenever they become inconvenient. In all of those cases, the people are unwilling to make an effort. Having a real relationship, whether with a lover or a child or a pet, requires that you be willing to balance the other party’s wants and needs with your own.

I’ve read stories in which people argue that AIs deserve legal rights, but in focusing on that big philosophical question, these stories gloss over a more mundane reality. It’s similar to the way movies always depict love in terms of grand romantic gestures when, over the long term, love also means working through money problems and picking dirty laundry off the floor. So while achieving legal rights for AIs would be a major step, another milestone that would be just as important is people putting real effort into their individual relationships with AIs.

And even if we don’t care about them having legal rights, there’s still good reason to treat conscious machines with respect. You don’t have to believe that bomb-sniffing dogs deserve the right to vote to recognize that abusing them is a bad idea. Even if all you care about is how well they can detect bombs, it’s in your best interest that they be treated well. No matter whether we want AIs to fill the role of employees, lovers, or pets, I suspect they will do a better job if, during their development, there were people who cared about them.

Finally, let me quote Molly Gloss, who gave a speech in which she talked about the impact that being a mother had on her as a writer. Raising a child, she said, “puts you in touch, deeply, inescapably, daily, with some pretty heady issues: What is love and how do we get ours? Why does the world contain evil and pain and loss? How can we discover dignity and tolerance? Who is in power and why? What’s the best way to resolve conflict?” If we want to give an AI any major responsibilities, then it will need good answers to these questions. That’s not going to happen by loading the works of Kant into a computer’s memory; it’s going to require the equivalent of good parenting.


“DACEY’S PATENT AUTOMATIC NANNY”

In general I’m incapable of writing a story around a specified theme, but on rare occasions it works out. Jeff VanderMeer was editing an anthology built around museum exhibits of imaginary artifacts: artists would create illustrations of the artifacts, and writers would provide descriptive text to accompany them. The artist Greg Broadmore proposed the idea of an “automatic nanny,” a “subrobotic machine, designed to look after an infant,” and that felt like something I could work with.

The behaviorist psychologist B. F. Skinner designed a special crib for his daughter, and there’s a persistent myth that she grew up psychologically damaged and eventually committed suicide. It’s completely false; she grew up healthy and happy. On the other hand, consider the psychologist John B. Watson, known as the founder of behaviorism. He advised parents, “When you are tempted to pet your child, remember that mother love is a dangerous instrument,” and he shaped views on child-rearing for the first half of the twentieth century. He believed that his approach was in the best interests of the child, but all of his own children suffered from depression as adults, with more than one attempting suicide and one succeeding.


“THE TRUTH OF FACT, THE TRUTH OF FEELING”

Back in the late 1990s I heard a presentation about the future of personal computing, and the speaker pointed out that one day it would be possible to keep a permanent video recording of every moment of your life. It was a bold claim—at the time, hard disk space was too expensive to use for storing video—but I realized he was right: eventually, you’d be able to record everything. And even though I didn’t know what form it would take, I felt certain this would have a profound impact on the human psyche. Intellectually we are aware that our memories are fallible, but rarely do we have to confront it. What would it do to us to have a truly accurate memory?

Every few years, I would be reminded of this question and think about it again, but I never made any headway on building a story around it. Memoirists have written eloquently about the malleability of memory, and I didn’t want to simply rehash what they’ve already said. Then I read Walter Ong’s Orality and Literacy, a book about the impact of the written word on oral cultures; while some of the stronger claims in the book have come under question, I still found it eye-opening. It suggested to me that there might be a parallel to be drawn between the last time a technology changed our cognition and the next time.


“THE GREAT SILENCE”

There are actually two pieces titled “The Great Silence,” only one of which can fit in this collection. This requires a little explanation.

Back in 2011 I was a participant in a conference called “Bridge the Gap,” whose purpose was to promote dialogue between the arts and the sciences. One of the other participants was Jennifer Allora, half of the artist duo Allora & Calzadilla. I was completely unfamiliar with the kind of art they created—hybrids of performance art, sculpture, and sound—but I was fascinated by Jennifer’s explanation of the ideas they were engaged with.

In 2014 Jennifer got in touch with me about the possibility of collaborating with her and her partner, Guillermo. They wanted to create a multiscreen video installation about anthropomorphism, technology, and the connections between the human and nonhuman worlds. Their plan was to juxtapose footage of the radio telescope in Arecibo with footage of the endangered Puerto Rican parrots that live in a nearby forest, and they asked if I would write subtitle text that would appear on a third screen, a fable told from the point of view of one of the parrots, “a form of interspecies translation.” I was hesitant, not only because I had no experience with video art, but also because fables aren’t what I usually write. But after they showed me a little preliminary footage I decided to give it a try, and in the following weeks we exchanged thoughts on topics like glossolalia and the extinction of languages.

The resulting video installation, titled “The Great Silence,” was shown at Philadelphia’s Fabric Workshop and Museum as part of an exhibition of Allora & Calzadilla’s work. I have to admit that when I saw the finished work, I regretted a decision I had made earlier. Jennifer and Guillermo had previously invited me to visit the Arecibo Observatory myself, but I had declined because I didn’t think it was necessary for me to write the text. Seeing footage of Arecibo on a wall-sized screen, I wished I had said yes.

In 2015, Jennifer and Guillermo were asked to contribute to a special issue of the art journal e-flux as part of the fifty-sixth Venice Biennale, and they suggested publishing the text from our collaboration. I hadn’t written the text to stand alone, but it turned out to work pretty well even when removed from its intended context. That was how “The Great Silence,” the short story, came to be.


“OMPHALOS”

What we now call young-earth creationism used to be common sense; up until the 1600s, it was widely assumed that the world was of recent origin. But as naturalists began looking at their environment more closely, they found clues that called this assumption into question, and over the last four hundred years, those clues have multiplied and interlocked to form the most definitive rebuttal imaginable. What would the world have to look like, I wondered, for it to confirm that original assumption?

Some aspects were easy to imagine: trees without growth rings, skulls without sutures. But when I started thinking about the night sky, answering the question became significantly harder. Much of modern astronomy is premised on the Copernican principle, the idea that we are not at the center of the universe and are not observing it from a privileged position; this is pretty much the opposite of young-earth creationism. Even Einstein’s theory of relativity, which presupposes that physics should look the same no matter how fast you’re moving, is an outgrowth of the Copernican principle. It seemed to me that if humanity really were the reason the universe was made, then relativity shouldn’t be true; physics should behave differently in different situations, and that should be detectable.


“ANXIETY IS THE DIZZINESS OF FREEDOM”

In discussions about free will, a lot of people say that for an action of yours to be freely chosen—for you to bear moral responsibility for that action—you must have had the ability to do something else under exactly the same circumstances. Philosophers have argued endlessly about what exactly this means. Some have pointed out that when Martin Luther defended his actions to the church in 1521, he reportedly said, “Here I stand, I can do no other,” i.e., he couldn’t have done anything else. But does that mean we shouldn’t give Luther credit for his actions? Surely we don’t think he would be worthier of praise if he had said, “I could have gone either way.”

Then there’s the many-worlds interpretation of quantum mechanics, which is popularly understood to mean that our universe is constantly splitting into a near-infinite number of differing versions. I’m largely agnostic about the idea, but I think its proponents would encounter less resistance if they made more modest claims about its implications. For example, some people argue that it renders our decisions meaningless, because whatever you do there’s always another universe in which you make the opposite choice, negating the moral weight of your decision.

I’m pretty confident that even if the many-worlds interpretation is correct, it doesn’t mean that all of our decisions are canceled out. If we say that an individual’s character is revealed by the choices they make over time, then, in a similar fashion, an individual’s character would also be revealed by the choices they make across many worlds. If you could somehow examine a multitude of Martin Luthers across many worlds, I think you’d have to go far afield to find one that didn’t defy the church, and that would say something about the kind of person he was.
