There were interstices in my work with Dr. Kuroda—protracted lacunae while I waited for his text replies or for him to direct me to link to another bit of code he had written.
In those gaps I sought to learn more about Caitlin, about this human who had reached down and helped draw me up out of the darkness.
There was no Wikipedia entry on her, meaning, I supposed, that she was not—yet!—noteworthy. And—
Ah, wait—wait! Yes, there was no entry on her, but there was one on her father, Malcolm Decter… and Wikipedia saved not just the current version of its entries, but all previous versions, as well. Although there was no mention of Caitlin in the current draft, a previous iteration had contained this: “Has one daughter, Caitlin Doreen, blind since birth, who lives with him; it’s been speculated that Decter’s decline in peer-reviewed publications in recent years has been because of the excessive demands on his time required to care for a disabled child.”
That had been removed thirteen days ago. The change log gave only an IP address, not a user name. The IP address was the one for the Decter household; the change could have been made (among other possibilities) by Caitlin, her parents, or that other man—Dr. Kuroda, I now knew—whom I had often seen there.
The deletion might have been made because Caitlin had ceased to be blind.
But…
But it seemed more likely that this text was cut because someone—presumably Caitlin herself—didn’t like what it said.
But I was merely inferring that. It was possible to study Caitlin more directly—and so I did.
In short order, I read everything she’d ever put publicly online: every blog post, every comment to someone else’s blog, every Amazon.com review she’d written. But—
Hmm.
There was much she had written that I could not access. Her Yahoo mail account contained all the messages she had received, and all the messages she had sent, but access was secured by a password.
A nettlesome situation; I’d have to do something about it.
LiveJournal: The Calculass Zone
Title: Changing of the Guard
Date: Saturday 6 October, 00:55 EDT
Mood: Astonished
Location: Waterloo
Music: Lee Amodeo, “Nightfall”
I got a feeling I’m going to be pretty scarce for the next little while, folks. Things they be a-happenin’. It’s all good—miraculous, even—but gotta keep it on the DL. Suffice it to say that I told my parents something el mucho grande tonight, and they didn’t freak. Hope other people take it as well as they did…
Even though she was exhausted, Caitlin updated her LiveJournal, skimmed her friends’ LJs, updated her Facebook page (where she changed her status to “Caitlin thinks it’s better to give than to receive”), and then checked her email. There was a message from Bashira with the subject, “One for the math genius.”
When she’d been younger, Caitlin had liked the sort of mathematical puzzles that sometimes circulated through email: they’d made her feel smart. These days, though, they mostly bored her. It was rare for one to present much of a challenge to her, but the one in Bashira’s message did. It was related to an old game show, apparently, something called Let’s Make a Deal that had starred a guy named Monty Hall. In it, contestants are asked to pick one of three doors. Behind one of them is a new car, and behind each of the others is a goat—meaning the odds are one in three that the contestant is going to win the car.
The host knows which door has the car behind it and, after the contestant picks a door, Monty opens one of the unchosen ones and reveals that it was hiding a goat. He then asks the player, “Do you want to switch to the other unopened door?”
Bashira asked: Is it to the contestant’s advantage to switch?
Of course not, thought Caitlin. It didn’t make any difference if you switched or not; one remaining door had a car behind it and the other had a goat, and the odds were now fifty-fifty that you’d picked the right door.
Except that that’s not what the article Bashira had forwarded said. It contended that your chances of winning the car are much better if you switch.
And that, Caitlin was sure, was just plain wrong. She figured someone else must have written up a refutation to this puzzle before, so she googled. It took her a few minutes to find what she was looking for; the appropriate search terms turned out to be “Monty Hall problem,” and—
What the hell?
“…When the problem and the solution appeared in Parade, ten thousand readers, including nearly a thousand Ph.D.s, wrote to the magazine claiming the published solution was wrong. Said one professor, ‘You blew it! Let me explain: If one door is shown to be a loser, that information changes the probability of either remaining choice—neither of which has any reason to be more likely—to 1/2. As a professional mathematician, I’m very concerned with the general public’s lack of mathematical skills. Please help by confessing your error and, in the future, being more careful.’ ”
The person who had written the disputed answer was somebody called Marilyn vos Savant, who apparently had the highest IQ on record. But Caitlin didn’t care how high the lady’s IQ was; she agreed with the readers who said vos Savant had blown it. The woman had to be wrong.
And, as Caitlin liked to say, she was an empiricist at heart. The easiest way to prove to Bashira that vos Savant was wrong, it seemed to her, would be by writing a little computer program that would simulate a lot of runs of the game. And, even though she was exhausted, she was also pumped from her conversations with Webmind; a little programming would be just the thing to let her relax. She only needed fifteen minutes to whip up something to do the trick, and—
Holy crap.
It took just seconds to run a thousand trials, and the results were clear. If you switched doors when offered the opportunity to do so, your chance of winning the car was about twice as good as it was when you kept the door you’d originally chosen.
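Caitlin’s actual program isn’t shown, but a simulation like hers takes only a few lines. A minimal sketch in Python (the language, the door numbering, and the trial count are assumptions for illustration, not details from the story):

```python
import random

def play_round(switch):
    """One Monty Hall round; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)    # the door hiding the car
    pick = random.choice(doors)   # the contestant's initial pick
    # Monty opens a door that is neither the pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Move to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 1000
stay = sum(play_round(switch=False) for _ in range(trials))
swap = sum(play_round(switch=True) for _ in range(trials))
print(f"stay: {stay}/{trials}, switch: {swap}/{trials}")
# The initial pick is right 1/3 of the time; switching wins exactly
# when the initial pick was wrong, so it wins about 2/3 of the time.
```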
But that just didn’t make sense. Nothing had changed! The host was always going to reveal a door that had a goat behind it, and there was always going to be another door that hid a goat, too.
She decided to do some more googling—and was pleased to find that Paul Erdős hadn’t believed the published solution until he’d watched hundreds of computer-simulated runs, too.
Erdős had been one of the twentieth century’s leading mathematicians, and he’d co-authored a great many papers. The “Erdős number” was named after him: if you had collaborated with Erdős yourself, your Erdős number was 1; if you had collaborated with someone who had directly collaborated with Erdős, your number was 2, and so on. Caitlin’s father had an Erdős number of 4, she knew—which was quite impressive, given that her dad was a physicist and not a mathematician.
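An Erdős number is simply a shortest-path distance in the co-authorship graph, something a breadth-first search computes directly. A minimal sketch in Python (the graph below is invented for illustration; real collaboration data would come from a bibliography database):

```python
from collections import deque

def erdos_number(coauthors, author):
    """Breadth-first search: shortest collaboration distance from Erdos."""
    seen, queue = {"Erdos"}, deque([("Erdos", 0)])
    while queue:
        name, dist = queue.popleft()
        if name == author:
            return dist
        for other in coauthors.get(name, ()):
            if other not in seen:
                seen.add(other)
                queue.append((other, dist + 1))
    return None  # no chain of co-authorship connects them

# Toy co-authorship graph; every name besides Erdos is invented.
graph = {
    "Erdos": {"Alice"},
    "Alice": {"Erdos", "Bob"},
    "Bob": {"Alice", "Decter"},
    "Decter": {"Bob"},
}
print(erdos_number(graph, "Decter"))  # prints 3 for this toy graph
```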
How could she—let alone someone like Erdős—have been wrong? It was obvious that switching doors should make no difference!
Caitlin read on and found a quote from a Harvard professor, who, in conceding at last that vos Savant had been right all along, said, “Our brains are just not wired to do probability problems very well.”
She supposed that was true. Back on the African savanna, those who mistook every bit of movement in the grass for a hungry lion were more likely to survive than those who dismissed each movement as nothing to worry about. If you always assume that it’s a lion, and nine times out of ten you’re wrong, at least you’re still alive. If you always assume that it’s not a lion, and nine times out of ten you’re right—you end up dead. It was a fascinating and somewhat disturbing notion: that humans had been hardwired through genetics to get certain kinds of mathematical problems wrong—that evolution could actually program people to be incorrect about things.
Caitlin felt her watch, and, astonished at how late it had become, quickly got ready for bed. She plugged her eyePod into the charging cable and deactivated the device, shutting off her vision; she had trouble sleeping if there was any visual stimulation.
But although she was suddenly blind again, she could still hear perfectly well—in fact, she heard better than most people did. And, in this new house, she had little trouble making out what her parents were saying when they were talking in their bedroom.
Her mother’s voice: “Malcolm?”
No audible reply from her father, but he must have somehow indicated that he was listening, because her mother went on: “Are we doing the right thing—about Webmind, I mean?”
Again, no audible reply, but after a moment, her mother spoke: “It’s like—I don’t know—it’s like we’ve made first contact with an alien lifeform.”
“We have, in a way,” her father said.
“I just don’t feel competent to decide what we should do,” her mom said. “And—and we should be studying this, and getting others to study it, too.”
Caitlin shifted in her bed.
“There’s no shortage of computing experts in this town,” her father replied.
“I’m not even sure that it’s a computing issue,” her mom said. “Maybe bring some of the people at the Balsillie on board? I mean, the implications of this are gigantic.”
Research in Motion—the company that made BlackBerrys—had two founders: Mike Lazaridis and Jim Balsillie. The former had endowed the Perimeter Institute, and the latter, looking for a different way to make his mark, had endowed an international-affairs think tank here in Waterloo.
“I don’t disagree,” said Malcolm. “But the problem may take care of itself.”
“How do you mean?”
“Even with teams of programmers working on it, most early versions of software crash. How stable can an AI be that emerged accidentally? It might well be gone by morning…”
That was the last she heard from her parents that night. Caitlin finally drifted off to a fitful sleep. Her dreams were still entirely auditory; she woke with a start in the middle of one in which a baby’s cry had suddenly been silenced.
“Where’s that bloody AI expert?” demanded Tony Moretti.
“I’m told he’s in the building now,” Shelton Halleck said, putting a hand over his phone’s mouthpiece. “He should be—”
The door opened at the back of the WATCH mission-control room, and a broad-shouldered, redheaded man entered, wearing a full-bird Air Force colonel’s service-dress uniform; he was accompanied by a security guard. A WATCH visitor’s badge was clipped to his chest beneath an impressive row of decorations.
Tony had skimmed the man’s dossier: Peyton Hume, forty-nine years old; born in St. Paul, Minnesota; Ph.D. from MIT, where he’d studied under Marvin Minsky; twenty years in the Air Force; specialist in military expert systems.
“Thank you for coming in, Colonel Hume,” Tony said. He nodded at the security guard and waited for the man to leave, then: “We’ve got something interesting here. We think we’ve uncovered an AI.”
Hume’s blue eyes narrowed. “The term ‘artificial intelligence’ is bandied about a lot. What precisely do you mean?”
“I mean,” said Tony, “a computer that thinks.”
“Here in the States?”
“We’re not sure where it is,” said Shel from his workstation. “But it’s talking to someone in Waterloo, Canada.”
“Well,” said Hume, “they do a lot of good computing work up there, but not much of it is AI.”
“Show him the transcripts,” Tony said to Aiesha. And then, to Hume: “ ‘Calculass’ is a teenage girl.”
Aiesha pressed some keys, and the transcript came up on the right-hand big screen.
“Jesus,” said Hume. “That’s a teenage girl administering the Turing tests?”
“We think it’s her father, Malcolm Decter,” said Shel.
“The physicist?” replied Hume, orange eyebrows climbing his high, freckled forehead. He made an impressed frown.
The closest analysts were watching them intently; the others had their heads bent down, busily monitoring possible threats.
“So, have we got a problem here?” asked Tony.
“Well, it’s not an AI,” said Hume. “Not in the sense Turing meant.”
“But the tests…” said Tony.
“Exactly,” said the colonel. “It failed the tests.” He looked at Shel, then back at Tony. “When Alan Turing proposed this sort of test in 1950, the idea was that you asked something a series of natural-language questions, and if you couldn’t tell by the responses that the thing you were conversing with was a computer, then it was, by definition, an artificial intelligence—it was a machine that responded the way a human does. But Professor Decter here has very neatly proven the opposite: that whatever they’re talking to is just a computer.”
“But it’s behaving as though it’s conscious,” said Tony.
“Because it can carry on a conversation? It’s an intriguing chatbot, I’ll give you that, but…”
“Forgive me, sir, but are you sure?” Tony said. “You’re sure there’s no threat here?”
“A machine can’t be conscious, Mr. Moretti. It has no internal life at all. Whether it’s a cash register figuring out how much tax to add to a bill, or”—he gestured at a screen—“that, a simulation of natural-language conversation, all any computer does is addition and subtraction.”
“What if it’s not a simulation?” said Shel, getting up from his chair and walking over to join them.
“Pardon?” said Hume.
“What if it’s not a simulation—not a program?”
“How do you mean?” asked Hume.
“I mean we can’t trace it. It’s not that it’s anonymized—rather, it simply doesn’t source from any specific computer.”
“So you think it’s—what? Emergent?”
Shel crossed his arms in front of his chest, the snake tattoo facing out. “That’s exactly what I think, sir. I think it’s an emergent consciousness that’s arisen out of the infrastructure of the World Wide Web.”
Hume looked back at the screen, his blue eyes tracking left and right as he reread the transcripts.
“Well?” said Tony. “Is that possible?”
The colonel frowned. “Maybe. That’s a different kettle of fish. If it’s emergent, then—hmmm.”
“What?” said Tony.
“Well, if it spontaneously emerged, if it’s not programmed, then who the hell knows how it works. Computers do math, and that’s all, but if it’s something other than a computer—if it’s, Christ, if it’s a mind, then…”
“Then what?”
“You’ve got to shut it down,” Hume said.
“Are you sure?”
He nodded curtly. “That’s the protocol.”
“Whose protocol?” demanded Tony.
“Ours,” said Hume. “DARPA did the study back in 2001. And the Joint Chiefs adopted it as a working policy in 2003.”
“Aiesha, tie into the DARPA secure-document archive,” said Tony.
“Done,” she said.
“What’s the protocol called?” asked Tony.
“Pandora,” said Hume.
Aiesha typed something. “I’ve found it,” she said, “but it’s locked, and it’s rejecting my password.”
Tony sidled over to her station, leaned over, and typed in his password. The document came up on Aiesha’s monitor, and Tony threw it onto the middle big screen.
“Go to the last page before the index,” Colonel Hume said.
Aiesha did so.
“There,” said Hume. “ ‘Given that an emergent artificial intelligence will likely increase its sophistication moment by moment, it may rapidly exceed our abilities to contain or constrain its actions. If absolute isolation is not immediately possible, terminating the intelligence is the only safe option.’ ”
“We don’t know where it’s located,” Shelton said.
“You better find out,” said Colonel Hume. “And you better get the Pentagon on the line, but I’m sure they’ll concur. We’ve got to kill the damn thing right now—before it’s too late.”