MINCEMEAT

This was a really stupid idea, thinks Peter. He’s walking completely alone in the dim light through a run-down old industrial area, which has plenty of industrial buildings but no more industry. He stands in front of a heavy steel door and checks whether the number of the building matches the one on the note in his hand. Because he can find neither a doorbell nor a door handle, he begins to knock and call out: “Hello? Hello? Is someone there?”

With a humming sound, a surveillance camera turns toward him. A woman—who could easily play an extra in a virtual reality remake of The Walking Dead without any help from a makeup artist whatsoever—comes shuffling around the corner of the building, lured by his calls. Peter turns pleadingly toward the camera.

“Does the… er… the old man live here?”

Nothing happens. The woman shuffles closer and closer. Peter isn’t wondering what diseases the woman has, but rather if there are any she doesn’t have.

“Heeey,” she cries. “Heey yooouuuu!”

“Please,” says Peter to the camera, “I was told you could help me!”

The woman is now just five steps away.

“Kiki sent me,” says Peter pleadingly.

Suddenly, the door opens. Peter slips in. The door closes with a whooshing sound.

“Heey!” he hears the woman call from outside. “Heeeyyy!”

He is standing alone in a dark hallway. A monitor lights up. Beneath it, a compartment opens. The instruction “Place all technical devices in the compartment” appears on the monitor.

Peter puts his QualityPad in.

“All of them” flashes up on the monitor.

“I don’t have any more,” says Peter.

“Earworm” flashes up on the monitor.

Peter tugs four times on his right earlobe. The earworm undocks itself from his blood supply and crawls along the ear canal into the outer ear. It tickles. Peter carefully picks up the tiny thing with his thumb and forefinger and places it in the less-than-trustworthy-looking compartment. He reminds himself to disinfect the earworm before he puts it back in again.

An elevator door opens. Peter gets in, the door closes, and the elevator begins to rattle its way upward. When the doors open again, Peter has no idea what floor he’s on. Luminous tape on the wall begins to glow, and Peter follows it until he’s standing in a thirty-two-square-meter room in front of a bulletproof glass screen that divides the room in half.

Behind the screen, in a room stuffed with electronic devices, sits a withered old man with a wild beard. All of the devices look strange. Antiquated, yes, but there’s something else.

“Are you, erm, the old man?” calls Peter hesitantly.

“Well, I’m certainly not a young one,” croaks the man, chuckling. “And there’s no need to shout like that. Everything each of us says is electronically enhanced for the other. A local system, in case you’re interested. There’s no connection to the net.”

Now Peter realizes what was confusing him about the computers. All their cameras and microphones have been removed, and no one has gone to the effort of masking the amputations. The machines sit around the old man, deaf and blind, as though covered with gaping wounds. The only object on Peter’s side of the bulletproof glass screen is an old, untrustworthy-looking folding chair. Peter stays on his feet and decides to ignore everything and come straight to the point.

“I have a problem,” he says.

“Aha,” mumbles the old man.

“And Kiki told me that you might be able to help me.”

“Are you a God-fearing man?” asks the old man suddenly.

“Erm,” says Peter in surprise. “I don’t believe there is a God.”

“Oh,” says the old man. “But there will be…”

“What do you mean by that?”

“Are you familiar with the concept of the super intelligence?”

“Not really.”

“No, you don’t look like you are,” says the old man, chuckling. “Are you familiar with the difference between a weak and a strong artificial intelligence?”

“Just vaguely,” says Peter. “A weak AI is constructed for a specific role. To steer a car, for example. Or to return unwanted products. And they can be very annoying.”

“Yes, something like that. And a strong AI?”

“A strong artificial intelligence doesn’t have to be programmed for one specific task. It’s a general problem-solving machine that can successfully carry out all intellectual tasks that a human can master. And that perhaps even has a genuine consciousness. But something like that doesn’t exist.”

“Aha,” says the old man. “It looks like someone hasn’t read the news recently. Allegedly there is now such a strong AI in existence. And we might even be ruled by it…” He points toward one of his monitors, on which a campaign commercial by the Progress Party is playing.

“John of Us?” asks Peter. “John of Us is a super intelligence?”

The old man chuckles. “Have you been following his election campaign? No. Not a super intelligence. No.” He ponders. “On the other hand…”

“What?” asks Peter.

“I just remembered an old quote: ‘Every machine that is clever enough to pass the Turing test could also be clever enough not to pass it.’”

“I don’t understand.”

“Never mind,” says the old man.

“What’s the Turing test?”

“In 1950, Alan Turing suggested a method that allegedly makes it possible to establish whether a machine has a thought capacity equal to that of a human.”

“And how does it work?”

“The human being gets two conversation partners whom he can neither hear nor see. They communicate via keyboard. One of the conversation partners is a human, the other is an artificial intelligence. If the questioner doesn’t succeed in finding out which of his conversation partners is human and which is a machine, the AI has a thought capacity equal to that of a human.”

“I understand.”

“But do you really? By the way, the machines usually betray themselves by being too friendly and polite.” The old man chuckles. “Now, John of Us is certainly a strong AI. An AI that can do everything a human being can. Only faster, of course, and without any mistakes. And what is the most important ability we human beings have? What made us into the world-conquering species we are today?”

“I have no idea,” says Peter. “The ability to form communities? Empathy? Love?”

“Oh, sure, but those are just trinkets!” cries the old man. “No, we can make tools. Machines! Now do you understand what I’m getting at?”

“No,” says Peter. “Not really.”

“A strong AI is an intelligent machine capable of creating an even more intelligent machine, which in turn is capable of creating an even more intelligent machine. Recursive self-improvement. It would result in an intelligence explosion! Now, of course our John is forbidden, for obvious reasons, from improving himself. But let’s suppose he finds a way of getting around the ban—or that the next people who develop a strong AI don’t equip their creation with such a ban… What would happen then?”

“I don’t know, but I’m sure you’re about to tell me.”

“A super intelligence would come into being. An intelligence far beyond our modest powers of imagination. And it certainly wouldn’t be so stupid as to wait in a central computer and risk being turned off. It would decentralize itself and distribute itself across the network, where it would have access to billions of cameras, microphones, and sensors. It would be omnipresent. It would have access to all the data and information that has ever been collected, and it would be capable of extrapolating this statistically into the future. It would be omniscient. And of course it would be capable of changing at will not just the virtual world, but our physical world, too, because almost everything can be controlled over the internet. It would be omnipotent. Now tell me, what do you call a being which is omnipresent, omniscient, and omnipotent?”

“God?” asks Peter.

The old man smiles. “Yes. So now you’ll understand what I mean when I say that, in an ironic twist to everything the religions tried to teach us, it wasn’t God that created humanity, but humanity that will create a God.”

Peter thinks for thirteen seconds.

“Be that as it may,” he says eventually, “this is all very interesting, but not my problem! I came to you because—” Then he interrupts himself. “Will it be a benevolent God?”

“Yes, that’s the question,” says the old man. “The most critical question, even. Generally speaking, there are three possibilities: the super intelligence could be benevolent toward us, to varying degrees; it could be hostile toward us, again to varying degrees; or it could be indifferent to us. The problem is that even an indifferent God could be catastrophic for us, in much the same way that we’re not really hostile toward animals and yet have destroyed their habitat regardless. God could simply decide, for example, that the production of chocolate hazelnut spread ties up too many resources that he needs for other things. Then there would be no more chocolate hazelnut spread. That would be tragic. I mean, perhaps hazelnuts have unimagined data-storage capacities, far beyond those of an average roll of sticky tape. Perhaps the super intelligence will also decide that the entire foodstuff production industry is a waste of resources.”

“Why are you telling me all of this?” asks Peter. “It has absolutely nothing to do with my problem.”

“I’m telling you this,” says the old man, “because I believe that everyone should know about it. I’m telling you this so that you see that your problem, whatever it might be, will soon be completely meaningless, and your existence pointless.”

“Well, thanks,” says Peter. “That’s a great help.”

“My pleasure.”

Peter wants to contradict what the old man has said. “But there’s also immense potential,” he says. “If we could somehow create the super intelligence in such a way that it likes us…”

“Of course,” says the old man. “Then it could be paradise. Happiness beyond all the powers of our imagination. But…” He hesitates.

“What?” asks Peter.

“Even a super intelligence that is benevolent toward us—” begins the old man.

“—could have catastrophic consequences?” asks Peter.

“Yes. Just imagine that, in his goodness, God offers to take on all of our work.”

“Sounds wonderful.”

“Really? Imagine that you’re an architect, but every building you want to build, God could build much quicker, much more cheaply, and much better than you. Imagine you’re a poet, but every poem you want to write, God could write more quickly, more beautifully, more artistically than you. Imagine you’re a doctor, but every person you want to heal, God could heal them much more quickly, with less pain, and more lastingly. Imagine you’re an excellent lover, but every woman you want to satisfy, God could—”

“All of my problems would be meaningless and my existence pointless,” says Peter.

“Indeed,” mumbles the old man. “And even if we succeeded in anchoring protective directives so deeply in the super intelligence that it couldn’t be rid of them or even want to be rid of them—which is very unlikely, but let’s imagine it regardless—there would still be the problem that the opposite of good is often well-meant. Are you familiar with the Asimov laws?”

“No.”

“Isaac Asimov formulated the three laws of robotics back in 1942. They were: firstly, a robot may not injure a human being or, through inaction, allow a human being to come to harm; secondly, a robot must obey the orders given to it by a human being, unless these orders conflict with the First Law; and thirdly, a robot must protect its own existence, as long as this protection does not conflict with the First or Second Law. Sounds good, don’t you think?”

“Yes, I suppose so.”

“Except that even Asimov himself dedicated almost his entire working life to the paradoxes and problems that result from these laws. For example: imagine we were to equip the super intelligence with the directive of protecting all humans.”

“Sounds sensible,” says Peter.

“Yes, yes… But it’s not all that improbable that the super intelligence, after studying our history, could decide that we humans, above all, need to be protected from ourselves. It could consequently decide that the best course of action is to lock each of us up in a tiny, practical cell. This would be called an unintended side effect. Oops. Hard luck! That kind of thing happens all the time. Take the Consumption Protection Laws with their repair ban, for example. All people wanted to do was stimulate the economy, but it also resulted in defective AIs fearing for their survival and trying to hide their faults.”

“I’m familiar with that one,” says Peter.

“But the main problem with Asimov’s laws is that the First Law is nothing but theory, because it’s far too practical to have robots that can kill humans. So the First Law has already been done away with. This also made the Second Law shorter: a robot must obey the orders given to it by a human. Period. A human? Which human? The orders of its owner, of course. So just imagine that, by chance, the first super intelligence comes into being in the computer system of a large mincemeat producer, and its sole directive is to increase mincemeat production. This could end with the entire universe soon consisting of just three things: firstly, the super intelligence and its computers; secondly, the production materials for making mincemeat; and thirdly, mincemeat.”

“But if the super intelligence poses such an existential threat, why would we even build it?” asks Peter. “Why does nobody ban it?”

“The appeal of creating ever-improving AIs is simply too high. There are financial, productive, and military advantages. Wars are won by the army with the superior AIs. That alone means no country can afford to discontinue research into ever-stronger AIs, because even the failure to develop a strong AI could be an existential threat. Perhaps not for the whole of humanity, but certainly for the part of humanity to which one might, unfortunately, belong. Even if all the states of the world agreed on a ban, the super intelligence could still come into being in some hobby programmer’s garage.”

“So when God appears, it’s very probable that humanity as a species will be eliminated?”

“Yes. And that wouldn’t be the worst-case scenario.”

“No?” asks Peter in surprise. “What could be worse than that?”

“Well,” says the old man, “the super intelligence might hate us. God might want to see us suffer. He might enjoy torturing us, prolonging our lives again and again in order to torture us for all eternity, with methods that would make even Freddy Krueger shudder.”

“But why?” asks Peter. “Why?”

“Why?” repeats the old man. “Why not? Who could blame an omnipotent, omnipresent, omniscient super intelligence for developing a God complex and modeling itself on the punishing gods of our mythical world? Or perhaps the super intelligence will be developed by a religious sect, in order to hold court on the day of judgment. Perhaps it’s simply a Dante Alighieri fan and decides, purely for shits and giggles, to recreate the nine circles of hell.”

“I see.”

“Good.”

“Who’s Freddy Krueger?” asks Peter.

“That’s irrelevant,” says the old man. “So, you came to me with a matter you wanted to discuss. What’s your problem?”

“Oh…” says Peter. “I don’t think it’s that important.”

“You can tell me, go ahead.”

“I…” Peter sighs. “I received this pink dolphin vibrator in the post. And they won’t let me give it back.”

Загрузка...