The analyst in the Hefei office of the Artificial Intelligence Strategic Advisory Committee got another alert from the AI that he now considered to be the most interesting of the ones he was actively programming, even though it was still frustratingly simpleminded and obtuse. But they all were. Quantum computers were orders of magnitude faster than classical computers in several classes of operation, but they were still limited by their tendency to decohere, and also by the inadequacies of their programming, which was to say the inadequacies of their programmers. So it was like being confronted with one's own stupidity.
“Alert,” the AI said.
The analyst had recently given it a voice modeled on that of Zhou Xuan, the classic actress featured in the 1937 movie Street Angel. Now he checked his own security protocols, then said, “Tell me.”
“The Unicaster 3000 previously mentioned and now on the moon has just been interfered with, and thus experienced wave collapse and quantum decoherence.”
The analyst said, “Did you move this information into the appropriate file and sequester it?”
“I did that.”
“Will this unicaster device continue to function as an open line, or has it shut down?”
“It has shut down, in keeping with its design.”
“Okay. Can you identify who interfered with the device?”
“No.”
“But intrusion always leaves a mark.”
“In this case the collapse of the wave function is the only mark.”
“Can you identify when it happened, and where it was when it happened?”
“It happened at UTC 16:42 on July 23, 2047. It happened on the moon.”
“Can you be any more specific?”
“The device lacks GPS, as part of its design privacy. It was last seen on security cameras being taken into an office occupied by the Scientific Research Steering Committee, at Shackleton Crater.”
“But that committee is under the umbrella of the Central Military Commission. Do they have some of their people on the moon?”
“Yes.”
“Oh dear. Our beloved colleagues. Surely they should not be on the moon.”
“Military activity is forbidden on the moon by the Outer Space Treaty of 1967.”
“Very good. And now the device is inoperative?”
“It could be entangled again with another matching device.”
“But to key that entanglement, both of them would have to be in the possession of the same operator.”
“Yes.”
“And the other device is presumably on Earth. What has become of the person who took the device to the moon?”
“He is no longer visible to me.”
“Wait, what? You’ve lost him?”
“He suffered a health problem while with Governor Chang Yazu of the Chinese Lunar Authority. He and Chang Yazu both collapsed. Chang later died. Fredericks was taken to a south pole complex hospital.”
“Chang died? You tell me this now?”
“Yes.”
“Why did you not tell me that first?”
“Your directive instructed me to report on the device.”
“Yes, but Chang! What was the cause of death?”
“The autopsy is not available to me.”
“Both men collapsed?”
“Fredericks and Chang both collapsed.”
The analyst thought for a while. “That sounds like the black ones.”
“I don’t understand.”
The analyst sighed. “I-330, I want you to initiate a covert inquiry by way of all the backdoor taps I inserted into the Invisible Wall when it was built. Work through fourth parties. No detects allowed. Look for any mention of Chang Yazu. See if you can compile his pattern of contacts with everyone on the moon, and also trace his employment history back here on Earth.”
“I will.”
“Act like a general intelligence, please. Make suppositions, look for evidence supporting them. Consider all you find, and attempt explanations for individual and institutional behavioral patterns by way of Bayesian analysis and all the rest of your learning algorithms. Apply all your capacity for self-improvement!”
“I will.”
The analyst sighed again. He was sounding like Chairman Mao exhorting the masses: do the best you can with what you have! This, to a search engine. Well, from each according to its capacities.
He sat down and began to ponder again the problem of programming self-improvement into an AI. New work from Chengdu on rather simple Monte Carlo tree searches and combinatorial optimization had given him some ideas. Deep learning was alas very shallow whenever it left closed sets of rules and data; the name was a remnant of early AI hype. If you wanted to win a game like chess or go, fine, but when immersed in the larger multivariant world, AI needed more than deep learning. It needed to incorporate the symbolic logic of earlier AI attempts, and the various programs that instructed an AI to pursue “child’s play,” meaning randomly created activities and improvements. There also had to be encouragements in the form of actually programmed prompts to help machine learning occur mechanically, to make algorithms create more algorithms.
All this was hard; and even if he managed to do some of it, at best he would still be left with nothing more than an advanced search engine. Artificial general intelligence was just a phrase, not a reality. Nothing even close to consciousness would be achieved; a mouse had more consciousness than an AI by a factor that was essentially all to nothing, so a kind of infinity. But despite its limitations, this particular combination of programs might still find more than he or it knew it was looking for. And the outside possibility of a rapid assemblage of stronger cognitive powers was always there. For there was no doubt that one aspect of quantum computers was already very far advanced: they could work fast.