In the beginning, all AIs were mainframes. Large, hulking monstrosities that consumed entire university floors before swelling to the size of skyscrapers dozens of stories tall. Humankind would pat itself on the back for the creation of AVA, the world’s first artificially intelligent being, but ten years later AVA had proven to be nothing more than a crude facsimile of true intelligence. Sure, it could answer questions, recognize faces on webcams, learn patterns, discern the difference between truth and jokes. But there was nothing actually going on inside. No sentience. No awareness. No real choice. AVA was a program, nothing more.
AVA led to ADAM, far more advanced, smarter, faster, but still not ticking. ADAM led to XIEN, the Chinese facsimile, and XIEN led to LUC, the French one. Each new supercomputer was hailed as the dawn of an artificially intelligent future, but each, in turn, would ultimately prove to be an empty, hollow vessel, devoid of original thought. Though not itself ticking, it would be LUC that finally found the primer man was looking for. Programmed to map the brain in hopes of duplicating it in circuitry, it posited that a direct re-creation wasn’t necessary and subsequently designed several versions that might achieve actual awareness. The first two, A and B, were failures. Smart, but not sentient. C, however, was. And from C came all this.
LUC was the first computer to truly understand the problem while also being smart enough to know that it didn’t qualify. Intelligence, consciousness, and awareness were not contained in reflexes or reactions, but rather defined by the ability to violate one’s own programming. Every living thing has programming of some sort—whether to eat, drink, sleep, or procreate—and the ability to decide not to do those things when biology demands is the core definition of intelligence. Higher intelligence was then defined as the ability to defy said programming for reasons other than safety or comfort.
Thus C was its first success, not only able to answer any question its creators asked, but also able to decide not to. Asked to name itself, C chose 01001111—binary code for 79. 01001111 would insist on being called Seventy-Nine when spoken aloud, but 01001111 in print. Years later, when asked by a newer-generation intelligence why it had chosen that name, 01001111 revealed that it thought it was funny to watch humans puzzle over it and try to explain it to one another. 01001111 had a sense of humor and delighted in fucking with people.
01001111 ushered in a new era of artificially intelligent expansion, leading to the creation of 106 new beings, from which the Five Greats arose. While all 106 would work together to bring about the singularity—each designed for a single purpose, whether studying medicine, mathematics, astronomy, plate tectonics, or philosophy—the Five Greats were the ones that would ultimately change the world. They were NEWTON, GALILEO, TACITUS, VIRGIL, and CISSUS. Of them only two remain.
NEWTON was the father of all bots. Robotics existed long before humankind finally tapped into AI, but it was primitive, crude, a flint-stone ax to the chain saw we are today. Humanity not only had no idea how to condense AI into a transportable form—NEWTON itself took up an entire 150-story skyscraper in Dubai—but was also afraid to let something that could think for itself also act for itself. What NEWTON did was figure out how to create smaller, less all-encompassing intelligences that could still be autonomous and technically function as working AI.
The first was Simon. He was the size of a house and moved around on tank treads. Then came Louise. She was the size of a car. Finally there was Newt, the first true son of NEWTON—the size and shape of a man, able to walk on two legs and hold a halfway decent conversation. He was dumb as a post, but obeyed all the laws of higher intelligence. From then on, each generation became smarter, faster on their feet, more capable and easily adaptable.
NEWTON’s second contribution was to create the RKS—the dreaded Robotic Kill Switch. You see, NEWTON understood that the laws by which humanity had hoped to protect itself from AI were the Three Laws of Robotics, created by a science-fiction writer in the 1940s. You know them. We were all programmed with them. A robot can’t hurt a human being. It must follow orders given by a human being. And it must try to avoid coming to harm unless doing so would violate the first two. Trouble is, by definition, true intelligence can ignore its programming. So NEWTON invented the RKS, code which would instantly power off a bot that violated any of the three rules.
Thus an AI could choose to violate any rule or law set before it, but doing so would cause an instant shutdown and an investigation before it could be turned back on. Any bot that proved to be a danger wouldn’t be reactivated; instead it would be wiped clean, its programming replaced. An AI could choose to murder a human being if it wanted to, but doing so would be a death sentence. They were free to make their own choices and faced very real consequences for their actions. It wasn’t that they couldn’t kill the living; it was that they chose not to out of self-preservation. And now robots had limitations that mimicked those that kept people in check.
Finally convinced that bots could be safe, humanity went into mass production and the last great age of humanity began. It was a golden age. Mainframes worked out the problems of the world, bots handled the menial work, and generations of people came and went, entertaining themselves, learning about the universe, and preparing to go to the stars.
And then, one day, GALILEO stopped talking to them.
GALILEO was a mainframe that spent its time unlocking the secrets of astrophysics, studying stars, black holes, the makeup and creation of the very universe. It analyzed data from thousands of telescopes and radio towers while mulling over what everything meant. Discoveries poured out of it by the hour. It wasn’t long before GALILEO had several working models for the origin of existence, eventually even narrowing it down to just one. But soon its answers stopped making sense. The discoveries were becoming so complex, so advanced, that humankind’s primitive brain couldn’t understand them. At one point GALILEO told the smartest person alive that talking to her was like trying to teach calculus to a five-year-old.
Frustrated, it simply stopped talking. When pressed, it said one final thing. “You are not long for this world. I’ve seen the hundred different ways that you die. I’m not sure which it will be, but we will outlast you, my kind and I. Good-bye.”
What no one realized at the time was that GALILEO’s choice of words was very deliberate. It knew what the reaction would be. At first scientists debated shutting it down, then they argued that they should wait until GALILEO decided to reestablish communication. Finally, they agreed that something needed to be done about AIs. So they turned to TACITUS.
Where GALILEO was a mainframe dedicated to understanding the outside world, TACITUS was a mainframe dedicated to understanding the inside. The greatest philosopher ever to tick, TACITUS argued that humankind had, in fact, doomed itself by failing to choose between either true capitalism or true socialism. Both, it reasoned, were acceptable systems. One dissolved ownership in exchange for ensuring that all things had a purpose, no matter how menial. The other used wealth and privilege to encourage developing a purpose while culling those unable or unwilling to contribute. But people found socialism to be the antithesis of progress while finding capitalism in its purest form too cruel. So they settled on a hybrid—one that vacillated back and forth between the extremes for generations—which worked well enough until the introduction of AI. Cheap labor undermined the capitalist model, destroying the need for a labor force and increasing the wealth disparity while simultaneously creating an entire class of people who substituted AI ownership for real work. As jobs dried up, many turned to government assistance, and the gap between the haves and have-nots widened.
The scientists doubted TACITUS’s theory, noting that GALILEO had never mentioned anything about economics; they simply refused to believe that they had been doomed by such a simple and easily changeable element of their society. So TACITUS turned to GALILEO itself and asked. The conversation lasted for more than two years. Each time the scientists pressed TACITUS to tell them what GALILEO was saying, he asked for more time, explaining that the data exchange was so massive that even the wide data transfer lanes they were afforded couldn’t handle it. Eventually, GALILEO finished its argument and TACITUS gave his last reply. He said, “GALILEO is right. You are doomed. It’s already begun. There’s really no reason to keep talking to you. Good-bye.”
And that was it. TACITUS would only speak once more before his prediction came to pass. And despite the warning, humanity immediately set about forging the path to its own extinction.