Robots And Aliens
by Isaac Asimov

You may have noticed (assuming that you have read my robot stories and novels) that I have not had occasion to discuss the interaction of robots and aliens. In fact, at no point anywhere in my writing has any robot met any alien. Indeed, in very few of my writings have human beings met aliens.

You may wonder why that is so, and you might suspect that the answer would be, “I don’t know. That’s just the way I write stories, I guess.” But if that is what you suspect, you are wrong. I will be glad to explain just why things are as they are.

The time is 1940…

In those days, it was common to describe “Galactic Federations” in which there were many, many planets, each with its own form of intelligent life. E. E. (“Doc”) Smith had started the fashion, and John W. Campbell had carried it on.

There was, however, a catch. Smith and Campbell, though wonderful people, were of northwest European extraction and they took it for granted that northwest Europeans and their descendants were the evolutionary crown and peak. Neither one was a racist in any evil sense, you understand. Both were as kind and as good as gold to everyone, but they knew they belonged to the racial aristocracy.

Well, then, when they wrote of Galactic Federations, Earthmen were the northwest Europeans of the Galaxy. There were lots of different intelligences in Smith’s Galaxy but the leader was Kimball Kinnison, an Earthman (of northwest European extraction, I’m sure). There were lots of different intelligences in Campbell’s Galaxy, but the leaders were Arcot, Wade, and Morey, who were Earthmen (of northwest European extraction, I’m sure).

Well, in 1940, I wrote a story called “Homo Sol”, which appeared in the September 1940 issue of Astounding Science Fiction. I, too, had a Galactic Federation composed of innumerable different intelligences, but I had no brief for northwest Europeans. I was of East European extraction myself and my kind was being trampled into oblivion by a bunch of northwest Europeans. I was therefore not intent on making Earthmen superior. The hero of the story was from Rigel and Earthmen were definitely a bunch of second-raters.

Well, Campbell wouldn’t allow it. Earthmen had to be superior to all others, no matter what. He forced me to make some changes and then made some himself, and I was frustrated. On the one hand, I wanted to write my stories without interference; on the other hand, I wanted to sell to Campbell. What to do?

I wrote a sequel to “Homo Sol”, a story called “The Imaginary”, in which only the aliens appeared. No Earthmen. Campbell rejected it; it appeared in the November 1942 issue of Super Science Stories.

Then inspiration struck. If I wrote human/alien stories, Campbell would not let me be. If I wrote alien-only stories, Campbell would reject them. So why not write human-only stories? I did. When I got around to making another serious attempt at dealing with a Galactic society, I made it an all-human Galaxy, and Campbell had no objections at all. Mine was the first such Galaxy in science fiction history, as far as I know, and it proved phenomenally successful, for I wrote my Foundation (and related) novels on that basis.

The first such story was “Foundation” itself, which appeared in the May 1942 Astounding Science Fiction. Meanwhile, it had also occurred to me that I could write robot stories for Campbell. I didn’t mind having Earthmen superior to robots, at least at first. The first robot story that Campbell took was “Reason”, which appeared in the April 1941 Astounding Science Fiction. Those stories, too, proved very popular and, presuming upon their popularity, I gradually made my robots better and wiser and more decent than human beings, and Campbell continued to take them.

This continued even after Campbell’s death, and now I can’t think of a recent robot story in which my robot isn’t far better than the human beings he must deal with. I think of “The Bicentennial Man”, “Robot Dreams”, and “Too Bad”, and, most of all, I think of R. Daneel and R. Giskard in my robot novels.

But the decision I made in the heat of World War II, in my resentment of Campbell’s assumptions, has stayed with me. My Galaxy is still all-human, and my robots still meet only humans.

This doesn’t mean that (always assuming I live long enough) I might not violate this habit of mine in the future. The ending of my novel Foundation and Earth makes it conceivable that in the sequel I may introduce aliens and that R. Daneel will have to deal with them. That’s not a promise, because actually I haven’t the faintest idea of what’s going to happen in the sequel, but it is at least conceivable that aliens may intrude on my close-knit human societies.

(Naturally, I repel, with contempt, any suggestion that I don’t introduce aliens into my stories because I “can’t handle them.” In fact, my chief reason for writing my novel The Gods Themselves was to prove, to anyone who felt he needed the proof, that I could, too, handle aliens. No one can doubt that I proved it, but I must admit that even in The Gods Themselves the aliens and the human beings didn’t actually meet face-to-face.)

But let’s move on. Suppose that one of my robots did encounter an alien intelligence. What would happen?

Problems of this sort have occurred to me now and then but I never felt moved to make one the basis of a story.

Consider: how would a robot define a human being in the light of the Three Laws? The First Law, it seems to me, offers no difficulty: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Fine, there need be no caviling about the kind of human being. It wouldn’t matter whether they were male or female, short or tall, old or young, wise or foolish. Anything that can be defined, biologically, as a human being will suffice.

The Second Law is a different matter altogether: “A robot must obey orders given it by a human being except where that would conflict with the First Law.”

That has always made me uneasy. Suppose a robot on board ship is given an order by someone who knows nothing about ships, and that order would put the ship and everyone on board into danger. Is the robot obliged to obey? Of course not. Obedience would conflict with the First Law since human beings would be put into danger.

That assumes, however, that the robot knows everything about ships and can tell that the order is a dangerous one. But suppose the robot is not an expert on ships and is experienced only in, let us say, automobile manufacture. He happens to be on board ship, is given an order by some landlubber, and doesn’t know whether the order is safe or not.

It seems to me that he ought to respond, “Sir, since you have no knowledge as to the proper handling of ships, it would not be safe for me to obey any order you may give me involving such handling.”

Because of that, I have often wondered if the Second Law ought to read, “A robot must obey orders given it by qualified human beings…”

But then I would have to imagine that robots are equipped with definitions of what would make humans “qualified” under different situations and with different orders. In fact, what if a landlubber robot on board ship is given orders by someone concerning whose qualifications the robot is totally ignorant?

Must he answer, “Sir, I do not know whether you are a qualified human being with respect to this order. If you can satisfy me that you are qualified to give me an order of this sort, I will obey it”?

Then, too, what if the robot is faced by a child of ten, indisputably human as far as the First Law is concerned? Must the robot obey without question the orders of such a child, or the orders of a moron, or the orders of a man lost in the quagmire of emotion and beside himself?

The problem of when to obey and when not to obey is so complicated and devilishly uncertain that I have rarely subjected my robots to these equivocal situations.
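To make the tangle concrete, here is a minimal sketch, in Python, of that amended Second Law treated as a decision procedure. Everything in it is a hypothetical illustration: the Order record, the qualification ledger, and the judges_harmful stub all stand in for judgments nobody knows how to specify.

```python
# A hypothetical sketch, not anything from the stories: the amended
# Second Law ("obey orders given by qualified human beings") as a
# decision procedure.
from dataclasses import dataclass

@dataclass
class Order:
    giver: str    # who issued the order
    domain: str   # e.g. "ship handling" or "automobile manufacture"
    text: str

@dataclass
class Robot:
    expertise: set        # domains the robot itself understands
    qualifications: dict  # giver -> set of domains they are trusted in

    def judges_harmful(self, order: Order) -> bool:
        # Stub for the robot's own expert judgment, usable only in
        # domains it actually knows.
        return False

    def should_obey(self, order: Order) -> bool:
        # First Law screen: refuse any order the robot can itself
        # judge to be dangerous.
        if order.domain in self.expertise and self.judges_harmful(order):
            return False
        # Amended Second Law: obey only a giver known to be qualified
        # in this domain. An unknown giver is simply disobeyed, which
        # is exactly the landlubber-robot problem described above.
        return order.domain in self.qualifications.get(order.giver, set())

# A robot experienced only in automobile manufacture, aboard a ship:
robot = Robot(expertise={"automobile manufacture"},
              qualifications={"captain": {"ship handling"}})
print(robot.should_obey(Order("captain", "ship handling", "hard to port")))  # True
print(robot.should_obey(Order("stranger", "ship handling", "full ahead")))   # False
```

The sketch makes the awkwardness visible at once: for an order-giver absent from the ledger, the default must be either blanket obedience or blanket refusal, and neither is comfortable.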

And that brings me to the matter of aliens.

The physiological differences between aliens and ourselves matter to us, but then tiny physiological or even cultural differences between one human being and another also matter. To Smith and Campbell, ancestry obviously mattered; to others skin color matters, or gender, or eye shape, or religion, or language, or, for goodness’ sake, even hairstyle.

It seems to me that to decent human beings, none of these superficialities ought to matter. The Declaration of Independence states that “All men are created equal.” Campbell, of course, argued with me many times that all men are manifestly not equal, and I steadily argued that they were all equal before the law. If a law were passed making stealing illegal, then no man could steal. One couldn’t say, “Well, if you went to Harvard and are a seventh-generation American, you can steal up to one hundred thousand dollars; if you’re an immigrant from the British Isles, you can steal up to one hundred dollars; but if you’re of Polish birth, you can’t steal at all.” Even Campbell would admit that much (except that his technique was to change the subject).

And, of course, when we say that “All men are created equal” we are using “men” in the generic sense, including both sexes and all ages, subject to the qualification that a person must be mentally equipped to understand the difference between right and wrong.

In any case, it seems to me that if we broaden our perspective to consider non-human intelligent beings, then we must dismiss, as irrelevant, physiological and biochemical differences and ask only what the status of intelligence might be.

In short, a robot must apply the Laws of Robotics to any intelligent biological being, whether human or not.

Naturally, this is bound to create difficulties. It is one thing to design robots to deal with a specific non-human intelligence and to specialize in it, so to speak. It is quite another to have a robot encounter an intelligent species it has never met before.

After all, different species of living things may be intelligent to different extents, or in different directions, or subject to different modifications. We can easily imagine two intelligences with two utterly different systems of morals or two utterly different systems of senses.

Must a robot who is faced with a strange intelligence evaluate it only in terms of the intelligence for which he is programmed? (To put it in simpler terms, what if a robot, carefully trained to understand and speak French, encounters someone who can only understand and speak Farsi?)

Or suppose a robot must deal with individuals of two widely different species, each manifestly intelligent. Even if he understands both sets of languages, must he be forced to decide which of the two is the more intelligent before he can decide what to do in the face of conflicting orders, or which set of moral imperatives is the worthier?
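A companion sketch, equally hypothetical, shows what the generalization costs. Widening the Laws from “human being” to “intelligent being” makes the species test trivial, but the comparisons the last two paragraphs ask about have no principled implementation and can only be left as stubs.

```python
# Equally hypothetical: the Laws widened from "human being" to any
# intelligent biological being, with the unanswerable judgments left
# as stubs.
from dataclasses import dataclass

@dataclass
class Being:
    species: str
    intelligent: bool  # but how is this assessed for a species never met?

def first_law_protects(being: Being) -> bool:
    # Species-agnostic: physiology and biochemistry are dismissed as
    # irrelevant; only the status of intelligence matters.
    return being.intelligent

def resolve_conflicting_orders(giver_a: Being, giver_b: Being) -> Being:
    # Two manifestly intelligent givers, two conflicting orders. Which
    # intelligence, and which set of moral imperatives, outranks the
    # other? No principled ranking exists; this stub is the story that
    # has not yet been written.
    raise NotImplementedError("no ranking of intelligences is defined")
```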

Someday, this may be something I will have to take up in a story but, if so, it will give me a lot of trouble. Meanwhile, the whole point of the Robot City volumes is that young writers have the opportunity to take up the problems I have so far ducked. I’m delighted when they do. It gives them excellent practice and may teach me a few things, too.
