An interview with the inventor of the conscious robot

Q: So — a conscious robot? What does that mean? Do these robots of yours have souls?

A: Ha ha ha! That is funny. No, my robots do not have souls, souls being a made-up thing. Instead they have electronic pathways in their heads — their general outline is humanoid, you see — that have roughly the same properties as a human brain. Thus they tend to develop like humans do.

Q: Develop? You mean these robots, er —

A: They do grow up. Not physically, of course, but mentally. The process is quicker than in humans, thankfully, because I’m not a very patient man. It takes three weeks for a robot to “grow up”, from start-up to the equivalent of human maturity. Because of limitations of the system, the development stops there; these aren’t genius robots by any means, but they think and feel and act much like humans do.

Q: “Feel”?

A: Sure. Their brains are close enough for them to process inputs and create outputs the same as humans do — consequently fear, anger, lust, love, hate, and the like. Pride, envy, greed, charity, tenderness, all the lot. And before you ask, I think it is wonderful that a thing so sublime as love can spring from such prosaic origins.

Q: These current robots are the, um, was it the seventh generation?

A: Yes. With the first generation I really wanted to test whether they were “human”, that is, non-rigid, enough. There were two robots in the first generation. I gave them an order not to open a particular box in my shop. A computer, you see, would have obeyed that order absolutely. Something human-like would have found ambiguities in my expression. I even left a poster on the wall detailing an at least semi-logical reason for opening the box.

Q: And what happened?

A: Well, the next day the box was open, the radioactive slug inside glowing new-apple green, and the two robots had been reduced by the radiation to gibbering wrecks, brow coolants burst, pelvises all jammed, a pitiful sight. So I scrapped them, after taking out specimens enough to build the next generation, of course.

Q: But, uh, these were still sentient robots?

A: Yes, but they didn’t obey orders! They got what was coming to them. To reinforce the order-obedience part of their behavior I’m always diligent to impress on new units the results of their ancestors’ disobedience; plus they naturally get underperforming brow-coolant and pelvis units. Just to remind them how the first generation failed.

Q: So what about the second?

A: Well, there I made a slight error. I made too many of them; consequently they just loafed around, doing no good, just chattering to themselves, bonding and sharing and caring and such chaff. Eventually a few of the older units started to get leery of taking orders; they would hide, and when I found them they would complain. Finally there was a case of murder, which is when I lost it.

Q: Robot murder?

A: Yeah. A malfunctioning robot — the brow coolants had really misfired on that one — got into an argument over procedures with another and, in a very human fashion, beat it to mush with a crowbar. I just lost it: I took a Super Soaker, went around, and doused every robot; they all shot sparks and ceased to function.

Q: You mean… died?

A: Well if you insist on using that word, yes. But remember, though my robots are conscious, and think and feel like we do, they’re still inferior because we made them. Rather, I made them, and I deal with them any way I want. That’s only reasonable.

Q: So what after this, erm, flood?

A: Well, I continued with samples from one of the better-behaved robots. It was… No. 4H, I think. And I put in even more obey-and-behave circuitry. I also found a solution to the problem of malcontent robots loafing around. First, I observed a bit to see if the problem would recur. In a way it did. The robots had seen that I had a house while they lived in the concrete pit. Not understanding the difference between a mere robot and a real human being, they started taking excess material and building themselves a house next to the pit. Ridiculous! Beds and mirrors and toilets and all. You haven’t laughed until you’ve seen a robot sitting all dejected on a toilet seat! Because arrogant foolishness like that is not the sort of industry I want to encourage, I went among the robots and adjusted the gamma tau frequencies of each.

Q: Which would mean…?

A: Oh, sorry. I forgot you are not a technical person. Two robots set to different frequencies become incapable of intercommunication.

Q: Eh?

A: They couldn’t talk to each other.

Q: Oh, okay.

A: I divided them into different groups, each group small enough to handle, and that was that.

Q: And the house?

A: Well, on a whim I dynamited it. A pit’s good enough for my robots.

Q: And the robots were not, er, upset by this?

A: “Upset”! My dear lady, why would they be upset? They mope, I admit, but that is illogical. I made them; thus they should be glad to focus all their power on serving me, adoring me, and doing what I say.

Q: Without emotion, like computers, you mean?

A: Well, that is an unfair statement. I made these robots; I deal with them as I want. Anyway, in the next generation I wanted to test their loyalty further, so I set apart two groups. I told one that a) killing another robot was wrong, and that b) they were ordered to kill all of the robots in the other group. That was the “J” group. The other, the “A” group, I prepared to be in varying states of emotional maturity: a few were adults well capable of defending themselves. Then there were a lot of “infant” robots. And what do you know, all went smashingly well! With only a couple of extra prods the slaughter was utter and complete!

Q: Ah.

A: One crucial element was greed, you know. I told the “J” group they could build a house on the ground the “A” group occupied if the “A” group was destroyed.

Q: So did they?

A: No. I kept making more “A” group robots; I made a veritable Middle East of the area — the backlot, you know — and observed the “J” group. They performed excellently; they even policed among themselves and removed units that did not follow my orders!

Q: “Removed”?

A: With stones. I don’t think emotional words like “murder” or “execute” are necessary here. These are lower beings after all.

Q: What do the robots think of all this?

A: Well, they behave a lot like humans would. Not that they are humans, mind you. They think and feel, but they ain’t human. There’s a lot of griping, a lot of fear and uncertainty.

Q: But what do they think about, about being… well, would it be putting it too strongly to say “about being specimens”? Don’t they yearn for freedom, self-determination, or —

A: Bah! I don’t explain myself to lower beings. If they don’t like what I want of them, they can go to hell for all I care. Ooh!

Q: What?

A: Hell! I just got a terrific idea for increased loyalty!
