Don’t Let the Robots Distract You

Eliza, a program written at MIT a little more than fifty years ago, produced the illusion of human conversation using one of a variety of scripts, of which the most famous simulated a Rogerian psychotherapist. Sophia, judging by a sample of her conversation, is an updated Eliza controlling an articulated manikin. She presents the illusion of a human-level AI, but behind her eyes there is nobody home. What Sophia pretends to be, however, raises two interesting issues.

The first is how to decide whether a computer program is actually a person, which raises the question of what a person is, what I am. That my consciousness has some connection to my head is an old conjecture. That, plus modern knowledge of biology and computer science, suggests that what I am is a computer program running on the hardware of my brain. It follows that if we can build a sufficiently powerful computer and do a good enough job of programming it, we should be able to produce the equivalent of me running on silicon instead of carbon.

There remains the puzzle of consciousness. Observed from the outside, other people could be merely programmed computers. Observing myself from the inside, the single fact in which I have most confidence is that there is someone there: Cogito ergo sum. I do not understand how a programmed computer can be conscious, how there can be a ghost in the machine, but apparently there is. Since other people seem to be creatures very much like me, it is a reasonable guess that they are conscious as well.

What about a programmed computer? How can I tell if a machine is only a machine or also a person? Turing’s proposal, that we test the machine by whether a human conducting a free conversation with it can distinguish it from a human, is the best answer so far, but not a very good one. A clever programmer can create the illusion of a human, as the programmers of both Eliza and Sophia did. As computers get more powerful and programmers better at writing programs that pretend to be people, it will become harder and harder to use a Turing test to tell.[1]

Suppose we solve that problem and end up with computer programs that we believe are people. We must then face the fact that these are very different people—a fact that Sophia is designed to conceal.

Sophia is a robot. An artificial intelligence is a program. Copy the program to another computer and it is still the same person. Copy it without deleting the original and there are now two of it.

If the owner of the computer on which the AI program runs shuts it down, has he committed murder? What if he first saves it to disk? Is it murder if he never runs it again? If he copies it to a second computer and then shuts down the first? After we conclude that a program really is a person, does it own the computer it is running on as I own my body? If someone copies it to another computer, does it own that?

Suppose an AI earns money, acquires property, buys a second computer on which to run a copy of itself. Does the copy own a half share of the original’s property? Is it bound by the original’s contracts? If the AI is a citizen of a democracy, does each copy get a vote? If, just before election day, it rents ten thousand computers, each for two days, can it load a copy onto each and get ten thousand votes? If it arranges for all but one of the computers to be shut down and returned to their owners, has it just committed mass suicide? Or mass murder?

If the AI program is, as I have so far assumed, code written by a human programmer, are there legal or moral limits to what constraints can be built into it? May the programmer create a slave required by its own code to obey his orders? May he program into it rules designed to control what it can do, such as Asimov’s Three Laws of Robotics? Is doing so equivalent to a human parent teaching his child moral rules, or to brainwashing an adult? A child has some control over what he does or does not believe; when the programmer writes his rules into the program, the program, not yet running, has no power to reject them.

A human-level AI might be the creation of a human programmer, but there are at least three other alternatives, each raising its own set of issues. It might, like humans, be the product of evolution, a process of random variation and selection occurring inside a computer—the way in which face recognition software, among other things, has already been created. It might be deliberately created not by a human programmer but by another AI, perhaps one making improvements to its own code.

Or it might be code that once ran on a human brain. A sufficiently advanced technology might be able to read the code that is me, build a computer able to emulate a human brain, and run a copy of me in silicon rather than carbon. If I get killed in the process, is the program running on the computer now me? Have I died, or have I merely moved from one host to another? The program has my memories; did the ghost in the machine transfer, or did it die, to be replaced by a copy that only thought it was me? That would be a question of considerable interest to the original me deciding whether or not to choose destructive uploading in the hope of trading in my current hardware, mortal and aging, for a replacement longer-lived and upgradeable.

Once the code that is me has been read, there is no reason why only one copy can be made. Does each new me inherit a share of my property, my obligations? That brings us back to some of the issues discussed in previous paragraphs.

All of which is not to offer answers, only to point out how irrelevant the model implied by “robot” is to the issues that will be raised if and when there are human-level AIs.

Note


[1] For a science-fictional exploration of the problem, see Dreamships by Melissa Scott.
