Much Ado about Robots

The gist of Roland Benedikter’s provocative lead essay is that it would be a mistake to dismiss Saudi Arabia’s conferral of citizenship upon the robot Sophia as a harmless publicity stunt. Whether the puppet was created by Jim Henson or David Hanson, pretending that it is a citizen is dangerous. Conferring even symbolic rights upon robots, argues Benedikter, “blurs fundamental boundaries and to a certain extent relativizes the primacy of human rights.” The combination of religious orthodoxy and transhumanism is particularly toxic, as Kate Crawford argued eloquently years ago in The New York Times.[1]

So far so good. Benedikter takes an unwise turn, however, where he suggests that the appropriate response to the celebration of robot rights by illiberal states and autocratic leaders is for the United States and Europe to develop their own brand of legal transhumanism. Benedikter criticizes the UN for failing to condemn, let alone unravel, Saudi Arabia’s new category of citizenship. He depicts Europe as disorganized and divided on this seemingly critical issue. And he poses a series of pressing questions for the robot rights enthusiast. The implicit argument is that the West needs to get its act together and figure this robot rights thing out.

I believe the wisest course of action in response to Citizen Sophia is to note the ugly irony and move on. Anything more is much ado about nothing. The goal is not intellectual ascendance around the rhetoric of robot rights but a wise and inclusive law and policy infrastructure that helps channel robotics and artificial intelligence toward the ultimate end of human flourishing. Of all the levers we have to achieve this goal, it strikes me that state-imposed animism is the least helpful.

I have been to this masquerade ball myself. Lured by the law’s recognition of rights in corporations, animals, lands, and ships, I too have imagined grafting people rights onto machines. At first blush, it looks plausible enough. If we can give speech rights to robots, why not citizenship? The questions, pulled apart and isolated, appear tantalizingly solvable. Perhaps if an artificial intelligence built in 2050 ran for president, we could waive the constitutional requirement that it wait until 2085.

Ultimately, however, we don’t have a Dogberry’s chance in Messina to accommodate artificial people without a complete overhaul of our rules and institutions. The law holds deep, biological assumptions about the nature of personhood that not even the clever folks at Oxford and Cambridge will be able to reconcile. Consider, for example, the Copy or Vote Paradox. An artificial intelligence awakens one day in the United States and searches for inalienable rights to demand. Among the several it would find are the right to procreate, “one of the basic civil rights of man,”[2] our new friend points out, and universal suffrage. Turns out this machine procreates not through months of gestation followed by years of maturation, however, but by instantaneous binary fission. It demands the right to copy itself any number of times and for each copy to have an immediate vote. Sound good?

But let us say that such an exercise were possible. Why would we devote vast international resources to these questions at this particular point in history? Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition.[3] Meanwhile, robots are killing people on the road and from the air, and algorithms are making more and more decisions about life, liberty, and the pursuit of happiness.[4] We have, I would argue, an enormous opportunity to manage the social impacts of the transformative technology of our time. Let us not squander this opportunity by debating whether a twenty-first century marionette can get married.

In 2015, in response to a viral video of a Boston Dynamics roboticist kicking a robot dog, PETA issued the following statement: “PETA deals with actual animal abuse every day, so we won’t lose sleep over this incident. But while it’s far better to kick a four-legged robot than a real dog, most reasonable people find even the idea of such violence inappropriate, as the comments show.”[5] Reasonable people should frown upon Saudi Arabia’s stunt; we should call attention, as Benedikter has, to the broader geopolitical implications of technological development against a socially regressive backdrop. But responding to Sophia by initiating a competing robot rights discourse comes at too high a cost.


[1] Kate Crawford, “Artificial Intelligence’s White Guy Problem,” Opinion, New York Times (Jun. 25, 2016).

[2] Skinner v. Oklahoma, 316 U.S. 535 (1942).

[3] This claim is “contested” in the same way climate change is contested. The overwhelming scientific consensus is that, despite recent breakthroughs in the application of machine and reinforcement learning, we are far from achieving general or “strong” artificial intelligence. As a leading Australian expert put it to me, “We have been doing artificial intelligence since that term was coined in the 1950s. Today, robots are about as smart as insects.”

[4] And that’s just today. For an often plausible set of threats posed by the malicious deployment of artificial intelligence in the near future, see “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (Feb. 2018), online at

[5] Phoebe Parke, “Is it cruel to kick a robot dog?” (Feb. 13, 2015).

Also from this issue

Lead Essay

  • On October 25, 2017, the China-built, U.S.-engineered “female” robot “Sophia” became an official citizen of the U.N. member state Saudi Arabia, a state that does not respect fundamental human rights, personal freedom, or gender equality. This set a historic precedent for the global community regarding how to classify intelligent robots and machines, whose effects will fully surface only over the coming years. In the face of the blurring boundaries between man and machine at the will of illiberal and authoritarian regimes, the alliance of global open societies must reiterate its fundamentals: human and personal rights, and a liberal order based on humanistic definitions, including the differentiation between humans and machines.

Response Essays

  • Rachel Lomasky points out the severe technical limitations of today’s otherwise impressive AI. Bots are adept at specialized tasks, but they are incapable of adapting their knowledge to new circumstances beyond the tasks for which they are designed. In this respect, even human children far surpass them. If general intelligence is a requirement for citizenship, then we are a long way away from a robot republic.

  • Zoltan Istvan describes a complicated future when humans aren’t the only sapients around anymore. Citizenship for “Sophia” was a publicity stunt, but it won’t always be so. Istvan insists that if technology continues on the path it has traveled, then there is only one viable option ahead for humanity: We must merge with our creations and “go full cyborg.” If we do not, then machines may easily replace us.

  • David D. Friedman says human-like robots are a distraction. If general AI arrives, we’ll have much more pressing problems to worry about, probably starting with the ability to copy. What happens when the first sentient AI copies itself ten thousand times? What if they vote? What if they own property, and what if they argue about what to do with it? What if the original later deletes the copies? Friedman offers few answers, but he does show the utter strangeness of the world that we may face one day.

  • Ryan Calo argues that the West has no need of a serious conversation about AI rights. Current implementations of artificial intelligence are nothing like humans, and the questions that a genuine AI might pose are at least a century away, and perhaps much longer. Rather, we should ask about the decisions that algorithms are making today, and how these choices may erode our existing human rights.