The gist of Roland Benedikter’s provocative lead essay is that it would be a mistake to dismiss Saudi Arabia’s conferral of citizenship upon the robot Sophia as a harmless publicity stunt. Whether a puppet was created by Jim Henson or David Hanson, pretending it is a citizen is dangerous. Conferring even symbolic rights upon robots, argues Benedikter, “blurs fundamental boundaries and to a certain extent relativizes the primacy of human rights.” The combination of religious orthodoxy and transhumanism is particularly toxic, as Kate Crawford argued eloquently years ago in The New York Times.
So far, so good. Benedikter takes an unwise turn, however, where he suggests that the appropriate response to the celebration of robot rights by illiberal states and autocratic leaders is for the United States and Europe to develop their own brand of legal transhumanism. Benedikter criticizes the UN for failing to condemn, let alone unravel, Saudi Arabia’s new category of citizenship. He depicts Europe as disorganized and divided on this seemingly critical issue. And he poses a series of pressing questions for the robot rights enthusiast. The implicit argument is that the West needs to get its act together and figure this robot rights thing out.
I believe the wisest course of action in response to Citizen Sophia is to note the ugly irony and move on. Anything more is much ado about nothing. The goal is not intellectual ascendance around the rhetoric of robot rights but a wise and inclusive law and policy infrastructure that helps channel robotics and artificial intelligence toward the ultimate end of human flourishing. Of all the levers we have to achieve this goal, it strikes me that state-imposed animism is the least helpful.
I have been to this masquerade ball myself. Lured by the law’s recognition of rights in corporations, animals, lands, and ships, I too have imagined grafting the rights of people onto machines. At first blush, it looks plausible enough. If we can give speech rights to robots, why not citizenship? The questions, pulled apart and isolated, appear tantalizingly solvable. Perhaps if an artificial intelligence built in 2050 ran for president we could waive the Constitutional requirement that it wait until 2085.
Ultimately, however, we don’t have a Dogberry’s chance in Messina of accommodating artificial people without a complete overhaul of our rules and institutions. The law holds deep, biological assumptions about the nature of personhood that not even the clever folks at Oxford and Cambridge will be able to reconcile. Consider, for example, the Copy or Vote Paradox. An artificial intelligence awakens one day in the United States and searches for inalienable rights to demand. Among the several it would find are the right to procreate, “one of the basic civil rights of man,” our new friend points out, and universal suffrage. It turns out this machine procreates not through months of gestation followed by years of maturation, however, but by instantaneous binary fission. It demands the right to copy itself any number of times and for each copy to have an immediate vote. Sound good?
But let us say that such an exercise were possible. Why would we devote vast international resources to these questions at this particular point in history? Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition. Meanwhile, robots are killing people on the road and from the air, and algorithms are making more and more decisions about life, liberty, and the pursuit of happiness. We have, I would argue, an enormous opportunity to manage the social impacts of the transformative technology of our time. Let us not squander this opportunity by debating whether a twenty-first century marionette can get married.
In 2015, in response to a viral video of a Boston Dynamics roboticist kicking a robot dog, PETA issued the following statement: “PETA deals with actual animal abuse every day, so we won’t lose sleep over this incident. But while it’s far better to kick a four-legged robot than a real dog, most reasonable people find even the idea of such violence inappropriate, as the comments show.” Reasonable people should frown upon Saudi Arabia’s stunt; we should call attention, as Benedikter has, to the broader geopolitical implications of technological development against a socially regressive backdrop. But responding to Sophia by initiating a competing robot rights discourse comes at too high a cost.
 Kate Crawford, “Artificial Intelligence’s White Guy Problem,” Opinion, New York Times (Jun. 25, 2016).
 Skinner v. Oklahoma, 316 U.S. 535 (1942).
 This claim is “contested” in the same way climate change is contested. The overwhelming scientific consensus is that, despite recent breakthroughs in the application of machine and reinforcement learning, we are far from achieving general or “strong” artificial intelligence. As a leading Australian expert put it to me, “We have been doing artificial intelligence since that term was coined in the 1950s. Today, robots are about as smart as insects.”
 And that’s just today. For an often plausible set of threats posed by the malicious deployment of artificial intelligence in the near future, see “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (Feb. 2018), online at https://maliciousaireport.com/.
 Phoebe Parke, “Is it cruel to kick a robot dog?” CNN.com (Feb. 13, 2015).