Here’s Why Robots Aren’t Ready for the Robot Republic

Roland Benedikter’s essay raises insightful questions about the implications of robots achieving, if not surpassing, human intelligence. He may be correct that societies will be compelled to offer them citizenship, leading to many ethical and logistical quandaries, beginning with how “human-level intelligence” is defined and measured. However, the future he imagines is remote. While both research and consumer products based on artificial intelligence are progressing quickly, they remain confined to specialized tasks, and the leap from there to generalized intelligence is enormous.

While it is quite premature to have a concrete discussion about robots as full members of society, we can still speculate about a robot citizenry. Such speculation, however, will be closer to speculative fiction than to political science, akin to futurists in the 1970s imagining smartphones. In the near term, there is more benefit to be had from a conversation about which decisions should be made by artificial intelligence.

As a machine learning researcher and practitioner, I worry less about catastrophic scenarios, such as those that would threaten the existence of humanity, and more about how important decisions are increasingly offloaded to artificial intelligences. How these systems reach their decisions becomes more obscure as the models grow more complicated, making it much harder to discover bugs and prejudices. We therefore need to decide how much auditability and transparency should be required of their predictive models.
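As a rough illustration of what auditability can mean in practice, the sketch below fits a deliberately transparent model whose learned weights can be read off directly, which is exactly what a large, complicated network does not offer. The feature names and data are purely illustrative, not drawn from any real lending system.

```python
# A minimal sketch of one kind of model audit: a linear model's weights can be
# inspected directly, while a deep network offers no such simple readout.
# Feature names and data here are hypothetical and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "zip_code_group"]  # hypothetical loan features
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Auditing step: see which features drive approvals. A large weight on a proxy
# for a protected attribute (e.g., zip code) is a red flag worth investigating.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:15s} {weight:+.2f}")
```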

This question becomes much more complicated if we assume robots with human-level intelligence. Today, misbehaving bots are simply shut down, as with Microsoft’s racist Tay bot, or reprogrammed, as with Google’s image recognition software. But in the future, if a robot is acting unethically, is there an obligation, or even a right, to reprogram it? Is changing its code acceptable? What if it were hacked? How would the penal system apply to a robot?

It may not even be apparent that a given robot citizen is acting abnormally. It is a bit naive to assume that robots will want the things we want, or even that they will want anything at all. Is literal embodiment necessary to be considered a citizen? Does it have to be a humanoid body? Currently, many powerful artificial intelligences have no bodies at all (for example, those making loan decisions), which highlights how large an assumption embodiment is, and how many questions about this future remain open. Not only do we wonder who will build the roads in an artificial intelligence society, but we must also question whether beings with few physical needs and the ability to communicate instantly would even care about roads.

Likewise, it is not a foregone conclusion that citizen robots will outstrip humans in terms of numbers. Robots in general probably will, but humans will not passively acquiesce to a large new voting bloc. The people currently screaming about citizenship for immigrants would only be more vociferous against robots. The exception, of course, would be if they thought robots would vote “the right way,” perhaps because they believed they could control the robots’ programming. This would lead to an arms race, in which political parties work to create partisans faster than their opponents. Humans will only let a robot citizenry dilute their voices if the robots bolster their cause.

It is also unclear how to determine whether a robot should be granted suffrage. In most countries, voting rights are conferred on citizens with age, a criterion that will clearly need to be modified for an artificial intelligence. How does one determine the age of a being that can be powered down, or reset? Without age, we need another standard for evaluating AIs.

How do we even evaluate artificial intelligence? Clearly not every program is intelligent, so we need a method to identify those capable of voting, a proxy for when an intelligence has reached the level of a human. Stated simply, “How do we know an AI is smart enough to vote?” One answer may be the Turing Test, which asks whether an artificial intelligence can fool a human into thinking it is a fellow natural intelligence. In the same paper in which he presents his famous test, Turing suggests that rather than measuring whether a machine can demonstrate adult knowledge, we should determine whether it can learn like a child. This may be the more appropriate test, as our current artificial intelligences are limited by their training data in a way a child is not. Additionally, several assessment tools for natural intelligence already exist, from standard IQ tests to civil service exams. With very little adaptation, these could be administered to robots. However, such tests mostly measure language and logical processing capabilities. They would serve as excellent measures of what an intelligence already knows, but not of its ability to learn.

Should artificial intelligences that haven’t reached full human sentience be treated, both ethically and legally, as children or animals? Or are they different enough that new metaphors will need to be created? I also pose the complementary question of how many votes we grant to a given robot. In most democracies, citizenship comes with exactly one vote. Perhaps for robots, we should follow the model of the Republic of Gondour and give the smarter, more sophisticated intelligences a weightier vote while those below a certain threshold receive a fraction of a vote.

While the shape of public policy around robots remains largely uncertain, this is not a cause for concern. With all due respect to my machine learning colleagues, there is no pressing need to fill in the details. The number of AI papers has increased nine-fold in the past twenty years. However, the vast majority of that research is into specialized intelligences, such as game playing (the much-touted case of an AI beating a human at Go), natural language translation, and even proving mathematical theorems. General intelligence still eludes us: changing a task even slightly results in a dramatic decline in performance. In line with Moravec’s Paradox, researchers have found that brilliant specialist algorithms are far easier to build than the everyday abilities of a human toddler. In many cases, artificial intelligence trails humans by significant margins in terms of speed, accuracy, and the amount of training data needed. It took a Google neural network 16,000 processors and 20,000 training examples to learn to identify cat videos, and the result was less than 75% accurate. The average two-year-old could do this task with far greater accuracy after being told something was a cat a handful of times.

Intelligence is much more than a combination of algorithms. Regardless of how smart birds are, no matter how many of them were solving a problem together or how long they thought for, the result would not be a human-level solution. Currently, our models are much like birds – specialized, with amazing abilities to recognize and mimic patterns. To achieve “humanity,” robots will need to answer “what-if” questions and understand causal implications. Progress in this area has been so minuscule that it is hard to imagine how it will be implemented in a robot brain. Perhaps a novel, as-yet-undiscovered method will allow an ensemble of mimickers to reach sentience, maybe based on the biology of the human brain. Perhaps some shortcuts can be built into the system, similar to Pinker’s “language instinct.” But it is certainly not a foregone conclusion, and certainly not “soon.”

To the contrary, there is increasing evidence that the algorithm behind the recent rapid progress in machine learning, including neural networks and deep learning, will not be the silver bullet that delivers general artificial intelligence. Backprop, the gradient-based training method popularized in the 1980s by Geoffrey Hinton and his colleagues, is a fuzzy pattern-recognition method relying on labeled training data. Backprop can be thought of as neurons communicating with each other, through a gradient signal, about whether their outputs should increase or decrease. It is unable to generalize like a brain. Rather, it is little more than a clever mimic, unable to predict novel examples and requiring orders of magnitude more data than a human brain for training. For example, even an amazing chess robot would be at a loss if presented with “suicide chess,” a variant in which the pieces move identically but the goal is to lose your pieces rather than protect them. Any human chess player, by contrast, can grasp the concept and adapt her strategy in just a few minutes. This is because recent advances in artificial intelligence are little more than brute force, requiring massive hardware, which makes them ill-suited to the “small data” problems that make up everyday human decisionmaking. Hinton, now a lead researcher at Google Brain, believes that backpropagation is a dead end and that we will have to start anew.
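To make the reliance on labeled data and gradient signals concrete, here is a minimal toy sketch of backpropagation, assuming synthetic XOR data, a single hidden layer, and squared-error loss. It is an illustration of the idea, not a description of any production system.

```python
# A toy illustration of backpropagation: gradients flow backward through the
# network, telling each weight whether to nudge its output up or down.
# Note the reliance on labeled examples (X paired with y).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass: each layer receives a gradient signal saying how its
    # output should change to reduce the squared error.
    delta_out = (pred - y) * pred * (1 - pred)
    grad_W2 = h.T @ delta_out
    grad_W1 = X.T @ ((delta_out @ W2.T) * h * (1 - h))

    # Gradient descent: nudge weights in the direction that reduces the error.
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

print(np.round(pred, 2))  # should approach the XOR labels [0, 1, 1, 0]
```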

Thus, today humans have an obligation to ensure that our decisionmaking artificial intelligences are making moral decisions. Human biases embedded in training data have already produced biased outcomes in everything from beauty contests to lending. At the very least, we need to audit and scrub our data to ensure the intelligences are initialized without prejudices. While robot citizenship is probably far in the future, this is one step that can be taken immediately. Perhaps in the future, certification companies will arise, giving their mark to firms that perform these tasks to their standards.
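As one simplistic illustration of what such a data audit might look like, the sketch below checks whether the labels in a training set already encode different outcome rates across groups before any model is trained. The column names and the threshold are hypothetical, chosen only for the example.

```python
# A simplistic data audit: before training, check whether historical labels
# already encode different approval rates across demographic groups.
# Column names ("group", "approved") and the 0.2 threshold are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

approval_rates = training_data.groupby("group")["approved"].mean()
print(approval_rates)

# A large gap between groups suggests the labels carry historical prejudice,
# and the data should be investigated before a model learns from it.
gap = approval_rates.max() - approval_rates.min()
if gap > 0.2:  # arbitrary illustrative threshold
    print(f"Warning: approval-rate gap of {gap:.2f} across groups")
```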
