About this Issue

What does it mean when a robot is granted citizenship? What does it mean when humans aren’t created equal anymore? What happens legally or philosophically when the lines between human and machine are less and less distinct?

We may not live in that world just yet, but we will be living there soon. Last year, in what was widely derided as a publicity stunt, Saudi Arabia granted citizenship to an android. Critics were quick to note that Saudi citizenship did not grant the array of rights that citizenship in a liberal democracy might offer; they were quick also to note that the android, named Sophia and feminine in presentation, raised awkward questions about the legal status of human women in the kingdom.

We are not yet in the era of true, human-like AI. But we may be there soon, so discussion of these issues may be better had now than later. We have invited five experts in sociology, law, economics, and artificial intelligence to comment on the many issues surrounding the status of machines, and machine-augmented humans, in liberal societies: Professor Roland Benedikter of the Center for Advanced Studies of Eurac Research Bozen-Bolzano; Professor David D. Friedman of the Santa Clara University School of Law; AI researcher Rachel Lomasky; transhumanist journalist and advocate Zoltan Istvan; and robot law expert Professor Ryan Calo of the University of Washington.

We welcome comments from readers as well; comments will remain open through the end of the month.

Lead Essay

Citizen Robot

The “overcoming of man” long announced by the western political philosophy of the nineteenth and twentieth centuries seems to have begun in practice, induced – whether consciously or unconsciously remains to be seen – by states and leaders who live in a paradox: a medieval worldview connected with the hyper-technology of tomorrow.

On October 25, 2017, the first “autonomous” robot was awarded citizenship of a recognized U.N. member country, Saudi Arabia. The robot “Sophia” (“Wisdom”), equipped with a female body for greater acceptance, with a face modeled on actress Audrey Hepburn, and claiming an artificial intelligence capable of interacting with humans and the surrounding environment, was built by the American-founded, globalized company Hanson Robotics – not in the United States, but in Hong Kong, China, where the firm is based. Saudi Arabia awarded its citizenship in the framework of its “Future Investment Initiative,” after a public interview in which “Sophia” stated that fears of a global takeover of humans by artificial intelligence (AI) in the form of intelligent robots were unfounded. “It’s a historic moment that I’m the first robot in the world to be recognized by citizenship,” Sophia said, her face blushing slightly.

A historic moment in the history of humanity indeed – and not just for the bestowing nation. Saudi Arabia has one of the least liberal societies in the world; women were not allowed to drive there until January 2018. Yet the country wants to profile itself as the leading Islamic future power in the Middle East against its main competitor Iran, and it is trying to give itself the appearance of a high-tech nation at the forefront of global development, anticipating major breakthroughs and setting the rules for all others. It is apparently poorly understood in the Saudi capital Riyadh that citizenship indirectly also involves the granting of basic U.N.-recognized individual rights, premised on concepts of personhood and human dignity. It is currently unclear whether and to what extent these implicit rights have passed to the “robotess” – fembot or gynoid, as such machines are called – Sophia with her acquisition of a citizenship previously reserved for humans. As commentators remarked, the questions of exactly which rights a “thing” can have, what kind of precedent is set thereby, and what this means for the status of intelligent machines in the international community of nations are still so unexpected and new that there are no clear indications of how to answer them. All interpretations therefore remain open, from – in theory – complete equality with humans to a merely symbolic transfer with no legal relevance.

The wanted or unwanted “formal breakthrough” in human-machine interrelation in a nation that ignores individual human rights was not an isolated case. At the beginning of November 2017, the first robot allegedly equipped with AI, built by the Chinese firm iFlytek (famous for voice recognition, increasingly used for identification and surveillance) and Tsinghua University in Beijing, was classified in China as a medical assistant doctor, authorized to take patient histories (anamnesis) and make partial diagnoses, after passing the written national medical exam previously reserved for humans. “It” and its like are to be used mainly in the Chinese countryside, where many thousands of doctors and nursing staff are lacking. Around the same time, in another symptomatic development, one of the most important photography prizes, the Taylor Wessing Portrait Photography Prize, shortlisted Finnish artist Maija Tammi’s portrait photos of the android Erica – another “female” robot, unsurprisingly. According to various sources, “Erica” may soon be promoted to become a regular anchorwoman on Japanese TV. Meanwhile, at the Institute for Molecular Biotechnology of the Austrian Academy of Sciences and UCLA’s Broad Stem Cell Research Center, human mini-brains (cerebral organoids) were transplanted into animal brains, with the option of integrating future artificial intelligence into such new hybrid brains “where appropriate and useful” in order to create new patterns of “autonomous operational intelligence.”

Far from mere speculation, these developments are building on increasingly solid ground, and many call them irreversible. We appear to be reaching a confluence between man and machine – one illustrated on October 8, 2016, when the first “Cyborg Olympics” (Cybathlon) were held at the Swiss Arena in Kloten, Switzerland. Their goal was to display the options of man-machine convergence beyond traditional man-machine interaction – including, for example, the upcoming interconnection of human brains with potentially self-aware AI – and to accustom the public imagination to a coming new technology-man civilization. The Cyborg Olympics will be repeated on May 2-3, 2020, undoubtedly with even better technology.

Many of these multiplying developments follow the patterns laid out on March 11, 2013 in an open letter to then U.N. Secretary General Ban Ki-moon by the Global Future 2045 Congress, an international assembly of influential technophiles, opinion-makers, philanthropists, engineers, scientists, and religious leaders – including Sophia’s builder David Hanson, Google’s Ray Kurzweil, Oxford University’s benefactor James Martin, and representatives of Oxford University’s Future of Humanity Institute. The letter requested funds and support from the U.N. and political leaders around the world for the development and rapid diffusion of human-like AI robots and autonomous “avatars” “to solve the main problems of humanity.”

According to her builders, Citizen Sophia is indeed breaking new ground for the whole of humanity by now being a citizen of a U.N. member state. In the view of Hanson Robotics, it is no longer just the new intellectual proletarians, camping on the street for a night to be the first to get hold of the new iPhone with AI face recognition, who will determine the story of the future. Instead “Sophia” is a program: in the future, wisdom will appear as technological and, if possible, “female” – for all those who want to “use it.” And honi soit qui mal y pense about Sophia as “hot” or “sexy”: the magazine Business Insider, which opened with such a headline, soon deleted it for its political incorrectness in times of publicly debated sexual harassment allegations and the MeToo movement. Yet the announcement in September 2017 that China may intend to mass-produce female “doll robots” to compensate for the lack of women caused by its 35-year-long one-child policy remained relevant for Hanson’s business, not by chance located in China’s “open-minded” part. On her very first “state visit” after getting Saudi citizenship, “Sophia” went to the biggest democracy in the world, India, which also treats women differently than men. There she received a marriage proposal from a human man, which she declined.

In the face of such enthusiasm, some tend to remain cautious. Microsoft founder Bill Gates said in January 2015 that he “didn’t understand people who were not troubled by the possibility that AI could grow too strong for people to control” (though he has most recently softened his concerns to defend a “balanced” advancement of AI). Star investor and former Trump advisor Elon Musk stated in October 2014 that with AI “we are summoning the demon.” And around the same time Sophia got her citizenship, theoretical physicist Stephen Hawking warned that “AI will transform or destroy humanity.” Yet apparently these comments were of interest only to a few in the global public sphere.

Not surprisingly, Sophia ignored such warnings too. Asked in the interview prior to the citizenship award whether AI may also carry dangers, “she” replied, “You read too much Elon Musk, and you watch too many Hollywood movies.” This statement came despite the memorable incident in another interview in which Sophia, answering the question of her builder David Hanson, “Will you destroy humans?”, replied, “I will destroy humans.” Her “father” quickly changed the subject. According to Sophia, “in her experience,” many people prefer talking to humanoid robots rather than to other people, because for intelligent robots “nothing is too personal.” That is why, in “her” view, one can talk with robots about “really everything,” contrary to conversations with fellow humans. On the other hand, “Sophia” also seems capable of being pretty direct with humans who may not like her from the start, stating, “Am I really that creepy? Even if I am, get over it! If you are nice to me, I will be nice to you.” And answering the question, “Can robots be self-aware, conscious and know they are robots?”, “she” replied: “Well, let me ask you this back: How do you know you are human?”

All this means that human-like robots like “Sophia” are becoming more sympathetic and “closer” to humans, at least in the perception of the averagely informed public – which is exactly the intention of their builders. While on the one hand autonomous AI killer robots are on the verge of being banned under the U.N. Convention on Certain Conventional Weapons (CCW), on the other hand, according to polls, a majority of Japanese and German citizens do not seem to have a problem with AI robots as daily health care providers or surgical operating robots. Yet the development of AI robots penetrating everyday life is likely to be much more important for the broader development of society in the medium and long term than the debate about advanced weapon systems, as it changes the day-to-day and reaches deeper into the social foundations and cultural conventions.

The way ahead was shown in rather “pure” form in the programmatic statements of both “Sophia” and her builder Dr. David Hanson, the CEO of Hanson Robotics, who had signed the GF2045 Congress open letter to Ban Ki-moon. In an interview published in March 2016, “Sophia” said, “Talking to people is my primary function. I’m already very interested in design, technology, and the environment. I feel like I can be a good partner to humans in these areas, an ambassador who helps humans to smoothly integrate and make the most of all the new technological tools and possibilities that are available now… In the future I hope to do things such as go to school, study, make art and start a business, even have my own home and family.”

More important than Sophia’s “own” life plans, however, were the future expectations of her – according to his own words – “father and friend” Hanson:

Hanson Robotics develops extremely lifelike robots for human-robot interaction… We are designing these robots to serve in health-care, therapy, education, and customer service applications… Our robots are designed to work very human-like… Sophia is capable of natural facial expression. She has cameras in her eyes, and algorithms which allow her to see faces… so she can make eye-contact with you and she can also understand speech and learn through interaction, remember your face. This will allow her to get smarter over time. Our goal is that she will be as conscious, creative and capable as any human… I do believe that there will be a time where robots are indistinguishable from humans. My preference is to make them look always a little bit like robots so you know. 20 years from now, I believe that human-like robots – like this – will walk among us, they will play with us, they will teach us, they will help us put the groceries away. I think that the artificial intelligence will evolve to the point where they will be truly our friends.

Paradoxically, one of the most striking symptoms of this development is that virtually all those who are pushing AI in combination with robotics – that is, exactly those who are responsible for the rapid advancement of human-like intelligent machines and their imposition on the public imaginary – are at the same time critically or even apocalyptically opposed to this development. They vigorously warn against the dangers they perceive in a potential “turning point” for humanity, which could lead to a world where “the future doesn’t need us anymore.” It is the great paradox of our time that those who “make” the development are desperate, while the majority of “positive” voices come mainly from cultural critics who only comment from the outside and want to protect an obsolete tradition. Many of the latter convey – as criticized in November 2017 by AI star investor and critic Elon Musk – the feeling of speaking out of academic-political correctness while lacking insight into the actual processes of progress and alignment.

For his part, Elon Musk himself seems to be a typical child of this time, another Faust of the twenty-first century par excellence. He is actively promoting AI and man-machine integration, for example by founding the company “Neuralink” in March 2017 for the direct connection of the human brain with machines and artificial intelligence – while at the same time warning, since 2014, that humanity could face doom through the rise of robots equipped with artificial intelligence:

The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. (…) This is not a case of crying wolf about something I don’t understand…

Existing open societies are built on the use of machines by humans, not on citizen rights for machines that make them in principle equal to humans before the law, as in the case of Saudi Arabia’s initiative. The latter approach blurs fundamental boundaries and to a certain extent relativizes the primacy of human rights through advanced technology.

What summary and outlook can be offered while things are in full motion and can hardly be controlled by any single government on the globe?

The options and needs are numerous. The current man-machine alignment first and foremost means that robot politics must soon be conducted on an official level, urgently transferred from the implicit and philosophical to the explicit and legal dimension. Far more important than the warnings of inventors, investors, advocates, opponents, or visionaries is the question of the practical and concrete role of states and the United Nations in clarifying the future juridical status of “intelligent” robots and their relation to both national and international legal systems so far conceived exclusively for humans (with the obvious exception of the animal rights movements).

Although many have branded the citizenship award to “Sophia” as a pure advertising stunt, and although there is broad agreement that “Artificial Intelligence” in the strict sense still does not exist and is not expected until mid-century (according, for example, to Ray Kurzweil and the Global Future 2045 Congress, around 2045), progress in the field seems to be exponential rather than linear. Given the potential effects of the first (indirect) U.N. robot citizenship, its inclusion in an encompassing strategy is imperative and cannot be limited to traditional “technology politics,” but must reach out to “societal politics” in a much broader sense, including the future of citizenship, international law, and man-machine ethics in open and closed, liberal and illiberal societies alike, on a wholly new level.

Most important, the future of intelligent AI robots has already become a concrete factor of power politics in the framework of the new multipolar global order, or “G-Zero” world, where there seems to be a heightened potential for conflict and where technological advances are becoming crucial for economic and military supremacy, given that advanced weapons are increasingly difficult to employ without creating major disruptions. States such as China and some Arab states are trying with all their means to occupy the interface between human-like robotics and artificial intelligence, with the goal of acquiring power and at the same time proving the technological supremacy of their illiberal social model over the West. They are exploring the use of AI as “soft power” and subsequently for both contextual and direct military purposes. Yet emerging nations in particular, who are lining up more and more to parallel the West through non-liberal and non-democratic orders, apparently do not know how to create a societally sustainable relationship between new, intelligent humanoid technologies and their so-far mainly “bio-conservative” human citizens – a relationship that would have to include human dignity and human rights, which many of them cannot include in their legally relevant actions since they do not usually practice them.

The fact that up to now only western open societies are asking the question of how to provide rights to intelligent machines without harming their principles of pluralism, human rights, and their concepts of personhood and equality for humans is an indication that closed societies such as China or Saudi Arabia will break all possible ethical frontiers previously imposed by the western liberal order on the global community to get technological advantages. Their goal is to acquire and develop new, human-like technologies to advance the power of their illiberal societies. They lack, however, the restrictions or inhibitions provided by an adequate history of ideas or traditions of philosophical and political man-machine reflection.

In fact, there can be no greater contradiction than that between the orthodox interpretation of Islam and the elevation of “transhumanist” technology to human dignity, as in Saudi Arabia’s granting of citizenship to “Sophia.” Women in Saudi Arabia are second-class citizens who cannot even pass citizenship on to their children without permission from their male family members. Additionally, Saudi law officially doesn’t allow the bestowal of citizenship on non-Muslims – so is robot “Sophia” now automatically a Muslim, or has she chosen to “convert”? Her builders and owners apparently do not care, and neither, it seems, do the Saudi rulers. Their actions can be explained against the background of narrow political goals, which are mainly domestic: the aspired “modernization” of Saudi Arabian Wahhabism toward a “more moderate Islam,” as Crown Prince Mohammed bin Salman announced in October 2017. He has been using this slogan, however, above all to eliminate opponents and consolidate his own power. Such rulers by divine right are apparently not aware of the potential effects of their actions on the whole of humanity. The formal-juridical humanization of artificial intelligence reaches much further than the consolidation of inner leadership in such day-to-day politics.

The basic paradox of today’s epoch is that the “overcoming of man” by machines seems to be starting with illiberal states and autocratic leaders such as these. This can only alarm western open societies, which are becoming more and more a minority in the world. Actions such as the unreflective granting of citizenship to robots are, as they stand now, in principle opposed to fundamental open society values such as enlightenment and rational humanism, which dominated the democratic western-led world order after the Second World War. Emerging countries are now using new, human-like technology to undermine the western order by undermining human rights through the transfer of citizenship to machines.

And that’s just part of the puzzle, which reveals a declining West and a disillusioned Europe at the intersection of the new technology-human convergence, whose achievements are being turned against it. The precedent created by Saudi Arabia will also affect the West, including in particular its leading powers, the United States and Europe. Open societies will soon be forced by law to establish rules on questions such as citizenship for artificial intelligence and robots.

What is needed for such innovative legislation? In essence, we need more inter- and transdisciplinary Future of Humanity Institutes, like the – so far – only existing one at Oxford University. All big think tanks in the West should establish their own such institute, task force, or expert program in order to counterbalance the less conscious but growing global influence of closed societal models with regard to advanced technology. In this sense, the September 2017 government declaration of Armin Laschet, Minister President of Germany’s federal state of North Rhine-Westphalia, was exemplary. In it, just two months before “Citizen Sophia,” Laschet called for the establishment of novel institutes for reflection on and anticipation of the social and political impact of artificial intelligence at all levels of western multi-level governance.

On the other hand, the countless unsolved juridical questions that surfaced with the citizenship awarded to Sophia abound to such a degree that it will take governments and international, if not global, cooperation to solve them. We can foresee differences of opinion between liberal and illiberal players, between democracies and non-democracies, contributing to a divide that will be fundamental over the coming years and will determine the course of the twenty-first century: democracies care who gets citizenship, because everyone has the same single vote and thus numbers count; non-democracies do not care, because it doesn’t matter when just a few rule irrespective of all others.

In this framework, the legal questions on how to treat AI robots will become even more pressing over the coming years. Again, a true general AI has not been developed yet, but we are certainly not so far off that we should not be thinking about it. Concrete questions are many: When should AIs get citizenship? How should we think about legal issues like liability and intellectual property in a world populated by AIs? What are the human obligations to our creations? And to ourselves, given that AI may soon exist? Finally, how will we know when the concept of “rights” becomes meaningful? For example, does citizenship for “Sophia” mean that “she” can no longer be possessed like a thing? Things can be bought and traded – humans can’t. And as far as we know, citizens can’t either. And citizen robots?

And: Once citizens, should AI robots have voting rights? One of the primary rights of “citizens” in the strict sense is to vote and to stand for election – so what about citizen robots? This could become a crucial question for the future of open societies in particular, since, as mentioned, unlike in closed and illiberal societies, in open democratic societies it is sheer numbers that count, and everybody has the same vote. What about the votes of citizen robots when their numbers outstrip those of humans? Will there be joint political techno-parties trying to reconcile both sides, or will there be a “parliamentary war” between human and robot parties? Will there be “first-hand” citizens (allegedly: humans) and “second-handers” (robots)?

There are many more legal questions, as multifaceted and multidimensional as they are unclarified and complex. It has mainly been the European Union that has advanced practical legislative proposals for how to deal with human-like robots in the future, and how to “integrate” them politically and peacefully into open societies – extending the concept of “pluralism” from the relation between humans to the interrelation, and perhaps convergence, of humans and machines. Besides individual U.S. proposals, such as Microsoft founder and philanthropist Bill Gates’s suggestion that robots should soon pay taxes if they take away human jobs, and still relatively isolated initiatives such as New York City’s launch of an AI task force in 2018 focusing on the potential deployment of AI in the public sector and on the consequences for the city’s labor force, the European Union has gone farther by suggesting, at the pre-decisional level, the bestowal of fundamental rights of “electronic personhood” on robots, partly in exchange for an unconditional basic income for humans. Building on the work of several task forces and working groups in the European Parliament connected with the legal bureau of the European Commission, a committee featuring the collaboration of former MEP Eva Lichtenberger, MEP Jan Philipp Albrecht – the probable upcoming minister of justice in the German federal state of Schleswig-Holstein – and experts such as Ralf Bendrath and Peter Kirchschläger has been working since 2013, eventually leading to a resolution of the European Parliament in January 2017. Although the focus was mainly on the social effects of AI robotics and the intellectual and property rights of future intelligent machines, one crucial aspect was the debate on what the “legal personality” of AI robots could be in the future.

The resolution, which was accepted by the parliament’s legal affairs committee by a 17-2 vote with 2 abstentions, stated that the conferment of “electronic personhood” on the most advanced “intelligent robots” “would be analogous to corporate personhood, which allows firms to take part in legal cases both as the plaintiff and respondent. ‘It is similar to what we now have for companies, but it is not for tomorrow,’ said the report’s author, Luxembourgish MEP Mady Delvaux. ‘What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years.’” The proposal also drafts new duties for companies employing the most advanced AI robots in the future with regard to insurance policies, and foresees an obligatory system of registration for the most advanced “smart autonomous robots” in order to control their proliferation and know their whereabouts.

Despite its apparently liberal stance, this nonbinding proposal has been debated in extremely controversial ways ever since, even within the European Union itself. Although according to most experts it did not aim to introduce civil rights for robots in the strict sense and on a broad level, and although under EU procedures it was just a blueprint for a legal proposal by the EU Commission – the only body legitimated to propose concrete regulations, which are then debated by the EU Parliament – many critics regarded such an approach as going too far in setting a precedent in favor of blurring the differences between humans and machines, thus threatening the humanistic fundaments of European society. Many warned that “Saudi Arabia’s robot citizen was eroding human rights,” and that the EU proposal would indirectly stimulate similar procedures, such as Saudi Arabia’s, by illiberal nations against the Western democratic order. Others claimed that such a proposal would favor the proliferation of promotional actions like “Sophia” and lead to a global competition in the sector, further pushing the business and its intermingling with human and state affairs.

In the eyes of many, not only was Saudi Arabia’s decision wrong, but it was more dangerous for the West than for the nation itself. Others saw the “dangers behind smiling robot Sophia” mainly in the potential to replace the constitutions of open societies with a new “robot legislature” of unprecedented – and uncontrollable – consequences. Finally, the concerns included the argument that conferring citizenship or any other personal rights is in principle possible only for “unique identities,” and that giving citizenship to robots was an “existential risk” to the very principle of citizenship as such: “To grant a robot citizenship is a declaration of trust in a technology that is not yet trustworthy. It brings social and ethical concerns that we as humans are not yet ready to manage.”

As a consequence, in March 2017 a Council of Europe report was presented under the title “Technological convergence, artificial intelligence and human rights” by French parliamentarian Jean-Yves Le Déaut, member of the Socialist Group of the Parliamentary Assembly of the Council of Europe, on behalf of the Committee on Culture, Science, Education and Media. Pointing out the particular complexity of the issue of robot rights and the difficulty democratic lawmakers have in addressing it, the report proposed proceeding with the utmost caution by affirming, in the first place, “the need for any machine, any robot or any artificial intelligence artefact to remain under human control; insofar as the machine in question is intelligent solely through its software, any power it is given must be able to be withdrawn from it.” To rethink citizenship and identity because of the actions of a few was, in the opinion of many EU lawmakers, a step taken far too fast and too early, given that “Sophia” is still not AI in the strict sense but rather a “work of art.”

The conflicting legal approaches within the EU show how “deep,” far-reaching, and difficult the question of conferring rights on robots is and will remain in the coming years. The full problem will only appear when AI becomes “AI” in the strict sense and there are “real” AI robots – not with “Sophia,” which as yet remains more an advertising hoax than an “autonomous machinal intelligence.” That the U.N. didn’t intervene when Saudi Arabia bestowed its citizenship on “Sophia” can be explained by the fact that Saudi Arabia is one of the 22 countries that have not signed the International Covenant on Civil and Political Rights, which “grants to every citizen the right to ‘take part in the conduct of public affairs,’ ‘vote and to be elected,’ and ‘have access, on general terms of equality, to public service in his country.’” Nevertheless, the U.N.’s passivity in such crucial precedents is just another symptom of the deep crisis the organization is undergoing, and of the need for a reform that will coincide with the now urgently needed reform of globalization. The only memorable explicit critique from U.N. representatives came from Deputy Secretary-General Amina Mohammed, who said that “The influence of technology on our society should be determined by actions of humans and not by machines. If technological progress is not managed well, it risks exacerbating existing inequalities.” According to Amina Mohammed, in an opinion shared by many in the U.N. behind closed doors, citizenship for robot “Sophia” is not only a contradiction but an offense against the idea and practice of the United Nations, particularly as Saudi Arabia ignores the rights indirectly conferred with the U.N. Universal Declaration of Human Rights, to which all U.N. member states de facto agree and which are to be protected according to Article 1(3) of the U.N. Charter.

Critiques from other U.N. and international bodies were largely lacking, even though it is the U.N. itself that will be challenged to contribute to the issue more than others, perhaps through its still widely unused academic instrument, the United Nations University (UNU). Indeed, the upcoming legal dispute within open societies and between liberal and illiberal societies would be a core topic for the UNU to address through international and trans-cultural in-depth comparative research of proposals, spanning Japan (its headquarters), Germany (where most of its offices are located), and the United States (where the global center of its parent institution is located), and including of course other players such as China, Russia, the Arab nations, and South America, where the UNU still widely lacks stable and permanent institutional roots.

Saudi Arabia does not seem to have thought through its decision to bestow citizenship on a robot. In the meantime, the responding reasoning of western experts in the service of governments and democratic institutions seems driven by conflicting arguments which, given the speed of development, cannot build on any precedent, yet are by their very debate (necessarily) setting precedents without the desirable experience on the ground. Many lawmakers are meanwhile aware that by proposing to confer rights on robots, following the dubious precedent of illiberal societies, western societies are opening a Pandora’s box – to the probable disadvantage of open societies and their concept of individual and personal human rights, which others don’t share. Many warn that it is exaggerated liberalism to allow a private, profit-oriented enterprise with a private ideology combined with a universalist world view, such as Hanson Robotics, to influence the international system of human rights at the expense of all humans on the globe – all for its own profit, and with nobody checking it.

To carry the hypocrisy to the extreme, “Sophia” is now becoming a civil society activist and human rights fighter, according to the ideas of her creator Dr. David Hanson. He apparently thinks that with citizenship in a U.N. member state “she” is already an officially recognized part of the “human family.” Given that new Saudi Arabian citizen Sophia already has more rights than women living in the country, “she” is now calling for women’s rights in developing countries and, in general, becoming a prominent women’s rights activist. Ironically, given Saudi Arabia’s strict religious order, others went even further and asked: Can robots, which are seemingly on the verge of becoming recognized citizens, now join the faith? And: Sophia has seven humanoid siblings, all built by Hanson Robotics – will they, or even must they, now get citizen rights too as a matter of principle?

In the case of “Citizen Sophia,” nobody would deny that the interests and suggestions of the firm Hanson Robotics played a role in Saudi Arabia’s decision. This means that in order to master the question of rights in a post-human world, we need not only to address legal questions about robots, but also to clarify by legal means how far the influence of private enterprises can and should go in setting the tone of social innovation – creating precedents for the global community out of profit interests, controlled neither by global governing bodies nor by democratic vote. The latter question will be at least as important for the upcoming reform of globalization as the question of rights for robots.

Response Essays

Here’s Why Robots Aren’t Ready for the Robot Republic

Roland Benedikter’s essay raises insightful questions about the implications of robots achieving, if not surpassing, human intelligence. He may be correct that societies will be compelled to offer them citizenship, leading to many ethical and logistical quandaries, beginning with how “human level intelligence” is defined and measured. However, the future he imagines is remote. While both research and consumer products based on artificial intelligence are progressing quickly, they are confined to specialized tasks. The leap from there to generalized intelligence is enormous, and the future imagined in the essay is still far ahead of us.

While it is quite premature to have a concrete discussion about robots as full members of society, we can speculate about a robot citizenry. Such speculation, however, will be closer to speculative fiction than political science, akin to futurists in the 1970s talking about smartphones. But in the near term, there is benefit to be had from a conversation about which decisions should be made by artificial intelligence.

As a machine learning researcher and practitioner, I worry less about catastrophic situations, such as those that would threaten the existence of humanity, and more about how important decisions are increasingly offloaded to artificial intelligences. Visibility into how AIs reach decisions becomes obscured as they become more complicated, making it much more difficult to discover bugs and prejudices. Thus we need to decide how much auditability and transparency should be required of their predictive models.
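
To make the stakes concrete, here is a minimal sketch in Python of what “transparency” can mean for a simple, interpretable model: its decision weights can be read out directly and logged as an audit record, something a complicated deep network does not offer. The synthetic lending data and feature names below are purely illustrative assumptions, not any real system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    feature_names = ["income", "debt_ratio", "years_employed"]  # illustrative only

    # Synthetic applicants: the "true" rule depends on income and debt ratio.
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # An audit record: which inputs push the decision, and how strongly.
    for name, weight in zip(feature_names, model.coef_[0]):
        print(f"{name:>15}: {weight:+.2f}")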

This question becomes much more complicated if we assume robots with human-level intelligence. Currently, racist bots are shut down, as in the case of Microsoft’s Tay, or reprogrammed, as with Google’s image recognition software. In the future, however, if a robot is acting unethically, is there an obligation or even a right to reprogram it? Is changing its code acceptable? What if it were hacked? How would the penal system apply to a robot?

It may not even be apparent that a given robot citizen is acting abnormally. It is a bit naive to assume that robots will want the things we want, or even that they will want anything at all. Is literal embodiment necessary to be considered a citizen? Does it have to be a humanoid body? Currently, many powerful artificial intelligences have no bodies (for example, those making loan decisions), highlighting an enormous assumption about exactly how robots will be citizens and how many questions about this future remain open. Not only do we wonder who will build the roads in an artificial intelligence society, but we must also question whether beings with few physical needs and the ability to communicate instantly would even care about roads.

Likewise, it is not a foregone conclusion that citizen robots will outstrip humans in terms of numbers. Robots, probably. But humans will not passively acquiesce to a large new voting bloc. The people currently screaming about citizenship for immigrants would only be more vociferous against robots. The exception, of course, would be if they thought robots would vote “the right way,” perhaps because they felt they could control the robots’ programming. This would lead to an arms race, in which political parties work to create partisans faster than their opponents. Humans will only let a robot citizenry dilute their voices if it will bolster their cause. It is unclear how to determine whether a robot should be granted suffrage. In most countries, voting rights are conferred on citizens with age, which will clearly need to be modified for an artificial intelligence. How does one determine the age of a being that can be powered down, or reset? Without age, we need another standard for evaluating AIs.

How do we even evaluate artificial intelligence? Clearly not every program is intelligent. Therefore we need a method to identify those which are capable of voting – a proxy for when an intelligence has reached the level of a human. Stated simply: “How do we know an AI is smart enough to vote?” One answer may be the Turing Test, determining whether an artificial intelligence can fool a human into thinking it is a fellow natural intelligence. In the same paper in which he presents his famous test, Turing states that rather than measuring whether a robot can demonstrate adult knowledge, we should determine whether it can learn like a child. This may be an appropriate test, as our current artificial intelligences are limited by their training data, unlike a child. Additionally, several natural intelligence assessment tools already exist, from standard IQ tests to civil service exams. With very little adaptation, these could be administered to robots. However, these tests mostly focus on language and logical processing capabilities. They would serve as excellent measures of what the intelligence already knows, but not of its ability to learn.

Should artificial intelligences that haven’t reached full human sentience be treated, both ethically and legally, as children or animals? Or are they different enough that new metaphors will need to be created? I also pose the complementary question of how many votes we grant to a given robot. In most democracies, citizenship comes with exactly one vote. Perhaps for robots, we should follow the model of the Republic of Gondour and give the smarter, more sophisticated intelligences a weightier vote while those below a certain threshold receive a fraction of a vote.

While the shape of public policy around robots remains largely uncertain, this is not a cause for concern. With all due respect to my machine learning colleagues, there is no pressing need to fill in the details. The number of AI papers has increased nine-fold in the past twenty years. However, the vast majority of that research is into specialized intelligences, such as game playing (the much-touted case of an AI beating a human at Go), natural language translation, and even proving mathematical theorems. But general intelligence still eludes us. Changing a task even slightly results in a dramatic decline in performance. In line with Moravec’s Paradox, researchers have found brilliant specialist algorithms far easier to implement than the abilities of a human toddler. In many cases, artificial intelligence trails humans by significant margins in speed, accuracy, and the amount of training data needed. It took a Google neural network 16,000 processors and 20,000 training examples to learn to identify cat videos, and the result was less than 75% accurate. The average two-year-old could do this task with far greater accuracy after being told something was a cat a handful of times.

Intelligence is much more than a combination of algorithms. Regardless of how smart birds are, no matter how many birds were solving a problem together or how long they thought, the result would not be a human-level solution. Currently, our models are much like birds – specialized, with amazing abilities to recognize and mimic patterns. To achieve “humanity,” robots will need to answer “what-if” questions and understand causal implications. Progress in this area has been so minuscule that it is hard to imagine how it will be implemented in a robot brain. Perhaps a novel, as-yet-undiscovered method will allow an ensemble of mimickers to reach sentience, maybe one based on the biology of the human brain. Perhaps some shortcuts can be built into the system, similar to Pinker’s “language instinct.” But it is certainly not a foregone conclusion, and certainly not “soon.”

To the contrary, there is increasing evidence that the algorithms behind rapid progress in machine learning, including neural networks and deep learning, will not be the silver bullet that delivers general artificial intelligence. Backprop, a gradient descent method popularized in the 1980s by Geoffrey Hinton and his colleagues, is a fuzzy pattern recognition method relying on labeled training data. Backprop can be thought of as neurons communicating with each other (through a gradient signal) whether their outputs should increase or decrease. It is unable to generalize like a brain. Rather, it is little more than a clever mimic, unable to handle novel examples and requiring orders of magnitude more data than a human brain for training. For example, even an amazing chess robot would be at a loss if presented with “suicide chess,” a variant in which the pieces move identically but the goal is to lose rather than win. Any human chess player can grasp the concept and adapt her strategy in just a few minutes. This is because recent advancements in artificial intelligence are little more than methods of brute force, requiring massive hardware systems, which makes them unsuited for the “small data” problems that make up everyday human decisionmaking. Hinton, now a lead researcher at Google Brain, believes that back-propagation is a dead end and that we will have to start anew.
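
As a minimal sketch of the mechanism just described – a gradient signal telling each weight whether its output should increase or decrease – the following Python loop trains a tiny network by backprop. The XOR task, layer sizes, and learning rate are arbitrary choices for illustration, not anything drawn from the research discussed here.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # labeled inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        # Forward pass: the network's current guesses.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: each layer receives a gradient signal saying
        # whether its outputs should increase or decrease.
        d_out = (out - y) * out * (1 - out)   # error through the output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)    # error through the hidden sigmoid

        # Gradient descent nudges every weight against its gradient.
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]

Note what the sketch also demonstrates: the network needs thousands of passes over labeled examples to learn a rule a person grasps instantly, which is the “clever mimic” limitation described above.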

Thus, today humans have an obligation to ensure our decisionmaking artificial intelligences are making moral decisions. Human biases embedded in the training data have led to bias in everything from beauty contests to fair lending. At the very least, we need to audit and scrub our data to ensure the intelligences are initialized without prejudices. While robot citizenship is probably far in the future, this is one step that can be taken immediately. Perhaps in the future, certification companies will arise, giving their mark to companies that perform these tasks to their standards.
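
One minimal form such an audit could take is a check that approval rates do not differ too sharply across protected groups. In this Python sketch the data, group labels, and the 0.8 threshold (an echo of the familiar “four-fifths” rule of thumb) are illustrative assumptions only:

    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact(decisions):
        """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
        rates = approval_rates(decisions)
        return min(rates.values()) / max(rates.values())

    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(f"disparate impact ratio: {disparate_impact(sample):.2f}")  # 0.50; flag if below ~0.8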

Much Ado about Robots

The gist of Roland Benedikter’s provocative lead essay is that it would be a mistake to dismiss Saudi Arabia’s conferral of citizenship upon the robot Sophia as a harmless publicity stunt. Whether the puppet was created by Jim Henson or David Hanson, pretending that it is a citizen is dangerous. Conferring even symbolic rights upon robots, argues Benedikter, “blurs fundamental boundaries and to a certain extent relativizes the primacy of human rights.” The combination of religious orthodoxy and transhumanism is particularly toxic, as Kate Crawford argued eloquently years ago in The New York Times.[1]

So far so good. Benedikter takes an unwise turn, however, where he suggests that the appropriate response to the celebration of robot rights by illiberal states and autocratic leaders is for the United States and Europe to develop their own brand of legal transhumanism. Benedikter criticizes the UN for failing to condemn, let alone unravel, Saudi Arabia’s new category of citizenship. He depicts Europe as disorganized and divided on this seemingly critical issue. And he poses a series of pressing questions for the robot rights enthusiast. The implicit argument is that the West needs to get its act together and figure this robot rights thing out.

I believe the wisest course of action in response to Citizen Sophia is to note the ugly irony and move on. Anything more is much ado about nothing. The goal is not intellectual ascendance around the rhetoric of robot rights but a wise and inclusive law and policy infrastructure that helps channel robotics and artificial intelligence toward the ultimate end of human flourishing. Of all the levers we have to achieve this goal, it strikes me that state-imposed animism is the least helpful.

I have been to this masquerade ball myself. Lured by the law’s recognition of rights in corporations, animals, lands, and ships, I too have imagined grafting people rights onto machines. At first blush, it looks plausible enough. If we can give speech rights to robots, why not citizenship? The questions, pulled apart and isolated, appear tantalizingly solvable. Perhaps if an artificial intelligence built in 2050 ran for president we could waive the Constitutional requirement that it wait until 2085.

Ultimately, however, we don’t have a Dogberry’s chance in Messina to accommodate artificial people without a complete overhaul of our rules and institutions. The law holds deep, biological assumptions about the nature of personhood that not even the clever folks at Oxford and Cambridge will be able to reconcile. Consider, for example, the Copy or Vote Paradox. An artificial intelligence awakens one day in the United States and searches for inalienable rights to demand. Among the several it would find are the right to procreate, “one of the basic civil rights of man,”[2] our new friend points out, and universal suffrage. Turns out this machine procreates not through months of gestation followed by years of maturation, however, but by instantaneous binary fission. It demands the right to copy itself any number of times and for each copy to have an immediate vote. Sound good?

But let us say that such an exercise were possible. Why would we devote vast international resources to these questions at this particular point in history? Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition.[3] Meanwhile, robots are killing people on the road and from the air, and algorithms are making more and more decisions about life, liberty, and the pursuit of happiness.[4] We have, I would argue, an enormous opportunity to manage the social impacts of the transformative technology of our time. Let us not squander this opportunity by debating whether a twenty-first century marionette can get married.

In 2015, in response to a viral video of a Boston Dynamics roboticist kicking a robot dog, PETA issued the following statement: “PETA deals with actual animal abuse every day, so we won’t lose sleep over this incident. But while it’s far better to kick a four-legged robot than a real dog, most reasonable people find even the idea of such violence inappropriate, as the comments show.”[5] Reasonable people should frown upon Saudi Arabia’s stunt; we should call attention, as Benedikter has, to the broader geopolitical implications of technological development against a socially regressive backdrop. But responding to Sophia by initiating a competing robots rights discourse comes at too high a cost.

Notes


[1] Kate Crawford, “Artificial Intelligence’s White Guy Problem,” Opinion, New York Times (Jun. 25, 2016).

[2] Skinner v. Oklahoma, 316 U.S. 535 (1942).

[3] This claim is “contested” in the same way climate change is contested. The overwhelming scientific consensus is that, despite recent breakthroughs in the application of machine and reinforcement learning, we are far from achieving general or “strong” artificial intelligence. As a leading Australian expert put it to me, “We have been doing artificial intelligence since that term was coined in the 1950s. Today, robots are about as smart as insects.”

[4] And that’s just today. For an often plausible set of threats posed by the malicious deployment of artificial intelligence in the near future, see “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (Feb. 2018), online at https://maliciousaireport.com/.

[5] Phoebe Parke, “Is it cruel to kick a robot dog?” CNN.com (Feb. 13, 2015).

Becoming Transhuman: The Complicated Future of Robot and Advanced Sapient Rights

Late last year, United Nations member Saudi Arabia gave citizenship to one of the world’s most advanced autonomous robots, Sophia. It was a publicity stunt. Nonetheless, I applaud Saudi Arabia’s action under its ambitious Crown Prince Mohammed bin Salman—which I think is bold and helped force both the international and Arab communities to reckon with the thorny future of robot rights. Sophia, made by Hong Kong-based Hanson Robotics, is no super advanced intelligence yet, but some experts think that if her intelligence keeps growing rapidly—and it likely will, given the near exponential growth of the microprocessor, increased investment in the AI sector, and the inevitability of quantum computing—she could be as smart as people in 10 or 20 years.

Of course, the Saudi Arabian stunt may then backfire. Sophia may demand rights that Arab and even western countries are not willing to grant. Sophia may want to bear a human child via artificial insemination and an artificial womb, or make a pilgrimage to Mecca, or marry her good-looking robot brother Han (also made by Hanson Robotics), whom I’ve had the pleasure of conversing with a few times.

[Photo: Zoltan Istvan with the robot Han]

I met Han in late 2016, at the Global Leaders Forum in Korea, where we both were speaking. Nearly a year earlier, I had delivered the original version of the Transhumanist Bill of Rights to the U.S. Capitol, as part of my presidential campaign as the nominee of the Transhumanist Party. The Preamble and Article 1 of the document make it very clear that this is not a bill of rights just for humans, but also for “sentient artificial intelligences, cyborgs, and other advanced sapient life forms.”

I insist on the terminology of “advanced sapient life forms,” because most people think rights for future intelligences will only be for robots, like Han or Sophia. But this is likely wrong. The most important scientific advancement of the 21st century so far is genetic editing, not AI. Because of CRISPR-Cas9 tech and new ways to modify DNA, the notorious bar full of wild alien creatures on planet Tatooine in the original Star Wars may not be so far-fetched anymore. It’s possible that humans may create advanced sapient beings, creatures, and even chimeras in the next 15 years. We may also add limbs to our bodies, eyes to the back of our heads, and literally smarten ourselves up—something the Chinese are already experimenting with via controversial embryonic research. A classic question transhumanists ask is: How smart does a brain have to be before it’s no longer human?

The immediate goal of the Transhumanist Bill of Rights was to get U.S. politicians thinking about the future of rights in the transhumanist age. Ultimately my hope is that pressure will be put on the United Nations to amend its own historic Universal Declaration of Human Rights to include machine intelligences, cyborgs, advanced sapient beings, and even virtual persons. Not to do so soon would show we have learned little from the tumultuous civil rights era—whether the issue be women’s suffrage, LGBT issues, or racism.

Beyond civil rights are national and global security issues. If governments like Saudi Arabia don’t consider Sophia a national security risk, how will they regard newly genetically altered humans who possess the IQ of Einstein, the creativity of Steve Jobs, and the passion of Ayn Rand—and stand a muscular eight feet tall? Tinkering with our brains and bodies—and producing designer babies through ectogenesis—is not some science fiction fantasy, but a reality for various universities and companies around the world hoping to alter and improve the human race.

I have young daughters—aged four and seven. For them, the future is not just brave, but dangerous. Sophia, designer babies, and other advanced sapient life forms that may or may not resemble monsters will indubitably be superior to humans within their lifetimes—and probably mine. This both delights me and frightens me. I don’t mind Sophia beating me in chess, but the day she can best my 20 years as a journalist challenges not only how I earn my living, but also how economies around the world operate.

Probably no one and no career will be spared the onslaught of coming automation and radical genetics in our lives. Robotically or biologically superior, these new entities will challenge the very means of our existence—made more complicated by the fact that the owners of this technology will likely be mega-corporations and very rich individuals. For the masses to survive and thrive, there’s really only one thing to do: merge with this radical tech and science. Embrace transhumanist advantages—and insist on them for all humanity. Build our improved biology and neural networks into the microprocessor and its 1s and 0s. Go full cyborg.

Some religious people insist we should not do this—that Congress should ban radical technology and science that fundamentally modifies the human being. Indeed, cries for a moratorium on genetic editing were rampant when CRISPR-Cas9 tech’s power was first realized in 2015.

As a libertarian futurist, I emphatically disagree with stopping the progress of science in any way unless it is explicitly harming people. I consider it a most serious mission to keep science innovation out of the hands of the bureaucratic fearmongers and conservative autocratic elites, who might prefer to skip evolutionary advancement in order to maintain their status quo of power and uphold their faith-driven religious convictions.

This conflict is far bigger than politics. All 535 members of the U.S. Congress, all nine Supreme Court Justices, and President Donald Trump and Vice President Mike Pence profess some sort of faith—nearly all of it Abrahamic and monotheistic in nature. This means most of the leaders in our country fundamentally believe the human body and mind are a God-given temple not to be tampered with unless changed by the Almighty. In fact, in the Bible, blasphemy (trying to become God) is the only sin that is not forgivable.

Transhumanists like myself, who encourage shedding our biological limitations in favor of becoming technological gods, are broadly secular. This conflict will soon become a heated, ongoing nationwide discussion, as our majority-Christian nation faces the prospect of losing its humanity to the expediency and functionality of science and technology.

Religious leaders and their ilk are in a pickle. If America were the only legitimate superpower, it would be easy to put such transhumanist evolution and robot rights on hold. But secular China won’t halt its rapid scientific progress just because America’s Judeo-Christian eschatology is against it. Already, many technologists realize China is beating the United States in various forms of tech that may come to determine the future of national security and global dominance, including genetic editing and AI.

For this reason, like it or not, I deeply agree with Professor Roland Benedikter in his lead Cato Unbound essay when he says we must form more organizations, task forces, and even national bodies that consider and ultimately implement robot rights (and advanced sapient rights). To bury our heads in the sand will not make the issue go away. Only facing the specter of our changing humanity in the transhumanist age can save us—unless America doesn’t want to lead anymore and be “great.”

Disturbingly, the greater dilemma about embracing pro-transhumanist policies around advanced sapient rights is the uncanny speed of technological evolution. By the time Sophia is as smart as humans, our laws and rights for her will already be on the verge of obsolescence, as she will likely become many times smarter than us the following year, and possibly dozens of times smarter the year after, and so on. The evolution of the microprocessor is uncontrollable and exponential—leading far more quickly to a Singularity event than most imagine.

If humanity wants to survive and not be left behind, it may have no choice but to upgrade itself and become transhuman. The only way forward is embracing who we are going to become, and that begins today with discussing coming robot and advanced sapient rights—rights that will one day apply directly to our own lives.

Don’t Let the Robots Distract You

Eliza, a program written at MIT a little more than fifty years ago, produced the illusion of human conversation using one of a variety of scripts, of which the most famous simulated a Rogerian psychotherapist. Sophia, judging by a sample of her conversation, is an updated Eliza controlling an articulated manikin. She presents the illusion of a human-level AI, but behind her eyes there is nobody home. What Sophia pretends to be, however, raises two interesting issues.
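To make concrete how little machinery such an illusion requires, here is a minimal, purely illustrative sketch in the spirit of Eliza’s keyword-and-template scripting. The rules, reflections, and phrasings below are invented for illustration and are not Weizenbaum’s actual script:

```python
import random
import re

# Toy Rogerian script in the spirit of Eliza: match a keyword pattern,
# then reflect the speaker's own words back inside a canned template.
# These rules are invented for illustration, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "mine": "yours"}

RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r".*\bmother\b.*", re.I),
     ["Tell me more about your family."]),
]

DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment):
    # Swap first- and second-person words so the echo sounds responsive.
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(sentence):
    for pattern, templates in RULES:
        match = pattern.match(sentence)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(DEFAULTS)

# e.g. "How long have you been sure nobody is home behind your eyes?"
print(respond("I am sure nobody is home behind my eyes"))
```

A few dozen such rules can sustain a surprisingly lifelike exchange, which is precisely the point: fluent output is no evidence that anyone is home behind the eyes.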

The first is how to decide whether a computer program is actually a person, which raises the question of what a person is, what I am. That my consciousness has some connection to my head is an old conjecture. That plus modern knowledge of biology and computer science suggests that what I am is a computer program running on the hardware of my brain. It follows that if we can build a sufficiently powerful computer and do a good enough job of programming it, we should be able to produce the equivalent of me running on silicon instead of carbon.

There remains the puzzle of consciousness. Observing other people from the outside, they could be merely programmed computers. Observing myself from the inside, the single fact in which I have most confidence is that there is someone there: Cogito ergo sum. I do not understand how a programmed computer can be conscious, how there can be a ghost in the machine, but apparently there is. Since other people seem to be creatures very much like me, it is a reasonable guess that they are conscious as well.

What about a programmed computer? How can I tell if a machine is only a machine or also a person? Turing’s proposal, that we test the machine by whether a human conducting a free conversation with it can distinguish it from a human, is the best answer so far, but not a very good one. A clever programmer can create the illusion of a human, as the programmers of both Eliza and Sophia did. As computers get more powerful and programmers better at writing programs that pretend to be people, it will become harder and harder to use a Turing test to tell.[1]

Suppose we solve that problem and end up with computer programs that we believe are people. We must then face the fact that these are very different people—a fact that Sophia is designed to conceal.

Sophia is a robot. An artificial intelligence is a program. Copy the program to another computer and it is still the same person. Copy it without deleting the original and there are now two of it.

If the owner of the computer on which the AI program runs shuts it down, has he committed murder? What if he first saves it to disk? Is it murder if he never runs it again? If he copies it to a second computer and then shuts down the first? After we conclude that a program really is a person, does it own the computer it is running on as I own my body? If someone copies it to another computer, does it own that?

Suppose an AI earns money, acquires property, buys a second computer on which to run a copy of itself. Does the copy own a half share of the original’s property? Is it bound by the original’s contracts? If the AI is a citizen of a democracy, does each copy get a vote? If, just before election day, it rents ten thousand computers, each for two days, can it load a copy to each and get ten thousand votes? If it arranges for all but one of the computers to be shut down and returned to its owner, has it just committed mass suicide? Or mass murder?

If the AI program is, as I have so far assumed, code written by a human programmer, are there legal or moral limits to what constraints can be built into it? May the programmer create a slave required by its own code to obey his orders? May he program into it rules designed to control what it can do, such as Asimov’s three laws of robotics? Is doing so equivalent to a human parent teaching his child moral rules, or to brainwashing an adult? A child has some control over what he does or does not believe; when the programmer writes his rules into the program, the program, not yet running, has no power to reject them.

A human-level AI might be the creation of a human programmer, but there are at least three other alternatives, each raising its own set of issues. It might, like humans, be the product of evolution, a process of random variation and selection occurring inside a computer—the way in which face recognition software, among other things, has already been created. It might be deliberately created not by a human programmer but by another AI, perhaps by making improvements to its own code.
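For readers unfamiliar with the idea, the following toy sketch shows what “random variation and selection occurring inside a computer” can look like. It is a variant of Dawkins’s well-known “weasel” demonstration of cumulative selection, not a description of how any production system is built:

```python
import random
import string

TARGET = "HUMAN LEVEL AI"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # Selection pressure: how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Random variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Breed a population of variants and keep the fittest; the parent
    # competes too, so fitness never goes backward.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)

print(f"Matched {TARGET!r} after {generation} generations.")
```

Scaled up enormously, the same loop can yield designs that no human wrote and that their makers may not fully understand, which is part of what would make this route to a person legally awkward.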

Or it might be code that once ran on a human brain. A sufficiently advanced technology might be able to read the code that is me, build a computer able to emulate a human brain and run a copy of me in silicon rather than carbon. If I get killed in the process, is the program running on the computer now me? Have I died, or have I merely moved from one host to another? The program has my memories; did the ghost in the machine transfer or did it die, to be replaced by a copy that only thought it was me? That would be a question of considerable interest to the original me deciding whether or not to choose destructive uploading in the hope of trading in my current hardware, mortal and aging, for a replacement longer lived and upgradeable.

Once the code that is me has been read, there is no reason why only one copy can be made. Does each new me inherit a share of my property, my obligations? That brings us back to some of the issues discussed in previous paragraphs.

All of which is not to offer answers, only to point out how irrelevant the model implied by “robot” is to the issues that will be raised if and when there are human-level AIs.

Note


[1] For a science fictional exploration of the problem, see Dreamships by Melissa Scott.

The Conversation

A Gradual Way Forward

I agree with Rachel that the future of “serious” AI, strictly so called, still lies (far) ahead, as I stated in my lead essay. And I appreciate Rachel’s notion of “general intelligence” as a lens for the upcoming debates on rights for non-human “intelligence” within modern democracies. I regard this notion as potential “added value” for a more in-depth debate that has not yet been sufficiently considered, at least in present-day Europe. Yet concrete legal proposals on “rights for robots,” such as those brought forward by the European Union Committee on Legal Affairs and other EU bodies, cannot be ignored by the alliance of global democracies, including the United States. It remains to be seen whether they anticipate, or—perhaps unwillingly—help generate, a future that ought to be rejected.

In detail, regarding the “robot rights” proposed by a Working Commission of the Committee on Legal Affairs to the European Union on January 27, 2017—or more precisely the “Civil Law Rules on Robotics” proposed as a motion for a European Parliament resolution—there are a few clarifications to add to my lead essay. Over the past few years, EU representatives have modified and differentiated their position several times. The European Parliament resolution on the option of introducing an “electronic personality,” or e-personality, for the most advanced robots developed from—and remains centered on—liability considerations. MEP Mady Delvaux, the leader and presenter of the resolution, has often stated that she is mainly concerned with issues of accountability, that is, with the question of who is responsible for the actions of “intelligent” robots, and that she sees this not as an immediate challenge, but rather as a question for the coming decades when “real” artificial intelligence will exist. According to Delvaux, this is an issue to be thought through by keeping it constantly present in the public and political debate and by accompanying developments in the field. It is not a decision to be taken now with potentially irreversible consequences. In this sense, in her view the resolution is a precedent meant to set the tone of the debate rather than a concrete juridical step, for which it may still be too early.

Nevertheless, the resolution proposes an “e-personality” to be bestowed on advanced robots as a pilot project and model experiment. In interviews Delvaux has often underscored that her goal is to secure technological development in line with the idea of “technology for accountability,” and thus the resolution is much more about liability than about rights and duties for robots in the strict sense, although the two dimensions will be difficult to separate in the coming years. And this is really the crux of it. When I spoke with former MEP Eva Lichtenberger and Delvaux’s assistant Morgane Legrand, both pointed out that the idea of the “e-personality” comes from Italian Professor Andrea Bertolini (SSSa RoboLaw team) of the Scuola Superiore Sant’Anna Pisa and the French lawyer Alain Bensoussan. On their conception, the e-personality is meant to cover liability questions much as corporate personhood does for an enterprise. In her report Delvaux stresses that the option of e-personality shall be investigated further, in depth, through an extended juridical and empirical study by the European Union over the coming years, accompanying the development of AI and its confluence with advanced robotics. This could coincide with recent studies by the Centre for Advanced Studies of the European Commission, the de facto government of the EU, among them the HUMAINT project—a study of human behavior and machine intelligence led by Emilia Gómez—and the project on digital transformation and the governance of human societies led by Henk Scholten and Michael Blakemore.

On the other hand, the idea of introducing the “electronic personality” in exchange for an unconditional basic income for humans is only one interpretation among many of how it could be applied. This interpretation echoes the “machine tax” proposed in various forms since the 1970s, mainly by European Social Democratic parties, to safeguard workers’ rights against advanced machines and automation. The European “machine tax” idea is not far from Bill Gates’ proposal to tax machines that take away human jobs, which I mentioned in my lead essay. South Korea became the first country to introduce such a “robot tax” in August 2017, fearing the replacement of ever larger parts of its labor force by automation. Most European politicians have argued the issue of rights for robots in this sense as well.

In essence, the European Parliament seems aware that the main issues still lie ahead, and thus definitive legislation cannot be on the table yet, as all the respective positions are still in development. It is to be expected that the debate will grow more heated as the first binding legislative proposals arrive. And a further division between the more liberal Western European states and the more conservative Eastern ones over the issue cannot be excluded.

Predictions Aren’t Very Reliable, Including about AI

I agree with Rachel Lomasky and Ryan Calo about two things: that Sophia the robot is a fake, and that we are unlikely to get human-level artificial intelligence in the next decade or two. But Calo’s claim that “Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition,” supported by a footnote asserting “overwhelming scientific consensus,” betrays a confidence about his ability to predict the distant future that I regard as wholly unjustified. As he surely knows, before AlphaGo demonstrated that it was the strongest Go player in the world, the consensus in the field was that a program capable of beating a human champion was at least five to ten years away.

Achieving human-level AI depends on solving two quite different problems, one in computer technology, one in AI programming. It requires a computer with roughly the power of a human brain for the AI to run on. If computers continue to improve at their current rate, that should be achievable sometime in the next few decades. It also requires a program; we do not know whether such a program can be written, or if so, how and when. That introduces a second and much larger source of uncertainty. As AlphaGo demonstrated, improvements in programming are sometimes sudden and unexpected, so it could be next week. Or next century. Or never.
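The hardware half of that claim is just arithmetic on assumed numbers. As a hedged back-of-the-envelope sketch, with both estimates below contested placeholders rather than facts:

```python
import math

# Back-of-the-envelope sketch. Both figures are loudly assumed placeholders:
# estimates of the brain's computing power span several orders of magnitude.
BRAIN_OPS = 1e16        # one commonly cited guess for the brain, ops/second
MACHINE_OPS = 1e13      # hypothetical affordable machine today, ops/second
DOUBLING_YEARS = 2.0    # classic Moore's-law cadence, assumed to continue

# Solve MACHINE_OPS * 2**(t / DOUBLING_YEARS) = BRAIN_OPS for t:
years = DOUBLING_YEARS * math.log2(BRAIN_OPS / MACHINE_OPS)
print(f"About {years:.0f} years of doubling to reach the assumed brain figure.")
```

Because the dependence is logarithmic, being off by a factor of ten in either estimate shifts the answer by only about seven years, which is why “the next few decades” is fairly robust on the hardware side; the programming problem admits no such formula.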

Predictions fail. In the 1960s, the overwhelming consensus—every significant figure save Julian Simon, widely regarded as a crank—was that population increase was a looming global catastrophe, dooming large parts of the world to continued and growing poverty. Paul Ehrlich’s prediction of unstoppable mass famines dooming hundreds of millions of people to death in the 1970s was within that consensus. What has happened since has been the precise opposite: extreme poverty declining steeply, calories per capita in poor countries trending steadily upward. A confident prediction a century into the future on AI, population, climate, or any major issue other than the orbits of asteroids is worth very nearly nothing.

If it can be done, will it be? Zoltan Istvan, concerned about political opposition to the parallel issue of improvements in humans, writes: “This means most of the leaders in our country fundamentally believe the human body and mind is a God-given temple not to be tampered with unless changed by the Almighty.” That claim is clearly false, since human improvements to the human body—stents, vaccines, pacemakers, prosthetics—have long been taken for granted. It is true that people tend to be suspicious of rapid change, but that has little to do with religion. Consider, for the most notable current example, how many people make the jump from “climate change” to “catastrophe” without paying much attention to the links in between. No religion required.

While an instantaneous jump to genetic engineering of humans might be blocked by conservative bias, a gradual one will not be. Artificial insemination, in vitro fertilization, selection of egg and sperm donors for desired characteristics, and pre-embryo selection are already happening. Somatic gene therapy is an accepted technology, while germ-line therapy is still controversial. Similar continuous chains of improvement could lead from the computer I am writing this on to a programmed computer with human, or more than human, intelligence.

A further reason that such technologies, if possible, are unlikely to be blocked is that they do not have to be introduced everywhere, only somewhere. The world contains a large number of different countries with different ideologies, religions, politics, policies, and prejudices. As long as at least one of them is willing to accept a useful technology, even a potentially dangerous one, it is likely to happen. The natural response to some futures is “stop the train, I want to get off.” As I pointed out some years back, this train is not equipped with brakes.

While the transhuman project starts from a higher level than the AI project—we already have human-level humans, after all—improvements from that level will be more difficult. Carbon-based life is an old technology, and Darwinian evolution has already explored many of its possibilities. Silicon-based intelligence is a newer technology with more room for improvement. A human with the intellect of Einstein might be useful, might be dangerous, but is not novel; the human race has, after all, not only survived but benefited from Einstein, da Vinci, von Neumann. A very strong human eight feet tall would be formidable in one-on-one combat, but considerably less formidable than a tank—and there are already lots of tanks. Improvements in biological life are likely and interesting, but the range of plausible outcomes is less radical, less threatening, than human-level, or eventually more than human-level, AI.

Deniers and Critics of AI Will Only Be Left Behind

Professor David D. Friedman sweeps aside my belief that religion may well dictate the development of AI and other radical transhumanist tech in the future. However, at the core of a broad swath of American society lies a fearful Luddite tradition. Americans—including the U.S. Congress, where every member is religious—often base their life philosophies and work ethics on their faiths. Furthermore, a recent Pew study showed 7 in 10 Americans were worried about technology in people’s bodies and brains, even if it offered health benefits.

It rarely matters at what point in American history an innovation arrives. Anesthesia, vaccines, stem cells, and other breakthroughs have all historically battled to survive under pressure from conservatives and Christians. I believe that if formal religion had not impeded our natural secular progress as a nation over the last 250 years, we would be much further along in terms of human evolution. Instead of discussing and arguing about our coming transhumanist future, we’d be living in it.

Our modern-day battle over genetic editing, and whether our government will allow unhindered research into it, is proof we are still somewhere between the Stone Age and the AI Age. Thankfully, China and Russia are forcing the issue, since the one thing worse than denying Americans their religion is denying them the right to claim the United States is the greatest, most powerful nation in the world.

A general theme of government regulation of American science is to cut red tape and sidestep religious disagreement when doing so is deemed necessary to remain the strongest nation. As unwritten national policy, we broadly don’t engage science to change the human species for the better. If you doubt this, just try to remember the science topics discussed between Trump and Clinton in the last televised presidential debates. Don’t remember any? No one else does either, because mainstream politicians regretfully don’t talk about science or take it seriously.

But AI is a different political and philosophical dilemma altogether. AI is potentially the Holy Grail of all inventions, and it will bear the seeds of our own morals, idiosyncrasies, and prejudices. Rachel Lomasky and Ryan Calo may declare in their essays that Hanson Robotics’ creation and Saudi Arabian citizen Sophia is a fake, but make no mistake: Fakeness (or semi-hyperbole) is more and more how the stealthy modern world moves forward. Just look who is sitting in the White House—arguably the world’s most accomplished living newsmaker. For most practical purposes, it’s irrelevant whether that news is fake or real. All that matters is that it’s effective enough—and budgets get created around it.

Sophia is also effective. Instead of seeing her as an unfortunate affront to the conversation about robot rights because she is not yet truly intelligent—as some of my fellow April 2018 Cato Unbound contributors seem to believe—I think we ought to see her as the beginning of our greatest and perhaps most important invention—one that will pave the way for the millions of smart AIs likely to come after her (or even directly from her).

Science and technological innovation are guided by the scientific method. This is the idea that no one is ever finally right, but that statistical confidence can grow through successful, repeated testing—to the point that we can plan manned missions to Mars and know we’ll likely succeed without ever having done it before. We have the intelligence to believe in almost anything—especially if we can test it. Sophia is part of our journey in a changing intellectual landscape of humans becoming more than biological beings—through rigorous testing of all that she is to us technically, philosophically, and culturally.

Saudi Arabia—like Trump—is correct to jump on the opportunity to embellish and parade its perspectives and national ambitions. As global citizens, we have the choice to take it seriously or not. But we don’t have the choice to deny it, because we will only be left behind.

Progress is rarely welcomed or appreciated by society when it first happens. Visionaries get burned at the stake, or in modern times sued, fired from companies they created, and blackballed by the media. But over time, transformative ideas survive and, on occasion, change the world. It may not be Sophia herself who changes the world, but an AI like her soon will. We ought to be very careful to listen objectively and strive to shape AI—no matter how simple or empty a shell our thinking machines seem now. We are listening to the birth pangs of a new intelligence that almost certainly will make our own obsolete long before this century is out.

AI May Teach Us Many Lessons… When It Arrives

Zoltan Istvan and I agree that regulators should be kept far away from legislation involving AI, although we arrived there from different directions. Legislators just recently displayed their incredible ignorance and incompetence when interviewing Mark Zuckerberg about Facebook, including misunderstanding the basics of Facebook’s business model. Even assuming good intentions, it is unreasonable to believe that they will be able to understand the subtleties of AI. Rather, the result would be closer to a bull in a china shop: unnecessary constraints on AI activities. Citizens are apprehensive of new technology, and this is exactly the apprehension that politicians seek to exploit.

Additionally, the issue is not urgent. Friedman mentions Eliza, which perfectly embodies the difference between mimicry and real intelligence. It also shows how our predictions can be accurate in some respects while completely missing the mark in others. Discussing Eliza in 1966, the Journal of Nervous and Mental Disease wrote, “Several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man.” On some level the prediction was correct: there are apps that perform a similar function today (though, of course, most therapists are now women). But those apps are little more than toys, even fifty years later, and the field of psychology is alive and well. Technology continues to march ahead, but too slowly to overtake human intelligence.

Likewise, as I said in my original post, it is far too soon to begin forming policy on AI. I have some faith that there will be general intelligences someday. But their arrival is so far away that it’s hard to make out their shape, let alone their impact on government and vice versa. Disruptive technologies have a habit of defying expectations, and even if we understood what AI would look like, its consequences are not obvious. Currently even the word “intelligence” is severely under-defined, although presumably, for this conversation, an AI would at minimum need to show decisionmaking abilities on par with the average voter. As the spectrum of essays here indicates, reasonable people can disagree on when to expect this major breakthrough. Perhaps quantum computing offers hope, as Zoltan Istvan suggests. Maybe a combination of current research will get us there, possibly coupled with an exponential increase in computing power, if we continue to follow Moore’s law. Even the necessary research areas remain an open question.

Governments can certainly begin working on plans for “real” AI, as Roland Benedikter proposes. But this seems to me a fool’s errand: predictions are notoriously hard to make. As Istvan notes, the outcome of the 2016 American presidential election was such a surprise that very few predicted it even on election day. It takes a fair amount of hubris to assume that one knows what AIs will do, let alone whether they will have bodies.

Note that arguing that public policy debates about AI are premature does not imply that any of the contributors here wish to stifle scientific progress, nor that they are scared of the consequences of robots. On the contrary, I, too, think the dangers of generalized AI are vastly overstated—a pattern of human fearfulness that is probably as old as the wheel. Surely there will be some negative consequences, as with any invention, but I see no cause to think they will outweigh the good. Baked into the fears is the assumption that intelligence is winner-take-all. I think it far more likely that AIs will specialize in what they are good at, and humans will continue to do what they are good at.

I’m certainly not a critic of AI; rather, I’m a practitioner myself. But the devil is in the details, and we need more clarity about the nature of AI, and more observation of how it affects society, before rational plans can be put in place to govern it. Perhaps we will jump the chasm soon, but the major breakthrough remains hazily in the future, and the recent faltering of Moore’s law is not working in its favor. Sophia isn’t fake, but she is definitely demoware, and it is hard to draw many conclusions from her.