AI May Teach Us Many Lessons… When It Arrives

Zoltan Istvan and I agree that regulators should be kept far away from legislation involving AI, although we arrived at that position from different directions. Legislators recently displayed their incredible ignorance and incompetence when interviewing Mark Zuckerberg about Facebook, misunderstanding even the basics of Facebook’s business model. Even assuming good intentions, it is unreasonable to believe that they will be able to grasp the subtleties of AI. The result would be closer to a “bull in a china shop”: unnecessary constraints on AI activities. Citizens are apprehensive about new technology, and this is exactly the apprehension that politicians seek to exploit.

Additionally, the issue is not urgent. Friedman mentions Eliza, which perfectly embodies the difference between mimicry and real intelligence. It also shows how our predictions can be accurate in some respects while completely missing the mark in others. Discussing Eliza, the Journal of Nervous and Mental Disease said in 1966, “Several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man.” On some level, the prediction was correct, and there are apps that perform a similar function today (although, contra the quotation, most therapists are now women). But those apps are little more than toys, even fifty years later, and the field of psychology is alive and well. Technology continues to march forward, but far too slowly to overtake human intelligence.
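To make concrete how little machinery such mimicry requires, here is a minimal Eliza-style sketch in Python. The rules and canned responses are invented for illustration; they are not from Weizenbaum’s original script, which also performed pronoun reflection (“my” becoming “your”), omitted here.

```python
import re

# A few invented rules in the spirit of Weizenbaum's ELIZA: match a
# surface pattern and echo a fragment of the input back as a question.
# (The original script also swapped pronouns, e.g. "my" -> "your";
# that step is omitted here for brevity.)
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_RESPONSE = "Please go on."

def respond(utterance: str) -> str:
    """Return a canned reflection of the input; no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

if __name__ == "__main__":
    print(respond("I am anxious about the future"))
    # -> "How long have you been anxious about the future?"
```

The point is the program’s lack of sophistication: it reflects surface patterns back at the speaker, and whatever appearance of understanding emerges is supplied entirely by the human.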

Likewise, as I said in my original post, it is far too soon to begin forming policy on AI. I have some faith that there will be general intelligences someday. But their arrival is so far away that it is hard to make out their shape, let alone their impact on government and vice versa. Disruptive technologies have a habit of defying expectations, and even if we understood what AI would look like, its consequences are not obvious. Even the word “intelligence” is currently severely under-defined, although presumably, for this conversation, an AI would at a minimum need to show decision-making abilities on par with those of the average voter. As the spectrum of essays here indicates, reasonable people can disagree about when to expect this major breakthrough. Perhaps quantum computing offers hope, as Zoltan Istvan suggests. Maybe a combination of current research directions will get us there, possibly coupled with an exponential increase in computing power, if we continue to follow Moore’s law. Even the necessary research areas remain an open question.

As Roland Benedikter proposes, governments can certainly begin working on plans for “real” AI. But this seems a fool’s errand: predictions are notoriously hard to make. As Istvan notes, the outcome of the 2016 American presidential election was such a surprise that very few predicted it even on election day. It takes a fair amount of hubris to assume one knows what AIs will do, let alone whether they will have bodies.

To say that public policy debates about AI are premature is not to imply that any of the contributors here wish to stifle scientific progress, nor that they are frightened of the consequences of robots. On the contrary, I think the dangers of generalized AI are vastly overstated, an overreaction to new technology that is probably as old as the wheel. Surely there will be some negative consequences, as with any invention, but I see no cause to think they will outweigh the good. Baked into the fears is the assumption that intelligence is winner-take-all. I think it far more likely that AIs will specialize in what they are good at, and humans will continue to do what they are good at.

I’m certainly not a critic of AI; rather, I’m a practitioner myself. But the devil is in the details, and we need more clarity about the nature of AI, and more observation of how it affects society, before rational plans to control it can be put in place. Perhaps we will leap across the chasm soon, but the major breakthrough remains hazily in the future, and the recent faltering of Moore’s law is not working in its favor. Sophia isn’t fake, but she is definitely demoware, and it is hard to draw many conclusions from her.

Also from this issue

Lead Essay

  • On October 25, 2017, the China-built, U.S.-engineered “female” robot “Sophia” became an official citizen of the U.N. member state Saudi Arabia, a state that does not practice fundamental human rights, personal freedom, or gender equality. This set a historic precedent for the global community regarding how to classify intelligent robots and machines, one whose effects will fully surface only over the coming years. In the face of the blurring boundaries between man and machine at the will of illiberal and authoritarian regimes, the alliance of global open societies must reiterate its fundamentals: human and personal rights, and a liberal order based on humanistic definitions, including the differentiation between humans and machines.

Response Essays

  • Rachel Lomasky points out the severe technical limitations of today’s otherwise impressive AI. Bots are adept at specialized tasks, but they are incapable of adapting their knowledge to new circumstances beyond the tasks for which they are designed. In this respect, even human children far surpass them. If general intelligence is a requirement for citizenship, then we are a long way away from a robot republic.

  • Zoltan Istvan describes a complicated future when humans aren’t the only sapients around anymore. Citizenship for “Sophia” was a publicity stunt, but it won’t always be so. Istvan insists that if technology continues on the path it has traveled, then there is only one viable option ahead for humanity: We must merge with our creations and “go full cyborg.” If we do not, then machines may easily replace us.

  • David D. Friedman says human-like robots are a distraction. If general AI arrives, we’ll have much more pressing problems to worry about, probably starting with the ability of an AI to copy itself. What happens when the first sentient AI copies itself ten thousand times? What if the copies vote? What if they own property, and what if they argue about what to do with it? What if the original later deletes the copies? Friedman offers few answers, but he does show the utter strangeness of the world that we may one day face.

  • Ryan Calo argues that the West has no need of a serious conversation about AI rights. Current implementations of artificial intelligence are nothing like humans, and the questions that a genuine AI might pose are at least a century away, and perhaps much longer. Rather, we should ask about the decisions that algorithms are making today, and how these choices may erode our existing human rights.