Deniers and Critics of AI Will Only Be Left Behind

Professor David D. Friedman sweeps aside my belief that religion may well dictate the development of AI and other radical transhumanist technology in the future. However, at the core of a broad swath of American society lies a fearful Luddite tradition. Americans—including the members of the U.S. Congress, every one of whom is religious—often base their life philosophies and work ethics on their faiths. Furthermore, a recent Pew study showed that 7 in 10 Americans were worried about technology in people’s bodies and brains, even if it offered health benefits.

It rarely matters at what point in American history an innovation has emerged. Anesthesia, vaccines, stem cells, and other breakthroughs have all historically battled to survive under pressure from conservatives and Christians. I believe that if formal religion had not impeded our natural secular progress as a nation over the last 250 years, we would be much further along in terms of human evolution. Instead of discussing and arguing about our coming transhumanist future, we’d be living in it.

Our modern-day battle over genetic editing and whether our government will allow unhindered research into it is proof that we are still somewhere between the Stone Age and the AI Age. Thankfully, China and Russia are forcing the issue, since the one thing worse than denying Americans their religion is denying them the right to claim the United States is the greatest, most powerful nation in the world.

A general theme of government regulation in American science is to cut red tape and set aside religious disagreement when doing so is deemed necessary to remain the strongest nation. As unwritten national policy, we broadly don’t engage in science that would change the human species for the better. If you doubt this, just try to remember the science topics discussed between Trump and Clinton in the last televised presidential debates. Don’t remember any? No one else does either, because mainstream politicians regrettably don’t talk about science or take it seriously.

But AI is a different political and philosophical dilemma altogether. AI is potentially the Holy Grail of all inventions, and it will bear the seeds of our own morals, idiosyncrasies, and prejudices. Rachel Lomasky and Ryan Calo may declare in their articles that Sophia, the Hanson Robotics creation and Saudi Arabian citizen, is a fake, but make no mistake: fakeness (or semi-hyperbole) is more and more how the stealthy modern world moves forward. Just look who is sitting in the White House—arguably the world’s most accomplished living newsmaker. For most practical purposes, it’s irrelevant whether that news is fake or real. All that matters is that it’s effective enough—and budgets get created around it.

Sophia is also effective. Instead of seeing her as an unfortunate affront to the conversation about robot rights because she is not yet truly intelligent—as some of the other April 2018 Cato Unbound contributors seem to believe—I think we ought to see her as the beginning of our greatest and perhaps most important invention: one for humanity that will pave the way for the millions of smart AIs that are likely to come after her (or even directly from her).

Science and technological innovation are dictated by the scientific method—the idea that no one is ever definitively right, but that statistical probability can become more and more certain through successful repeated testing, to the point that we can plan manned missions to Mars and know we’ll likely succeed without ever having done it before. We have the intelligence to believe in almost anything—especially if we can test it. Sophia is part of our journey through a changing intellectual landscape in which humans are becoming more than biological beings—through rigorous testing of all that she is to us technically, philosophically, and culturally.

Saudi Arabia—like Trump—is correct to jump on the opportunity to embellish and parade its perspectives and national ambitions. As global citizens, we have the choice to take it seriously or not. But we don’t have the choice to deny it, because we will only be left behind.

Progress is rarely welcomed or appreciated by society when it first happens. Visionaries get burned at the stake—or, in modern times, sued, fired from companies they created, and blackballed from the media. But over time, transformative ideas survive and, on occasion, change the world. Sophia herself may not change the world, but an AI like her soon will. We ought to be very careful to listen objectively and to strive to shape AI—no matter how simple or empty a shell our thinking machines seem now. We are hearing the birthing pangs of a new intelligence that will almost certainly make our own obsolete long before this century is out.

Also from this issue

Lead Essay

  • On October 25, 2017, the China-built, U.S.-engineered “female” robot “Sophia” became an official citizen of the U.N. member state Saudi Arabia, a state that does not practice fundamental human rights, personal freedom, or gender equality. This set a historic precedent for the global community regarding how to classify intelligent robots and machines, the effects of which will fully surface only over the coming years. In the face of the blurring boundaries between man and machine at the will of illiberal and authoritarian regimes, the alliance of global open societies must reiterate its fundamentals: human and personal rights, and a liberal order based on humanistic definitions, including the differentiation between humans and machines.

Response Essays

  • Rachel Lomasky points out the severe technical limitations of today’s otherwise impressive AI. Bots are adept at specialized tasks, but they are incapable of adapting their knowledge to new circumstances beyond the tasks for which they are designed. In this respect, even human children far surpass them. If general intelligence is a requirement for citizenship, then we are a long way away from a robot republic.

  • Zoltan Istvan describes a complicated future when humans aren’t the only sapients around anymore. Citizenship for “Sophia” was a publicity stunt, but it won’t always be so. Istvan insists that if technology continues on the path it has traveled, then there is only one viable option ahead for humanity: We must merge with our creations and “go full cyborg.” If we do not, then machines may easily replace us.

  • David D. Friedman says human-like robots are a distraction. If general AI arrives, we’ll have much more pressing problems to worry about, probably starting with the ability to copy. What happens when the first sentient AI copies itself ten thousand times? What if they vote? What if they own property, and what if they argue about what to do with it? What if the original later deletes the copies? Friedman offers few answers, but he does show the utter strangeness of the world that we may face one day.

  • Ryan Calo argues that the West has no need of a serious conversation about AI rights. Current implementations of artificial intelligence are nothing like humans, and the questions that a genuine AI might pose are at least a century away, and perhaps much longer. Rather, we should ask about the decisions that algorithms are making today, and how these choices may erode our existing human rights.