Professor David D. Friedman sweeps aside my belief that religion may well dictate the development of AI and other radical transhumanist technologies in the future. Yet at the core of a broad swath of American society lies a fearful Luddite tradition. Americans, including the U.S. Congress, where every member is religious, often base their life philosophies and work ethics on their faiths. Furthermore, a recent Pew study found that seven in ten Americans were worried about technology in people's bodies and brains, even when it offered health benefits.
It rarely matters at what point in American history an innovation has emerged. Anesthesia, vaccines, stem cell research, and other breakthroughs have all had to battle for survival under pressure from conservatives and Christians. I believe that if formal religion had not impeded our natural secular progress as a nation over the last 250 years, we would be much further along in terms of human evolution. Instead of discussing and arguing about our coming transhumanist future, we would be living in it.
Our modern-day battle over genetic editing, and over whether our government will allow unhindered research into it, is proof that we are still somewhere between the Stone Age and the AI Age. Thankfully, China and Russia are forcing the issue, since the one thing worse than denying Americans their religion is denying them the right to claim that the United States is the greatest, most powerful nation in the world.
A general theme of government regulation of American science is to cut red tape and sidestep religious disagreement only when doing so is deemed necessary to remain the strongest nation. As unwritten national policy, we broadly do not pursue science that would change the human species for the better. If you doubt this, try to recall the science topics discussed between Trump and Clinton in the last televised presidential debates. Don't remember any? No one else does either, because mainstream politicians regrettably don't talk about science or take it seriously.
But AI is a different political and philosophical dilemma altogether. AI is potentially the Holy Grail of all inventions, and it will bear the seeds of our own morals, idiosyncrasies, and prejudices. Rachel Lomasky and Ryan Calo may declare in their articles that Sophia, the Hanson Robotics robot and Saudi Arabian citizen, is a fake, but make no mistake: fakeness (or semi-hyperbole) is increasingly how the stealthy modern world moves forward. Just look at who is sitting in the White House, arguably the world's most accomplished living newsmaker. For most practical purposes, it is irrelevant whether that news is fake or real. All that matters is that it is effective enough, and budgets get created around it.
Sophia is also effective. Instead of seeing her as an unfortunate affront to the conversation about robot rights because she is not yet truly intelligent, as some of my fellow April 2018 Cato Unbound contributors seem to believe, I think we ought to see her as the beginning of our greatest and perhaps most important invention: one that will pave the way for the millions of smart AIs likely to come after her (or even directly from her).
Science and technological innovation are guided by the scientific method: the idea that no one is ever definitively right, but that statistical confidence can grow through repeated successful testing, to the point that we can plan manned missions to Mars and know we will likely succeed without ever having done it before. We have the intelligence to believe in almost anything, especially if we can test it. Sophia is part of our journey through a changing intellectual landscape in which humans are becoming more than biological beings, through rigorous testing of all that she is to us technically, philosophically, and culturally.
Saudi Arabia—like Trump—is correct to jump on the opportunity to embellish and parade its perspectives and national ambitions. As global citizens, we have the choice to take it seriously or not. But we don’t have the choice to deny it, because we will only be left behind.
Progress is rarely welcomed or appreciated by society when it first happens. Visionaries get burned at the stake, or, in modern times, sued, fired from the companies they created, and blackballed by the media. But over time, transformative ideas survive and, on occasion, change the world. Sophia herself may not change the world, but an AI like her soon will. We ought to listen carefully and objectively and strive to shape AI, no matter how simple or empty a shell our thinking machines seem now. We are hearing the birth pangs of a new intelligence, one that will almost certainly make our own obsolete long before this century is out.