Predictions Aren’t Very Reliable, Including about AI

I agree with Rachel Lomasky and Ryan Calo about two things: that Sophia the robot is a fake, and that we are unlikely to get human-level artificial intelligence in the next decade or two. But Calo’s claim that “Artificial intelligence is somewhere between a century and infinity away from approximating all aspects of human cognition,” supported by a footnote asserting “overwhelming scientific consensus,” betrays a confidence about his ability to predict the distant future that I regard as wholly unjustified. As he surely knows, before AlphaGo demonstrated that it was the strongest Go player in the world, the consensus in the field was that a program capable of beating a human champion was at least five to ten years away.

Achieving human-level AI depends on solving two quite different problems, one in computer technology, one in AI programming. It requires a computer with about the power of a human brain for the AI to run on. If computers continue to improve at their current rate, that should be achievable sometime in the next few decades. It also requires a program; we do not know whether such a program can be written, or if so how and when. That introduces a second and much larger source of uncertainty. As AlphaGo demonstrated, improvements in programming are sometimes sudden and unexpected, so it could be next week. Or next century. Or never.
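To make the hardware half of that argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from this essay: a brain-equivalent of roughly 10^16 operations per second (published estimates vary by orders of magnitude in both directions), roughly 10^13 operations per second for an affordable machine today, and a Moore's-law-style doubling every two years.

```python
import math

# All three figures below are illustrative assumptions, not the author's.
BRAIN_OPS = 1e16       # assumed brain-equivalent operations per second
CURRENT_OPS = 1e13     # assumed ops/sec of an affordable machine today
DOUBLING_YEARS = 2.0   # assumed doubling period for compute per dollar

# How many doublings until today's hardware reaches brain scale,
# and how long that takes if the assumed trend holds.
doublings_needed = math.log2(BRAIN_OPS / CURRENT_OPS)
years = doublings_needed * DOUBLING_YEARS
print(f"~{doublings_needed:.1f} doublings, ~{years:.0f} years")
# -> ~10.0 doublings, ~20 years
```

Under these particular assumptions the hardware arrives in about twenty years, consistent with “sometime in the next few decades”; shift any input by an order of magnitude and the answer moves by a decade or more, which is exactly why the hardware side is the smaller of the two uncertainties.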

Predictions fail. In the 1960s, the overwhelming consensus (every significant figure save Julian Simon, widely regarded as a crank) was that population increase was a looming global catastrophe, dooming large parts of the world to continued and growing poverty. Paul Ehrlich’s prediction of unstoppable mass famines killing hundreds of millions of people in the 1970s was within that consensus. What has happened since is the precise opposite of the prediction: extreme poverty has declined steeply, and calories per capita in poor countries have trended steadily upward. A confident prediction a century into the future on AI, population, climate, or any major issue other than the orbits of asteroids is worth very nearly nothing.

If it can be done, will it be? Zoltan Istvan, concerned about political opposition to the parallel project of improving humans, writes: “This means most of the leaders in our country fundamentally believe the human body and mind is a God-given temple not to be tampered with unless changed by the Almighty.” That claim is clearly false, since artificial improvements to the human body (stents, vaccines, pacemakers, prosthetics) have long been taken for granted. It is true that people tend to be suspicious of rapid change, but that has nothing much to do with religion. Consider, for the most notable current example, how many people make the jump from “climate change” to “catastrophe” without paying much attention to the links in between. No religion required.

While an instantaneous jump to genetic engineering of humans might be blocked by conservative bias, a gradual one will not be. Artificial insemination, in vitro fertilization, selection of egg and sperm donors for desired characteristics, and pre-embryo selection are already happening. Somatic gene therapy is an accepted technology, while germ-line gene therapy is still controversial. A similar continuous chain of improvements could lead from the computer I am writing this on to a programmed computer with human, or more than human, intelligence.

A further reason that such technologies, if possible, are unlikely to be blocked is that they do not have to be introduced everywhere, only somewhere. The world contains a large number of countries with different ideologies, religions, politics, policies, and prejudices. As long as at least one of them is willing to accept a useful technology, even a potentially dangerous one, it is likely to happen. The natural response to some futures is “stop the train, I want to get off.” As I pointed out some years back, this train is not equipped with brakes.

While the transhuman project starts from a higher level than the AI project (we already have human-level humans, after all), improvements from that level will be more difficult. Carbon-based life is an old technology, and Darwinian evolution has already explored many of its possibilities. Silicon-based intelligence is a newer technology with more room for improvement. A human with the intellect of Einstein might be useful, might be dangerous, but is not novel; the human race has, after all, not only survived but benefited from Einstein, da Vinci, and von Neumann. A very strong human eight feet tall would be formidable in one-on-one combat, but considerably less formidable than a tank, and there are already lots of tanks. Improvements in biological life are likely and interesting, but the range of plausible outcomes is less radical, less threatening, than human-level, or eventually more-than-human-level, AI.
