About August 2017
Algorithms rule more and more of the world around us. They screen school and job applications. They determine who qualifies for loans and insurance. They trigger audits and investigations. But what’s going on under the hood? Are algorithms impersonal, and thus impartial and fair? Or can they be programmed, intentionally or otherwise, to replicate human biases? If so, then using algorithms leaves us worse off. A veneer of fairness now covers our systemic biases, making them harder to argue against or even discover.
That’s precisely the charge that Cathy O’Neil levels in her lead essay this month. The author of Weapons of Math Destruction takes us on a brief tour of how algorithms can mislead in teacher evaluations, debt collection, and several other important areas of life. She invites us to greater skepticism about artificial intelligence and recommends policy solutions that would curb the dangers of the algorithm-driven life.
Responding to her this month we have Caleb Watney of the R Street Institute, freelance journalist and former WIRED senior editor Laura Hudson, and Cato Institute Senior Fellow Julian Sanchez. Each will respond to O’Neil with an essay, and conversation among the four will continue through the month. Comments are also enabled through the month, and we invite readers to contribute to the discussion as well.
Cathy O’Neil explains why algorithms aren’t always to be trusted. Mathematical decisionmaking procedures are common in contemporary life, but they are only ever as good as the data that goes into them, and sometimes they’re quite a bit worse. Algorithms can perpetuate patterns of ethnic, gender, and class discrimination, and they can snare innocent people in the attempt to find wrongdoers. O’Neil calls for greater accountability and fairness in these products that increasingly determine our fates.
Caleb Watney looks at the uses and abuses of algorithmic decisionmaking. He argues that governments face different incentives from private actors, and that this explains in part why government use of algorithmic technology poses greater danger to the public. And of course governments exercise much greater power over our lives in any case. Watney also argues that algorithms are morally neutral, and that we should make use of them when they offer prospects for more impartial justice.
Laura Hudson draws on linguistics to show how we live in a world full of unexamined systems and tendencies. Some of these are deeply unfair despite their invisibility. As a result, “neutrality” favors the powerful. Thus the claim that technology is neutral turns out to be a slanted one, because it neglects the many hidden forms of unfairness.
Julian Sanchez argues that pervasive data collection offers many temptations to bad actors, with or without algorithms to augment their acts. Data collection itself may be the bigger problem. He agrees, though, that when algorithms generate feedback loops that appear to validate their initial assumptions, things only get worse while they look like they’re getting more objective. Yet he remains skeptical about broadly imposed regulatory solutions to the problem.
Related at Cato
Commentary: “How Government Killed the Medical Profession,” by Jeffrey A. Singer, Reason, May 2013
Podcast: “A Reckoning for Big Data,” with Bruce Schneier, October 23, 2015
Commentary: “There Is No Justification for Regulating Online Giants as if They Were Public Utilities,” by Ryan Bourne, August 7, 2017