January 2020

False or misleading information on social media has been widely blamed for a variety of problems. Yet these “problems”—like the victory of Donald Trump in the 2016 presidential election—are certainly not considered problems by all. One person’s fake news is often another person’s brave truth-telling.

In a sense, this difficulty isn’t unique to social media. It’s in the nature of any disagreement that each side is apt to describe the other as spreading false information. Charges of lying or misleading are usually not far behind. Yet some matters are not subjective, and some facts are not up for debate. As Hannah Arendt famously recounted, no one can honestly say that in 1914, Belgium invaded Germany.

Perhaps the worst sorts of misinformation can indeed be removed from this or that social media platform. But removal raises further problems: Who does the filtering? On what basis do they decide? Does filtering of any sort open the door to censorship? And if widespread information filtering becomes the industry norm, will tech companies prefer to delegate this undeniably disagreeable work to the government?

The lead essayist this month is Cato Vice President (and Cato Unbound Senior Editor) John Samples. He will be joined by Alex Feerst, who leads the Legal and Trust & Safety teams at Medium, and Will Rinehart of the Center for Growth and Opportunity. After all three have written, there will be a period of open discussion. Comments are also enabled through the end of the month. Please join us for a stimulating discussion.


Lead Essay

  • John Samples more or less supports Facebook’s refusal to fact-check claims by politicians. Partisans will always have strong views, and finding the line between contested statements and lies will never be easy. Existing campaign finance law, and the precedent it sets for any future social media regulation, may make this an exceptionally important area for the private sector to get right: a political solution is unlikely to be better.

Response Essays

  • Social media is engineered to deliver our attention to paying advertisers. The results for democratic governance have already been worrisome, and it is high time to develop “a more resilient national intellectual immune system,” writes Alex Feerst. He adds that there is no such thing as “natural, unmediated” online speech; all forms of digital organization involve privileging one or another form of data in some way. A pure free speech position may therefore not be particularly germane to the policy debates at hand.

  • Social media companies know a lot less than you may think, and that’s part of the problem. Trying to set them up as fact-checkers would commit them to a project that they are badly equipped to manage. Will Rinehart argues that just as the post office isn’t required to screen the mail for truth, and just as telecommunications companies are not held responsible for the content of robocalls, so too the new media should not be expected to do our thinking for us.

The Conversation

Coming Up

Conversation through the end of the month.

Related at Cato

Essay: “The Problem with ‘Fake News,’” by Ryan Khurana, July 20, 2018

Policy Analysis: “Why the Government Should Not Regulate Content Moderation of Social Media,” by John Samples, April 9, 2019