False or misleading information on social media has been widely blamed for a variety of problems. Yet these “problems”—like the victory of Donald Trump in the 2016 presidential election—are certainly not considered problems by all. One person’s fake news is often another person’s brave truth-telling.
In a sense, this difficulty isn’t unique to social media. It’s in the nature of any disagreement that each side is apt to describe the other as bearing false information. Charges of lying or misleading are usually not far behind. Yet some matters are not subjective, and some facts are not up for debate. As Hannah Arendt famously remarked, no one can honestly say that in 1914, Belgium invaded Germany.
Perhaps the worst sorts of misinformation can indeed be removed from this or that social media platform. But that only raises further questions: Who does the filtering, and on what basis do they decide? Some degree of filtering may well help, but does it open the door to censorship? And if widespread information filtering becomes the industry norm, will tech companies prefer to delegate this undeniably disagreeable work to the government?
The lead essayist this month is Cato Vice President (and Cato Unbound Senior Editor) John Samples. He will be joined by Alex Feerst, who leads the Legal and Trust and Safety Teams at Medium, and Will Rinehart of the Center for Growth and Opportunity. After all three have written, there will be a period of open discussion. Comments are also enabled through the end of the month. Please join us for a stimulating discussion.