Reply to Feerst and Rinehart

I appreciate the replies to my essay on falsehoods and political ads. Both are predictably insightful and persuasive. Alex has me at a disadvantage: as mentioned, he has actually moderated content online, and I have not. Will Rinehart’s essay shows a firm grasp of the bigger social media picture and its implications. Readers will benefit from close attention to both, or, in online-speak: “read the whole thing.”

That said, I find both replies overly broad. Pace Alex, I was not making the case for a “do nothing” internet that was the dream of many in 1995. I focused on one question: what should private content moderators do about plausible falsehoods in political ads they present to users? And I don’t believe the companies should “do nothing” in general. As Alex says, they are going to do something. Similarly, Will’s view of the larger context seems plausible. Perhaps we must solve the larger issues of trust to truly deal with content moderation, but Facebook was called upon to decide right now what to do about falsehoods in political ads, a decision with implications for free speech. Perhaps we do not disagree all that much. We appear to be writing about three different aspects of the content moderation puzzle.

In response, I would like to adumbrate my view of the social media big picture. Like the government, the companies may adopt content-neutral policies, or they may discriminate against speech on the basis of its content, whether by regulating it or removing it. Removing putative lies constitutes content discrimination, online as in the real world. Content discrimination should be minimized, and content-neutral policies should avoid taking sides.

Like many Americans, Alex is wary of content discrimination beyond recognized categories of speech like obscenity and libel. Increasing internet literacy is potentially content-neutral, unless it is taught on the assumption that the claims of [insert favorite demon here] are always illogical or otherwise faulty. I would add that if universities are to be our guide here, at least thirty percent of social media education should be done by private entities. In my view, novel public education programs rarely reject the null hypothesis (that is, they rarely show measurable effects); a little competition might well advance the internet literacy project.

Alex also lauds Facebook’s transparency efforts. Facebook has broad power to condition its services on disclosure, but those policies may ultimately sit uneasily with concerns about privacy. And we ought to have limited expectations for disclosure. Congress has mandated disclosure of campaign finance information for almost fifty years. Disclosure supposedly seeks to educate voters; in practice, much speech about disclosed information seeks to shame donors to disfavored campaigns by suggesting they seek a corrupt bargain through electoral means. Yes, shaming and smearing are permitted speech, but whether they advance public deliberation is less certain. Fortunately, I see little evidence that online disclosure fosters similar speech.

Before the internet, politicians lied in advertising. What has changed? Why do so many people now expect Facebook to suppress some of the content of politicians’ ads? Broadcast companies could not censor candidates’ ads, and even when cable television firms could refuse such ads, they rarely did (as Will confirms). The same may be true of newspapers, almost all privately owned. Why not censor? One reason among many might be that few political elites worried about falsehoods in a world with effective gatekeeping. Walter Cronkite would not have allowed the Alex Jones of 1969 on the air. Social media promised to end gatekeeping and did, at least gatekeeping of the sort that was effective in 1969. Now a newer and presumably looser gatekeeping is being rebuilt by Facebook and others. I think the distinction between content discrimination and content neutrality might inform those new institutions of gatekeeping.

Content neutrality will be a challenge. If you lean left, imagine that eighty percent of Facebook’s content moderators supported Donald Trump and that twenty-five percent of those strongly supported him. In that world, what would we expect if those moderators started looking for lies in ads or in posts more generally? If you lean right, imagine that Elizabeth Warren were president and that Facebook regulated content in a “neutral” way that helped her re-election effort. What would you think? Perhaps the companies should be commended for taking on such a seemingly impossible task.

Finally, I recognize that my distinction between content neutrality and content discrimination does not accord with social media practice. Facebook discriminates against some content (hate speech, hoaxes, and similar falsehoods) that U.S. courts would protect. It also regulates speech that might be viewed as essential to elections. For example, Facebook forbids spreading lies about when and where you can vote and about the U.S. Census. The former concerns voting, while the latter affects the allocation of representation. Its advertising policy, as we have seen, directly connects candidates to potential voters. Perhaps Facebook wishes to protect core political institutions in this republic. These policies might even be neutral in some sense, if not content-neutral. But lies about voting and the Census violate Facebook’s standards because of their content, and the policy remedying those defects must discriminate against content.

These three essays suggest the breadth and difficulty of the issues provoked by the rise of social media. Alex Feerst is known to say, “No one really understands what it means to give 3 billion people a printing press!” Indeed.
