Facebook’s Filtering Policies Strike Nearly the Right Balance

Last October the Trump re-election campaign funded an ad on Facebook claiming that then–Vice President Joe Biden had threatened to withhold $1 billion of aid to Ukraine to end a corruption investigation of a company where Biden’s son was a board member. Biden asked that Facebook remove the ad; the company refused. A Biden staffer said Facebook was not “serious about fixing the real problems: allowing politicians to spread outrageous lies…” and that the company was “profiteering off the erosion of American democracy.”

Not wishing to be a threat to American democracy, Twitter decided to stop running political advertising. Google did not ban ads; instead, it will not sell ads targeted to small groups identified by machine learning (so-called microtargeting). Facebook relies on fact-checking organizations to vet contested claims by ordinary users. But the company does not fact-check claims by politicians, hence its refusal to judge and remove the Trump ad about Biden. Facebook thus does not stand between a candidate for office and voters. Judging by the outrage (including from its own employees), the Facebook policy fails miserably. The question of the best policy is complex. But the outrage (as usual) is misplaced: Facebook has chosen the best policy.

People can make three kinds of claims in public or private forums: falsehoods, facts, and contestable claims. Almost everyone agrees that falsehoods contravene the truth, and almost everyone believes that facts correspond (or, if you must, cohere) with reality. We lack consensus about contestable claims. They are, after all, contested and thus the essence of political argument.

Many people assume that falsehoods should be excluded from public debate, leaving only contestable claims and truths. Often lies are evident. If I claim to have won the Congressional Medal of Honor, my assertion can be falsified by checking the list of past recipients. But what is actually fought over in politics is rarely an evident lie. Unlike Medal of Honor recipients, we have no authoritative list of corrupt actors in Ukraine. Note also that most claims stoking political controversy will be similar to the Bidens’ case rather than Xavier Alvarez’s (clearly fake) Medal of Honor. The facts will not be easily determined. Partisans will have strong views. For them, identifying and excluding “lies” will be easy: most everything their opponents believe will turn out to be false and thus properly outside debate. Partisans will care the most about excluding such “lies” and thus will be likely to decide what is a falsehood. Many arguments that should be contested will be excluded from public debate.

Free speech doctrine says individuals are the proper censors of lies. In this case, those who buy ads make claims about themselves and their opponents. Historically some of those claims turn out to be lies. But who should decide which claims are lies? With free speech, the audience for the advertising is called upon to hear arguments and form views about truth and falsehood. Free speech makes individuals the source of authority over lies and contestable claims. Such libertarian faith in individuals has long been an icon in the United States, if not in all liberal democracies. That faith informs Facebook’s policy, at least regarding ads from politicians. Of course, that faith in free speech suggests individuals are the proper judges of all speech from all Facebook users, not just politicians. I shall return to that issue later in the essay. My focus, however, will be on the narrow question of the moment: should social media suppress “lies” in political ads?

The Facebook policy comports with law and media practices in the United States regarding political ads. Broadcasters are bound by the Federal Communications Act, which gives radio and television editors “no power of censorship” over political ads. Ads by candidates must be run, even if thought to be false, and broadcasters are shielded from liability for the content of those ads.

I make no case for this FCC policy, which is not our topic in any case. The FCC policy involves coercion in service to a direct connection between voters and politicians. The coercion will rightly offend liberal temperaments, but assuring that voters directly hear from candidates for office comports with free speech ideals. The Facebook policy embraces the free speech aspect of the FCC policy without coercion, an improvement over the status quo.

Consider also the overlooked but still important communications medium of direct mail advertising. Surely the U.S. Post Office (that is, government officials) could not refuse to deliver mailed ads because they contained falsehoods. Would we want them to do so if they could? Who would expect private carriers to monitor the content of deliveries?

The FCC requirement extends only to candidates and not to third-party ads by PACs, Super PACs, labor unions, interest groups, corporations, or individuals. Cable TV may refuse ads; CNN recently rejected two Trump ads because they violated the network’s advertising policy. Newspapers may also reject ads. Social media platforms also have this freedom to refuse ads. Facebook has chosen to directly connect candidates and voters. Most platforms have generally followed that policy.

So far as it goes, Facebook’s policy reflects both free speech philosophy and American practices regarding political advertising, save for the coercion of communications law. That suggests Facebook’s policy is not nearly as absurd as its critics allege. Indeed it seems to be a mainstream policy that reflects the traditional value of free speech. But some issues demand further consideration.

Free speech doctrine assumes that a direct connection between politicians (speakers) and voters (listeners) serves autonomy and the social good. In contrast, many informed critics support speech paternalism, the idea that intermediaries (gatekeepers) can improve public debate. For example, gatekeepers can prevent “lies” from being heard and dangerous candidates from being elected. Rigorous content-based gatekeeping might be a vital part of militant democracy in our time. Once online, the people, it might be argued, are easily fooled, or at least unwilling to fact-check ruthless populists. The intermediation of editors or content moderators is essential to prevent the spread of lies and the corruption of democracy. Whatever might have been said in the past about the glories of free speech has been overthrown by the realities of the internet. Speech paternalists ultimately believe someone should have the authority to shape public debate to achieve some social good. And that authority must include excluding lies from public debates.

It is worth recalling why that authority cannot be government officials in the United States. If lies could be censored, government officials would also suppress some claims that should be contested and might even be true. Officials have strong incentives to suppress all claims weakening their power, many of which will be properly contestable. If that contestation does not happen, political and policy changes preferred by a majority may be stifled. The country would be worse off. This argument applies to any nation looking to government to improve speech.

Of course, constraints protecting speech from government do not apply to private gatekeepers. But should private gatekeepers be trusted? Consider the example at hand. Over the past forty years, stories similar to the Bidens’ adventures in Ukraine have been deemed “the appearance of corruption” if not actual corruption. Now we are told that “the claim [about the Bidens] has been repeatedly labeled as untrue by news organizations.” Reasonable people may (and do) suspect political bias among the gatekeepers of traditional media. The case against traditional gatekeepers is no doubt overdone. But surveys confirm that trust in the older media authorities has dropped by a third.

And critics of Facebook’s policy often want it both ways on whether social media gatekeepers should be trusted. They have argued that platforms encourage harmful speech to fatten their bottom line. Of course many of the same critics see controversial ads in the same light. But the same incentives would be at work if platforms removed “lies.” Might much of the antitrust case against Facebook and Google be contested and thus potentially deemed a “lie” in a candidate’s ad? The economic argument about content moderation thus lends support to Facebook’s unwillingness to fact check campaign ads. “Lies” in practice might turn out to be nothing more than contestable arguments that threaten profits, at least if we accept the other arguments against current content moderation practices.

The debate about ad policy has brought forth a different argument for giving Facebook authority to improve speech on its platform. Facebook is considering limiting the reach (but not the content) of controversial advertising; Google has already done so. Here’s the idea: instead of presenting an ad with a controversial claim to 500 users, all of whom might be inclined to affirm its content, Facebook could refuse to sell political ads targeted to audiences of fewer than 5,000 users. (The numbers may not be exact, but you get the idea.) Where a smaller audience might not have seen any debate about the ad, the larger audience will include many people who have doubts about its content. They might generate a debate about the Bidens’ Ukrainian adventures. The 500 users who might have heard nothing against the Biden ad would now hear “more speech” about it. Of course, free speech proponents always offer “more speech” as an alternative to censorship. This “reach, not speech” policy is brilliant in its own way.

But it remains a version of speech paternalism. “Reach, not speech” contravenes an idea undergirding free speech: people have the right and ability to discern truth and falsehood. Critics of microtargeting disagree. Writing in the New Scientist, Annalee Newitz recently argued that “microtargeting allows political lies on Facebook to reach only the people most likely to fall for them.” Those people, she writes, need to hear from “watchdog groups” who presumably set them straight. Set aside whether these groups are better guides to truth and falsehood than the vulnerable victims of internet advertising. “Reach, not speech” differs from counterspeech in one vital way. If I offer counterspeech to hateful claims, I assume my audience is capable of rational reflection and changing their minds. “Reach, not speech” assumes the opposite: “the vulnerable” cannot deal with arguments. “Reach, not speech” therefore only appears to comport with free speech ideals. The policy reflects profound doubts about the capacities of most people.

Nonetheless it might be argued that modifying microtargeting, whatever the motivation, does not constitute content-based discrimination. It fosters debate rather than suppresses speech. But the goal here is not fostering debate as an end in itself. The goal is to exclude speech thought to be false and ultimately dangerous to the polity. If “more speech” does not attain that goal, it will be easier to move on to content-based discrimination because we have already countenanced modifying microtargeting.

Philosophy aside, social media platforms should find it hard to become speech paternalists. The doubts about the traditional gatekeepers created an opening for social media. The social media platforms promised the end of gatekeeping and an open forum privately governed. Of course, the platforms had promised too much; they needed some gatekeeping or content moderation almost from the start. But social media has made good on the promise of a more open political discourse. Indeed, that is the complaint we are considering here: people are too free to speak lies. Perhaps an aggressive war on “lies” would not significantly harm social media brands. But it should.

Finally, why not just follow Twitter and refuse to run all political ads? The platforms say “no” to both parties, thus remaining neutral and out of the fray. Someone else will decide “what is truth?” But it is difficult to draw the line between political and non-political ads; faced with this gray area, a platform might end up refusing to run advertising promoting books. Complaints notwithstanding, refusing all political ads would worsen electoral debate by throwing out contestable claims along with falsehoods. Social media really does offer better ways to reach voters, as party committees have noted, and that improvement would be lost. And if that improvement exists, shouldn’t the owners of social media recognize an obligation to make it available and to deal with the ensuing problems? (There will always be ensuing problems.)

My case for the Facebook policy will not please everyone. But I hope to have shown that it reflects persuasive free speech ideals and that the alternatives are worse. Two ancillary points deserve mention.

First, the Facebook policy does not go far enough in light of its libertarian foundations. Why stand between any speaker and their audience? If you believe that most people can handle debate and act accordingly, why send any Facebook post to the fact checkers? Of course, Facebook has the right to draw the distinction it has drawn between politicians and everyone else, and free speech proponents will be happy to have any recognition of their cause during an increasingly illiberal period. But the distinction needs attention, not least because fact checkers appear to share the traditional media’s loss of authority.

Second, I have not mentioned the current politics of this issue, election year notwithstanding. Policies about speech are unlikely to solve the complex political dilemmas faced by the platforms. The Facebook policy may help get elected officials off the company’s back. But it also led many on the left to conclude Facebook was helping Donald Trump’s re-election effort by spreading lies. Of course, if Facebook refused to run the Biden ad (and others later), the President and his supporters would conclude that the company was determined to oppose his re-election. Whatever social media does on speech will be seen as helping or hurting one side or the other in our polarized, zero-sum politics. The same may be true elsewhere. The companies should figure out what they stand for on these issues and stick with their principles. That will be a lot easier to say than to do. But it might offer in the longer term a way to avoid being governed by an “FCC for social media.”

We live in a difficult time. Liberals of the right and the left are losing elections and public influence. Those losses raise questions about liberal fundamentals like free speech, not least because private companies are not legally required to honor freedom of speech. Facebook’s policy stands against the dangers of our age at some risk to the company. It deserves our support.
