About this Issue

False or misleading information on social media has been widely blamed for a variety of problems. Yet these “problems”—like the victory of Donald Trump in the 2016 presidential election—are certainly not considered problems by all. One person’s fake news is often another person’s brave truth-telling.

In a sense, this difficulty isn’t unique to social media. It’s in the nature of any disagreement that each side is apt to describe the other as bearing false information. Charges of lying or misleading are usually not far behind. Yet some matters are not subjective, and some facts are not up for debate. As Hannah Arendt famously remarked, no one can honestly say that in 1914, Belgium invaded Germany.

Perhaps the worst sorts of misinformation can indeed be removed from this or that social media platform. But this only raises a further problem: Who does the filtering? And on what basis do they decide? Is it possible that social media filtering of some sort can help? Perhaps. But does it open the door to censorship? If it becomes the industry norm to exercise widespread information filtering, will tech companies prefer to delegate this undeniably disagreeable work to the government? 

The lead essayist this month is Cato Vice President (and Cato Unbound Senior Editor) John Samples. He will be joined by Alex Feerst, who leads the Legal and Trust and Safety Teams at Medium, and Will Rinehart of the Center for Growth and Opportunity. After all three have written, there will be a period of open discussion. Comments are also enabled through the end of the month. Please join us for a stimulating discussion.

Lead Essay

Facebook’s Filtering Policies Strike Nearly the Right Balance

Last October the Trump re-election campaign funded an ad on Facebook claiming that then–Vice President Joe Biden had threatened to withhold $1 billion of aid to Ukraine to end a corruption investigation of a company where Biden’s son was a board member. Biden asked that Facebook remove the ad; the company refused. A Biden staffer said Facebook was not “serious about fixing the real problems: allowing politicians to spread outrageous lies…” and that the company was “profiteering off the erosion of American democracy.”

Not wishing to be a threat to American democracy, Twitter decided to stop running political advertising. Google did not ban political ads; instead, it will not sell ads targeted to small groups identified by machine learning (so-called microtargeting). Facebook relies on fact-checking organizations to vet contested claims by ordinary users, but the company does not fact check claims by politicians. Hence its refusal to judge and remove the Trump ad about Biden. Facebook thus does not stand between a candidate for office and voters. Judging by the outrage (including from its own employees), the Facebook policy fails miserably. The question of the best policy is complex. But the outrage (as usual) is misplaced: Facebook has chosen the best policy.

People can make three kinds of claims in public or private forums: falsehoods, facts, and contestable claims. Almost everyone agrees that falsehoods contravene the truth, and almost everyone believes that facts correspond (or, if you must, cohere) with reality. We lack consensus about contestable claims. They are, after all, contested and thus the essence of political argument.

Many people assume that falsehoods should be excluded from public debate, leaving only contestable claims and truths. Often lies are evident. If I claim to have won the Congressional Medal of Honor, my assertion can be falsified by checking the list of past winners. But what is actually fought over in politics is rarely an evident lie. Unlike Medal of Honor winners, we have no authoritative list of corrupt actors in Ukraine. Note also that most claims stoking political controversy will be similar to the Bidens’ case rather than Xavier Alvarez’s (clearly fake) Medal of Honor. The facts will not be easily determined. Partisans will have strong views. For them, identifying and excluding “lies” will be easy: most everything their opponents believe will turn out to be false and thus properly outside debate. Partisans will care the most about excluding such “lies” and thus will be likely to decide what is a falsehood. Many arguments that should be contested will be excluded from public debate.

Free speech doctrine says individuals are the proper censors for lies. In this case, those who buy ads make claims about themselves and their opponents. Historically some of those claims turn out to be lies. But who should decide which claims are lies? With free speech, the audience for the advertising is called upon to hear arguments and form views about truth and falsehoods. Free speech makes individuals the source of authority over lies and contestable claims. Such libertarian faith in individuals has long been an icon in the United States if not all liberal democracies. That faith informs Facebook’s policy at least regarding ads from politicians. Of course, that faith in free speech suggests individuals are the proper judge of all speech from all Facebook users, not just politicians. I shall return to that issue late in the essay. My focus, however, will be on the narrow question of the moment: should social media suppress “lies” in political ads?

The Facebook policy comports with law and media practices in the United States regarding political ads. Broadcasters have been bound by the Federal Communications Act, which gives radio and television broadcasters “no power of censorship” over political ads. Ads by candidates must be run, even if thought to be false, and broadcasters are shielded from liability for the content of the ads.

I offer no defense of this FCC policy, which is in any case not our topic. The FCC policy involves coercion in service of a direct connection between voters and politicians. The coercion will rightly offend liberal temperaments, but assuring that voters hear directly from candidates for office comports with free speech ideals. The Facebook policy embraces the free speech aspect of the FCC policy without the coercion, an improvement over the status quo.

Consider also the overlooked but still important communications medium of direct mail advertising. Surely the U.S. Post Office (that is, government officials) could not refuse to deliver mailed ads because they contained falsehoods. Would we want them to do so if they could? Who would expect private carriers to monitor the content of deliveries?

The FCC requirement extends only to candidates and not to third-party ads by PACs, Super PACs, labor unions, interest groups, corporations, or individuals. Cable TV may refuse ads; CNN recently rejected two Trump ads because they violated the network’s advertising policy. Newspapers may also reject ads. Social media platforms have the same freedom to refuse ads. Facebook has chosen to directly connect candidates and voters, and most platforms have generally followed that policy.

So far as it goes, Facebook’s policy reflects both free speech philosophy and American practices regarding political advertising, save for the coercion of communications law. That suggests Facebook’s policy is not nearly as absurd as its critics allege. Indeed it seems to be a mainstream policy that reflects the traditional value of free speech. But some issues demand further consideration.

Free speech doctrine assumes that a direct connection between politicians (speakers) and voters (listeners) serves autonomy and the social good. In contrast, many informed critics support speech paternalism, the idea that intermediaries (gatekeepers) can improve public debate. For example, gatekeepers can prevent “lies” from being heard and dangerous candidates from being elected. Rigorous content-based gatekeeping might be a vital part of militant democracy in our time. Once online, the people, it might be argued, are easily fooled or at least unwilling to fact check ruthless populists. The intermediation of editors or content moderators is essential to prevent the spread of lies and the corruption of democracy. Whatever might have been said in the past about the glories of free speech has been overthrown by the realities of the internet. Speech paternalists ultimately believe someone should have authority to shape public debate to achieve some social good. And that authority must include excluding lies from public debates.

It is worth recalling why that authority cannot be government officials in the United States. If lies could be censored, government officials would also suppress some claims that should be contested and might even be true. Officials have strong incentives to suppress all claims weakening their power, many of which will be properly contestable. If that contestation does not happen, political and policy changes preferred by a majority may be stifled. The country would be worse off. This argument applies to any nation looking to government to improve speech.

Of course, constraints protecting speech from government do not apply to private gatekeepers. But should private gatekeepers be trusted? Consider the example at hand. Over the past forty years, stories similar to the Bidens’ adventures in Ukraine have been deemed “the appearance of corruption” if not actual corruption. Now we are told that “the claim [about the Bidens] has been repeatedly labeled as untrue by news organizations.” Reasonable people may (and do) suspect political bias among the gatekeepers of traditional media. The case against traditional gatekeepers is no doubt overdone. But surveys confirm that trust in the older media authorities has dropped by a third.

And critics of Facebook’s policy often want it both ways on whether social media gatekeepers should be trusted. They have argued that platforms encourage harmful speech to fatten their bottom line. Of course many of the same critics see controversial ads in the same light. But the same incentives would be at work if platforms removed “lies.” Might much of the antitrust case against Facebook and Google be contested and thus potentially deemed a “lie” in a candidate’s ad? The economic argument about content moderation thus lends support to Facebook’s unwillingness to fact check campaign ads. “Lies” in practice might turn out to be nothing more than contestable arguments that threaten profits, at least if we accept the other arguments against current content moderation practices.

The debate about ad policy has brought forth a different argument for giving Facebook authority to improve speech on its platform. Facebook is considering limiting the reach (but not the content) of controversial advertising; Google has already done so. Here’s the idea: instead of presenting an ad with a controversial claim to 500 users, all of whom might be inclined to affirm the content of the ad, Facebook could refuse to sell political ads targeted at fewer than 5,000 users. (The numbers may not be exact, but you get the idea.) Where a smaller audience might not have seen any debate about the ad, the larger audience will have many people who have doubts about the content of the ad. They might generate a debate about the Bidens’ Ukrainian adventures. The 500 users who might have heard nothing against the Biden ad would now hear “more speech” about it. Of course, free speech proponents always offer “more speech” as an alternative to censorship. This “reach, not speech” policy is brilliant in its own way.

But it remains a version of speech paternalism. “Reach, not speech” contravenes an idea undergirding free speech: people have the right and ability to discern truth and falsehood. Critics of microtargeting disagree. Writing in the New Scientist, Annalee Newitz recently argued that “microtargeting allows political lies on Facebook to reach only the people most likely to fall for them.” Those people, she writes, need to hear from “watchdog groups” who presumably set them straight. Set aside whether these groups are better guides to truth and falsehood than the vulnerable victims of internet advertising. “Reach, not speech” differs from counterspeech in one vital way. If I offer counterspeech to hateful claims, I assume my audience is capable of rational reflection and changing their minds. “Reach, not speech” assumes the opposite: “the vulnerable” cannot deal with arguments. “Reach, not speech” therefore only appears to comport with free speech ideals. The policy reflects profound doubts about the capacities of most people.

Nonetheless it might be argued that modifying microtargeting, whatever the motivation, does not constitute content-based discrimination. It fosters debate rather than suppresses speech. But the goal here is not fostering debate as an end in itself. The goal is to exclude speech thought to be false and ultimately dangerous to the polity. If “more speech” does not attain that goal, it will be easier to move on to content-based discrimination because we have already countenanced modifying microtargeting.

Philosophy aside, social media platforms should find it hard to become speech paternalists. The doubts about the traditional gatekeepers created an opening for social media. The social media platforms promised the end of gatekeeping and an open forum privately governed. Of course, the platforms had promised too much; they needed some gatekeeping or content moderation almost from the start. But social media has made good on the promise of a more open political discourse. Indeed, that is the complaint we are considering here: people are too free to speak lies. Perhaps an aggressive war on “lies” would not significantly harm social media brands. But it should.

Finally, why not just follow Twitter and refuse to run all political ads? The platforms say “no” to both parties, thus remaining neutral and out of the fray. Someone else will decide “what is truth?” But it’s difficult to draw the line between political and non-political ads; faced with this gray area, a platform might end up refusing to run advertising promoting books. Complaints notwithstanding, refusing all political ads would worsen electoral debate by throwing out contestable claims along with falsehoods. Social media really does offer better ways to reach voters, as party committees have noted, and that improvement would be lost. And if that improvement exists, shouldn’t the owners of social media recognize an obligation to make it available and to deal with the ensuing problems? (There will always be ensuing problems.)

My case for the Facebook policy will not please everyone. But I hope to have shown that it reflects persuasive free speech ideals and that the alternatives are worse. Two ancillary points deserve mention.

First, the Facebook policy does not go far enough in light of its libertarian foundations. Why stand between any speaker and their audience? If you believe that most people can handle debate and act accordingly, why send any Facebook post to the fact checker? Of course, Facebook has the right to draw the distinction they have drawn between politicians and everyone else, and free speech proponents will be happy to have any recognition of their cause during an increasingly illiberal period. But the distinction needs attention, not least because fact checkers appear to share the traditional media’s loss of authority.

Second, I have not mentioned the current politics of this issue, election year notwithstanding. Policies about speech are unlikely to solve the complex political dilemmas faced by the platforms. The Facebook policy may help get elected officials off the company’s back. But it also led many on the left to conclude Facebook was helping Donald Trump’s re-election effort by spreading lies. Of course, if Facebook refused to run the Biden ad (and others later), the President and his supporters would conclude that the company was determined to oppose his re-election. Whatever social media does on speech will be seen as helping or hurting one side or the other in our polarized, zero-sum politics. The same may be true elsewhere. The companies should figure out what they stand for on these issues and stick with their principles. That will be a lot easier to say than to do. But it might offer in the longer term a way to avoid being governed by an “FCC for social media.”

We live in a difficult time. Liberals of the right and the left are losing elections and public influence. Those losses raise questions about liberal fundamentals like free speech, not least because private companies are not legally required to honor freedom of speech. Facebook’s policy stands against the dangers of our age at some risk to the company. It deserves our support.

Response Essays

Where is Our Intellectual Immune System?

If only all friends were loyal, startups revolutionary, lovers true, and politicians honest. What a wonderful world it would be. But even in Silicon Valley, we live on Earth. And so, we have content moderation.

It should surprise no one that people, including politicians, lie in every era using the tools at hand—pen and paper, leaflets, radio, television, gullible reporters, manipulable recommendation algorithms, social media ads. But what is somehow both more surprising and totally understandable is how many people now seem eager for private companies (or government, or both) to limit their exposure to untruth. At platforms, the call for more moderation is coming in loud and clear from users, non-users, academics, governments, civil society orgs. “SOMEBODY DO SOMETHING,” they are saying. “BEFORE IT’S TOO LATE.” Of course, people are freaked out. Spreading lies on a mass scale used to be really expensive and now it’s not. At the same time—do they really think we can use code to … eliminate lies? Or that if we could sanitize our interpersonal world to that degree, it would be wise?

I’m sympathetic to the call for more moderation of the online social spaces where we now spend so much of our lives. At Medium, I banged my head against this particular wall every day for years: how to foster an environment where people felt safe and free to express themselves without becoming targets of harassment, intimidation, and myriad other forms of abuse. I worked with content moderators to draft and evolve clear policies and figure out how to enforce them as quickly, consistently, and humanely as possible. No doubt some disagreed with what we chose to do and not do. But that’s what happens when you do something.

In his essay on Facebook’s policy on political ads, John Samples proposes that platforms instead not do something. Such strong free speech objections to the will to moderate go something like this: as harms perhaps attributable to social media pile up, sure, it’s tempting to ask or force online platforms to fix democracy by eliminating ostensibly dangerous expression, in this case, political ads that lie. But … that temptation should be resisted. Whether you pressure the companies by creating a bad news cycle or force them to act by law, if you let yourself panic, break the constitutional glass, pull out the speech extinguisher, and use it to put out today’s social media dumpster fire, you may not have strong free speech norms left later when you need them, and then we’ll all be sad.

So, then, what do we do? Nothing? Just keep posting and liking and hoping? Won’t that just lead to more of the same?

Samples offers a solution of sorts: personal responsibility. As he puts it, “[w]ith free speech, the audience for the advertising is called upon to hear arguments and form views about truth and falsehoods.”

In other words: you’re on your own, kids. Good luck out there weighing the evidence.

To which I think a fair response is: Gee thanks, I’m drowning in nonsense designed to break my brain and obscure reality, absorbing factual claims I have no time or ability to check, and now I’ve been left on my own to “hear arguments and form views about truth.”

This assumption—that a given citizen using social media has all necessary inputs to form robust views about truth—is a key problem with Samples’s solution. Does the default information environment of social media give users enough to evaluate political claims and decide? Are we fully equipping users to make wise decisions about the information they ingest? If yes, then great, we have no problem. We can hope people make well-informed and carefully considered judgments based on evidence (as they always did until the advent of the Internet … no, not really, but I’ll leave that for another day). But, if we’re not giving users what they need to apply their reason and discernment, well then, however hard we think about what we see, it’ll be garbage in, garbage out.

If you’re committed to this strong free speech position on platform operation that prohibits hard (removal) or soft (downranking) forms of moderation, then I think you should also show up with a clear position on how your imagined users arrive at a state in which they can assess the truth of what they read, and who takes responsibility for putting them in that position. Are we setting people up for free speech success in the current information environment? So far, I think the answer is pretty clearly no. And I believe this is the case in at least two ways, one socially broad and one local to the distribution technologies we’re talking about.

First, and this sort of goes without saying but is worth making explicit, a literate citizenry is harder to misinform. And that’s not going to just happen without policy choices and investment. This raises bigger questions of educational policy better elaborated elsewhere: what intellectual backbone are we providing people to draw on when they encounter and process online information? How about broad and affordable access to critical thinking and media literacy resources? How about technical training on basic internet architecture, like routers and IP addresses and referral links? This isn’t really a platform policy answer. But misleading ads and posts aren’t really just a platform problem. Social media may be a striking and novel-feeling manifestation of the problem (or the one that legacy media are most motivated to report on). But it’s a local symptom of vast tectonic social forces. So, how do we foster a more resilient national intellectual immune system against pernicious nonsense, whether it comes through social media, cable news, tabloid magazines found next to the candy, or friends and loved ones?

Ok, “free college for all” will probably make me no friends in this forum. But does anyone doubt that broad investment in educational areas like history, philosophy, communication technology, and media literacy would likely mitigate the effects of calculated lies on our elections? It turns out, if you want to fight platform misinformation, building strong public schools is a pretty solid place to start.

Second, and on the narrower topic of social media—like Jerry Maguire, we have to help users help themselves make credibility assessments in a complex, mediated environment. Who is the “we” that is helping users and how? Well, one candidate is the companies that provide platforms, who are at least close to the technology. Others include third-party companies, nonprofits, or the government. In any case, someone needs to experiment with figuring out what information is most important to put in front of users and how to present it. For starters, provenance—what’s the source of this information and the funding behind it? Who knows, maybe the cure for untruthful speech is more contextualizing speech.

Facebook has been experimenting with this since shortly after the 2016 election (or even before, depending on what you count). First, they tried throwing up a “disputed” flag next to questionable sources (in organic content, not political ads). Then they stopped pretty quickly because it seemed to have the opposite of its intended effect. But that was just one early experiment in labeling. Nowadays, embedded images in the News Feed that link out to third-party sources have a little “i” icon that expands and then leads you to more information about the source. It’s mainly a link to Wikipedia, and one wonders how many people at this point expand and investigate. But hey, you’ve gotta start somewhere. Third parties like Newsguard have gotten into the game, though none seems to have caught hold or shown itself to be especially effective, and it’s not clear why having a third-party private company offering pre-digested credibility as a service is any more desirable than the platform itself doing so. Another thing Facebook has done to try to balance the ostensible filter-bubbling effects of microtargeting ads at custom audiences is provide an ad archive, including information about who saw what. This is aimed more at researchers and journalists, but it provides information that can work its way back to the public through reporting and other channels. These are just a few scattered examples. But, in short, experiments in how to present information to users that will help them assess credibility are happening. And if many people care deeply about enhancing our ability to assess credibility online, and it turns out to work, there may even be a market for doing it well.

I’m not sure whether strategically placed supplemental information would count as a form of “paternalistic” intermediating speech in Samples’s model, where interventions that limit reach constitute a soft, partial (but still disfavored) alternative to full-bore removal, or whether laws requiring or encouraging such information would constitute compelled speech. But for those who see any intervention into a naturally unfolding expressive environment as a problem, I think the answer is: in a constructed software environment undergoing constant design tweaks and incentive rebalancing, there really is no natural or unmediated speech, just varying modes of presentation and the experiences they structure. Assuming otherwise—that natural, unintermediated speech can exist in an online environment, and that we simply need to remove impediments to it—is a primordial fallacy of platform building. Many of the early architects of social media appeared to believe this, and we are just getting started with understanding and dealing with the consequences. And once you accept that when it comes to speech there is no “do nothing” in platform land (because you’re already doing something: building and maintaining a platform), then tweaks to incentives around ranking, amplification, and other forms of soft moderation are really just a spectrum of somethings you might do. In any case, when it comes to political ads, Facebook has made clear it’s going to take the libertarian path and not moderate. But there’s plenty it might do to present meaningful information to alter the ultimate impact of whatever ads (or organic political pieces of content) are distributed, and some of those things Facebook is already doing.

Admittedly, tools for helping users make complex credibility assessments have not exactly come flying out of the gate. But that’s how it goes. Ideally, we iterate, we fail, we learn, we improve. People gradually get savvier, they learn what signs and metadata to look for (especially if we thoughtfully provide it), and they get productively skeptical (but hopefully not to the unproductive point where they believe nothing is true). With time and luck, our collective information immune system develops. Some people are still fooled some of the time, because we still live on Earth. But the number goes down and up and down again, and maybe some of the worst feedback loops of lies and outrage, attention and money lose steam. And as we continue work on information distribution systems, it becomes a matter of course to ask: Do our users have the necessary tools to help themselves assess what is true? This approach will probably not mean a fast lane to the falsehood-free future that some understandably crave. It’s also not the pure, unmediated speech zone Samples champions. But it has a few redeeming virtues. It comports with a strong version of the free speech traditions we’ve gotten pretty used to. And it increases the chance that citizens forming important views on the nature of reality based on what they find online will not simply be left to their own devices.

Seeing Like a Social Media Company

In October of last year, Senator Elizabeth Warren posted an ad on Facebook which included the false statement that Mark Zuckerberg supported President Trump. Hoping to focus attention on the platform’s lack of fact checking, Warren explained, “What Zuckerberg has done is given Donald Trump free rein to lie on his platform—and then to pay Facebook gobs of money to push out their lies to American voters. If Trump tries to lie in a TV ad, most networks will refuse to air it.”

Broadcasters, however, are generally required to run candidate ads under Section 315 of the Federal Communications Act of 1934. Because of the spectre of running afoul of this federal communications law, networks tend to extend a hands-off policy to the entire company and run ads regardless of their veracity. Warren might be expressing a popularly held belief about political ads, but as PolitiFact explained, “We could find no evidence that most networks reject false candidate ads.”

John Samples’s opening essay this month tackles the question that Warren was trying to spotlight: “should social media suppress ‘lies’ in political ads?” Where Warren has provoked social media companies to be more involved in fact checking, Samples instead finds his footing in the free speech doctrine, which “assumes that a direct connection between politicians (speakers) and voters (listeners) serves autonomy and the social good.” None of the alternatives are palatable, even the course that Google has taken, which disallows microtargeting: “‘Reach, not speech’ contravenes an idea undergirding free speech: people have the right and ability to discern truth and falsehood.”

If Samples falters, he does so by sidestepping the deeper political and social concerns that have given rise to his essay. In one of the closing paragraphs, he fully admits,

Policies about speech are unlikely to solve the complex political dilemmas faced by the platforms. The Facebook policy may help get elected officials off the company’s back. But it also led many on the left to conclude Facebook was helping Donald Trump’s re-election effort by spreading lies. Of course, if Facebook refused to run the Biden ad (and others later), the President and his supporters would conclude that the company was determined to oppose his re-election. Whatever social media does on speech will be seen as helping or hurting one side or the other in our polarized, zero-sum politics. The same may be true elsewhere. The companies should figure out what they stand for on these issues and stick with their principles. That will be a lot easier to say than to do. But it might offer in the longer term a way to avoid being governed by an “FCC for social media.”

While he agrees that the position Facebook has taken is “a mainstream policy that reflects the traditional value of free speech,” he doesn’t take the next step and ask, why are these platform companies facing backlash for their policies in the first place?

Like broadcasters, most organizations involved in the communication business aren’t also in the business of fact checking political ads. The U.S. Post Office doesn’t open political campaign mailers and check their validity. Telecommunications companies typically aren’t blamed for political robocalls that stretch the truth. There is no federal truth-in-advertising law that applies to political ads and very few states have legislated on the issue. Moreover, while social media will be a major source for 2020 political ads, only one-fifth of total spending will go to digital. Broadcast will get about half, cable will get another 20 percent, and radio will pick up the rest. While social media platforms are routinely criticized for their hands-off approach, their position isn’t aberrant.

To understand the source of ire, some table setting is needed. Google and Facebook are uniquely situated agents within the information ecosystem. Unlike the one-to-many media outlets, platforms perform two types of actions, which might be dubbed operations of legibility and operations of traction.

The first category is a catchall term for the efforts to attach clickstream data and other interactional data to profiles to form a detailed network map. It is through this assemblage that inferences about individuals can be made. The term legibility comes from James C. Scott, a political theorist whose work has focused on early state formation. As he defined it, legibility refers to

a state’s attempt to make society legible, to arrange the population in ways that simplified the classic state functions of taxation, conscription, and prevention of rebellion. Having begun to think in these terms, I began to see legibility as a central problem in statecraft. The premodern state was, in many crucial respects, partially blind; it knew precious little about its subjects, their wealth, their landholdings and yields, their location, their very identity. It lacked anything like a detailed “map” of its terrain and its people. It lacked, for the most part, a measure, a metric, that would allow it to “translate” what it knew into a common standard necessary for a synoptic view.

Social media platforms are also blind to their users; they must model the social networks of individuals to make them legible. As research finds, Facebook data about Likes can be used to accurately predict highly sensitive personal attributes like sexual orientation, ethnicity, religious views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, gender, and, most important for this discussion, political opinions.

Initiatives to make social and informational networks legible are inherently connected to the second grouping of actions, efforts of traction. Traction includes all those measures meant to persuade or influence people through the presentation of information. It is best exemplified in advertising, in search engine results, and in the ordering of Facebook’s News Feed.

Users are on the other side of this process and cannot easily peer behind the veil. They are forced to make their own determinations about how legibility and traction work in practice. As Sarah Myers West, a postdoc researcher at the AI Now Institute, described the process,

Many social network users develop “folk theories” about how platforms work: in the absence of authoritative explanations, they strive to make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed.

Research on moderation efforts confirms this finding. Users tend to think Facebook is “powerful, perceptive, and ultimately unknowable” even though there are limits to both legibility and traction.

As surveys from Pew have found, most users aren’t aware that Facebook automagically categorizes individuals into segments for advertising purposes. Yet, when asked how well these categories actually track their preferences, only 13 percent said that they are very accurate descriptions. Another 46 percent of users thought the categories were somewhat accurate. On the negative side of the ledger, 27 percent of users “feel it does not represent them accurately,” and another 11 percent of users weren’t assigned categories at all. In other words, over a third of all users are effectively illegible to Facebook. Other examples abound. A Phoenix man is suing the city for false arrest because data obtained from Google clearly showed that he was in two places at once. A group of marketers sued Facebook for wrongly stating ad placement data.

As for traction, online ads seem to be slightly more effective than traditional ad methods, but in many cases online ads yield nothing in return, as eBay found out. Online political ads tend to be less effective because they are often met with competing messages. Users ignore advertising, leading to ad blindness. Ad blockers are popular; about 30 percent of Americans use them. In short, people aren’t blithely consuming advertisements. Yet, when individuals are asked if they think ads are effective, they are quick to claim they aren’t fooled but are convinced that others are.

All combined, these folk theories go a long way toward explaining why there is so much frustration with online filtering mechanisms. They also explain why political advertising is being targeted. Since users think that platform companies know everything about them, their friends, and their social networks, it seems to follow that these companies should easily be able to parse fact from fiction in ads. Online ads are also seen as powerful shapers of opinion, again shifting the burden onto social media. Add in a general concern for the health of this country’s democracy, and it becomes clear why Senator Warren’s ad hit a nerve.

Samples is right to come down on the side of free speech for the reasons he lays out. But readers will still probably find his position deeply unsatisfying, not because he is wrong, but because of all the baggage not discussed in the essay.

The Conversation

Reply to Feerst and Rinehart

I appreciate the replies to my essay on falsehoods and political ads. Both are predictably insightful and persuasive. Alex has me at a disadvantage: as mentioned, he has actually moderated content online, and I have not. Will Rinehart’s essay shows a firm grasp of the bigger social media picture and its implications. Readers will benefit from close attention to both, or, in online-speak: “read the whole thing.”

That said, I find both replies overly broad. Pace Alex, I was not making the case for a “do nothing” internet that was the dream of many in 1995. I focused on one question: what should private content moderators do about plausible falsehoods in political ads they present to users? And I don’t believe the companies should “do nothing” in general. As Alex says, they are going to do something. Similarly, Will’s view of the larger context seems plausible. Perhaps we must solve the larger issues of trust to truly deal with content moderation, but Facebook was called upon to decide right now what to do about falsehoods in political ads, a decision with implications for free speech. Perhaps we do not disagree all that much. We appear to be writing about three different aspects of the content moderation puzzle.

In response, I would like to adumbrate my view of the social media big picture. Like the government, the companies may adopt content-neutral policies, or they can discriminate against speech on the basis of its content (through regulation or removal). Removing putative lies constitutes content discrimination, in the real world and online. Content discrimination should be minimized, while content-neutral policies should actually avoid taking sides.

Like many Americans, Alex is wary of content discrimination beyond recognized categories of speech like obscenity and libel. Increasing internet literacy is potentially content-neutral unless it’s taught on the assumption that the claims of [insert favorite demon here] are always illogical or otherwise faulty. I would add that if universities are to be our guide here, at least thirty percent of social media education should be done by private entities. In my view, novel public education programs rarely reject the null hypothesis; a little competition might well advance the internet literacy project. Alex also lauds Facebook’s transparency efforts. Facebook has broad powers to condition their services on disclosure, but those policies may ultimately sit uneasily with concerns about privacy. And we ought to have limited expectations from disclosure. Congress has mandated disclosure of campaign finance for almost fifty years. Disclosure supposedly seeks to educate voters; in fact, much speech about disclosed information seeks to shame donors to disfavored campaigns by suggesting they seek a corrupt bargain through electoral means. Yes, shaming and smearing are permitted speech, but whether they advance public deliberations is less certain. Fortunately, I see little evidence that online disclosure fosters similar speech.

Before the internet, politicians lied in advertising. What has changed? Why do so many people expect Facebook to repress some content of their ads? Broadcast companies could not censor ads, and even when cable television firms could refuse such ads, they rarely did (as Will confirms). The same may be true of newspapers, almost all privately owned. Why not censor? One among many reasons might be that few political elites worried about falsehoods in a world with effective gatekeeping. Walter Cronkite would not allow the Alex Jones of 1969 on the air. Social media promised to end gatekeeping and did, at least of the sort effective in 1969. Now, a newer and presumably looser gatekeeping is being rebuilt by Facebook and others. I think the distinction between content discrimination and content neutrality might inform those new institutions of gatekeeping.

Content neutrality will be a challenge. If you lean left, imagine that 80 percent of Facebook content moderators support Donald Trump and that 25 percent of those strongly support him. In that world, what would we expect if those moderators started looking for lies in ads or more generally? If you lean right, imagine that Elizabeth Warren were President and that Facebook regulated content in a “neutral” way that helped her re-election effort. What would you think? Perhaps the companies should be commended for taking on such a seemingly impossible task.

Finally, I recognize that my distinction between content neutrality and content discrimination does not accord with social media practice. Facebook discriminates against some content (hate speech, hoaxes, and similar falsehoods) that U.S. courts would protect. They also regulate speech that might be viewed as essential to elections. For example, Facebook forbids spreading lies about when and where you can vote and about the U.S. Census. The former concerns voting while the latter affects the allocation of representation. Their advertising policy, as we have seen, directly connects candidates to potential voters. Perhaps Facebook wishes to protect core political institutions in this republic. It may even be that these policies are neutral, if not content neutral. But posts about voting and the Census violate Facebook’s standards because of their content, and the policy that remedies those defects must discriminate against content.

These three essays suggest the breadth and difficulty of the issues provoked by the rise of social media. Alex Feerst is known to say, “No one really understands what it means to give 3 billion people a printing press!” Indeed.