Where is Our Intellectual Immune System?

If only all friends were loyal, startups revolutionary, lovers true, and politicians honest. What a wonderful world it would be. But even in Silicon Valley, we live on Earth. And so, we have content moderation.

It should surprise no one that people, including politicians, lie in every era using the tools at hand—pen and paper, leaflets, radio, television, gullible reporters, manipulable recommendation algorithms, social media ads. But what is somehow both more surprising and totally understandable is how many people now seem eager for private companies (or government, or both) to limit their exposure to untruth. At platforms, the call for more moderation is coming in loud and clear from users, non-users, academics, governments, civil society orgs. “SOMEBODY DO SOMETHING,” they are saying. “BEFORE IT’S TOO LATE.” Of course, people are freaked out. Spreading lies on a mass scale used to be really expensive and now it’s not. At the same time—do they really think we can use code to … eliminate lies? Or that if we could sanitize our interpersonal world to that degree, it would be wise?

I’m sympathetic to the call for more moderation of the online social spaces where we now spend so much of our lives. At Medium, I banged my head against this particular wall every day for years: how to foster an environment where people felt safe and free to express themselves without becoming targets of harassment, intimidation, and myriad other forms of abuse. I worked with content moderators to draft and evolve clear policies and to figure out how to enforce them as quickly, consistently, and humanely as possible. No doubt some disagreed with what we chose to do and not do. But that’s what happens when you do something.

In his essay on Facebook’s policy on political ads, John Samples proposes that platforms instead not do something. Such strong free speech objections to the will to moderate go something like this: as harms perhaps attributable to social media pile up, sure, it’s tempting to ask or force online platforms to fix democracy by eliminating ostensibly dangerous expression, in this case, political ads that lie. But … that temptation should be resisted. Whether you pressure the company by creating a bad news cycle or force it to act by law, if you let yourself panic, break the constitutional glass, pull out the speech extinguisher, and use it to put out today’s social media dumpster fire, you may not have strong free speech norms left later when you need them, and then we’ll all be sad.

So, then, what do we do? Nothing? Just keep posting and liking and hoping? Won’t that just lead to more of the same?

Samples offers a solution of sorts: personal responsibility. As he puts it, “[w]ith free speech, the audience for the advertising is called upon to hear arguments and form views about truth and falsehoods.”

In other words: you’re on your own, kids. Good luck out there weighing the evidence.

To which I think a fair response is: Gee thanks, I’m drowning in nonsense designed to break my brain and obscure reality, absorbing factual claims I have no time or ability to check, and now I’ve been left on my own to “hear arguments and form views about truth.”

This assumption—that a given citizen using social media has all necessary inputs to form robust views about truth—is a key problem with Samples’s solution. Does the default information environment of social media give users enough to evaluate political claims and decide? Are we fully equipping users to make wise decisions about the information they ingest? If yes, then great, we have no problem. We can hope people make well-informed and carefully considered judgments based on evidence (as they always did until the advent of the Internet … no, not really, but I’ll leave that for another day). But, if we’re not giving users what they need to apply their reason and discernment, well then, however hard we think about what we see, it’ll be garbage in, garbage out.

If you’re committed to this strong free speech position on platform operation that prohibits hard (removal) or soft (downranking) forms of moderation, then I think you should also show up with a clear position on how your imagined users arrive at a state in which they can assess the truth of what they read, and who takes responsibility for putting them in that position. Are we setting people up for free speech success in the current information environment? So far, I think the answer is pretty clearly no. And I believe this is the case in at least two ways, one socially broad and one local to the distribution technologies we’re talking about.

First, and this sort of goes without saying but is worth making explicit, a literate citizenry is harder to misinform. And that’s not going to just happen without policy choices and investment. This raises bigger questions of educational policy better elaborated elsewhere: what intellectual backbone are we providing people to draw on when they encounter and process online information? How about broad and affordable access to critical thinking and media literacy resources? How about technical training on basic internet architecture, like routers and IP addresses and referral links? This isn’t really a platform policy answer. But misleading ads and posts aren’t really just a platform problem. Social media may be a striking and novel-feeling manifestation of the problem (or the one that legacy media are most motivated to report on). But it’s a local symptom of vast tectonic social forces. So, how do we foster a more resilient national intellectual immune system against pernicious nonsense, whether it comes through social media, cable news, tabloid magazines found next to the candy, or friends and loved ones?

Ok, “free college for all” will probably make me no friends in this forum. But does anyone doubt that broad investment in educational areas like history, philosophy, communication technology, and media literacy would likely mitigate the effects of calculated lies on our elections? It turns out, if you want to fight platform misinformation, building strong public schools is a pretty solid place to start.

Second, on the narrower topic of social media: like Jerry Maguire, we have to help users help themselves make credibility assessments in a complex, mediated environment. Who is the “we” that is helping users, and how? Well, one candidate is the companies that provide platforms, who are at least close to the technology. Others include third-party companies, nonprofits, or the government. In any case, someone needs to experiment with figuring out what information is most important to put in front of users and how to present it. For starters, provenance—what’s the source of this information and the funding behind it? Who knows, maybe the cure for untruthful speech is more contextualizing speech.
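
To make “contextualizing speech” slightly more concrete, here is a minimal illustrative sketch, not any platform’s actual schema or product, of what provenance metadata attached to a post or ad might look like and how it could be rendered as a small context card. The names and fields (ProvenanceLabel, render_context_card, funder, background_url) are hypothetical.

```python
# Illustrative sketch only: provenance metadata a platform *could* attach to a
# post or ad, and a simple way to render it as a context card for the reader.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProvenanceLabel:
    source_domain: str                     # where the linked content is hosted
    publisher: str                         # who published it
    funder: Optional[str] = None           # who paid, if it's a paid political ad
    background_url: Optional[str] = None   # e.g., an encyclopedia page on the publisher


def render_context_card(label: ProvenanceLabel) -> str:
    """Format provenance metadata as a short card a feed UI might show next to
    the post, leaving the credibility judgment to the reader."""
    lines = [f"Source: {label.publisher} ({label.source_domain})"]
    if label.funder:
        lines.append(f"Paid for by: {label.funder}")
    if label.background_url:
        lines.append(f"More about this source: {label.background_url}")
    return "\n".join(lines)


# Example: the kind of card that might accompany a political ad.
print(render_context_card(ProvenanceLabel(
    source_domain="example-news.com",
    publisher="Example News",
    funder="Committee for Examples",
    background_url="https://en.wikipedia.org/wiki/Example",
)))
```

The only point of the sketch is that the relevant facts, the source, the funder, a pointer to background reading, are cheap to carry alongside content and to surface in the interface without removing or downranking anything.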

Facebook has been experimenting with this since shortly after the 2016 election (or even before, depending on what you count). First, they tried throwing up a “disputed” flag next to questionable sources (in organic content, not political ads). Then they stopped pretty quickly because it seemed to have the opposite of its intended effect. But that was just one early experiment in labeling. Nowadays, embedded images in the News Feed that link out to third-party sources have a little “i” icon that expands and then leads you to more information about the source. It’s mainly a link to Wikipedia, and one wonders how many people at this point expand and investigate. But hey, you’ve gotta start somewhere. Third parties like NewsGuard have gotten into the game, though none seem to have caught on or shown themselves to be especially effective, and it’s not clear why having a third-party private company offer pre-digested credibility as a service is any more desirable than the platform itself doing so. Another thing Facebook has done, to try to balance the ostensible filter-bubble effects of microtargeting ads at custom audiences, is provide an ad archive, including information about who saw what. This is aimed more at researchers and journalists, but it provides information that can work its way back to the public through reporting and other channels. These are just a few scattered examples. But, in short, experiments in how to present information to users that will help them assess credibility are happening. And if many people care deeply about enhancing our ability to assess credibility online, and it turns out to work, there may even be a market for doing it well.

I’m not sure whether strategically placed supplemental information would count as a form of “paternalistic” intermediating speech in Samples’s model, where interventions that limit reach constitute a soft, partial (but still disfavored) alternative to full-bore removal, or whether laws requiring or encouraging such information would constitute compelled speech. But for those who see any intervention into a naturally unfolding expressive environment as a problem, I think the answer is: in a constructed software environment undergoing constant design tweaks and incentive rebalancing, there really is no natural or unmediated speech, just varying modes of presentation and the experiences they structure. Assuming otherwise—that natural, unintermediated speech can exist in an online environment, and that we simply need to remove impediments to it—is a primordial fallacy of platform building. Many of the early architects of social media appeared to believe this, and we are just getting started with understanding and dealing with the consequences. And once you accept that when it comes to speech there is no “do nothing” in platform land (because you’re already doing something: building and maintaining a platform), then tweaking incentives around ranking, amplification, and other forms of soft moderation is really just a spectrum of somethings you might do. In any case, when it comes to political ads, Facebook has made clear it’s going to take the libertarian path and not moderate. But there’s plenty it might do to present meaningful information that alters the ultimate impact of whatever ads (or organic political content) get distributed, and some of it Facebook is already doing.

Admittedly, tools for helping users make complex credibility assessments have not exactly come flying out of the gate. But that’s how it goes. Ideally, we iterate, we fail, we learn, we improve. People gradually get savvier, they learn what signs and metadata to look for (especially if we thoughtfully provide them), and they get productively skeptical (but hopefully not to the unproductive point where they believe nothing is true). With time and luck, our collective information immune system develops. Some people are still fooled some of the time, because we still live on Earth. But the number goes down and up and down again, and maybe some of the worst feedback loops of lies and outrage, attention and money lose steam. And as we continue to work on information distribution systems, it becomes a matter of course to ask: Do our users have the tools they need to help themselves assess what is true? This approach will probably not mean a fast lane to the falsehood-free future that some understandably crave. It’s also not the pure, unmediated speech zone Samples champions. But it has a few redeeming virtues. It comports with a strong version of the free speech traditions we’ve gotten pretty used to. And it increases the chance that citizens forming important views on the nature of reality based on what they find online will not simply be left to their own devices.
