An open access publication of the American Academy of Arts & Sciences
Summer 2024

Free Speech on the Internet: The Crisis of Epistemic Authority

Brian Leiter
Abstract

Much of our knowledge of the world comes not from direct sensory experience, but from reliance on epistemic authorities: individuals or institutions that tell us what we ought to believe. For example, what most of us believe about natural selection, climate change, or the Holocaust comes from our reliance on epistemic authorities (scientists, historians). Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe about what. The traditional media were crucial, in the age of mass democracy, in promulgating and sustaining such norms. The internet has obliterated the intermediaries who made that possible, and, in the process, undermined the epistemic standing of actual experts. This essay considers some possible changes to existing free speech doctrine to remedy the epistemological crisis brought about by the internet.

Brian Leiter is the Karl N. Llewellyn Professor of Jurisprudence and Director of the Center for Law, Philosophy, and Human Values at the University of Chicago Law School. Leiter’s research interests are in moral, political, and legal philosophy, in both the Anglophone and Continental European traditions. His books include Marx (with Jaime Edwards, 2025), Teoría del Derecho realista: Ensayos selectos (2024), Moral Psychology with Nietzsche (2019), Nietzsche on Morality (2015), Why Tolerate Religion? (2012), and Naturalizing Jurisprudence (2007).

Every society has mechanisms for inculcating in its citizens beliefs about the world, about what is supposedly true and known. These epistemological mechanisms include, most prominently, the mass media, the educational system, and the courts. Sometimes these social mechanisms inculcate true beliefs, sometimes false ones, and most often a mix. What the vast majority believe to be true about the world (sometimes even when it is not) is crucial for social peace and political stability, whether the society is democratic or not. In developed capitalist countries that are relatively free from political repression, like the United States, these social mechanisms have, until recently, operated in predictable ways. They ensured that most people accepted the legitimacy of their socioeconomic system, that they acquiesced to the economic hierarchy in which they found themselves, that they accepted the official results of elections, and that they also acquired a range of true beliefs about the causal structure of the natural world, the regularities discovered by physics, chemistry, the medical sciences, and so on.

Although ruling elites throughout history have always aimed to inculcate moral and political beliefs in their subject populations conducive to their own continued rule, it has also been true, especially in the world after the scientific revolution, that the interests of ruling elites often depended on a correct understanding of the causal order of nature. One cannot extract wealth from nature, let alone take precautions against physical or biological catastrophe, unless one understands how the natural world actually works: what earthquakes do, how disease spreads, where fossil fuels are and how to extract them. This is, no doubt, why both authoritarian regimes (like the one in China) and neoliberal democratic regimes (like the one in the United States) invest so heavily in the physical and biological sciences.

In the half-century before the dominance of the internet in America (roughly from World War II until around 2000), the most prominent epistemological mechanisms in society generally helped ensure that a world of causal truths was the common currency of at least some parts of public policy and discourse in the relatively democratic societies. There were, of course, exceptions: the panic over fluoridation of water in the 1950s is the most obvious example, but it was also anomalous. Even false claims about race and gender, widespread in the traditional media into the 1960s and 1970s, met with growing resistance from the pre-internet media from the 1960s onward. The basic pattern, however, was clear: social mechanisms inculcated many true beliefs about how the natural world works, while performing much more unevenly where powerful social and economic interests were at stake.

The internet has upended this state of affairs: it is the epistemological catastrophe of our time, locking into place mechanisms that ensure that millions of people (perhaps hundreds of millions) will have false beliefs about the causal order of nature—about climate change, the effects of vaccines, the role of natural selection in the evolution of species, the biological facts about race—even when there is no controversy among experts. Indeed, a distinguishing and dangerous achievement of the internet era has been to discredit the idea of “expertise,” the idea that if experts believe something to be the case, that is a reason for anyone else to believe it. Experts, in this parallel cyber world, are disguised partisans, conspirators, and pretenders to epistemic privilege, while the actual partisans and conspirators are supposed to be the purveyors of knowledge.

Legal philosopher Joseph Raz’s analysis of the concept of “authority” is helpful in thinking about what we mean when appealing to the idea of “authority” in epistemic contexts: that is, contexts in which we want to know whom we should believe when we seek the truth.1 An epistemic authority, on this account, is someone who by instructing people about what they ought to believe makes it much more likely that those people will believe what is true (that is, they will believe what they ought to believe, ceteris paribus) than if they were left to their own devices to figure out for themselves what they are justified in believing.

Suppose, for example, I want to understand the “Hubble constant,” which captures the rate of expansion of the universe. I could try reading various technical articles in scientific journals to figure out what I ought to believe about it. It is unlikely I could make good sense of this material, given my lack of background in the relevant mathematics and astrophysics. Alternatively, I could consult my University of Chicago colleague, astronomer Wendy Freedman, an eminent scientist who has done seminal work on the Hubble constant. I am confident Freedman is an epistemic authority about the Hubble constant and cosmology generally, vis-à-vis me; I am more likely to hold correct views about these matters by attending her lectures (for undergraduates no doubt) than if I tried to figure these matters out for myself. 

Why am I confident that she is an epistemic authority? It is obviously not because I have undertaken an evaluation of her research and published results, something I am not competent to do (if I were, I would not need to consult an epistemic authority on this topic). I rely, rather, on the opinions of others we might call meta-epistemic authorities: that is, those who can provide reliable guidance as to who has epistemic authority on a subject. So, for example, in the case of Freedman, I am relying on the facts of her appointment as a university professor at a leading research university and her election to the National Academy of Sciences,2 as well as guidance from a philosopher of science with whom I have worked, and in whom I have particular confidence with regard to his meta-epistemic authority based on past experience.

Epistemic authority is always relative. Professor Freedman is an epistemic authority on the expansion of the universe vis-à-vis me, but would not have been vis-à-vis the Nobel laureate and cosmology expert Steven Weinberg, for example. Similarly, I am an epistemic authority on Raz’s view of authority vis-à-vis my students and my colleagues, but not vis-à-vis Leslie Green, Raz’s student who recently retired from Raz’s chair at Oxford. Epistemic authority is relative both to what the purported authority knows and what the subjects of the authority would be able to know on their own. Epistemic authorities, in short, help their subjects believe what is true (or more likely to be true), and without that help, those subjects would be more likely to end up believing falsehoods or partial truths.

Here is the crucial epistemological point: almost everything we claim to know about the world generally—the world beyond our immediate perceptual experience—requires our reliance on epistemic authorities. This includes our beliefs about Newtonian mechanics (true with respect to midsize physical objects, false at the quantum level), evolution by natural selection (the central fact in modern biology, even though it may not be the most important evolutionary mechanism), climate change (humans are causing it), resurrection from the dead (it does not happen), or the Holocaust (it happened). Most education in the natural sciences, apart from some simple lab experiments students actually perform, is a matter of accepting what epistemic authorities report is the case about the nomic and causal structure of the world. The same is also true of most education about history and the empirical social sciences.

The most successful epistemic norm of modernity, the one that drove the scientific revolution—empiricism—demands that knowledge be grounded, at some (inferential) point, in sensory experience, but almost no one who believes in evolution by natural selection or the reality of the Holocaust has any sensory evidence in support of those beliefs. Hardly anyone has seen the perceptual evidence supporting the evolution of species through selection mechanisms, or the perceptual evidence of the gas chambers. Instead, most of us, including most experts, also rely on epistemic authorities: biologists and historians, for example. (The latter, of course, rely in part on testimony from witnesses to the events they describe.) The dependence on epistemic authority is not confined to ordinary persons: most trained engineers, for example, rely on epistemic authorities for their beliefs about the age of the universe, just as most lawyers rely on epistemic authorities for their beliefs about who wrote the U.S. Constitution and why.

But epistemic authority cannot be sustained by empiricist criteria alone. Salient anecdotal empirical evidence, the favorite tool of propagandists, appeals to ordinary faith in the senses, and is easily exploited given that most people understand neither the perils of induction nor the finer points of sampling and Bayesian inference. Sustaining epistemic authority depends, crucially, on social institutions that inculcate reliable second-order norms about whom to believe; that is, it depends on the existence of recognized meta-epistemic authorities. Pre-collegiate education and especially the media of mass communication have been essential, in the modern age of popular democracy, to promulgating and sustaining such norms.
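A simple worked example shows how a true and salient anecdote can mislead anyone inattentive to base rates (a minimal sketch; the numbers are hypothetical, chosen only for illustration). Suppose 90 percent of a population is vaccinated and vaccination cuts the risk of illness by 80 percent. Bayes’s theorem then implies that most of the sick will nonetheless be vaccinated:

```latex
% Hypothetical figures for illustration only.
% P(V) = 0.9 (vaccinated); P(I | not-V) = p; P(I | V) = 0.2p (80% risk reduction).
\[
P(V \mid I)
  = \frac{P(I \mid V)\,P(V)}{P(I \mid V)\,P(V) + P(I \mid \neg V)\,P(\neg V)}
  = \frac{0.2p \times 0.9}{0.2p \times 0.9 + p \times 0.1}
  = \frac{0.18}{0.28} \approx 0.64.
\]
```

On these numbers, roughly two-thirds of the ill are vaccinated even though vaccination makes any given individual five times less likely to fall ill; the anecdote “my vaccinated neighbor got sick” is true, vivid, and, standing alone, epistemically worthless.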

Consider one of the most important newspapers in the United States, The New York Times, which, despite certain obvious ideological biases (in favor of America, in favor of capitalism), has served as a fairly good mediator of epistemic authority with respect to many topics. It has provided a bulwark against those who deny the reality of climate change or the human contribution to it; it has debunked those who think vaccinations cause autism; it gives no comfort to creationists and other religious zealots who would deny evolution; and it treats genuine epistemic authorities about the natural world—for example, members of the National Academy of Sciences—as epistemic authorities. Recognition of genuine epistemic authority cannot exist in a population absent epistemic mediators like The New York Times.

The assault on knowledge—and especially on who counts as an epistemic authority—has been dramatically exacerbated by the rise of the internet. The internet, after all, is the great eliminator of intermediaries, including, of course, those who determine who has epistemic authority and thus deserves to be heard and thus perhaps believed. This was always its great attraction for those previously excluded from public discourse. As cyberspace, however, with its lack of mediators and filters, has become a primary source of information, its ability to undermine both epistemic authority and, as a result, knowledge has become alarmingly evident: it magnifies ignorance and stupidity and is now leading millions of people to act on the basis of fake epistemic authorities and the fantasy worlds they construct. Consider just a few examples.

Tens of millions of people in the United States continue to believe that Hillary Clinton and other Democrats were running a child abuse sex ring out of a pizza parlor in Washington, D.C.; one deluded individual even showed up with a gun at the parlor.3 A recent survey found that 17 percent of Americans still believe that “a group of Satan-worshiping elites who run a child sex ring are trying to control our politics.”4 A man who murdered dozens of Muslims at two mosques in New Zealand was “steeped in the culture of the extreme-right internet,” with “his choice of language [in his online manifesto], and the specific memes he referred to, suggest[ing] a deep connection to the far-right online community.”5 His manifesto explained that he had done research and developed his racist worldview on “the internet, of course. . . . You will not find the truth anywhere else.”6 The latter assertion involves, alas, a rather serious mistake about epistemic authority.

In the United States, millions may have forgone a vaccine for COVID-19 because of misinformation shared widely on the internet, including by an osteopath in Florida:

An internet-savvy entrepreneur who employs dozens, Dr. Mercola has published over 600 articles on Facebook that cast doubt on Covid-19 vaccines since the pandemic began, reaching a far larger audience than other vaccine skeptics, an analysis by The New York Times found. His claims have been widely echoed on Twitter, Instagram and YouTube.7

Unlike in the cases of online speech inspiring mass murder, the causal connection between vaccine misinformation and harm to human beings is more uncertain, but one can see how it might proceed. Ignorant, gullible, or disturbed people come to believe that the vaccine is dangerous, rather than helpful; these people then forgo vaccination, and some fall ill and some die, infecting others along the way. Although epistemic authorities are united in rejecting this misinformation, the internet makes it available to millions while undermining the credibility of the actual epistemic authorities.

Speech that leads to bad conduct has been a long-standing problem for the law. The law could adopt a blanket prohibition on advocacy of unlawful conduct, but democratic countries with strong commitments to civil liberties have avoided such an approach, choosing instead to limit such prohibitions to advocacy that poses an “imminent” or “immediate” threat of unlawful conduct. John Stuart Mill’s famous example of the speaker inciting an angry mob in front of the corn dealer’s house by declaring that corn dealers starve the poor is the paradigm for this liberal approach: the speaker addressing the mob could be prohibited from inciting it in that context, but he should not be prohibited from publishing that opinion in the newspaper.8

The choices for speech in Mill’s day were starker than they are now: the soapbox agitator inciting the mob in person, at one extreme; or writing an essay in the Times of London, at the other, an essay that would hardly be read—except perhaps by corn dealers and other capitalist elites! (Of course, there were also pamphlets and broadsheets in circulation; as media of communication, they are perhaps a bit like the current internet, but less omnipresent.) Inciting mobs in real time to lawless action is an easy case, even for those otherwise committed to very strong free speech protection; it is perhaps too easy, since it rarely gets prohibited, given that it happens in real time. However, the media for speech are more complex today. There remain, to be sure, speakers inciting mobs in real time in front of proverbial corn dealers’ houses, but there are also pundits and talking heads on radio and television speaking to thousands or millions whom they cannot see, but some of whom might be mobs menacing corn dealers. And then there are those uploading YouTube videos and podcasts, potentially reaching thousands or millions of the alienated, the disturbed, the marginalized, the “highly incitable.” Mill’s distinction has less direct applicability in our internet world.

This point was memorably made by a journalist writing in the wake of the Easter bombings in Sri Lanka by Islamic terrorists in 2019. The government responded by shutting down social media for fear that it would be used to incite anti-Muslim violence. Here is how the journalist, Kara Swisher, described it:

When the Sri Lankan government temporarily shut down access to American social media services like Facebook and Google’s YouTube after the bombings there on Easter morning, my first thought was “good.” 

Good, because it could save lives. Good, because the companies that run these platforms seem incapable of controlling the powerful global tools they have built. Good, because the toxic digital waste of misinformation that floods these platforms has overwhelmed what was once so very good about them. And indeed, by Sunday morning so many false reports about the carnage were already circulating online that the Sri Lankan government worried more violence would follow. . . .

“The extraordinary step reflects growing global concern, particularly among governments, about the capacity of American-owned networks to spin up violence,” The Times reported on Sunday.

Spin up violence indeed. Just a month ago in New Zealand, a murderous shooter apparently radicalized by social media broadcast his heinous acts on those same platforms. Let’s be clear, the hateful killer is to blame, but it is hard to deny that his crime was facilitated by tech. . . . 

Social media has blown the lids off controls that have kept society in check. These platforms give voice to everyone, but some of those voices are false or, worse, malevolent, and the companies continue to struggle with how to deal with them. 

In the early days of the internet, there was a lot of talk of how this was a good thing, getting rid of those gatekeepers. Well, they are gone now, and that means we need to have a global discussion involving all parties on how to handle the resulting disaster, well beyond adding more moderators or better algorithms.9

This journalist’s concerns are articulated within the traditional “speech causing harmful behavior” framework familiar from the law of incitement inspired by Mill’s example. She also aptly identifies the two crucial challenges the internet presents to this paradigm:

  1. Without gatekeepers, the internet can easily become awash in the “toxic digital waste of misinformation.” The internet “give[s] voice to everyone, but some of those voices are false or, worse, malevolent.”

  2. This “toxic digital waste of misinformation” can then “spin up violence” in crimes “facilitated by tech.”

The first challenge is an epistemological one. The absence of gatekeepers means everyone can get through the internet gate (as long as they have access, which is less and less of an obstacle), and there is no check on their honesty, their accuracy, or their sanity. The result is that the internet is often an unreliable mechanism for generating true beliefs about the world. The second concerns the consequences of this epistemological failure, although “spin up” and “facilitated” are obviously rather vague for legal purposes. The idea, however, is that “misinformation” on the internet can incite violence thanks to the ubiquity of the message.

Now incitement has two parts: there are the (potentially) inciting words spoken in a particular context, and then there is the reception of those words by hearers (those “incited”). The law of incitement tends to focus on the former, simply assuming a generic hearer. While some words, under the right conditions, are inciting even to normal hearers (Mill’s case of the mob in front of the corn dealer’s house, which was presumably the Sri Lankan government’s worry), it is surely also important that some hearers are especially incitable, perhaps regardless of the context.

One can be more or less polite about this last point. As Swisher put it: “Social media has blown the lids off controls that have kept society in check.” Why does society need “controls” to “keep” it “in check”? In the modern era, Freud articulated this concern most memorably in Civilization and Its Discontents in 1930, right before the Nazi catastrophe engulfed the world. In Freud’s view, human beings are by nature driven by both aggressive instincts that pull them (and society) apart, and “erotic” instincts that draw them together; Freud’s concern was that this instinctual “brew” was ready to blow up at any time, especially given the excess repression of “erotic” drives characteristic of his time. Even Marxists, who reject Freud’s view of human nature, can agree that society is always on the verge of “blowing up” precisely because of the exploitation of the labor of many for the benefit of the few. Whatever the explanation, it is clear that there are many infuriated and agitated people in all modern societies, a large proportion of whom are “highly incitable” (some of them with good reason). Part of the problem, even from the Freudian or Marxist perspective, is that those who have good reason to be angry and agitated are typically angry and agitated for the completely wrong reasons: they want to kill people of a different religion, for example, not their actual oppressors. Blowing society apart without rhyme or reason is not a “progressive” goal. That is why “keeping the lid on” is something free speech doctrine cannot ignore.

In the pre-internet era, the major media of communication helped to keep the lid on society. In the internet era, however, the law needs to consider the fact that there are people who are “ready to blow”: that is, highly incitable, and thus susceptible to the omnipresent internet. Internet speech is not like Mill’s firebrand agitating the mob in front of the corn dealer’s house; content on the internet is everywhere, always available to those ready to “blow,” wherever they are. It would be as if the agitator against the corn dealer could deliver his message not just to the mob in front of him, but to anyone, anywhere, in front of any corn dealer’s house. In the internet era, and with the collapse of epistemic authority, we need to think about the effects of internet speech on these people.

Unlike in Mill’s time, we can take a meaningful precaution against provoking the “highly incitable” while still allowing free expression: we could, as the Sri Lankan government did, shut down parts of the internet, cutting off the “toxic digital waste of misinformation” that might incite normal hearers and will almost certainly trigger the highly incitable, while speakers can still stand on street corners and submit opinion pieces to the newspapers. Of course, the power to shut down the whole internet in an emergency is ripe for abuse, and that abuse would be hard to guard against in advance. This certainly does not mean the Sri Lankan government was wrong in the case described above, but regulations authorizing generic “emergency internet shutdowns” are plainly risky given the background incentives governments have to shut down communication.

The internet, however, is huge and has many locations. One possibility would be to authorize regulators to close particular sites, such as Google, YouTube, Instagram, and Facebook, during emergencies. The list would change over time, depending on what the most common sources of incitement are. Since Google searches are an instrument of mischief, shutting them down is in all likelihood particularly important.10 Internet users would still be able to find all their regular websites without the benefit of Google (or other search engines), and they would still be able to access the news sites featuring content filtered through gatekeepers (like The New York Times or the BBC). (As an alternative, perhaps government could block certain search terms on Google for a short period of time, depending on the emergency.) Like the Roman Republic’s provision for dictatorial powers, such emergency shutdowns should be temporally limited: in the case of internet sites, say, one week, subject to judicial review of a requested extension. One thing we know is that time cools passions.

A better approach to filtering would reduce the number of places on the internet that offer incitement in the first place. This would require a significant change to First Amendment jurisprudence in the United States, which is particularly permissive. My proposal here would apply only to what I call “pure” internet sites. It would involve, in the first instance, creating analogs of existing “fighting words” and “incitement to imminent illegal action” doctrines under American constitutional law. By “pure internet sites,” I mean websites that do not have analogs in the traditional (or “legacy”) media—print (like The New York Times), radio (National Public Radio), television (CNN, ABC, Fox)—and that do not involve serious gatekeepers, who review content for defamation, accuracy, vulgarity, and so on. For these pure internet sites (such as blogs, webzines [some of which pretend to have editors], X, Instagram, and Facebook), I suggest that we apply the familiar categories of “low value” speech, but without their temporal conditions.

Fighting words, as the Supreme Court famously said, are words that “by their very utterance, inflict injury or tend to incite an immediate breach of the peace.”11 In the case of pure internet sites, this would mean forbidding words that would, in real life and real time, inflict injury or tend to incite an immediate breach of the peace. So, too, for incitement to unlawful action: the test would be whether the words, if said “in front of the corn dealer’s house” (that is, in a temporal context ripe for incitement), would lead to illegal action; if so, they would be forbidden. Stripping out the real-world temporal requirement is justified by the wide reach of the internet, and its potential to trigger not only the normally incitable, but the highly incitable as well. The internet constitutes a “virtual reality,” as is often said, so it deserves “virtual” fighting words and “virtual” incitement doctrines. This would no doubt shut down a lot of internet ranting, but the loss to well-being (even accounting for the unhappiness of ranters) would be minimal. It seems plausible that those who want to spout “fighting words” would be less likely to do so in actual reality than in the virtual one. It is hard to see how this is an overall loss to society’s well-being.

Yet none of the preceding, even if it would help with the risk of incitement and ensuing violence, would touch the problem of false information—about vaccines or false COVID-19 cures—that is peddled continuously on the internet (though not only there). Here is where the United States would require a fundamental rethinking of First Amendment doctrine and how it treats harms caused through the mental or intellectual mediation of a hearer/reader.12 The problem with current law is illustrated by the fate of the early 1980s proposal by writer and activist Andrea Dworkin and legal scholar Catharine MacKinnon to create a cause of action for harms suffered due to pornography. One law embodying that proposal was struck down as unconstitutional by the U.S. Court of Appeals for the Seventh Circuit in American Booksellers Association v. Hudnut.13 The Court rejected the ordinance’s definition of “pornography” as involving an unconstitutional content-based restriction on speech. The Court actually agreed with “the premises of this legislation. Depictions of subordination tend to perpetuate subordination.”14 But that did not mean the law passed constitutional muster; rather, the effectiveness of pornography in subordinating women

simply demonstrates the power of pornography as speech. All of these unhappy effects depend on mental intermediation. Pornography affects how people see the world, their fellows, and social relations. If pornography is what pornography does, so is other speech. Hitler’s orations affected how some Germans saw Jews. Communism is a world view, not simply a Manifesto by Marx and Engels or a set of speeches. Efforts to suppress communist speech in the United States were based on the belief that the public acceptability of such ideas would increase the likelihood of totalitarian government.15

The phrase “mental intermediation” does a lot of theoretical work in this part of the opinion. Much of that intermediation is “unconscious” (as the opinion even acknowledges), such that the individual presumably exercises no control. Even conscious intermediation is affected by forces beyond the individual’s control.16 The real question should be about the causal chain from “speech” to harm (subject, perhaps, to foreseeability or “reasonableness” requirements). Someone in the Weimar Republic in 1930 who thought Hitler and the Nazis should be shut down because their speech was very dangerous would in fact have been correct: Hitler and the Nazis made clear the harm they intended to do, in a way wholly unlike Marx and Engels (assuming someone actually read them). The real issue should be “the likelihood” (to quote the Seventh Circuit) of the harm, and the severity of that harm, not whether there is “mental intermediation.”

The latter consideration was the real hurdle for the general Dworkin-MacKinnon idea, not the fact of “mental intermediation,” whose role they would not have denied: the causal connection between the “speech” and the harm was not very clear and is even less clear now as pornography has become widely available. Countries with open internet access are now awash in pornography (“one click away”), to an extent Catharine MacKinnon would never have dreamed of at the time. The massive rise in exposure to pornography has not coincided with restrictions on the rights of women or an increase in sex crimes (although there are so many confounding variables, it is still hard to know the actual effect of pornography). 

The contrast with false speech about COVID-19 and vaccines is instructive. The causal connection between those who hear false information and those who forgo life-saving vaccines and public health measures seems clearer. National Public Radio listeners and cosmopolitan professionals in major urban areas who read The New York Times are not generally forgoing vaccines and masking; Fox News viewers and “conservative” talk radio listeners forgo them at higher rates. Of course, not all of the latter make bad choices, but some do, and do so because of the speech to which they have been exposed. What we need for pure internet sites (perhaps not only them) is tortious liability for harm that a reasonable person would see as a foreseeable consequence of speech they knew or should have known was false. We must be mindful that the concept of “harm” has been inflated in recent years, to encompass psychological states that would have previously been deemed “offensive” or “hurtful.” “Harm” for purposes here should be limited to “injury to physical well-being.” The knowledge requirement on the part of the speaker should be similar to “actual malice” in the defamation context: knowledge of the falsity of the speech or reckless indifference to its truth or falsity. Foreseeability judgments, as we learned from the American legal realists, are famously sensitive to situational factors and policy considerations, and that should be welcome: if you peddle nonsense about cures or vaccines during a pandemic, and people end up sick or dead, you should suffer the legal consequences if those people (or their estates) can prove the causal role of your speech in the outcome.

How to think about causation is more complicated. Obviously, if a website is the exclusive source of the false information that leads to the physical harm, that is an easy case; the more likely scenario is one in which there are multiple sources of misinformation leading to harm, sources both on the internet and in the traditional media. Something like the “market share” liability theory from tort law should apply: purveyors of misinformation pay damages in proportion to how much of the market they reach.
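To make the apportionment idea concrete (a minimal sketch; the damages figure, the audience-reach numbers, and the three “purveyors” are all hypothetical), market-share liability of the sort courts developed in the DES cases would divide a plaintiff’s proven damages among the purveyors of the misinformation in proportion to their reach:

```latex
% A sketch of market-share apportionment (hypothetical figures).
% L = total proven damages; r_i = audience reach of purveyor i.
\[
D_i = L \cdot \frac{r_i}{\sum_j r_j}
\]
% Example: L = \$900{,}000, with reaches of 50, 30, and 20 million, gives
% D_1 = \$450{,}000, \quad D_2 = \$270{,}000, \quad D_3 = \$180{,}000.
```

The precise proportions would of course be contested in litigation; the point is only that no single purveyor escapes liability merely because it was one voice among many.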

Whatever the doctrinal regime that is adopted, prevailing in tort will often be challenging (as it should be), but one may hope that the specter of liability will deter some of the worst offenders, even if it becomes a game of whack-a-mole. Sometimes, however, whacking (in court) the biggest moles is enough. The crucial point is that “mental intermediation” should be irrelevant, just as it is when the rabble-rouser incites the mob in front of the corn dealer’s house, since “being incited” to unlawful action also depends on “mental intermediation.” The fact that the “mental intermediation” lasts longer should be legally irrelevant.

There is a final obstacle in the United States to legal remedies in tort for mischief on the internet: namely, Section 230 of the Communications Decency Act. Section 230 shields internet service providers (ISPs), search engines, and websites from liability in tort for content that others provide. It does not exempt them, however, from liability for copyright violations or from violations of federal criminal statutes. The exemption for ISPs makes sense: they are more like the phone company than The New York Times. But the idea that website owners get a free pass on hosting tortious wrongdoing, but not on hosting copyright violations, is prima facie bizarre. Section 230 is hardly the only approach democratic countries can take toward liability on the internet,17 and it should clearly be repealed with respect to websites, but that is an issue I have addressed elsewhere.18 What is important here is that if there is to be tortious liability for false speech on the internet that is foreseeably harmful (like lies about vaccines), we would need to change Section 230. Notice and takedown requirements, together with penalties for reckless or baseless notices, are probably the best approach (along with a prohibition on websites requiring waivers).

Counting against any efforts to regulate the internet is the strongest argument against government regulation of speech, in real or virtual life: distrust of regulators.19 I have bracketed that consideration here, although it is the regulatory paradox that looms over all such discussions. If regulators are not themselves meta-epistemic authorities (or meta-meta-epistemic authorities), then asking them to regulate speech on the internet with an eye to maximizing epistemic values is a fool’s errand. Perhaps we have passed the point of no return in the United States, in which the potential regulators are themselves epistemically incompetent. I am somewhat more optimistic, but time will tell.

Author’s Note

Thanks to Ariana Vasey for research assistance and to the Alumni Faculty Fund of the University of Chicago Law School for support.

Endnotes

  1. Joseph Raz, “Authority, Law and Morality,” The Monist 68 (3) (1985): 295. Raz’s main concern is practical authority, which aims to tell people what they ought to do, but that is not what is most important for my purposes. Raz’s account has been disputed as an analysis of the kind of authority law claims. For one kind of doubt (but not the only one), see Brian Leiter, “Legal Positivism as a Realist Theory of Law,” in The Cambridge Companion to Legal Positivism, ed. Torben Spaak and Patricia Mindus (Cambridge: Cambridge University Press, 2021).
  2. The National Academy of Sciences, like the American Academy of Arts and Sciences, has to some extent compromised its epistemic authority in recent years by giving weight to demographic diversity, rather than purely scientific and scholarly achievement, in its election of new members.
  3. Matthew Haag and Maya Salam, “Gunman in ‘Pizzagate’ Shooting Is Sentenced to 4 Years in Prison,” The New York Times, June 22, 2017.
  4. Mallory Newall, “More than 1 in 3 Americans Believe a ‘Deep State’ is Working to Undermine Trump,” Ipsos, December 30, 2020. See also Kevin Roose, “QAnon Followers Are Hijacking the #SaveTheChildren Movement,” The New York Times, August 12, 2020. More recently, see Giovanni Russonello, “QAnon Now as Popular in U.S. as Some Major Religions, Poll Suggests,” The New York Times, May 27, 2021.
  5. Daniel Victor, “In Christchurch, Signs Point to a Gunman Steeped in Internet Trolling,” The New York Times, March 15, 2019.
  6. Ibid.
  7. See Sheera Frenkel, “The Most Influential Spreader of Coronavirus Misinformation Online,” The New York Times, August 27, 2021.
  8. John Stuart Mill, On Liberty (Boston: Ticknor and Fields, 1863), 107–108.
  9. Kara Swisher, “Sri Lanka Shut Down Social Media. My First Thought Was ‘Good,’” The New York Times, April 22, 2019.
  10. See Brian Leiter, “Cleaning Cyber-Cesspools: Google and Free Speech,” in The Offensive Internet: Speech, Privacy, and Reputation, ed. Saul Levmore and Martha C. Nussbaum (Cambridge, Mass.: Harvard University Press, 2010), 161–162.
  11. Chaplinsky v. New Hampshire, 315 U.S. 568, 572 (1942).
  12. This is not the only issue. The plurality opinion by Justice Kennedy in U.S. v. Alvarez, 567 U.S. 709 (2012), rejected the view that false statements of fact were simply “low value” speech, subject to little constitutional protection, and thus held that the Stolen Valor Act of 2005 (which imposed criminal penalties for lying about receipt of military medals and honors) was a content-based regulation subject to strict scrutiny. Concurring in the result, two Justices (Breyer and Kagan) held that false statements of fact deserved only intermediate scrutiny, but still the Act failed to pass constitutional muster even under this less demanding test. The dissent by Justice Alito (joined by Justices Scalia and Thomas) argued that prior cases already stood for the proposition that “the right to free speech does not protect false factual statements that inflict real harm and serve no legitimate interest.” Ibid., 739. The dissent’s view would ultimately have to prevail were regulations like those discussed in the text to be constitutionally viable, in addition to the issues discussed in the text.
  13. 771 F.2d 323 (7th Cir. 1985), aff’d, 475 U.S. 1001 (1986).
  14. Ibid., 329.
  15. Ibid.
  16. On the general philosophical problem, and the relevant psychological evidence, see Brian Leiter, Moral Psychology with Nietzsche (Oxford: Oxford University Press, 2019), chaps. 5 and 7.
  17. Ashley Johnson and Daniel Castro, How Other Countries Have Dealt With Intermediary Liability (Washington, D.C.: Information Technology and Innovation Foundation, 2021). Australia, for example, is proposing legislation that would require social media companies to have takedown provisions for defamatory content. See also, for example, “Australia to Introduce New Laws to Force Media Platforms to Unmask Online Trolls,” Reuters, November 28, 2021.
  18. Leiter, “Cleaning Cyber-Cesspools,” 156.
  19. See Brian Leiter, “The Case Against Free Speech,” Sydney Law Review 38 (4) (2016): 407, 433–439.