
Is Speech Freer Without Fact-Checks?

Meta recently announced that it will end its practice of employing fact-checkers to moderate content on its platforms, including Facebook, Instagram, and Threads. Instead, Meta sites will employ a system similar to the “community notes” feature on Twitter/X, where users can propose notes to provide more information about a post. In a short video posted to Facebook, Mark Zuckerberg explained that he made the change because fact-checkers had become too “politically biased,” and he accused them of having “destroyed more trust than they created.” At the end of the day, he wanted his platforms to get back to their roots of upholding freedom of speech.

Many have expressed concerns about Zuckerberg’s decision, as well as his motivations for making it. The most prominent concern has been that without moderation from fact-checkers, significantly more false and hateful information will make its way around Meta’s platforms. Others have expressed the concern that Zuckerberg’s decision is itself politically motivated. There is evidence that conservatives, who more often share links to low-quality news sites, are more likely to have their posts fact-checked, which has led some to believe that conservative voices are being disproportionately targeted. Zuckerberg’s recent behavior – including donations to Trump and his removal of protections against certain kinds of hate speech – has led some to conclude that he is really motivated by a dislike of seeing fact-checks on views he and his friends hold.

Even if Zuckerberg truly is motivated by promoting freedom of speech, what is it about having a system of third-party fact-checkers that inhibits free speech? And will shifting responsibility to the community make speech on Meta platforms more free?

Let’s start with the first question: why might third-party fact-checkers inhibit freedom of speech? One might think that the problem is one of quantity: any instance of fact-checking is, perhaps, a form of censorship, so without dedicated fact-checkers whose job it is to flag content, there will simply be less censorship and speech will be freer.

If this is our concern, would a community notes system be an improvement? Perhaps. When looking at Twitter/X, for example, The Poynter Institute “found that fewer than 10 percent of Community Notes drafted by users ended up being published on offending posts,” and that these numbers “are even lower for sensitive topics like immigration and abortion.” Shifting to a community notes model where only a fraction of notes ever see the light of day may then decrease the number of posts that are flagged.

At the same time, however, community members of sites like Facebook and Instagram outnumber third-party fact-checkers by many orders of magnitude. So even if only a small percentage of community notes end up being published, the result may not be any fewer fact-checks.

Zuckerberg’s complaint, however, is not so much with the concept of fact-checking itself as with the fact-checkers: he claims they are too politically biased and thus disproportionately censor certain views. There has been significant pushback against this claim; regardless, let’s assume for the sake of argument that it’s true. If fact-checkers are biased, will the community be any better?

It’s not clear that it will. After all, the community itself may very well be biased if a plurality of its users falls on one side of the political spectrum. It’s also unclear whether conservative views would receive any fewer flags under a community notes system than under a fact-checking system. A recent study in Nature, for example, found that professional fact-checkers and “politically balanced groups of laypeople” largely agreed on which sources of information were low quality, the lion’s share being those that amplified conservative views. If Zuckerberg is concerned that disproportionately flagging conservative views constitutes a form of censorship, then shifting fact-checking responsibilities to the community may not make things any better.

One might think instead that third-party fact-checking just isn’t necessary anymore. An article in Politico, for instance, recently argued that the “disinformation panic” that started during Trump’s first presidency is “over.” Part of the evidence for this claim is that while the contentious and surprising nature of Trump’s first election win demanded an explanation – one that many found in misinformation campaigns designed to mislead voters – the second Trump win was decisive and, at least in terms of drama around the manipulation of results, mundane. Per the Politico article: “no one was fooled into voting for Trump.”

There have also been increasingly frequent criticisms that programs dedicated to ameliorating the problems of misinformation and disinformation have largely failed to bear fruit. People spreading false information, the argument goes, is not so much a problem to be solved as a feature of humanity to be tolerated, especially given the politically fraught nature of labeling information about social issues as either true or false. This is not to say that we should abandon the project of identifying false and potentially harmful information online. Rather, the thought goes, employing third-party fact-checkers is an overcorrection for a non-problem, and thus unnecessarily restricts free speech.

Questions about how fruitful the study of misinformation and disinformation has been remain open, although there is good evidence that many interventions are, in fact, effective. As stated in the Politico article, there is also good reason to worry about the quantity and egregiousness of false information being shared on social media during the second Trump presidency, given his infamous indifference to the truth and his choice of appointees. Of course, it may very well turn out that Trump’s rhetoric is met with less opposition during his second term, and that changing political winds result in more people agreeing on obvious falsehoods they see on social media. This, however, is not an indication that disinformation is over; rather, it underlines how those in power have a vested interest in attempting to control narratives around the extent to which disinformation is a problem.

We have seen little reason to think that the existence of fact-checking represents a limitation on free speech, nor have we seen much reason to think that shifting to a community notes model will make things any better. But perhaps shifting responsibility for fact-checking to the community will better promote free speech not by being any less restrictive, but by granting new abilities to its users. By creating a system in which everyone has a say in determining whether some content is fact-checked or flagged, the process becomes democratic in a way that is presumably lacking when those duties are outsourced to third parties, and thus free speech flourishes.

There is a sense in which this shifting of responsibilities gives more freedom to the users, as they now possess an ability they didn’t have before. But a system with only minimal guardrails also risks stifling many more voices. For example, Zuckerberg’s recent changes that allow users to say that gay and trans people have “mental illness” remove restrictions on a certain kind of speech from a certain kind of person, but will undoubtedly result in a lot less speech from members of communities that Meta’s policies refuse to respect. Moderation of speech – be it in the form of fact-checking or policies around what kinds of content are permitted on a platform – can thus promote free speech rather than inhibit it.

It remains to be seen whether Zuckerberg’s version of community notes will be successful in identifying false and misleading information, and perhaps only he knows his true intentions in making the change. However, if he really was motivated by making speech freer on his platforms, there’s good reason to think his efforts are misguided.