Milton Wolf Seminar on Media and Diplomacy

A “Right to Lie”? The Many Facets of Free Speech and Fake News

By Brian Hughes
August 20, 2019

The 2019 Milton Wolf Seminar on Media and Diplomacy included a great deal of discussion about shifting norms and laws surrounding freedom of speech. One panel in particular, “Beyond the Demands of Skirmishing: Legal Norms and International Challenges,” distilled the problem down to a single question: Is there a right to lie? In an age of so-called “fake news,” should the production and promotion of untruth be subject to legal sanction? Panelists were divided, some arguing for expansive protections covering even untrue statements. Other panelists cited the “force multiplier” effect of digital media—how communications can amplify and accelerate the impact of speech acts—as cause for new regulation limiting untrue speech.

This question is by no means easy to parse, much less to definitively answer. Even the categorization of untrue speech remains fuzzy around its edges. Of course, libel, slander, and defamation laws circumscribe speech that is both demonstrably false and damaging. However, the bounds of falsity extend beyond the high standards usually set for such civil actions. Indeed, a sizable share of what is today considered “fake news” consists not of factually untrue reportage (though such cases certainly do exist), but rather of misleading or simply contested emotional conjugations. Sometimes referred to as Russell’s Conjugation, the concept is exemplified by Bertrand Russell’s maxim:

I am firm. You are obstinate. He is a pig-headed fool. I am righteously indignant. You are annoyed. He is making a fuss over nothing. I have reconsidered the matter. You have changed your mind. He has gone back on his word.

Of course, some emotional conjugations are truer than others. And in an age when social media can move masses, even across the line of genocide, some conjugations are capable of producing the ultimate harm. When, for example, an anti-Rohingya Facebook post refers to the ethnic and religious minority as “non-human…dogs,” it is doing so not in a spirit of facticity, but of extreme emotional conjugation. While such speech would almost certainly run afoul of the British Public Order Act’s prohibition on stirring up racial hatred, it would not violate United States law, which limits hate speech only at the point of “imminent lawless action.”

By the same token, social media sentiments such as “We need to destroy their [Rohingya] race” are neither factual nor false. They express the desire, and possibly the intent, to do harm. Such statements would almost certainly run afoul of, for example, Austria’s Penal Code Part 20, Section 283. However, they would just as certainly be protected under the United States’ First Amendment, as they specify neither date nor place, and so fail to constitute imminence. While such standards might not directly answer this question of the “right to lie,” they do offer important precedents for thinking through the question.

In the United States, where my own work focuses, public debates over free speech are frequently based on vague, or even sloppy, articulations of the issues at play. The legal regimes, social norms, and moral sentiments that determine our beliefs in the proper limitations of speech (if any) are often confused and mistaken for one another. Once again, social media has complicated such debates in ways which our public discourse has yet to reconcile.

Questions as to the justice of social media bans, for example, rarely make these important distinctions. Corporations in the United States are free to deny service to those they deem undesirable customers or business partners. They may disinvest from such relationships so long as doing so does not discriminate against individuals based on a handful of protected categories such as race, religion, and sex. Political affiliation is not a similarly protected category. Therefore, as a legal matter, digital platforms such as Facebook and Twitter have every right to ban far-right, Islamist radical, or generally extremist users from their services.

In the United States, Section 230 of the Communications Decency Act treats this as a matter of content moderation, granting platforms the freedom and legal protection to remove any content they wish. These intermediaries are classified as interactive computer service providers, not telecommunications providers, and are thus relieved of the burden of common carriage. They are not obliged to treat all traffic on their platforms equally. They may ban at will, and public pressure, concerns over brand image, the risk of boycott, or the moral conscience of executives and shareholders increasingly motivate them to do so.

As a matter of norms, however, the issue of access to digital platforms is not so clear cut. Social groups differ as to the extent of unpleasantness they are willing to tolerate in their discourse, and indeed as to what they consider abusive. American norms have typically favored allowing even the worst political voices to speak freely, under the assumption that “sunlight is the best disinfectant” and that defeating such ideas in the public sphere generally strengthens our democracy. Given the failure of the public sphere to halt the spread of extremist ideology, and this failure’s correlation to the rise of social media, these inclusive norms may be shifting. The fierce debate over the governance and moderation of social media is in many ways a reflection of these shifting norms.

Moral intuitions are even more individualized and fiercely held. If we see a clear causal line between extremist speech and the atrocities against the Rohingya, then our moral intuition may demand zero tolerance for hate online. By the same token, our moral intuition may cultivate a fierce loyalty to liberal values of open discourse. Here, reasonable people may disagree. But it seems even reasonable people are not easily swayed or diverted from their moral commitments on the topic of free speech.

Disagreements such as those described above usually concern the treatment of hatefully conjugated speech, such as extremist or abusive language. But they also inform this question of the “right to lie.” If a debate over digital platform moderation, hateful speech, and the “right to lie” is to fruitfully proceed, these distinctions must be a part of the debate.

If the limits of hatefully conjugated speech are to be found by assessing their potential harm to society, then questions of the right to lie should also be weighed against the risks of harm untruthful speech poses. Here we should set aside the question of emotional conjugation, which is more important to assessing dilemmas surrounding hate speech. Truly fake news, that is, media relating events that never happened, or denying events that did, has the potential to harm regardless of emotional tone.

However, the risks of harm posed by truly fake news are almost always second- or third-order consequences. To turn once again to the United States, where my own research focuses, we may examine a recent example: a manipulated video of House Speaker Nancy Pelosi. The video, broadcast by FOX News and tweeted by President Trump, purports to show Pelosi slurring her speech. This factually inaccurate depiction was achieved by slowing down the video, a trivially easy technical manipulation. It was, in the plainest terms, a lie. Even if the impact of this lie were negligible or non-existent, harm would still have been caused, as democracy relies on public trust in the information with which it reaches its political decisions. If this lie were to directly shape the outcome of Pelosi’s reelection, or diminish her effectiveness in the U.S. Congress, then that outcome could certainly be said to harm democracy. In either event, a case for serious harm can be made.

And yet, in either case, the harm inflicted would be indirect. The latter case, which proposes an actual impact on voting patterns, is a second-order harm: in a two-step process, voters watch the manipulated video and vote differently than they otherwise would. The former scenario implies a three-step process: voters see the video and know it is false; while the video may not influence their ultimate voting behavior, its mere existence worsens their opinion of American democracy; and this altered perception leads to subtler changes in behavior, such as increased cynicism or diminished participation, which are themselves harmful to the functioning of a liberal democracy.

One can very easily imagine these second- and third-order harms occurring, and they could prove very grave. Yet these harms are prohibitively difficult, if not impossible, to measure on a case-by-case basis. Such indeterminacy makes calibrated enforcement against this type of lie impossible. The right to lie would therefore need to be either absolutely permitted or absolutely prohibited. Scant middle ground exists.

Digital communication technology adds yet another dimension of complexity to this question. The Internet allows content to live in perpetuity, served to viewers with the same sense of immediacy as the day’s latest news. While this might increase the negative impact of lies such as the doctored Pelosi video, it also speaks to the ungovernability of speech in online space. The very affordances of digital technology that make lies more damaging, and more tenacious, are the very affordances that make lies practically impossible to ban.

If the current regime of maximum permissiveness is to end, then, it will not be as a result of legislation or even of policies put in place by digital platforms themselves. Rather, change will have to come at the more fundamental level of platform design and engineering. For such changes to occur, however, a similarly radical reshaping of the political economy of digital media must occur as well. As currently arranged, lies such as the Pelosi video drive critical revenue for digital media platforms. In many countries, these platforms are shielded from liability for the content they host. For the Internet to become a more truthful place, both of these traits would have to change. Until they do, we may or may not possess a right to lie. But we will certainly enjoy the ability.