What Public Discourse Gets Wrong About Misinformation Online

A new study from the Computational Social Science Lab shows that while online misinformation exists, it isn’t as pervasive as pundits and the press suggest.

By Hailey Reissman

In 2006, Facebook launched its News Feed feature, sparking a seemingly endless, contentious public discourse on the power of the “social media algorithm” to shape what people see online.

Nearly two decades and many recommendation algorithm tweaks later, this discourse continues, now laser-focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content. 

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts and co-authors David Rothschild of Microsoft Research (Wharton Ph.D. ‘11 and a PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College, and Annenberg alumna Emily Thorson (Ph.D. '13) of Syracuse University review years of behavioral science research on exposure to false and radical content online. They find that, despite a media narrative claiming the opposite, exposure to harmful and false information on social media is minimal for all but the most extreme users.

A broad claim like “it is well known that social media amplifies misinformation and other harmful content,” recently published in The New York Times, might catch people’s attention, but it isn't supported by empirical evidence, the researchers say.

“The research shows that only a small fraction of people are exposed to false and radical content online,” says Rothschild, “and that it’s personal preferences, not algorithms, that lead people to this content. The people who are exposed to false and radical content are those who seek it out.”

Misleading Statistics 

Articles debating the pros and cons of social media platforms often use eye-catching statistics to claim that these platforms expose Americans to extraordinary amounts of false and extremist content, and subsequently cause societal harm, from polarization to political violence.

However, these statistics are usually presented without context, the researchers say. 

For example, in 2017, Facebook reported that content made by Russian trolls from the Internet Research Agency reached as many as 126 million U.S. citizens on the platform before the 2016 presidential election. This number sounds substantial, but in reality, this content accounted for only about 0.004% of what U.S. citizens saw in their Facebook news feeds.
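A quick back-of-the-envelope calculation shows how a large absolute reach can still be a tiny relative share. The impression counts below are made-up numbers chosen only to reproduce the 0.004% figure cited above; they are not Facebook’s actual totals.

```python
# Illustrative arithmetic only: both totals are assumed, hypothetical values,
# not Facebook's real counts. They are chosen so the share works out to the
# 0.004% figure reported in the article.
ira_impressions = 1.5e9       # hypothetical impressions of IRA-produced content
total_impressions = 3.75e13   # hypothetical total news feed impressions in the period

share_pct = ira_impressions / total_impressions * 100
print(f"IRA content share of feed: {share_pct:.4f}%")  # prints "IRA content share of feed: 0.0040%"
```

The point of the exercise: a denominator in the tens of trillions makes even a reach of over a hundred million people a vanishingly small fraction of what users actually saw.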

“It’s true that even if misinformation is rare, its impact is large,” Rothschild says. “But we don't want people to jump to larger conclusions than what the data seems to indicate. Citing these absolute numbers may contribute to misunderstandings about how much of the content on social media is misinformation.”

Algorithms vs. Demand

Another popular narrative in discourse about social media is that platforms’ recommendation algorithms push harmful content onto users who wouldn’t otherwise seek out this type of content.

But researchers have found that recommendation algorithms tend to push users toward more moderate content and that exposure to problematic content is heavily concentrated among a small minority of people who already have extreme views.

“It’s easy to assume that algorithms are the key culprit in amplifying fake news or extremist content,” says Rothschild, “but when we looked at the research, we saw time and time again that algorithms reflect demand and that demand appears to be a bigger issue than algorithms. Algorithms are designed to keep things as simple and safe as possible.”

Social Harms

A recent wave of articles suggests that exposure to false or extremist content on social media is the cause of major societal ills, from polarization to political violence.

“Social media is still relatively new and it’s easy to correlate social media usage levels with negative social trends of the past two decades,” Rothschild says, “but empirical evidence does not show that social media is to blame for political incivility or polarization.”

Improving Public Discourse About Social Media

The researchers stress that social media is a complex, understudied communication tool and that there is still a lot to learn about its role in society.

“Social media use can be harmful and that is something that needs to be further studied,” Rothschild says. “If we want to understand the true impact of social media on everyday life, we need more data and cooperation from social media platforms.”

To encourage better discourse about social media, the researchers offer four recommendations:

Measure exposure and mobilization among extremist fringes: Platforms and academic researchers should identify metrics that capture exposure to false and extremist content not just for the typical news consumer or social media user, but also at the fringes of the distribution. Focusing on tail exposure metrics would help to hold platforms accountable for creating tools that allow providers of potentially harmful content to engage with and profit from their audience, including monetization, subscriptions, and the ability to add members and group followers.
Reduce demand for false and extremist content and amplification of it by the media and political elites: Audience demand, not algorithms, is the most important factor in exposure to false and extremist content. It is therefore essential to determine how to reduce, for instance, the negative gender- and race-related attitudes that are associated with the consumption of content from alternative and extremist YouTube channels. We likewise must consider how to discourage the mainstream press and political elites from amplifying misinformation about topics such as COVID-19 and voter fraud in the 2020 US elections.
Increase transparency and conduct experiments to identify causal relationships and mitigate harms: Social media platforms are increasingly limiting data access, even as researchers outside the platforms need greater data and API access to detect and study problematic content effectively. Platform-scale data are particularly necessary to study the small groups of extremists who are responsible for both the production and consumption of much of this content. When public data cannot be shared due to privacy concerns, the social media platforms could follow the ‘clean room’ model used to allow approved researchers to examine, for example, confidential US Census microdata in secure environments. These initiatives should be complemented by academic–industry collaborations on field experiments, which remain the best way to estimate the causal effects of social media, with protections including review by independent institutional review boards and preregistration to ensure that research is conducted ethically and transparently.
Fund and engage research around the world: It is critical to measure exposure to potentially harmful content in the Global South and in authoritarian countries, where content moderation may be more limited and exposure to false and extremist content on social media correspondingly more frequent. Such data can, in turn, be used to enrich fact-checking and content moderation resources and to design experiments testing platform interventions. Until better data are available to outside researchers, we can only guess at how best to reduce the harms of social media outside the West.
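The “tail exposure metrics” in the first recommendation can be made concrete with a small sketch. The function below is a hypothetical illustration (not a metric from the paper): it computes what fraction of all exposures to flagged content is accounted for by the most active consumers, which is exactly the kind of fringe-focused measurement the researchers call for.

```python
def tail_share(exposures, top_frac=0.01):
    """Fraction of all flagged-content exposures accounted for by the
    top `top_frac` share of users (hypothetical metric, for illustration)."""
    counts = sorted(exposures.values(), reverse=True)  # heaviest consumers first
    k = max(1, int(len(counts) * top_frac))            # size of the top slice
    total = sum(counts)
    return sum(counts[:k]) / total if total else 0.0

# Toy data: 990 users who never see flagged content, 10 heavy consumers.
exposures = {f"user{i}": 0 for i in range(990)}
exposures.update({f"heavy{i}": 500 for i in range(10)})

print(tail_share(exposures))  # top 1% of users account for all exposure -> 1.0
```

An average-exposure metric over these 1,000 users would report just five views per person and hide the problem entirely; the tail metric surfaces the concentration the researchers argue platforms should be measuring.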

"Misunderstanding the harms of online misinformation” was published in Nature and authored by Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson, and Duncan J. Watts.