A Signal Through the Noise: How the Media Bias Detector Is Cutting Through Our Cluttered Media Landscape
Researchers at the Computational Social Science Lab, led by Professor Duncan Watts, are charting new territory in the social sciences.
“Explorer” isn’t one of Duncan Watts’s many titles, but perhaps it should be.
As the Stevens University Professor and the twenty-third Penn Integrates Knowledge Professor, Watts holds appointments in the Annenberg School for Communication, Penn Engineering, and Wharton. He is also the founder and director of the Computational Social Science (CSS) Lab, and it’s in this role that he’s charting new territory in the social sciences.
“Superficially, computational social science takes methods from computer science and applies them to social issues. But at a deeper level, computational social science can also mean advancing our understanding of the world by solving practical problems,” he reflects. “I started this lab to embody what computational social science can be.”
Tracking Media Bias in Real Time
Together with Managing Director Jeanne Ruane, Watts and his team of 23 students and researchers are exploring how people behave, how media works, how society functions, and how the human mind operates. For Watts, the key to understanding lies in our modern moment: as new technologies and data sources emerge, their applications hold new possibilities. And there are few better examples of this symbiosis than CSSLab’s Media Bias Detector.
“We’ve all experienced the divisiveness pervading popular media,” he notes. “Our hypothesis is that this is exacerbated not by lies or ‘fake news’ but by bias. It’s a timely question, and in our particular historical moment, CSSLab has these incredible tools—massive new data sets and large language models, in particular—to find an answer. So we threw our energy into investigating.”
Part of the Lab’s Penn Media Accountability Project (PennMAP), the resulting Media Bias Detector uses large language models (LLMs) to identify media bias in headline news. Unlike similar tools, the Media Bias Detector doesn’t rely on the reputation of publishers but rather on the construction and language of the articles themselves. “We can analyze stories in real time, and our tool is better able to capture the heterogeneity within different news organizations over time,” says Watts. “For example, over President Trump’s first 100 days in office, the Media Bias Detector was able to show what the media covered most heavily and also what was ignored—tariffs and protests, for example—by certain publishers.”
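To make the approach concrete, here is a minimal, hypothetical sketch of how an LLM-based pipeline like this might be structured: it prompts a model with an article's headline and body only (deliberately omitting the publisher, so the rating reflects language rather than reputation) and parses a structured reply. All names, the prompt wording, and the rating scale are illustrative assumptions, not the Lab's actual implementation, and the model call is mocked here.

```python
import json
from dataclasses import dataclass


@dataclass
class Article:
    publisher: str
    headline: str
    body: str


# Hypothetical prompt: asks for a lean score based on the text alone.
PROMPT_TEMPLATE = (
    "Rate the political lean of this article's language and framing on a "
    "scale from -1.0 (strongly left) to 1.0 (strongly right), ignoring any "
    "knowledge of the publisher. Reply with JSON: "
    '{{"lean": <float>, "topics": [<strings>]}}.\n\n'
    "Headline: {headline}\n\nBody: {body}"
)


def build_prompt(article: Article) -> str:
    """Build the LLM prompt from article text only; the publisher is
    intentionally excluded so reputation cannot influence the rating."""
    return PROMPT_TEMPLATE.format(headline=article.headline, body=article.body)


def parse_response(raw: str) -> dict:
    """Parse the model's JSON reply into a scored record."""
    data = json.loads(raw)
    return {"lean": float(data["lean"]), "topics": list(data["topics"])}


# Example run with a mocked model reply; a real pipeline would send
# `prompt` to an LLM API at this point.
article = Article("Example Daily", "Tariffs take effect", "Article text ...")
prompt = build_prompt(article)
mock_reply = '{"lean": -0.2, "topics": ["tariffs", "trade"]}'
record = parse_response(mock_reply)
print(record)
```

Scoring every story this way, hour by hour, is what would let such a tool track both how individual articles are framed and which topics a publisher covers heavily or ignores over a window like a president's first 100 days.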