
Fact-Checking in the Digital Age: Can Generative AI Become an Ally Against Disinformation?
By Başak Bozkurt
At the 2025 Milton Wolf Seminar, panel discussions tackled one of the most urgent questions of the digital age: How can truth be verified in a world where its boundaries are increasingly blurred? Against a backdrop of widespread disinformation, growing polarisation and declining public trust, the very notion of truth has become contested. In such an environment, fact-checking faces a dual challenge: not only is it harder to agree on what qualifies as truth, but disinformation now spreads with unprecedented speed and scale, outpacing traditional methods of verification.
Amid these challenges, large language models (LLMs) have emerged as both a source of the problem and, paradoxically, a potential part of the solution. LLMs are advanced generative artificial intelligence (AI) systems trained on large amounts of internet data to generate human-like output. On the one hand, LLMs can produce convincing falsehoods rapidly and at scale, exacerbating the spread of disinformation. On the other hand, their advanced capabilities might also be harnessed to detect, counter and even stop disinformation. This raises a critical question: Could generative AI, despite its risks, become an ally in the fight for truth?
The 2025 Milton Wolf Seminar placed a strong emphasis on the dangers posed by AI, such as how it can produce disinformation and mislead individuals. This blog post explores an alternative angle. Instead of viewing AI only through the lens of risk, it asks whether this technology might also serve as part of the solution.
In this blog post, I will first discuss the evolution of fact-checking and how it adapted to the changing information ecosystem. Next, I will examine the challenges fact-checkers face today, especially the scale and speed of disinformation. I will then turn to LLMs, to consider whether this technology can help support the fact-checking process. Finally, I will reflect on how LLMs might strengthen the ongoing fight for truth.
The Evolution of Fact-Checking
Disinformation itself is not new, but social media has profoundly transformed how quickly and widely it spreads. Platforms such as Facebook and X (formerly Twitter) have redefined how citizens consume information. While democratising access, they have also created unregulated and minimally controlled spaces where disinformation can rapidly proliferate (Wittenberg & Berinsky, 2020). Unlike traditional journalism, where media professionals served as gatekeepers and information had to pass through institutional filters before reaching the public, social media platforms allow anyone to publish and share content instantly, without any editorial oversight. This has given rise to retroactive gatekeeping: a form of fact-checking that involves verifying the accuracy of claims after they have already begun circulating online (Singer, 2023).
The global growth of fact-checking accelerated significantly around 2016, following major political events, such as the U.S. presidential election and the Brexit referendum, which drew urgent attention to the dangers of disinformation and fake news (Vinhas & Bastos, 2022). Since then, fact-checking has expanded worldwide, playing a vital role in global efforts to combat disinformation. By late September 2023, the Google-operated ClaimReview database contained nearly 300,000 verified claims from fact-checkers around the world (Graves & Cunliffe-Jones, 2024).
As of May 2025, 457 fact-checking organisations are active across the globe (Duke Reporters Lab, 2025). Despite a slight slowdown in growth in recent years, fact-checking teams continue expanding globally. For example, Africa Check has grown from a two-person team in 2012 to a staff of 40, with offices in four countries. Similarly, Maldita, which began as a Twitter account run by two television journalists in Spain, now operates with a team of over 50 people (Graves & Cunliffe-Jones, 2024).
Is fact-checking really effective? Growing evidence suggests that it is. Early studies cast doubt on its impact, showing that corrections often fail and, in some cases, disinformation continues to shape beliefs even after it has been debunked (Johnson & Seifert, 1994; Lewandowsky et al., 2012; Nyhan & Reifler, 2010). There has also been debate about the so-called ‘backfire effect’ (Nyhan & Reifler, 2010), where corrections might unintentionally reinforce false beliefs, though this has rarely been observed in subsequent research (e.g., Pennycook et al., 2018; Wood & Porter, 2018). However, a growing body of recent evidence shows that fact-checking can significantly improve the accuracy of people’s beliefs, even after a single exposure to a correction (Walter et al., 2020). Some real-world cases also point to the usefulness of fact-checking. For instance, in its written evidence submission to the UK Parliament’s Science, Innovation and Technology Committee, Meta highlighted the role of fact-checkers as an effective tool for countering disinformation during the Southport riots in the UK.
Despite these promising results, fact-checking faces a scalability problem. The sheer volume and speed of disinformation often exceed the capacity of manual fact-checking efforts (Allen et al., 2021; Guo et al., 2022). False information moves far faster than the truth (Vosoughi et al., 2018). The process of manually assessing claims, sourcing evidence and crafting corrections is time-consuming and labour-intensive. In today’s digital ecosystem, fact-checkers are often too late to prevent false claims from shaping public perception.
LLMs: Threat or Ally?
Against this backdrop, generative AI emerges as both a threat and a potential solution. On the one hand, it can generate disinformation at unprecedented scale and with unprecedented fluency. On the other, it offers tools that could help fight that very problem.
LLMs introduce new risks, including the potential to mislead through convincing yet inaccurate or manipulated content. They can misinform users through deliberate misuse (Menczer et al., 2023), their tendency to hallucinate, their reliance on outdated data or their lack of domain expertise (Augenstein et al., 2024; Wang et al., 2024). The fluent, persuasive style and confident tone of LLM-generated content compounds the problem, making false and misleading information more convincing to users (DeVerna et al., 2024). One recent misuse case, in which Claude was used to generate fake personas and orchestrate coordinated bot activity on Facebook and X, starkly illustrates these risks. This example shows how generative AI can lower the barrier for malicious actors to carry out sophisticated campaigns.
Yet the same technology also offers significant potential for supporting fact-checking. Trained on vast amounts of data across diverse topics and equipped with rapid information retrieval capabilities (Brown et al., 2020), LLMs show real promise as fact-checking aids (Chen & Shu, 2023). They can support fact-checkers at various stages of their work, including detecting check-worthy claims, identifying previously fact-checked claims, crafting explanations, detecting stances and providing multilingual support, summarisation and transcription (Augenstein et al., 2024; Chen & Shu, 2023; Choi & Ferrara, 2024). Recent research has even shown that LLMs can tailor counter-arguments to the specific evidence individuals cite, durably reducing belief in conspiracy theories (Costello et al., 2024).
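To make one of these tasks concrete, the sketch below shows how previously fact-checked claims might be identified through semantic similarity. It is a minimal illustration, not any organisation's production pipeline: the corpus, the embedding model (all-MiniLM-L6-v2 via the open-source sentence-transformers library) and the similarity threshold are all illustrative assumptions.

```python
# Minimal sketch: matching an incoming claim against previously fact-checked claims.
# Assumptions: sentence-transformers is installed; the corpus, model and threshold
# below are purely illustrative.
from sentence_transformers import SentenceTransformer, util

# A tiny, made-up corpus of claims that have already been fact-checked.
fact_checked_claims = [
    "Drinking bleach cures COVID-19.",
    "The 2020 US election was decided by millions of fraudulent ballots.",
    "5G towers spread the coronavirus.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
corpus_embeddings = model.encode(fact_checked_claims, convert_to_tensor=True)

def match_claim(new_claim: str, threshold: float = 0.7):
    """Return previously fact-checked claims semantically similar to new_claim."""
    query_embedding = model.encode(new_claim, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    return [
        (fact_checked_claims[i], float(score))
        for i, score in enumerate(scores)
        if float(score) >= threshold
    ]

# Example: a paraphrased version of an already-debunked claim.
print(match_claim("Mobile 5G masts are responsible for spreading COVID."))
```

In practice, such matching would run against a far larger archive of fact-checks and the threshold would be tuned empirically; the point is simply that identifying previously fact-checked claims is, at heart, a retrieval problem that LLM-era tooling handles well.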
Another area where LLMs hold significant potential in supporting fact-checking is domain-specific verification, where they can enable users to query fact-checked corpora and receive more targeted, contextually relevant results. In that sense, the development of Snopes’ FactBot offers a glimpse of what’s possible. Using retrieval-augmented generation, FactBot can search Snopes’ 30-year archive in real time to respond to user queries with citations and context. This approach addresses two central challenges in LLM-driven fact-checking: transparency and source verification. It also signals a model in which generative AI augments, rather than replaces, human expertise.
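The sketch below illustrates the general retrieval-augmented generation pattern behind tools of this kind. It is not Snopes' FactBot: the mini-archive, the model name and the prompt wording are assumptions, and it presumes an OpenAI-compatible chat API with an API key in the environment.

```python
# Minimal retrieval-augmented generation (RAG) sketch for querying a fact-check corpus.
# NOT Snopes' FactBot; corpus, model name and prompt are illustrative assumptions only.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

# Hypothetical mini-archive of fact-check articles (url + summary).
archive = [
    {"url": "https://example.org/fact-check/5g-covid",
     "text": "Claim that 5G towers spread COVID-19 is false; no evidence links radio waves to viral transmission."},
    {"url": "https://example.org/fact-check/bleach-cure",
     "text": "Claim that drinking bleach cures COVID-19 is false and dangerous."},
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode([d["text"] for d in archive], convert_to_tensor=True)

def answer_with_citations(question: str, top_k: int = 2) -> str:
    # 1. Retrieve the most relevant fact-check excerpts for the user's question.
    q_emb = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, doc_embeddings)[0]
    top = sorted(range(len(archive)), key=lambda i: float(scores[i]), reverse=True)[:top_k]
    context = "\n".join(f"[{archive[i]['url']}] {archive[i]['text']}" for i in top)

    # 2. Ask the model to answer only from the retrieved excerpts, citing their URLs.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided fact-check excerpts. "
                        "Cite the URL of each excerpt you rely on. "
                        "If the excerpts do not cover the question, say so."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_citations("Do 5G masts cause COVID-19?"))
```

Constraining the model to retrieved, citable excerpts is what gives this pattern its transparency: a user can follow each URL back to the underlying fact-check instead of trusting an unsupported assertion from the model.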
Conclusion: Empowering Fact-Checking
Fact-checking is a dynamic field that must constantly adapt to ‘a moving target’ (Graves, 2016). The emergence of generative AI complicates this picture considerably. While AI introduces new threats by enabling the generation of disinformation at unprecedented levels, it simultaneously provides tools that could help fact-checkers to respond swiftly.
In an era marked by disinformation, polarisation and declining trust, fact-checking has never been more vital. Human effort alone cannot keep pace with the rapid spread of disinformation. Generative AI, while powerful, is no panacea. Its real value lies in augmenting fact-checkers: supporting, accelerating and amplifying their work.
So, could AI become an ally in the fight for truth? The answer depends not just on how the technology evolves, but on how we use it. If we responsibly harness generative AI’s potential, pairing its power with human judgement, we stand a chance in the crucial fight against disinformation.
References
Allen, J., Arechar, A. A., Pennycook, G., & Rand, D. G. (2021). Scaling up fact-checking using the wisdom of crowds. Science Advances, 7(36), 1–10. https://doi.org/10.1126/sciadv.abf4393
Augenstein, I., Baldwin, T., Cha, M., Chakraborty, T., Ciampaglia, G. L., Corney, D., DiResta, R., Ferrara, E., Hale, S., Halevy, A., Hovy, E., Ji, H., Menczer, F., Miguez, R., Nakov, P., Scheufele, D., Sharma, S., & Zagni, G. (2024). Factuality challenges in the era of large language models and opportunities for fact-checking. Nature Machine Intelligence, 6(8), 852–863. https://doi.org/10.1038/s42256-024-00881-z
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. https://doi.org/10.48550/ARXIV.2005.14165
Chen, C., & Shu, K. (2023). Combating misinformation in the age of LLMs: opportunities and challenges. http://arxiv.org/abs/2311.05656
Choi, E. C., & Ferrara, E. (2024). Automated claim matching with large language models: empowering fact-checkers in the fight against misinformation. Companion Proceedings of the ACM on Web Conference 2024, 1441–1449. https://doi.org/10.1145/3589335.3651910
Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814. https://doi.org/10.1126/science.adq1814
DeVerna, M. R., Yan, H. Y., Yang, K.-C., & Menczer, F. (2024). Fact-checking information from large language models can decrease headline discernment. Proceedings of the National Academy of Sciences, 121(50), 1–9. https://doi.org/10.1073/pnas.2322823121
Duke Reporters Lab. (2025). Fact-Checking Database with Dates [Dataset].
Graves, L. (2016). Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. Columbia University Press.
Graves, L., & Cunliffe-Jones, P. (2024). Misinformation: How fact-checking journalism is evolving – and having a real impact on the world. The Conversation. http://theconversation.com/misinformation-how-fact-checking-journalism-is-evolving-and-having-a-real-impact-on-the-world-218379
Guo, Z., Schlichtkrull, M., & Vlachos, A. (2022). A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10, 178–206. https://doi.org/10.1162/tacl_a_00454
Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1420–1436. https://doi.org/10.1037/0278-7393.20.6.1420
Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
Menczer, F., Crandall, D., Ahn, Y.-Y., & Kapadia, A. (2023). Addressing the harms of AI-generated inauthentic content. Nature Machine Intelligence, 5(7), 679–680. https://doi.org/10.1038/s42256-023-00690-w
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology. General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
Singer, J. B. (2023). Closing the barn door? Fact-checkers as retroactive gatekeepers of the COVID-19 “infodemic”. Journalism & Mass Communication Quarterly, 100(2), 332–353. https://doi.org/10.1177/10776990231168599
Vinhas, O., & Bastos, M. (2022). Fact-checking misinformation: eight notes on consensus reality. Journalism Studies, 23(4), 448–468. https://doi.org/10.1080/1461670X.2022.2031259
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Walter, N., Cohen, J., Holbert, R. L., & Morag, Y. (2020). Fact-checking: a meta-analysis of what works and for whom. Political Communication, 37(3), 350–375. https://doi.org/10.1080/10584609.2019.1668894
Wang, Y., Wang, M., Manzoor, M. A., Liu, F., Georgiev, G., Das, R. J., & Nakov, P. (2024). Factuality of large language models in the year 2024. http://arxiv.org/abs/2402.02420
Wittenberg, C., & Berinsky, A. J. (2020). Misinformation and its correction. In N. Persily & J. A. E. Tucker (Eds.), Social Media and Democracy (pp. 163–198). Cambridge University Press.
Wood, T., & Porter, E. (2018). The elusive backfire effect: mass attitudes’ steadfast factual adherence. Political Behavior, 41(1), 135–163. https://doi.org/10.1007/s11109-018-9443-y