
From Disinformation to Resilience: Rethinking Generative AI in Today’s Information Landscape
By Menna Elhosary, MA
When people think of generative AI and the current information landscape, risk is often the first thing that comes to mind: deepfakes, propaganda, and disinformation. Rightly so: harmful content is now being produced faster and more convincingly than ever, enabled by the capabilities of generative AI. We have already seen AI-generated political content circulate during elections and ongoing political conflicts, raising valid concerns about its potential to undermine democratic practices. But what if we stopped framing generative AI solely as a threat? What if, instead, we started asking how it could become part of the solution? This is one of many insightful ideas that stayed with me after attending the 2025 Milton Wolf Seminar on Media and Diplomacy as an emerging scholar.
Detecting Disinformation with Generative AI
Detecting disinformation is one of the most promising applications of generative AI. These tools can interpret, summarize, and compare claims against verified information. Their generative capabilities allow them to handle repetitive or large-scale misinformation, supporting journalists and fact-checkers in monitoring and responding to false narratives in real time. This is not just an aspirational use case; several news organizations are already integrating generative AI into their fact-checking workflows. Generative models are increasingly becoming part of the disinformation defense toolkit, from analyzing viral claims to verifying media from conflict zones. In Germany, Der Spiegel has tested a GPT-based internal tool that scans articles, extracts factual claims, and checks them against trusted online sources to flag possible inaccuracies (Roy, 2024). Project VERDAD in the U.S. applies Google's Gemini model to transcribe, translate, and highlight potentially misleading segments from Spanish-language radio, allowing human fact-checkers to scale their review work (Willison, 2024). At MythDetector, a fact-checking initiative in Georgia, the team has begun integrating generative AI into its workflow to improve how it tracks and responds to misinformation. Once the team verifies that a piece of content is false, the generative AI can scan for similar examples of misleading information, enabling them to catch related disinformation more efficiently and respond before it spreads further (Kahn, 2024). These are just a few real-world examples. While generative AI does not replace human judgment, it offers a crucial first line of defense, especially when speed matters.
Empowering Journalists and Fact-checkers
When used efficiently and responsibly, generative AI is helping journalists and fact-checkers work more effectively, saving both time and resources. Many news organizations are already experimenting with it as a productivity tool. Some are using it to translate fact-checking articles across languages; others are applying it to summarize complex reports, draft initial verification notes, or quickly surface relevant background information. For instance, in Norway, Faktisk Verifiserbar uses ChatGPT to help generate structured fact-checking summaries, which has drastically reduced verification time (Kahn, 2024). These initiatives not only make fact-checking content more accessible to the public but also free up time for journalists and fact-checkers to focus on deeper analysis and editorial decisions — the kinds of work that cannot be easily automated.
Strengthening Public Resilience
While fact-checkers' efforts are crucial, building resilience against disinformation depends largely on the public, especially on how people perceive, interpret, and react to online information. To that end, generative AI is being explored as a tool to enhance media and digital literacy. One notable example is AI Unlocked, a collaboration between the Poynter Institute and PBS News Student Reporting Labs, which uses AI-generated content and learning modules to teach students how to identify AI-generated media, understand algorithmic bias, and consider the ethical implications of using generative AI in public discourse (PBS News Student Reporting Labs, 2025). Notably, generative AI also allows for personalization: these tools can tailor digital literacy training to a user's age and cultural background, facilitating media literacy efforts more than ever before.
In conclusion, while generative AI has undeniably amplified the scale and speed of disinformation, it also holds real potential to counter it. These tools can support journalists and empower the public with the means to critically engage with complex information landscapes. Realizing this potential, however, requires more collaboration between media organizations, policymakers, tech companies, and educators. The goal should not only be to regulate generative AI tools, but also to invest in their public-interest applications and ensure they are developed and deployed in ways that serve democratic values. This includes supporting transparency, improving access to reliable information, and strengthening media literacy from the ground up. Generative AI will continue to evolve — the challenge is not just keeping up, but choosing to shape its trajectory in ways that benefit the public good.
Kahn, G. (2024, April 29). Generative AI is already helping fact-checkers. But it's proving less useful in small languages and outside the West. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/generative-ai-already-helping-fact-checkers-its-proving-less-useful-small-languages-and
PBS News Student Reporting Labs. (2025, March 27). AI Unlocked: A new AI literacy curriculum from Poynter and PBS News Student Reporting Labs. https://studentreportinglabs.org/news/ai-unlocked-a-new-ai-literacy-curriculum-from-poynter-and-pbs-news-student-reporting-labs/
Roy, N. (2024, December 2). Case study: Enhancing fact-checking with AI at Der Spiegel. ONA Resources Center, Online News Association. https://journalists.org/resources/case-study-enhancing-fact-checking-with-ai-at-der-spiegel/
Willison, S. (2024, November 7). VERDAD - tracking misinformation in radio broadcasts using Gemini 1.5. https://simonw.substack.com/p/verdad-tracking-misinformation-in