AI-driven fake news now takes days to debunk, Thessaloniki expert warns

A media scholar tells Voria.gr and 'Homo sAIence' how it can take days to tackle AI-generated misinformation, as nearly half of chatbot answers contain errors

Artificial intelligence is rapidly reshaping the global information landscape, making it harder than ever to detect and debunk false or misleading content, according to insights shared by a media scholar in Thessaloniki.

Speaking on the "Homo sAIence" vidcast, Ioanna Kostarella, an associate professor in the Journalism department at Aristotle University of Thessaloniki, highlighted how advances in generative AI are extending the lifespan and impact of misinformation. Whereas false claims could previously be disproved "within minutes or at most a few hours", she noted that it can now take "up to three days" for experts to fully dismantle a convincing piece of synthetic content.

This shift is largely driven by what she describes as "synthetic information": AI-generated material that closely mimics authentic reporting, making it difficult even for specialists to evaluate. "When experts need three days to debunk a false claim, we understand very well what this means for people who do not have relevant knowledge and may be more vulnerable to misinformation," she said.

Despite the growing threat, she stressed that verification tools do exist, including initiatives such as the European Digital Media Observatory and its regional hubs. However, she argued that the most important defence remains human judgement: "to return to the basics… how we use our critical thinking to evaluate information".

The discussion also touched on the future of journalism in an AI-driven ecosystem. Research by the BBC and the European Broadcasting Union suggests that 7% of news consumers already rely on AI assistants for information, rising to 15% among under-25s. At the same time, a separate study of 3,000 chatbot responses found that 45% contained at least one error.

For Kostarella, this dual reality offers both concern and reassurance. While automated systems and "content farms" are putting pressure on the profession, journalism retains a fundamentally human core. "Journalism has at its centre the human being… it continues to carry a social, humanitarian and emotional dimension," she said, expressing confidence that the human role in news production will remain vital for years to come.

Kostarella argues that journalists should focus on core AI functions rather than specific tools, since individual technologies quickly become outdated. Key capabilities include content-production tasks such as summarisation, speech-to-text conversion, and audio restoration, as well as audience attention tracking, speaker identification, and improved audiovisual quality.

She stresses that the most critical requirement is ethical responsibility, ensuring these tools are used in line with journalistic principles like accuracy and source verification, especially given the gap between rapid technological development and slower regulation.