A growing chorus of technology leaders and futurists claims that Artificial General Intelligence (AGI), meaning systems able to understand and apply knowledge across domains at or beyond the human level, could arrive within the next few years. Figures such as Sam Altman of OpenAI and Dario Amodei of Anthropic have suggested timelines that stretch into the late 2020s, with some speculation placing AGI within the Trump presidency. Others, including physicist Max Tegmark, have warned that such a breakthrough could trigger an uncontrollable “intelligence explosion” that fundamentally reshapes global power dynamics.
In a new episode of Homo sAIence, the AI-focused vidcast by Voria.gr, Tasos Tefas, professor at the Department of Informatics at the Aristotle University of Thessaloniki and director of its postgraduate programme in Artificial Intelligence, urges caution: “Corporations and states have strong incentives to sell the idea that we are close to AGI.”
Much of the optimism around imminent AGI is driven less by scientific reality than by political and corporate incentives, according to Tefas. “The logic is ‘because we are pioneers in this, invest in us, work with us’. So there is a lot of politics behind these announcements and a lot of corporate strategy,” he says.
From a technical standpoint, the professor remains sceptical. While today’s AI systems can generate fluent language and make impressive predictions, such as forecasting the next word in a sentence, they do not actually understand meaning, context, or human intention. Crucially, they lack a grounded model of how the world works. “Whether we will have AGI in 10 or 15 years, I can’t say. In all likelihood, however, we won’t have anything like that in the next five years,” Tefas estimates.
Tefas contrasts this with human learning, noting that even infants grasp basic physical and social rules long before they can speak. “Human babies, after a period of time, before they even start talking, have already understood many of the laws of our natural world. This is not the case with AI models today, and although there are proposals for how to change it, we are not convinced that we have the solution,” he notes. Another problem, he adds, is AI’s lack of a physical body: “If you have no experience of the natural world, how will you understand how it works?”
The professor also addresses the global race for AI leadership. The United States, Tefas argues, currently holds a general advantage due to its technological ecosystem, corporate power and access to advanced chips, coupled with a looser regulatory framework than the EU’s. China, by contrast, can move quickly and focus on targeted applications, though it lacks some core technologies. Russia appears further behind, while Europe prioritises incremental progress and human safety.
Ultimately, he argues, all countries are set to move in parallel, at least to some extent. “I think the development model will be the one we have now. That is, there will be companies that develop something pioneering in their field and will probably dominate that particular market segment. As for countries, some will put more weight on armament programmes aimed at dominance, while others will seek to dominate economically through their businesses,” he notes.
Do we even want AGI after all? Tefas frames it as a double-edged sword. “AGI will probably give us medicines for incurable diseases, it will possibly reveal new possibilities for producing cheap energy, it will offer solutions for problems that are currently considered unsolvable. In other words, it will really make our lives much better and easier.” But misalignment with human values or deliberate military misuse could pose existential risks. The hardest question, he concludes, is which values an intelligent machine should ultimately serve.