For years, artificial general intelligence (AGI) has been positioned as the defining milestone in the race for advanced AI, a breakthrough so transformative it would reshape society overnight. Yet despite the scale of investment and rhetoric surrounding it, the term has remained loosely defined, its meaning shifting depending on who is asked.
Now, OpenAI chief executive Sam Altman is suggesting something unexpected: AGI may already be here—and it may not have been the dramatic turning point many anticipated.
Speaking on a recent episode of the Big Technology Podcast, Altman questioned whether the industry ever clearly agreed on what AGI actually meant. He argued that the technology may have quietly crossed that threshold without producing the immediate, world-shaking effects long predicted.
Altman described AGI as having “kind of whooshed by,” adding that while its long-term impact could still be significant, the moment itself did not radically change daily life. According to him, the industry is now in a gray zone where some believe AGI has already been achieved, while others remain unconvinced.
With that in mind, Altman suggested the focus has already shifted to a new goal: superintelligence. He offered a more concrete benchmark for that next stage—AI systems capable of outperforming humans in the most complex leadership and intellectual roles, such as running a nation, leading a major corporation, or directing a large scientific research lab, even when humans have AI assistance.
These remarks follow a broader recalibration among major technology leaders. Microsoft chief executive Satya Nadella has recently emphasized measurable, real-world benefits from AI over abstract milestones like AGI. At the same time, Altman has spoken more openly about long-term concepts such as self-replicating systems, suggesting that OpenAI’s ambitions extend well beyond today’s commercial products.
In 2024, Altman publicly predicted that AGI would be achieved within five years, while cautioning even then that its arrival might be less disruptive in the short term than many expect. Similar views have been echoed by Google DeepMind chief executive Demis Hassabis, who has said AGI could be close while warning that society is unprepared for the consequences.
Concerns about readiness and safety remain significant. Microsoft AI chief Mustafa Suleyman has publicly warned about the risks posed by increasingly autonomous systems, including the possibility of conscious or self-directed AI. He has stated that Microsoft’s approach prioritizes AI that serves human interests and that the company would halt development if it posed a clear threat.
These debates are unfolding amid criticism that OpenAI has shifted its priorities away from safety and governance in favor of high-profile breakthroughs. The lack of a shared definition for AGI has only added to the unease, allowing companies to claim progress without clear accountability.
If Altman is correct, the question facing the industry may no longer be whether AGI is coming—but whether society can recognize, regulate, and respond to it in time.
