That was the final question in The Minds of Modern AI, where Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Fei-Fei Li, Bill Dally, and Jensen Huang shared their perspectives.
Their answers showed not just different timelines, but different philosophies of intelligence itself.
🔹 Yann LeCun cautioned that there won’t be a single “AGI day.”
Capabilities will grow progressively across domains like perception, reasoning, and planning, but not all at once.
He expects major progress within 5–10 years, yet believes it will take much longer before machines reach the flexibility of human intelligence.
🔹 Fei-Fei Li reminded us that AI and human intelligence are fundamentally different.
“Airplanes fly, but they don’t fly like birds,” she said.
Machines will surpass us in specific skills such as recognizing 22,000 objects or translating 100 languages, but human creativity and empathy remain uniquely ours.
🔹 Jensen Huang believes parts of AGI are already here.
“We have enough general intelligence to translate technology into useful applications and we’re doing it today.”
To him, the question isn’t when AGI arrives, but how far we can apply it.
🔹 Geoffrey Hinton predicted that within 20 years machines will be able to “win a debate with you,” offering a practical definition of AGI: systems that can reason and argue more effectively than humans.
🔹 Bill Dally emphasized that the goal isn’t to replace humans but to *augment* them.
AI should handle tasks like perception and computation so humans can focus on creativity and understanding.
🔹 Yoshua Bengio noted that AI’s ability to plan over long horizons has been improving exponentially, suggesting that more capable systems are coming, though he cautioned against overconfidence.
Their views suggest that AGI won’t appear overnight. Instead, it will emerge in stages, unevenly across domains.
From my perspective, I don’t think we are close to functional AGI yet.
AI already outperforms humans in many tasks, but that’s not general intelligence.
To me, AGI must be capable of self-learning: mastering new knowledge and adapting to unseen situations without retraining.
That kind of adaptability is far beyond what today’s systems can achieve.
So in my view, AGI will not arrive any time soon.
The breakthroughs we’re seeing are remarkable, but they are steps on a much longer journey, one that will require new paradigms, not just larger models.
What do you think? How would you define AGI, and how far do you think we are from it?
#AI #ArtificialIntelligence #AGI #DeepLearning #MachineLearning #Innovation #Leadership #TechEthics
