
Here's the thing. Elon Musk just stood on a stage at Davos and, without blinking, told the world that AI is on track to be smarter than all of humanity combined in about five years.
Let that sink in. Not just smarter than you or me. Smarter than all eight billion of us, with our collective history, creativity, and intuition, put together. The initial reaction is a gut punch. Is this the starting gun for the end of human relevance?
Actually, that gut reaction isn't quite right. It's more complicated.
The New Religion

The race to build artificial superintelligence has attracted over $100 billion in U.S. investment alone.
Look, the momentum is undeniable. Money is pouring into AI like a tidal wave—the U.S. alone saw over $100 billion in private investment last year. The performance charts from places like Stanford's AI Index look like a rocket launch, with AI models crushing benchmarks that were considered difficult just a year ago.
Anthropic's CEO, Dario Amodei, has predicted that AI will be writing essentially all software code within a year and achieving "Nobel-level" scientific breakthroughs not long after. It feels like an arms race where the finish line is a god-like intelligence.
The sheer force of will and capital being thrown at this problem is staggering. It feels less like a research project and more like a religion where the believers are building their own deity.
The Contrarian Take: "The AI Industry is Completely LLM-Pilled"

Yann LeCun, Turing Award winner and AI pioneer, argues that Large Language Models will never achieve human-level intelligence.
But then you have the other guys. The pioneers. The ones who built the foundations everyone else is standing on.
Yann LeCun, a man who literally won the Turing Award for his work on neural networks, is the ultimate skeptic. He argues that the entire industry has become dangerously obsessed with Large Language Models (LLMs). "The AI industry is completely LLM-pilled," he said recently.
His point? Language is easy.
AI can pass the bar exam, sure. But as LeCun points out, we still don't have robot butlers or cars that can truly drive themselves in any condition. Why? Because today's AI doesn't understand the physical world. It can predict the next word in a sentence, but it can't predict the consequences of its actions in reality. It's a sophisticated mimic, not a true intelligence.
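To see what "predicting the next word" actually means, here's a toy sketch in Python. The micro-corpus and the bigram counting are invented purely for illustration; a real LLM uses a neural network over billions of documents, not a lookup table, but the core move is the same in spirit: count what tends to follow what, then sample a likely continuation.

```python
import random
from collections import Counter, defaultdict

# Toy corpus -- invented for illustration only.
corpus = "the cup falls the cup breaks the ball falls the ball bounces".split()

# Count which word follows which: a bigram model, the crudest
# possible "predict the next word" machine.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words = list(follows[prev].keys())
    weights = list(follows[prev].values())
    return random.choices(words, weights=weights)[0]

# The model "knows" that "falls" often follows "cup" -- pure text statistics.
# It has no idea what a cup is, what falling means, or why the cup shatters.
print(next_word("cup"))  # e.g. "falls" or "breaks"
```

That's LeCun's complaint in miniature: the machine captures the statistics of language about the world without any model of the world itself.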

Demis Hassabis, Nobel Prize-winning CEO of Google DeepMind, says we need "one or two more breakthroughs" before AGI.
Demis Hassabis, the CEO of Google DeepMind, agrees. He says we are "nowhere near" Artificial General Intelligence (AGI) and that we probably need "one or two more breakthroughs" to get there.
What Does "Smarter Than All Humanity" Even Mean?

The concept of machine intelligence surpassing collective human intelligence raises profound questions about what "intelligence" actually means.
This is the framing we actually need. Musk's claim isn't just about raw processing power. It's about surpassing a system that is more than the sum of its parts.
Collective human intelligence isn't just eight billion brains running in parallel. It's the emergent wisdom that comes from our collaboration, our culture, our arguments, our art, and our mistakes. It's the global network of trust, intuition, and shared experience.
Comparing AI to the combined intelligence of humanity is like comparing a single, massive data center to the entire living, breathing internet—including all the people who use it. One is a machine for computation. The other is a dynamic, adaptive, and chaotic system for creating meaning.
The Real Bottleneck

Musk in conversation with BlackRock CEO Larry Fink at Davos 2026, where he warned that energy—not computing power—may be AI's biggest constraint.
Ironically, the biggest obstacle might not be code, but power. Musk himself admitted it. "AI chips are being produced faster than we can power them," he warned.
The computational thirst of these models is astronomical. Without a revolution in energy production, we might simply run out of juice to fuel this exponential growth.
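A rough back-of-envelope calculation shows why. The numbers below are ballpark assumptions, not measurements: an H100-class AI chip draws on the order of 700 watts at full tilt, data-center overhead (cooling, power delivery) multiplies that, and a million-accelerator build-out is hypothetical.

```python
# Back-of-envelope: power draw of a large AI cluster.
# All figures are ballpark assumptions for illustration.

ACCELERATOR_WATTS = 700        # roughly an H100-class GPU under load
PUE = 1.3                      # data-center overhead (cooling, power delivery)
NUM_ACCELERATORS = 1_000_000   # hypothetical million-GPU build-out

total_watts = ACCELERATOR_WATTS * NUM_ACCELERATORS * PUE
print(f"Cluster draw: {total_watts / 1e9:.2f} GW")
# ~0.91 GW -- roughly the output of a full-scale nuclear reactor,
# for a single hypothetical cluster.
```

One cluster, one reactor. Scale that across every lab racing toward superintelligence and Musk's warning stops sounding like hedging and starts sounding like grid planning.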
So, are we five years away from a machine that outthinks our entire species? Honestly, the evidence points to no.
The Turing Test, once the gold standard for machine intelligence, is no longer even considered an ambitious goal by the experts at Stanford. We've moved the goalposts. But the new goal, surpassing the collective, emergent intelligence of every human being, is a fundamentally different kind of challenge.
The Bottom Line
The five-year countdown has started. But the real story isn't about a machine becoming a god. It's about us, right now, grappling with the most powerful tool humanity has ever created. The question isn't whether the machine will get smarter, but whether we'll be wise enough to know what to do with it.

