The Great Equalizer
There is something refreshingly blunt about Larry Ellison. While other tech CEOs are breathless about the “magic” of their specific AI models, Ellison just called them all the same. “They’re all trained on the same data,” he said. “It’s all public data from the internet. So they’re all basically the same.”
He isn’t wrong. If OpenAI, Google, and Meta all scrape the same Wikipedia articles, the same Reddit threads, and the same GitHub repositories, their “brains” are inevitably going to think alike. The differentiation, the “secret sauce,” is evaporating. We are witnessing the rapid commoditization of intelligence. It is becoming like electricity: essential, powerful, but indistinguishable regardless of who you buy it from.
The Second Wave: Private Data
This is where the real game begins. If public data is the table stakes, private data is the pot. Ellison’s argument is that the next trillion dollars won’t come from a slightly better chatbot; it will come from AI that knows your secrets and can be trusted to keep them.
Think about it. A generic AI can write a poem about finance. But it can’t tell you why your specific supply chain in Ohio is failing, or predict which of your patients is at risk of a heart attack next week. That insight requires data that isn’t on the internet—data that lives in corporate servers (conveniently, often Oracle’s).
The Moat is Boring
It is ironic. The most exciting technology in history might end up being won by the most boring asset: the database. The companies that win in 2026 won’t be the ones with the flashiest demo; they will be the ones that can safely connect the “commodity” brain to the “proprietary” memory.
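To make that pattern concrete, here is a minimal Python sketch of a retrieval-augmented query: a generic, interchangeable model is handed context pulled from a private database. Everything in it is hypothetical, including the toy SQLite schema, the invented shipment rows, and the stubbed `ask_llm` function standing in for whichever commodity model API you prefer; it illustrates the architecture, not any vendor’s actual product.

```python
# Hypothetical sketch: "commodity brain, proprietary memory".
# The schema, sample data, and ask_llm stub are illustrative only.
import sqlite3


def seed_demo_db() -> sqlite3.Connection:
    """Stand-in for the proprietary memory: a private corporate database."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE shipments (supplier TEXT, region TEXT, delay_days INTEGER)"
    )
    conn.executemany(
        "INSERT INTO shipments VALUES (?, ?, ?)",
        [
            ("Acme Steel", "Ohio", 12),      # invented sample rows
            ("Buckeye Freight", "Ohio", 9),
            ("Lakeside Parts", "Ohio", 3),
        ],
    )
    return conn


def fetch_private_context(conn: sqlite3.Connection, region: str) -> str:
    """Retrieve the rows the public internet has never seen."""
    rows = conn.execute(
        "SELECT supplier, delay_days FROM shipments "
        "WHERE region = ? ORDER BY delay_days DESC",
        (region,),
    ).fetchall()
    return "\n".join(f"- {supplier}: {delay} days late" for supplier, delay in rows)


def ask_llm(prompt: str) -> str:
    """Stand-in for the commodity brain: any generic model API slots in here."""
    return f"[stubbed reply to a {len(prompt)}-character prompt]"


if __name__ == "__main__":
    conn = seed_demo_db()
    context = fetch_private_context(conn, "Ohio")
    print(ask_llm(
        "Given these internal shipment delays:\n"
        f"{context}\n"
        "Why is the Ohio supply chain slipping?"
    ))
```

The telling detail is which half of the sketch is fungible: swap any frontier model into `ask_llm` and the answer only improves if the private context feeding it does.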
Ellison is betting $50 billion that while everyone else fights over who has the smartest child, he will be the one building the library where they all go to study. And in the long run, the library always charges late fees.