Jensen Huang Just Showed the World What $1 Trillion Looks Like — Nvidia's Vera Rubin Explained
At GTC 2026 in San Jose today, Nvidia's CEO walked onto a stage in front of 30,000 people from 190 countries and announced purchase orders for his chips expected to hit $1 trillion through 2027. The chip at the centre of all of it is called Vera Rubin. Here's what it is, what it does, and why it matters to everyone — not just the people buying it.
Let me set the scene for you. It's Monday morning in San Jose, California. Jensen Huang — Nvidia's founder and CEO, and arguably the most important figure in the technology industry right now — walks onto the stage at the SAP Center wearing his trademark black leather jacket to a crowd of 30,000 engineers, researchers, and executives from 190 countries. He pulls up a single number on the screen behind him. Not a benchmark. Not a performance spec. A purchase order figure. One trillion dollars. That's how much money he expects customers to spend on Nvidia's Blackwell and Vera Rubin chips through 2027.
The room went slightly insane. Nvidia's stock rose 2% before the keynote was even over. And across the tech industry, the people who build the AI systems that power ChatGPT, Claude, Gemini, and virtually every other AI product you've used in the past three years sat up very straight and started taking notes.
Vera Rubin is the chip platform that will power the next generation of AI. If you've used any AI tool this year, any chatbot, any image generator, any code assistant, any voice assistant, the models those tools run on were almost certainly trained on Nvidia chips. And the chips that will train and run the next generation of those models, the ones that will feel dramatically more capable than what you're using today, are what Jensen Huang spent the better part of Monday explaining to the world. Let me translate it from chip engineering to plain English.
What Is Vera Rubin — And Why Is It Named After an Astronomer?
Before we get into the technology, I want to mention the name — because Jensen Huang's tradition of naming Nvidia chips after scientists is one of my favourite things about the company, and this one is especially meaningful. Vera Florence Cooper Rubin was an American astronomer who spent decades studying galaxy rotation. Her observations provided some of the strongest evidence for the existence of dark matter — the invisible substance that makes up most of the universe's mass. She spent years being dismissed and overlooked by the scientific establishment, largely because she was a woman working in an era when women in physics were rare and frequently underestimated. She never received the Nobel Prize that her work arguably deserved. She died in 2016.
Naming a chip after her is not a trivial gesture. It's a statement about what kind of science Nvidia is trying to enable — boundary-pushing, world-changing work by people who refuse to accept conventional limits. Whether you think that's marketing or genuine tribute, I find it a more interesting choice than most chip codenames. Now, to the actual technology.
The Six-Chip Architecture — What Vera Rubin Actually Is
Here's the thing most articles about Vera Rubin get slightly wrong: they describe it as a GPU. It's not, or rather, it's much more than that. The Vera Rubin NVL72 rack integrates six brand-new chips: Vera CPUs, Rubin GPUs, NVLink 6 switches, ConnectX-9 NICs, BlueField-4 DPUs, and Spectrum-X Ethernet switches. It's an entire computing platform, a complete "AI factory" blueprint in Nvidia's own language, where every component is co-designed to work together at maximum efficiency.
The Numbers That Matter — What 10× Performance Per Watt Actually Means
The headline performance number from GTC is that Vera Rubin will deliver 10 times more performance per watt than its predecessor, Grace Blackwell. That sounds impressive. Let me explain why it's actually extraordinary — and why energy efficiency is the most important number in AI right now, not raw speed.
Building and running AI at scale requires enormous amounts of electricity. Data centres running the current generation of AI chips are consuming power at a rate that is straining electrical grids in the US, Europe, and Asia simultaneously. One of the biggest limitations on how fast AI companies can scale their capabilities right now is not money, not chips, not engineers — it's electricity. There genuinely isn't enough power infrastructure to run as many AI chips as the industry wants to deploy.
A chip that delivers 10 times more useful AI computation per watt of electricity doesn't just make AI faster. It means you can run 10 times more AI work on the same power budget. It means data centres that are currently capped by their power supply can suddenly do 10 times as much. It means the AI scaling that has been hitting physical infrastructure bottlenecks suddenly has room to run again. That's why Nvidia says the full NVL72 rack promises 10× lower cost per token at inference compared to Blackwell. Cheaper tokens mean cheaper AI. Cheaper AI means more people and businesses can afford to use it. That has a compounding effect on the entire industry.
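To make that arithmetic concrete, here's a toy back-of-the-envelope model. All the numbers in it (the 100 MW power budget, the electricity price, the normalised efficiency figures) are illustrative placeholders I've chosen for the sketch, not Nvidia's figures. The point it demonstrates: under a fixed facility power budget, a 10× performance-per-watt gain means 10× the token throughput, and therefore roughly one-tenth the energy cost per token.

```python
# Toy model of a power-constrained data centre.
# All numbers are illustrative placeholders, not Nvidia's figures.

FACILITY_POWER_W = 100e6     # hypothetical 100 MW facility power budget
ENERGY_COST_PER_KWH = 0.08   # hypothetical electricity price in $/kWh

def tokens_per_second(tokens_per_joule: float) -> float:
    """Throughput achievable when the facility is power-limited."""
    return tokens_per_joule * FACILITY_POWER_W

def energy_cost_per_million_tokens(tokens_per_joule: float) -> float:
    """Electricity cost to generate one million tokens."""
    joules = 1e6 / tokens_per_joule
    kwh = joules / 3.6e6     # 1 kWh = 3.6 million joules
    return kwh * ENERGY_COST_PER_KWH

baseline_eff = 1.0   # normalised Blackwell-class efficiency (tokens/joule)
rubin_eff = 10.0     # the claimed 10x performance per watt

# Same power bill, 10x the tokens, so ~10x lower energy cost per token.
print(tokens_per_second(rubin_eff) / tokens_per_second(baseline_eff))   # 10.0
print(energy_cost_per_million_tokens(baseline_eff)
      / energy_cost_per_million_tokens(rubin_eff))                      # 10.0
```

The model deliberately ignores hardware purchase cost, cooling overhead, and utilisation; it isolates the one variable the keynote kept returning to, which is tokens produced per unit of electricity.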
Who Is Actually Buying These Chips
This is where the scale of what's happening becomes viscerally real. The customers committed to deploying Vera Rubin systems in 2026 are the same cloud providers and AI labs behind the tools mentioned above, and their commitments make up a large share of that $1 trillion purchase-order figure.
The Generational Leap — Blackwell to Rubin to Feynman
Nvidia now ships a new generation of AI chips every year. This is an extraordinary pace — historically, major chip architectures took three to five years between generations. Nvidia has compressed that to twelve months. Let me put Vera Rubin in context with what came before and what's coming next.
| Platform | Year | Key Chip | Inference Performance | Memory |
|---|---|---|---|---|
| Hopper | 2022–23 | H100 | Baseline | 80GB HBM3 |
| Blackwell | 2024–25 | B200 | 5× Hopper | 192GB HBM3e |
| Vera Rubin | 2026 | Rubin GPU + Vera CPU | 10× Blackwell per watt | 288GB HBM4 |
| Feynman | 2028 (planned) | Unknown | Not yet disclosed (teased at GTC) | Not yet disclosed |
The Feynman teaser is worth discussing separately. Jensen Huang also previewed next-generation Feynman systems, which feature a new GPU, a new CPU called Rosa, a BlueField-5 DPU, and a new rack architecture called Kyber that scales up over copper and co-packaged optics (CPO). Feynman is planned for 2028 on TSMC's A16 1.6nm process, the most advanced semiconductor process TSMC will have ever put into mass production. It is also slated to be the first major AI platform to lean heavily on silicon photonics: using light rather than electrical signals to move data between components, at speeds and efficiencies that current copper-based interconnects simply cannot match. The fact that Huang previewed it at GTC when production is two years away tells you something about the confidence level at Nvidia right now. They're not worried about competitors catching up. They're already showing you what comes after next.
The Thing Jensen Huang Said That Nobody Is Talking About
There was a moment in the GTC keynote that I think got lost in the avalanche of chip specifications and performance benchmarks, and I want to highlight it because it's the most revealing thing said on stage all day.
When asked about the extraordinary demand for Nvidia chips, Huang said: "If they could just get more capacity, they could generate more tokens, their revenues would go up." Read that sentence slowly. He's describing a world where the limiting factor on business revenue for AI companies is not ideas, not software, not talent, not customers — it's chips. Specifically, it's Nvidia chips. The companies running the AI models that millions of people use daily are leaving revenue on the table because they can't get enough Vera Rubin systems to meet demand. That is an extraordinary position for any company to be in.
🌿 Named for Vera Rubin, a woman who changed how we understand the universe and never got the Nobel Prize she deserved. Jensen Huang's chip naming tradition consistently honours scientists whose work reshaped human knowledge. Hopper (Grace Hopper, computing pioneer). Blackwell (David Blackwell, statistician). Rubin (Vera Rubin, whose observations confirmed dark matter). Feynman (Richard Feynman, physicist). Whatever you think of Nvidia as a business, the tradition of naming transformative technology after transformative scientists is something I find genuinely admirable. It's a form of credit that outlasts any press release.
What This Means for You — The Non-Technical Reader
I want to end with the question that matters most to most people reading this: what does any of this actually mean for me?
Here's the honest answer. Every AI product you use — ChatGPT, Claude, Gemini, Perplexity, Copilot, the AI features in your phone, the recommendation systems on every platform you use — runs on chips. Better chips mean smarter AI, faster AI, and cheaper AI. The Vera Rubin generation means that the AI tools available in 2027 will be meaningfully more capable, more responsive, and potentially less expensive than what you're using today. The 10× efficiency improvement is not abstract. It translates directly into models that can reason more deeply, respond more quickly, and handle more complex tasks than current systems.
Nvidia says a Vera Rubin rack can deliver 700 million tokens per second, compared to just 2 million on comparable older x86 and Hopper systems. In plain English: that's 350 times the token throughput of systems two generations ago. Whether any single reply feels 350 times faster depends on how that throughput is shared across millions of simultaneous users, but the headroom it creates for faster, cheaper, real-time AI interaction is a genuine transformation.
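The 350× figure follows directly from the two throughput claims, and a quick calculation shows what aggregate throughput at that scale means in practice. The 500-token response length and one-million-user scenario below are hypothetical numbers I've picked for illustration:

```python
# Sanity-check the throughput comparison from the keynote.
rubin_tps = 700_000_000    # claimed Vera Rubin NVL72 tokens per second
legacy_tps = 2_000_000     # claimed older x86 + Hopper tokens per second

speedup = rubin_tps / legacy_tps
print(speedup)  # 350.0

# At that rate, generating a hypothetical 500-token response for
# one million users takes well under a second of rack throughput.
seconds = (500 * 1_000_000) / rubin_tps
print(round(seconds, 3))  # 0.714
```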
⚠️ The bubble question — worth asking honestly: Since the launch of ChatGPT three years ago, the global data centre lease market has ballooned to over $700 billion — a 340% increase in just two years. But the economic productivity gains promised by these investments have yet to fully manifest in national GDP figures. Jensen Huang is walking a very fine line between genuine technological transformation and a capital spending frenzy that outpaces real-world value creation. The demand for his chips is real. Whether the $1 trillion in purchase orders translates into commensurate value for the businesses buying them is the most important unanswered question in the technology industry right now.
GTC 2026 is the moment Nvidia made its ambitions completely explicit. Not just chips. Not just data centres. From rack-scale compute to integrated CPU-GPU platforms to software infrastructure to emerging domains like robotics and autonomous driving, Nvidia is executing on a vision of end-to-end AI infrastructure that no competitor can currently match in breadth or depth. Whether the $1 trillion holds, whether the bubble concerns prove prescient, whether AMD or Google's own chips eventually chip away at Nvidia's dominance — those stories will unfold over the next few years. What happened in San Jose today was the clearest statement yet that one company intends to be the foundation on which the entire AI era is built. Stay tuned to TechZenith — every GTC announcement gets covered here. 🚀