Jensen Huang delivered another GTC keynote this week in Washington D.C., and if you weren't paying attention, you might have missed some genuinely massive announcements buried between the technical specs and partnership slides.
The headline everyone's focusing on: Nvidia is working with Uber to launch an autonomous fleet of 100,000 robotaxis starting in 2027. The thing nobody's talking about enough: they're building a 100,000-GPU supercomputer for the Department of Energy that might fundamentally change how scientific research works.
Oh, and they announced partnerships with basically every major American manufacturer to build AI-powered robotic factories. Just another Tuesday at Nvidia.
The Robotaxis Everyone Saw Coming
Let's start with the obvious one. Nvidia's self-driving platform is coming to Stellantis, Lucid, and Mercedes-Benz vehicles to give them robotaxi capabilities. But the real news is the Uber partnership—100,000 autonomous vehicles hitting the streets starting in 2027.
I know, I know. Every tech company has been promising self-driving cars since 2016. But here's what's different: Nvidia isn't building the cars themselves. They're providing the AI platform that multiple automakers are integrating. It's the picks-and-shovels strategy applied to autonomous vehicles.
And Uber makes sense as a partner because they already have the operational infrastructure—the app, the routing algorithms, the payment systems, the rider network. They just need vehicles that can drive themselves. Which, okay, is still a massive technical challenge, but it's a much narrower one than building the whole stack from scratch.
My friend who works in the autonomous vehicle space told me the interesting part isn't the technology—which is genuinely impressive—but the regulatory arbitrage. By launching in 2027, they're betting that regulations will have caught up by then. That's either very optimistic or they know something about upcoming federal policy that we don't.
The Department of Energy Supercomputer That's Actually Wild
This is the announcement that got buried but shouldn't have. Nvidia and Oracle are building the U.S. Department of Energy's largest AI supercomputer—the Solstice system—featuring 100,000 Blackwell GPUs.
One hundred thousand. That's not a typo.
To put that in perspective, most of the largest AI training runs you've heard about—GPT-4, Claude, Gemini—reportedly used between 10,000 and 30,000 GPUs. This system will have more than three times the top of that range (100,000 / 30,000 ≈ 3.3).
And it's not for training chatbots. This is for "accelerating the DOE's mission of driving technological leadership across U.S. security, science and energy applications." Translation: climate modeling, nuclear weapons simulation, materials science, drug discovery—the kind of compute-intensive scientific research that could actually move the needle on civilization-scale problems.
The system will be housed at Argonne National Laboratory and will be available to public researchers. That last part is important. This isn't a private company hoarding compute for competitive advantage. This is the government investing in infrastructure that any qualified researcher can access.
The Manufacturing Revolution Nobody's Ready For
Here's where things get really interesting for the physical world. Nvidia announced that America's leading manufacturers are using Omniverse technologies to build robotic factories powered by physical AI.
Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC, and Wistron are all building Omniverse factory digital twins to accelerate AI-driven manufacturing. These aren't concept demos. These are actual factories being built right now.
The digital twin piece is crucial. Before you build a $500 million factory, you can simulate it completely in Omniverse: test different layouts, optimize robot workflows, identify bottlenecks—all in software, before anyone pours concrete.
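To make that concrete, here's a toy sketch of the kind of question a digital twin answers: which of two candidate line layouts hits higher throughput, and where does it bottleneck? This is plain Python with invented station names and timings, not Omniverse's actual API; it's only meant to show the shape of the analysis.

```python
# Toy layout comparison: a serial production line runs at the speed of its
# slowest station, so the design question is where the bottleneck sits.
# Station names, timings, and machine counts below are all invented.
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    seconds_per_unit: float   # average processing time per unit
    parallel_machines: int    # identical machines working side by side

    @property
    def units_per_hour(self) -> float:
        return 3600.0 / self.seconds_per_unit * self.parallel_machines

def line_throughput(stations: list[Station]) -> tuple[float, Station]:
    """A serial line is only as fast as its slowest station."""
    bottleneck = min(stations, key=lambda s: s.units_per_hour)
    return bottleneck.units_per_hour, bottleneck

# Layout B adds two welding robots to see whether throughput improves.
layout_a = [Station("stamping", 12, 2), Station("welding", 45, 3), Station("painting", 25, 2)]
layout_b = [Station("stamping", 12, 2), Station("welding", 45, 5), Station("painting", 25, 2)]

for label, layout in (("A", layout_a), ("B", layout_b)):
    rate, bn = line_throughput(layout)
    print(f"Layout {label}: {rate:.0f} units/hour, bottleneck: {bn.name}")
```

Running this shows layout B lifts throughput from 240 to 288 units/hour but shifts the bottleneck from welding to painting, exactly the kind of thing you'd rather discover in software than in steel. A real digital twin does this with full physics, robot kinematics, and sensor simulation rather than back-of-envelope arithmetic, but the payoff is the same.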
Foxconn is building an autonomous robot factory in Texas that will manufacture GPUs. Think about that for a second. Robots building the chips that power the AI that controls the robots. We've officially entered some kind of recursive manufacturing loop.
The Robotics Companies That Matter
Agility Robotics, Amazon Robotics, Figure, and Skild AI are all building collaborative robots using Nvidia's three-computer architecture. These aren't the industrial robots locked in cages on factory floors. These are robots designed to work alongside humans in unstructured environments.
The key technical challenge with collaborative robots is real-time decision-making. They need to navigate spaces with humans walking around, objects being moved, lighting changing, unexpected obstacles appearing. That requires a level of compute and intelligence that wasn't feasible even two years ago.
Nvidia's architecture splits the workload across three systems: one for perception (understanding what's happening in the environment), one for planning (deciding what to do about it), and one for control (actually executing the movements). Each system can be optimized independently while working together in real time.
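Here's a minimal sketch of that split, with each stage as a plain Python function so the division of labor is visible. The function names, sensor fields, and actions are all invented for illustration; the real systems run these stages on dedicated hardware at high frequency.

```python
# One tick of a perception -> planning -> control loop. Everything here is
# illustrative: real stacks run each stage many times per second against
# far richer world models.

def perceive(sensor_frame: dict) -> dict:
    """Perception: turn raw sensor readings into a model of the environment."""
    return {
        "obstacles": sensor_frame["lidar_hits"],
        "humans_nearby": sensor_frame["camera_people"],
    }

def plan(world: dict, goal: tuple[float, float]) -> list[str]:
    """Planning: pick actions given the world model; be conservative near people."""
    if world["humans_nearby"] > 0:
        return ["reduce_speed", "replan_path_around_humans"]
    if world["obstacles"] > 0:
        return ["replan_path_around_obstacles"]
    return ["move_toward_goal"]

def control(actions: list[str]) -> None:
    """Control: translate high-level actions into actuator commands."""
    for action in actions:
        print(f"executing: {action}")  # stand-in for motor commands

# A single loop iteration with a fabricated sensor frame.
frame = {"lidar_hits": 2, "camera_people": 1}
control(plan(perceive(frame), goal=(4.0, 2.5)))
```

The point of the split is that each stage has a different compute profile: perception is heavy on neural-network inference, planning on search and optimization, control on low-latency deterministic loops. Separating them lets each be tuned and upgraded independently.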
The America Manufacturing Angle
There's a not-so-subtle nationalism to all of these announcements. The event was in Washington D.C. The DOE supercomputer is explicitly about "American reindustrialization." The manufacturing partnerships are all framed as bringing production back to the U.S.
In 2025, $1.2 trillion in investments toward building out U.S. production capacity was announced—led by electronics providers, pharmaceutical companies, and semiconductor manufacturers. That's not pocket change. That's a fundamental restructuring of global supply chains.
And Nvidia is positioning itself as the infrastructure provider for that entire transformation. Need to build a factory? Use Omniverse. Need robots for that factory? Use our robotics platform. Need to train the AI that powers those robots? Use our GPUs.
It's a remarkably ambitious vertical integration play disguised as an ecosystem approach.
What This Actually Means
I've watched Nvidia's transformation from graphics card maker to AI infrastructure company over the past decade, and this GTC felt different. Previous events focused on bigger models, faster training, more parameters. This one focused on physical infrastructure and real-world applications.
Robotaxis aren't just a tech demo anymore—they're a planned fleet of 100,000 vehicles scheduled to start rolling out in 2027. The DOE supercomputer isn't just impressive specs—it's dedicated to solving actual scientific problems. The factory digital twins aren't concept videos—they're being used to build actual factories right now.
The gap between "cool AI research" and "this is changing the physical world" has collapsed.
Does that mean we're on the verge of a manufacturing renaissance? Maybe. The infrastructure is being built. The technology exists. The investment capital is flowing. But we're also talking about replacing human workers with robots at a scale we've never attempted before, and we haven't really grappled with what that means for employment, economic inequality, or social stability.
The Timeline That Matters
Here's what's actually happening over the next 36 months, according to these announcements:
- Late 2025: Solstice supercomputer comes online at Argonne
- 2026: First Nvidia-powered factory digital twins go fully operational
- 2027: 100,000 robotaxis on the road through Uber partnership
- 2028: Autonomous driving features that allow hands-off, eyes-off operation in GM vehicles
That's not some distant sci-fi future. That's a 36-month timeline from now.
I don't know if I'm more excited or terrified. Probably both. The technology is legitimately incredible. The applications could genuinely improve people's lives. The scale of investment shows serious commitment to making it happen.
But we're also rushing headlong into a world where major sectors of the economy are automated by AI, and I'm not convinced we've thought through the implications. Jensen Huang was on stage talking about robots and supercomputers, but nobody was talking about what happens to the people whose jobs those robots are designed to do.
Maybe that's a conversation for the next GTC.