
GTC Wasn't Over. NVIDIA Just Put AI in Orbit and Booked a Trillion Dollars.

LDS Team · Let's Data Science
By the close of GTC 2026, NVIDIA had announced the Vera Rubin Space-1 module for orbital data centers, revealed Feynman's full architecture with 3D die-stacking and optical interconnects, brought BYD, Geely, and Nissan onto the DRIVE Hyperion platform alongside an expanded Hyundai and Kia partnership, and booked $1 trillion in confirmed orders from hyperscalers including AWS, Microsoft, Google, and Meta.

For context: On Monday, Jensen Huang unveiled Vera Rubin, a chip capable of 50 petaflops of NVFP4 inference performance. Read the full keynote breakdown here.

Monday's keynote packed the SAP Center in San Jose, part of a GTC crowd of roughly 30,000. By Thursday, a lot of those badges had been handed in, flights caught, and laptops reopened back at the office. The people who stayed got the stranger announcements.

NVIDIA's closing day at GTC 2026 was quieter than the keynote, but in some ways more revealing. The sessions and press releases that landed on March 19 filled in the blanks — where Feynman is actually going, what Space-1 means in practice, which automakers are betting their autonomy programs on NVIDIA silicon, and what a trillion-dollar order book looks like when you name the names.

March 16 — Day One
Jensen Huang Keynote: Vera Rubin, Rubin Ultra, Feynman Preview
The two-hour keynote at SAP Center covers the Vera Rubin platform, the 600 kW Rubin Ultra rack for 2027, a first look at the Feynman architecture on TSMC A16, and NemoClaw for enterprise agents.
March 17 — Day Two
Space-1 and DLSS 5 Drop; DRIVE Hyperion Partners Announced
NVIDIA reveals the Vera Rubin Space-1 module for orbital data centers with 25x the performance of an H100. DLSS 5 is announced as "the GPT moment for graphics." BYD, Geely, Isuzu, and Nissan adopt DRIVE Hyperion for Level 4 vehicles; Hyundai and Kia expand their strategic partnership.
March 18 — Day Three
Feynman Architecture Details; Groq 3 LPX Roadmap Clarified
NVIDIA confirms Feynman will use 3D die-stacking and co-packaged optical interconnects on TSMC A16. The Rosa CPU and LP40 LPU complete the 2028 platform. Groq 3 LPX ships Q3 2026 into Vera Rubin racks.
March 19 — Day Four
Trillion-Dollar Order Book Named; Uber Robotaxi Timeline Confirmed
AWS commits to deploying over 1 million NVIDIA GPUs globally starting in 2026. NVIDIA and Uber confirm Level 4 robotaxi deployment across 28 cities by 2028, beginning in Los Angeles and San Francisco in H1 2027. Analysts raise NVIDIA price targets, citing the trillion-dollar revenue path through 2027.

A Trillion Dollars Is Not a Prediction — It Is an Order Book

The headline figure from this week was $1 trillion. Jensen Huang said it plainly during the keynote: he expects purchase orders across Blackwell and Vera Rubin to hit that number through 2027. Last October, Huang had projected $500 billion in demand through 2026. The new figure doubles the revenue target and pushes the horizon out a year — and, in Huang's framing, reflects actual signed demand rather than a revised projection.

By the close of the conference, analysts and reporters had begun naming the names behind the number. AWS committed to deploying over a million NVIDIA GPUs globally beginning in 2026, alongside Groq 3 LPUs for inference workloads. Azure, Google Cloud, Oracle, and CoreWeave are all in the queue for Vera Rubin allocations. Beyond the cloud providers, Meta, ByteDance, and Alibaba have placed orders of their own. NVIDIA Cloud Partners (NCPs) — the sovereign and regional cloud operators separate from the big hyperscalers — collectively doubled their AI factory footprint year-over-year, deploying a cumulative 1.7 gigawatts of capacity.

Huang framed the demand as structurally different from past compute cycles. "Computing demand has increased by 1 million times over the last few years," he said. The architecture driving that demand is no longer general-purpose processors sitting idle between requests. It is AI factories — always-on, always inferring, always training.

Morningstar raised its fair value estimate for NVIDIA following the announcements, pointing specifically to the trillion-dollar revenue forecast as a credible near-term target rather than aspirational guidance.

Space Computing Has Arrived — NVIDIA Just Brought the Data Center to Orbit

The announcement that drew the most surprised reactions from engineers on the floor was not a GPU. It was Space-1.

NVIDIA revealed the Vera Rubin Space-1 module — a space-hardened AI compute platform engineered for orbital data centers. The module delivers up to 25 times the AI performance of an H100 GPU in a form factor built for satellites. It integrates a tightly coupled CPU and GPU architecture capable of running large language models and foundation models directly in orbit, without a downlink to a ground station for every inference call.

"Space computing, the final frontier, has arrived," Huang said during the announcement. He framed the vision in terms of data gravity: as satellite constellations scale and autonomous space operations expand, it stops making sense to stream raw sensor data to Earth for processing. "Intelligence must live wherever data is generated."

Six companies confirmed they are building on the Space-1 platform: Aetherflux, Axiom Space, Kepler Communications, Planet Labs PBC, Sophia Space, and Starcloud. The most concrete launch timeline came from Starcloud, which is scheduled to put its second satellite into orbit in October 2026. That satellite will carry NVIDIA Blackwell B200 hardware — roughly 100 times the power-generating capability of Starcloud-1, which launched in November 2025 with an H100 GPU. The Vera Rubin Space-1 module does not yet have a confirmed launch date; NVIDIA listed it as available "at a later date."

The engineering problems are real. NVIDIA acknowledged that radiation exposure and thermal management remain open challenges. Cooling a high-performance AI accelerator in the vacuum of space — without airflow, without liquid loops, with extreme temperature swings between sunlit and shaded orbit — is not a solved problem. But Sophia Space's passively cooled hosted platforms suggest at least one architectural approach is close enough to production to sign contracts on.

The strategic logic is straightforward. Orbital data centers have been discussed for years as a hedge against Earth-side energy constraints. NVIDIA arrived at GTC with actual silicon and actual launch contracts.

Feynman Is Not Just the Next Chip — It Is a Different Kind of Machine

The Feynman architecture got its public preview on day one, but the technical depth came in the sessions that followed.

What NVIDIA confirmed across the week is that Feynman represents a fundamental departure from how NVIDIA has built chips for the past decade. The platform, targeted for 2028, introduces 3D die-stacking for GPUs — a first for NVIDIA. Instead of a single large die on a single plane, Feynman stacks compute layers vertically, enabling shorter interconnect distances and lower latency between compute and memory.

The memory architecture also changes. Feynman moves to custom HBM rather than standard HBM4 or HBM4E. NVIDIA and Intel are working on advanced packaging using EMIB technology for the die-stacking integration.

The interconnects are where Feynman makes its most aggressive break with current architecture. The platform is expected to replace traditional copper interconnects with silicon photonics — using light rather than electrical signals to move data between chips and across racks. This would make Feynman the first NVIDIA architecture to use optical interconnects natively, rather than as an external add-on at the rack level.

The Feynman family is a full platform: the new GPU, an LP40 LPU built with the Groq team, a Rosa CPU named for chemist and X-ray crystallographer Rosalind Franklin, BlueField-5 networking, and Kyber-CPO rack interconnects. At NVL1152 scale — 1,152 GPUs across eight Kyber racks connected via optical NVLink — Feynman is positioned to handle model sizes and inference workloads that do not yet exist at commercial scale.

The Kyber rack itself, previewed during the week, showed the physical evolution: 144 GPUs per rack, compute trays oriented vertically instead of horizontally, and PCB midplanes replacing traditional cabling. Assembly time per tray drops from nearly two hours to five minutes. A full Vera Rubin NVL144 system delivers 3.6 NVFP4 exaflops for inference, and the larger Rubin Ultra NVL576 configuration (arriving H2 2027) scales that to 15 NVFP4 exaflops.
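Those figures hang together on simple arithmetic. The sketch below is a back-of-envelope check using only the numbers quoted in this article, not an official NVIDIA spec sheet:

```python
# Back-of-envelope checks on the rack and system figures quoted above.
# Illustrative arithmetic only, not official NVIDIA specifications.

gpus_per_kyber_rack = 144
kyber_racks_in_nvl1152 = 8
print(gpus_per_kyber_rack * kyber_racks_in_nvl1152)   # 1152, matching the NVL1152 name

nvl144_exaflops = 3.6    # Vera Rubin NVL144, NVFP4 inference
nvl576_exaflops = 15.0   # Rubin Ultra NVL576, NVFP4 inference

gpu_scale = 576 / 144                              # 4x the GPU count
perf_scale = nvl576_exaflops / nvl144_exaflops     # ~4.2x the quoted throughput
print(gpu_scale, round(perf_scale, 2))

# Throughput rising slightly faster than GPU count implies per-GPU gains
# in the Ultra configuration, on top of the larger system.
```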

The Autonomous Vehicle Industry Bought In — All at Once

Several major automakers and transit operators announced Level 4 commitments at GTC, all using the DRIVE Hyperion platform. BYD, Geely, and Nissan confirmed passenger vehicle programs targeting full autonomy. Isuzu and TIER IV announced Level 4 autonomous buses for public transit — Isuzu's program targets its ERGA electric and diesel bus lines, not consumer vehicles. Hyundai and Kia expanded an existing strategic partnership with NVIDIA for next-generation autonomous driving systems. BYD, Geely, Nissan, and Hyundai/Kia combined represent well over 15 million consumer vehicles in annual production.

The Uber partnership was the most operationally concrete announcement. NVIDIA and Uber plan to deploy a fleet of Level 4 robotaxis — entirely software-driven, without human safety operators — across 28 cities on four continents by 2028. Bloomberg reported ahead of the announcement that Uber is targeting as many as 100,000 vehicles over subsequent years. The first deployments are targeted for Los Angeles and San Francisco in the first half of 2027. Bolt, Grab, and Lyft are also scaling robotaxi programs on DRIVE Hyperion.

The technology driving all of it is Alpamayo 1.5, NVIDIA's reasoning-based autonomous vehicle AI. The model uses chain-of-thought logic to handle long-tail scenarios — construction zones that weren't mapped, pedestrians moving in unexpected patterns, emergency vehicles approaching from unusual directions. Huang called it "the ChatGPT moment of self-driving cars."

The claim carries weight because the reasoning model can narrate its decisions in natural language. A vehicle running Alpamayo can explain, in plain text, why it changed lanes, how it identified a double-parked obstruction, and why it reduced speed. That auditability matters for regulators. It matters even more for insurance underwriters.
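To make that auditability point concrete, here is a minimal sketch of what a structured, per-maneuver decision trace could look like. The record format, field names, and example contents are hypothetical illustrations, not Alpamayo's actual output schema:

```python
# Hypothetical decision-trace record for a reasoning-based driving stack.
# Field names and contents are illustrative only, not Alpamayo's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    timestamp: datetime
    maneuver: str                 # e.g. "lane_change_left"
    observations: list[str]       # what the perception stack reported
    rationale: str                # natural-language explanation of the decision
    constraints_checked: list[str] = field(default_factory=list)

trace = DecisionTrace(
    timestamp=datetime.now(timezone.utc),
    maneuver="lane_change_left",
    observations=[
        "vehicle double-parked in current lane, 40 m ahead",
        "adjacent left lane clear for 80 m",
    ],
    rationale=(
        "Current lane is blocked by a stationary double-parked vehicle; "
        "the left lane is clear, so a lane change avoids a hard stop."
    ),
    constraints_checked=[
        "minimum gap to trailing traffic",
        "lane markings permit change",
    ],
)

# A trace like this can be logged per maneuver and reviewed after the fact,
# which is the auditability property regulators and insurers care about.
print(trace.rationale)
```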

DLSS 5 Signals Where Gaming AI Is Going

Away from the data center, NVIDIA's gaming announcement was DLSS 5 — and it arrives this fall, exclusively on RTX 50-series hardware.

The prior version, DLSS 4, generated most pixels via AI upscaling, optimizing for performance. DLSS 5 shifts the emphasis to visual fidelity. It takes a game's color and motion vectors as input and uses a neural rendering model trained end-to-end to add photoreal lighting and materials — subsurface scattering through skin, sheen on fabric, accurate backlighting through translucent objects. These are effects that have historically required offline rendering at studios.
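As a rough mental model of that kind of pipeline, here is a minimal, hypothetical sketch of a neural pass that takes a rendered color frame plus motion vectors and predicts an enhanced frame. The layer sizes, inputs, and residual structure are illustrative assumptions, not NVIDIA's DLSS 5 implementation:

```python
# Minimal, hypothetical sketch of a neural rendering pass that consumes a
# rendered color frame plus motion vectors and predicts an enhanced frame.
# Nothing here reflects the actual DLSS 5 architecture.
import torch
import torch.nn as nn

class TinyNeuralRenderer(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 color channels + 2 motion-vector channels in, 3 enhanced channels out
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, color, motion):
        # Concatenate the engine's outputs along the channel dimension
        x = torch.cat([color, motion], dim=1)
        # Predict a residual on top of the rasterized frame rather than
        # replacing it, so the artist-authored image stays in control
        return color + self.net(x)

frame = torch.rand(1, 3, 270, 480)    # rasterized color frame (toy resolution)
mvecs = torch.rand(1, 2, 270, 480)    # per-pixel motion vectors
enhanced = TinyNeuralRenderer()(frame, mvecs)
print(enhanced.shape)                  # torch.Size([1, 3, 270, 480])
```

The residual design is one common way to blend learned detail with the engine's own output, which is the balance Huang's quote below gestures at.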

"DLSS 5 is the GPT moment for graphics — blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression," Huang said.

Confirmed titles with DLSS 5 support at launch include Resident Evil Requiem and Phantom Blade Zero (upcoming releases), alongside planned patches for already-released titles including Assassin's Creed Shadows, Starfield, and The Elder Scrolls IV: Oblivion Remastered, plus a dozen others. DLSS 5 is exclusive to RTX 50-series, which means the installed base is small now. But the announcement draws a clear line in the sand for what next-generation PC gaming will look like.

The Numbers Don't Impress Wall Street the Way They Used To

NVIDIA's stock closed up roughly 1.7% on the day of the keynote — a muted reaction to what would have triggered a 10% rally two years ago. By Thursday, shares were essentially flat for the week. The trillion-dollar forecast did not break the stock out of the six-month trading range it has been stuck in since late 2025.

The problem is size. TD Cowen analysts noted that "the market cap has gotten so large that Nvidia no longer trades like other stocks." The company is valued above $4 trillion. To double from here, NVIDIA would need to approach $9 trillion — roughly the combined economic output of Germany and India. That kind of upside is structurally difficult for large institutional funds to model, let alone bet on.

Investor skepticism runs deeper than valuation math. A growing body of research questions whether the enterprises ordering all this hardware are generating returns that justify the spend. MIT Nobel laureate economist Daron Acemoglu has argued that much of the industry's AI productivity narrative is overstated. Most firms report that deployed AI systems are not yet moving their bottom lines. The question hanging over every GTC announcement is not whether NVIDIA can deliver the chips, but whether the customers' customers can justify the capital expenditure.

The $630 billion in market capitalization that NVIDIA shed in the three days after its February earnings release — despite record quarterly revenue of $68.1 billion — did not come back because Jensen Huang had a good week on stage. Supply, for now, is constrained by TSMC's CoWoS packaging capacity. The constraint keeps prices high and margins fat. When packaging capacity catches up — if it does — the economics shift.

There is also the China factor. NVIDIA cannot sell its most advanced chips in China under current export controls, and China's domestic AI chip ecosystem is advancing faster than most Western analysts predicted. DeepSeek's efficiency gains last year showed that training competitive models on constrained hardware is possible. If that trend continues, the total addressable market for NVIDIA's highest-end silicon may be smaller than the trillion-dollar projection assumes.

The Bottom Line

What NVIDIA announced at GTC 2026 — across the full four days — is a company that has positioned itself at every layer of where AI is going: Earth-side data centers, orbital data centers, autonomous vehicles, enterprise agents, gaming, and the 2028 generation of chips that will underpin all of it.

The trillion-dollar order book is real. The demand is not speculative. AWS, Microsoft, Google, Meta, and dozens of second-tier cloud providers have placed commitments, and Vera Rubin is already in production. Space-1 has launch contracts. Uber has 28 cities and a 2028 deadline.

The counterargument — ROI uncertainty, valuation ceiling, China exclusion, and the long tail of infrastructure buyers who may not recoup their investment — is also real. NVIDIA's stock sitting flat on a week like this is the market's way of saying: we believe you can sell the chips; we're not sure the chips are worth what everyone is paying for them yet.

That question will not be answered at a conference. It will be answered over the next two years, as the enterprises that ordered a trillion dollars' worth of hardware try to turn inference calls into revenue. NVIDIA has done everything it can to ensure the hardware is ready. What happens next is someone else's problem to solve.

