Tesla Reveals A15 AI Chip, Confirms A16 and Dojo3

Tesla has successfully taped out its next-generation AI accelerator, the A15, and Elon Musk posted the first photographs showing a central compute die surrounded by 12 DRAM modules. The design uses SK hynix memory and, per Tesla's prior statements, targets roughly 2500 TOPS of AI compute and 144 GB of memory per chip. Tesla positions A15 as the follow-up to HW4, claiming up to a 40x system-level improvement over HW4, driven by 8x raw-compute and 9x memory gains. Musk also confirmed work on A16 and the next Dojo supercomputer generation, Dojo3. High-volume production is targeted for late 2026 or early 2027, and Tesla intends to shift next-generation fabrication to its planned TeraFab facility.
What happened
Elon Musk posted the first pictures and confirmed the successful tape-out of Tesla's A15 AI chip, showing a large central compute die flanked by 12 DRAM modules from SK hynix. The imagery and timestamps indicate the tape-out occurred in the 13th week of 2026 (March 23-29). Tesla frames A15 as the successor to HW4, targeting roughly 2500 TOPS of AI compute and 144 GB of memory per chip, and promises substantial system-level gains versus HW4.
Technical details
The visible package suggests a multi-die approach with a dominant primary SoC and multiple external memory stacks. Tesla has publicly claimed the following performance and design targets:
- Compute: ~2500 TOPS per chip, described as 8x the raw compute of HW4.
- Memory: 144 GB per chip, cited as a 9x DRAM-capacity improvement.
- I/O and topology: 12 DRAM modules around the primary die, indicating a wide-bandwidth, capacity-first memory architecture.
The company plans multiple A15 SKUs, including a single-SoC variant positioned against NVIDIA Hopper-class accelerators and a dual-SoC variant pitched toward Blackwell-class performance at lower cost and power per workload. Tesla also reiterated work on A16 and the next-generation Dojo, Dojo3.
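The claimed multipliers can be sanity-checked with simple arithmetic. The sketch below derives the HW4 baselines implied by Tesla's own figures; these implied baselines are derived values, not official HW4 specifications, and the gap between the naive 8 × 9 = 72x product and the claimed 40x system-level gain presumably reflects other system bottlenecks.

```python
# Back-of-envelope check of Tesla's claimed A15-vs-HW4 figures.
# All inputs are Tesla's public claims as reported above; the implied
# HW4 baselines are derived here for illustration, not official specs.

A15_TOPS = 2500        # claimed AI compute per A15 chip
A15_MEM_GB = 144       # claimed memory capacity per A15 chip
COMPUTE_GAIN = 8       # claimed raw-compute multiplier over HW4
MEMORY_GAIN = 9        # claimed DRAM-capacity multiplier over HW4
SYSTEM_GAIN = 40       # claimed system-level improvement over HW4

# Baselines implied by dividing A15 claims by the stated multipliers
implied_hw4_tops = A15_TOPS / COMPUTE_GAIN    # 312.5 TOPS
implied_hw4_mem_gb = A15_MEM_GB / MEMORY_GAIN # 16 GB

# Naive product of the per-dimension gains vs the claimed system gain
naive_product = COMPUTE_GAIN * MEMORY_GAIN    # 72x

print(f"Implied HW4 compute: {implied_hw4_tops:.1f} TOPS")
print(f"Implied HW4 memory:  {implied_hw4_mem_gb:.0f} GB")
print(f"Naive gain product:  {naive_product}x vs claimed {SYSTEM_GAIN}x")
```

The numbers are internally consistent only at the level of round marketing figures; independent benchmarks will be needed to validate them (see "What to watch").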
Context and significance
A15 is a strategic move in Tesla's vertical-integration play: owning both silicon and software for Full Self-Driving reduces reliance on external vendors and aims to optimize Perf/$ and Perf/W for fleet-scale inference and training. If Tesla achieves the claimed 2500 TOPS with the described memory architecture, it narrows the hardware gap to datacenter incumbents while controlling supply and cost. The dual-SoC positioning is notable because it signals Tesla seeks to compete across price-performance tiers, not only on peak performance. Continued Dojo3 development, alongside plans for a TeraFab foundry effort, underscores Tesla's intent to internalize both chip production and supercomputer assembly.
What to watch
Validate the claims with independent benchmarks, power measurements, and real-world FSD workloads once production chips are available. Track TeraFab progress, A16 design disclosures, and how Tesla integrates A15 into Dojo and fleet inference pipelines.
Scoring Rationale
A successful tape-out of a next-generation AI accelerator is a notable infrastructure milestone, with direct implications for fleet-scale AI and competition with NVIDIA. The story is significant for practitioners but not yet industry-shaking until independent benchmarks and production volume confirm Tesla's claims.