Tesla Announces HW4 Plus, Doubling Vehicle Memory

Tesla CEO Elon Musk announced an AI4 revision branded AI4 Plus that doubles per-chip RAM from 16 gigabytes to 32 gigabytes, bringing total system memory to 64 gigabytes, with production expected to ramp next year pending Samsung's process modifications. Musk also reconfirmed that HW3 lacks the capability for unsupervised Full Self-Driving, and said the next-generation AI5 will be prioritized for Optimus robots and data centers rather than vehicles. Tesla already shipped an AI4.5 revision in January, underscoring the rapid pace of hardware iteration. Taken together, the repeated hardware revisions and Musk's admission raise practical and legal questions about fleet upgradability and the promise that earlier HW generations contained "all the hardware needed" for FSD.
What happened
Tesla and Elon Musk announced a revision to the in-vehicle AI computer, branded `AI4 Plus`, that doubles RAM per SoC from 16 gigabytes to 32 gigabytes and raises total system memory to 64 gigabytes. Musk said the change will likely yield about a 10% uplift in compute and memory bandwidth. He said it would enter production next year, contingent on Samsung completing the required process changes. He also reiterated that `HW3` "simply does not have the capability" for unsupervised Full Self-Driving.
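As a rough illustration of what those memory figures imply for onboard model sizing, the sketch below works through the capacity arithmetic. The per-SoC RAM and SoC count follow from the announced numbers (32 gigabytes per SoC, 64 gigabytes total); the precision sizes and the runtime overhead fraction are assumptions for illustration, not published Tesla specifications.

```python
# Back-of-the-envelope capacity arithmetic from the announced figures:
# 32 GB per SoC x 2 SoCs = 64 GB total. The overhead fraction and
# bytes-per-parameter values are assumptions, not Tesla specifications.

GB = 1024**3

ram_per_soc_gb = 32          # announced AI4 Plus figure
num_socs = 2                 # implied by the 64 GB system total
runtime_overhead = 0.25      # assumed share reserved for OS, buffers, activations

def max_params(bytes_per_param: float) -> float:
    """Rough upper bound on parameter count that fits in weight memory."""
    usable_bytes = ram_per_soc_gb * GB * num_socs * (1 - runtime_overhead)
    return usable_bytes / bytes_per_param

for label, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{label}: ~{max_params(bytes_per_param) / 1e9:.0f}B parameters")
```

Under these assumptions, doubling per-SoC RAM roughly doubles the weight budget at any given precision, which is the lever that matters most for onboard model sizing.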
Technical details
Tesla has been iterating the AI4 family rapidly. Key technical points practitioners should note:
- Tesla shipped an AI4.5 revision in January that appears to change the board-level design to a three-chip layout versus the original two-chip AI4 (see the partitioning sketch after this list).
- The announced `AI4 Plus` increases per-SoC RAM to 32 gigabytes, for a total of 64 gigabytes in-vehicle, and promises roughly 10% higher compute and memory bandwidth.
- The AI5 architecture is being prioritized for Optimus robots and data-center appliances rather than cars, shifting the roadmap for in-vehicle silicon.
- Samsung remains the fabrication partner on a 7nm node for AI4-family chips, and production timelines depend on its process modifications.
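The chip count and per-SoC memory budget together constrain how a network can be split across the board. The sketch below illustrates the idea with a simple greedy layer-to-SoC assignment; the layer sizes and budgets are illustrative assumptions, not measurements of Tesla's stack.

```python
# A minimal sketch of greedy layer-to-SoC assignment under per-SoC memory
# budgets, illustrating why a two-chip vs three-chip layout changes how a
# network can be split. Layer sizes and budgets are illustrative only.

def partition_layers(layer_bytes: list[int], num_socs: int, budget_bytes: int):
    """Assign layers to SoCs in order, moving to a new SoC when the budget is hit."""
    assignments, current_soc, used = [], 0, 0
    for size in layer_bytes:
        if used + size > budget_bytes:
            current_soc += 1
            used = 0
            if current_soc >= num_socs:
                raise ValueError("model does not fit in the given SoC count/budget")
        assignments.append(current_soc)
        used += size
    return assignments

GB = 1024**3
layers = [3 * GB] * 12                                               # illustrative 36 GB of weights
print(partition_layers(layers, num_socs=3, budget_bytes=16 * GB))   # three chips, 16 GB each
print(partition_layers(layers, num_socs=2, budget_bytes=32 * GB))   # two chips, 32 GB each (AI4 Plus-like)
```

The same total memory spread over fewer, larger SoCs leaves fewer cut points, which tends to reduce cross-chip activation traffic but gives less headroom per partition.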
Context and significance
Tesla's cadence mirrors a common industry pattern: iterating hardware rapidly while maintaining that existing hardware is sufficient for promised features. The admission that HW3 cannot support unsupervised FSD reopens long-standing concerns about hardware obsolescence, customer expectations, and the technical limits of retrofitting advanced perception and planning workloads onto older SoCs. For ML engineers, this matters because system memory, memory bandwidth, and on-chip topology materially affect model architecture choices, quantization strategies, and runtime scheduling across SoCs and accelerators.
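To make the bandwidth point concrete, the sketch below applies a simple roofline-style bound to batch-1 inference, where streaming weights from DRAM often dominates per-frame latency. The bandwidth, FLOP, and model-size numbers are illustrative assumptions; Tesla has published only the rough 10% uplift claim, not AI4 Plus bandwidth figures.

```python
# A minimal roofline-style check for whether a per-frame forward pass is
# bandwidth-bound or compute-bound. All hardware numbers below are assumed
# for illustration, not Tesla specifications.

def frame_latency_ms(weight_bytes: float, flops: float,
                     bandwidth_gbps: float, compute_tflops: float) -> float:
    """Latency lower bound: the slower of weight streaming and raw compute."""
    stream_s = weight_bytes / (bandwidth_gbps * 1e9)
    compute_s = flops / (compute_tflops * 1e12)
    return max(stream_s, compute_s) * 1e3

GB = 1e9
# Example: an 8 GB (INT8) network, 2 TFLOPs per frame, on assumed hardware.
base = frame_latency_ms(8 * GB, 2e12, bandwidth_gbps=200, compute_tflops=100)
plus = frame_latency_ms(8 * GB, 2e12, bandwidth_gbps=220, compute_tflops=110)  # ~10% uplift
print(f"baseline: {base:.1f} ms/frame, with ~10% uplift: {plus:.1f} ms/frame")
```

In the bandwidth-bound regime sketched here, the claimed ~10% uplift translates roughly one-for-one into per-frame latency, whereas larger memory mainly buys model capacity rather than speed.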
What to watch
Monitor Samsung's fabrication timeline, Tesla's detailed spec release for AI4 Plus, and whether Tesla offers retrofit pathways or policy changes for earlier HW owners. The hardware churn will influence model compression, partitioning, and fleet-level rollout strategies.
Scoring Rationale
Technically relevant to ML practitioners because memory and bandwidth changes materially affect onboard model design and deployment. Not a paradigm shift, but notable due to fleet scale and implications for hardware obsolescence; freshness adjustment applied.