Dell Predicts Explosive AI Memory Demand By 2028

What happened
Michael Dell framed memory as both the bottleneck and the profit lever of the AI infrastructure cycle, saying that as memory per accelerator and system scale expand together, “total memory demand increases by approximately 625 times.” He quantified this by estimating that a single accelerator could require ~25× more memory and that accelerator deployments could grow ~25×, and projected the supercycle to continue through 2028.
Technical context
Modern generative-AI training and inference shift memory demand along two dimensions: more on-chip and near-chip capacity per accelerator (HBM, package-level memory, SOCAMM-like designs) and far greater numbers of accelerators per datacenter fleet. Memory (DRAM, HBM, NAND) supply is gated by wafer starts and fab buildouts, so the supply response to a demand shock typically takes multiple quarters or years. That mismatch creates sustained upward price pressure when demand accelerates rapidly.
Key details from sources
Michael Dell’s estimate—625× total memory demand by 2028—comes from combining a projected ~25× increase in per-accelerator memory with ~25× growth in accelerator deployments. He emphasized that expanding memory supply is slow, implying limited short-term elasticity. Market reaction is mixed: some Wall Street firms (UBS, BofA and others) maintained buy/outperform ratings and raised targets into the roughly $160–$200 range on confidence in Dell’s AI-led growth, while Morgan Stanley flagged risk and cut its target to ~$110, pointing to margin pressure from a memory supercycle.
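The compounding arithmetic behind the headline figure is simple multiplication and can be checked directly. Both 25× multipliers are Dell's projections, not measured data, and the snippet below is purely illustrative:

```python
# Illustrative check of the compounding behind Dell's 625x figure.
# Both growth factors are his projections, not measured values.
per_accelerator_memory_growth = 25   # ~25x more memory per accelerator
accelerator_deployment_growth = 25   # ~25x more accelerators deployed

total_memory_demand_growth = per_accelerator_memory_growth * accelerator_deployment_growth
print(total_memory_demand_growth)  # 625
```

Because the factors multiply rather than add, even modest revisions to either projection move the total dramatically: 20× on each axis gives 400×, while 30× on each gives 900×.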
Why practitioners should care
If Dell’s framing holds, teams that design models, architect clusters, or own procurement budgets must treat memory availability and unit cost as primary constraints—more so than raw accelerator FLOPS alone. Expect persistent DRAM/HBM price volatility to affect total cost of ownership for training and inference clusters, capacity-planning timelines, and vendor selection. Delays in memory supply scaling mean architects should evaluate alternatives: model-memory tradeoffs (activation compression, sharding), software techniques (offloading, quantization), different accelerator classes, and contractual hedges with hardware suppliers.
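To make one of the mitigations above concrete: quantization shrinks a model's weight footprint roughly in proportion to bits per parameter. The function and model size below are a hypothetical back-of-envelope sketch, not a vendor sizing tool, and it ignores activations, KV cache, and runtime overhead:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Rough weight-only memory footprint in decimal GB.

    Ignores activations, KV cache, optimizer state, and framework overhead,
    all of which add substantially to real deployments.
    """
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A hypothetical 70B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(70, bits):.0f} GB")
# 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB
```

Estimates like this let teams weigh buying scarcer, pricier HBM capacity against accepting quantization's accuracy tradeoffs when planning clusters under the price pressure described above.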
What to watch
Track memory spot and contract pricing, HBM/DRAM capacity announcements from fabs, NVIDIA and other accelerator vendors’ memory-per-socket roadmaps (architectures named in the dialogue include Hopper and Vera Rubin), and hyperscaler procurement trends. Monitor how OEMs (Dell included) pass costs to customers or absorb them—this will determine margin outcomes versus revenue growth.
Scoring Rationale
The claim of a multi-year, orders-of-magnitude increase in memory demand materially affects infrastructure procurement, architecture choices, and TCO for ML teams. It's not a technical breakthrough but is strategically important for practitioners planning capacity and budgets.