Dell Leverages CPUs for AI Inference Growth

Seeking Alpha reports that Dell Technologies was rerated to a "BUY" on Apr 25, 2026, citing robust AI infrastructure demand and a new growth cycle driven by inference workloads. Per Seeking Alpha, Dell's ISG segment is growing 40% in FY26 with a $43 billion AI backlog, while the CSG segment is described as flat and under margin pressure from rising DRAM/NAND prices. The piece frames AI inference as shifting server configurations toward higher CPU demand rather than GPUs, and notes that Dell's forward price-to-sales ratio remains below 1, arguing the valuation is attractive given backlog visibility. Seeking Alpha also flags supply-chain risk and memory-driven margin compression as ongoing headwinds.
What happened
Seeking Alpha reports that Dell Technologies was rerated to a "BUY" on Apr 25, 2026, based on accelerating AI infrastructure demand and an anticipated growth cycle driven by inference workloads. The article attributes 40% FY26 growth to Dell's ISG business and cites a $43 billion AI backlog for that segment. Seeking Alpha describes Dell's CSG business as flat, noting margin pressure from rising DRAM/NAND memory prices, and observes a forward price-to-sales ratio below 1.
Editorial analysis - technical context
Seeking Alpha frames the shift toward inference as favoring higher CPU capacity in servers, a pattern seen when workloads emphasize many smaller, lower-latency queries rather than large-batch training. Companies deploying inference at scale commonly evaluate throughput-per-dollar, latency, and utilization; these factors can make CPU-centric nodes or specialized inference ASICs more attractive than GPU-heavy designs for certain workloads.
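The throughput-per-dollar comparison described above can be made concrete with a small calculation. This is a hedged sketch: the hourly costs, throughput figures, and utilization rates below are hypothetical placeholders, not vendor benchmarks, and `cost_per_million_queries` is an illustrative helper, not a standard API. Substitute measured values before drawing real procurement conclusions.

```python
def cost_per_million_queries(hourly_cost_usd: float,
                             queries_per_second: float,
                             utilization: float) -> float:
    """Dollars to serve one million queries, given peak throughput and
    the fraction of that throughput actually sustained in production."""
    effective_qps = queries_per_second * utilization
    seconds_per_million = 1_000_000 / effective_qps
    hours_per_million = seconds_per_million / 3600
    return hourly_cost_usd * hours_per_million

# Hypothetical nodes: a cheaper CPU server with modest throughput
# versus a pricier GPU server that is poorly utilized on small,
# latency-sensitive queries. All numbers are illustrative.
cpu_cost = cost_per_million_queries(hourly_cost_usd=2.0,
                                    queries_per_second=400,
                                    utilization=0.8)
gpu_cost = cost_per_million_queries(hourly_cost_usd=8.0,
                                    queries_per_second=3000,
                                    utilization=0.5)
print(f"CPU node: ${cpu_cost:.2f} per 1M queries")
print(f"GPU node: ${gpu_cost:.2f} per 1M queries")
```

The point of the exercise is that utilization dominates: a GPU node that sits half-idle on many small queries can lose its per-query cost advantage, which is exactly the dynamic the article says favors CPU-centric configurations for some inference workloads.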
Industry context
Memory price volatility tends to compress margins in client and storage-heavy segments even as enterprise infrastructure benefits from backlog visibility. Comparable vendor cycles show that visible, multi-billion-dollar backlogs can sustain revenue growth through hardware refresh cycles, but supply-chain constraints and component-price swings materially affect gross-margin dynamics across vendors.
What to watch
Key indicators to follow include reported ISG revenue growth and backlog conversion rates in upcoming earnings, trends in server CPU shipments and average core counts, enterprise adoption of CPU-optimized inference frameworks and libraries, and DRAM/NAND price trajectories. Observers should also track announcements from major CPU and accelerator vendors for product timing that alters the CPU-versus-GPU inference economics.
Practitioner takeaway
For infrastructure and MLOps teams, the pattern reported here suggests revisiting procurement models and total-cost-of-ownership (TCO) calculations for inference deployments: benchmark CPU-based nodes against target latency and cost profiles rather than assuming a GPU-first architecture.
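A minimal harness for the kind of latency benchmarking suggested above might look like the following sketch. `run_inference` is a hypothetical stand-in for a real model call, simulated here with a short sleep so the example is self-contained; in practice you would point the harness at your actual CPU-node endpoint and compare the percentiles against your latency SLO.

```python
import random
import statistics
import time

def run_inference(payload):
    # Placeholder for a real model call: simulate 5-15 ms of work.
    time.sleep(random.uniform(0.005, 0.015))

def benchmark(fn, n_requests: int = 200):
    """Return (p50, p95) latency in milliseconds over n_requests calls."""
    latencies = []
    for i in range(n_requests):
        start = time.perf_counter()
        fn(i)
        latencies.append((time.perf_counter() - start) * 1000)
    quantiles = statistics.quantiles(latencies, n=100)
    return quantiles[49], quantiles[94]  # 50th and 95th percentiles

p50, p95 = benchmark(run_inference)
print(f"p50={p50:.1f} ms, p95={p95:.1f} ms")
```

Tail latency (p95 or p99) rather than the median usually drives hardware choice, since it determines how much headroom a node needs to stay within the SLO under load.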
Scoring Rationale
Notable for practitioners because a meaningful shift toward CPU-centric inference would change procurement, benchmarking, and cost models across enterprise deployments. The single-source analysis limits immediacy, but the story is relevant to infrastructure decisions.