SK hynix Launches 192GB SOCAMM2 for AI Servers

SK hynix has begun mass production of the 192GB SOCAMM2, a server-optimized module built on LPDDR5X using a 1cnm process node and designed for NVIDIA's Vera Rubin platform. The module targets AI training and inference workloads by offering more than 2x bandwidth and over 75% improved power efficiency versus conventional RDIMM server memory. SOCAMM2 adapts mobile low-power DRAM into a modular, serviceable form factor with a compression connector to improve signal integrity. Mass production deepens SK hynix's alignment with cloud service providers and NVIDIA, and intensifies competition with Samsung and Micron as the industry seeks a new memory tier between HBM and DDR5 for ultra-large language model training.
What happened
SK hynix has begun mass production of the 192GB SOCAMM2, a next-generation server memory module based on LPDDR5X fabricated on the 1cnm (sixth-generation 10-nanometer-class) process. The module is explicitly engineered for compatibility with NVIDIA's Vera Rubin AI platform and is positioned as a primary memory solution for next-generation AI servers, offering more than double the bandwidth and over 75% improved power efficiency compared with conventional RDIMM server memory. "By supplying the 192GB SOCAMM2, SK hynix has established a new standard for AI memory performance," said Kim Joo-sun, president and head of AI infrastructure at SK hynix.
Technical details
SOCAMM2 adapts low-power LPDDR chips, traditionally used in mobile devices, into a server-optimized, compression-attached module. Key technical points practitioners need to know:
- The module leverages LPDDR5X dies assembled into a slim, high-density package to reach 192GB per module while keeping power per bit low.
- A compression-type connector enhances signal integrity and enables bolt-secured, serviceable module replacement, differentiating SOCAMM2 from soldered HBM stacks.
- The product is built on the 1cnm process node, which SK hynix credits for bandwidth and energy efficiency gains relative to RDIMM.
Feature highlights
- Capacity: 192GB per SOCAMM2 module, enabling larger per-node working sets for model training.
- Bandwidth and efficiency: Claimed >2x bandwidth and >75% power efficiency improvements versus RDIMM (see the sketch after this list).
- Form factor: Slim, replaceable module with compression connector to improve assembly and field serviceability.
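Because the vendor figures are relative, absolute numbers depend on the RDIMM baseline you assume. A minimal back-of-envelope sketch in Python makes the claims concrete; the RDIMM bandwidth and the eight-slot node configuration below are illustrative assumptions, not SK hynix or NVIDIA specifications:

```python
# Back-of-envelope check of the relative claims against an assumed
# RDIMM baseline. Baseline figures are illustrative, not vendor data.

RDIMM_BANDWIDTH_GBPS = 64.0   # assumed per-module DDR5 RDIMM bandwidth (GB/s)

# Applying the claimed multiplier from the announcement:
socamm2_bandwidth_gbps = RDIMM_BANDWIDTH_GBPS * 2.0   # ">2x bandwidth" claim

MODULE_CAPACITY_GB = 192      # stated SOCAMM2 capacity
MODULES_PER_NODE = 8          # hypothetical node configuration

node_capacity_tb = MODULE_CAPACITY_GB * MODULES_PER_NODE / 1024
node_bandwidth_gbps = socamm2_bandwidth_gbps * MODULES_PER_NODE

print(f"Per-node capacity:  {node_capacity_tb:.2f} TB")                       # 1.50 TB
print(f"Per-node bandwidth: {node_bandwidth_gbps:.0f} GB/s (under >2x claim)") # 1024 GB/s
```

Under these assumptions, an eight-module node reaches 1.5TB of capacity; the actual bandwidth figure scales with whatever RDIMM baseline the >2x claim is measured against.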
Context and significance
Memory bandwidth and energy are primary bottlenecks for training and serving ultra-large language models. SOCAMM2 sits between high-bandwidth memory (HBM) and conventional DDR5 RDIMM, combining LPDDR energy efficiency with a modular server form factor. That hybrid position is strategic: it reduces power and thermal load compared with RDIMM at higher throughput, while avoiding the cost and assembly complexity of HBM. Samsung and Micron are pursuing their own SOCAMM2 and related module initiatives: reports note that Samsung has focused on warpage mitigation using low-temperature soldering, and that Micron previously sampled 256GB SOCAMM2 modules. The SK hynix mass-production announcement signals momentum toward an ecosystem-level shift in which CSPs and OEMs adopt a new memory tier optimized for Vera Rubin-class accelerators.
Why practitioners should care
For ML engineers and infrastructure architects, SOCAMM2 affects node design tradeoffs for training LLMs: denser capacity per module reduces the need for additional nodes, and improved energy efficiency lowers TCO at scale. Software and system-level tuning will be required to realize benefits: memory controllers, interconnects, and scheduler stacks must manage the different latency/bandwidth profile of SOCAMM2 versus RDIMM or HBM. Adoption by NVIDIA's Vera Rubin platform accelerates the likelihood of ecosystem support from server vendors and CSPs.
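To make the node-count and TCO tradeoff concrete, here is a hedged sketch of how module density and the claimed efficiency gain translate into nodes and memory power for a fixed footprint. Every input (working-set size, RDIMM module capacity, slots per node, baseline memory power) is a hypothetical assumption for illustration, not measured data:

```python
import math

# Hedged TCO sketch: how denser modules and the claimed efficiency gain
# shift node count and memory power for a fixed memory footprint.
# All inputs below are hypothetical assumptions, not measured data.

WORKING_SET_TB = 12.0      # assumed total memory footprint for a training job
RDIMM_MODULE_GB = 96       # assumed conventional RDIMM module capacity
SOCAMM2_MODULE_GB = 192    # stated SOCAMM2 module capacity
MODULES_PER_NODE = 8       # hypothetical memory slots per node

def nodes_needed(module_gb: int) -> int:
    """Nodes required to hold the working set at a given module capacity."""
    return math.ceil(WORKING_SET_TB * 1024 / (module_gb * MODULES_PER_NODE))

rdimm_nodes = nodes_needed(RDIMM_MODULE_GB)      # 16 nodes at 768 GB/node
socamm2_nodes = nodes_needed(SOCAMM2_MODULE_GB)  # 8 nodes at 1.5 TB/node

# Interpret ">75% improved power efficiency" as ~1.75x work per watt,
# applied to an assumed per-node memory-subsystem power baseline.
RDIMM_MEM_POWER_W = 400.0
socamm2_mem_power_w = RDIMM_MEM_POWER_W / 1.75

print(f"RDIMM:   {rdimm_nodes} nodes, memory power ~{rdimm_nodes * RDIMM_MEM_POWER_W:.0f} W")
print(f"SOCAMM2: {socamm2_nodes} nodes, memory power ~{socamm2_nodes * socamm2_mem_power_w:.0f} W")
```

Under these assumed inputs the fleet halves its node count and cuts memory power by well over half; the direction of the effect, not the specific numbers, is the takeaway.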
What to watch
Monitor adoption signals from cloud providers and OEMs, interoperability with existing memory controllers and CXL fabrics, and how Samsung and Micron counter with capacity, reliability, or thermal fixes. Also watch for field reports on module reliability and warpage under server thermal cycles, and for software-level optimizations that expose SOCAMM2 bandwidth to ML frameworks.
Scoring Rationale
Mass production of a high-capacity LPDDR-based server module optimized for NVIDIA's next-gen platform materially shifts AI server design and TCO considerations. This is a major infrastructure development with broad implications for CSPs and hardware vendors.