China unveils LineShine supercomputer targeting 2 exaflops

China's National Supercomputing Center in Shenzhen has unveiled LineShine (also reported as Lingsheng), an all-domestic, CPU-only supercomputer that its designers say targets 2 exaflops of sustained performance, according to reporting by SCMP, Jon Peddie, and HPCWire. Multiple outlets report LineShine uses roughly 47,000 Huawei LX2 Armv9 processors across 92 compute cabinets and includes 650 PB of storage and 10 TB/s of storage bandwidth (Jon Peddie; SCMP). SCMP quotes Huang Xiaohui saying the system has achieved "full-stack independence" and asserting sustained performance beyond 2 exaflops. No independent Linpack benchmark data for LineShine has been published so far; for comparison, El Capitan's 1.8-exaflop Linpack result is publicly listed in the TOP500 (Cryptobriefing; Jon Peddie). Reporting frames LineShine as a response to recent US export controls on advanced accelerators.
What happened
China's National Supercomputing Center in Shenzhen unveiled LineShine (also reported as Lingsheng), an exascale supercomputer claimed to deliver 2 exaflops of sustained performance (SCMP; Jon Peddie; HPCWire). According to Jon Peddie and SCMP, the system comprises roughly 47,000 Huawei LX2 Armv9 CPUs arranged across 92 compute cabinets and linked by a proprietary high-speed interconnect. Jon Peddie reports that each LX2 integrates two compute dies for 304 cores per processor, with on-package HBM plus off-package DDR5 memory. The full installation is reported to include 650 PB of storage capacity and 10 TB/s of storage bandwidth (Jon Peddie; HPCWire).
SCMP carries a direct quote from Huang Xiaohui, deputy director associated with the project, who said: "By the end of 2025, we completed full system deployment and activation, with sustained performance exceeding 2 exaflops. Its performance has already surpassed that of the United States' El Capitan, returning China to the world's No 1 position." Multiple outlets note that El Capitan currently holds a 1.8 exaflops Linpack result listed in the TOP500, making direct comparison contingent on independent benchmarking (Cryptobriefing; Jon Peddie).
No independent Linpack result or TOP500 submission for LineShine had been published as of the time of reporting (Cryptobriefing). Several outlets frame the project as a response to tightened US export controls on GPU and accelerator technology, which have limited China's access to foreign high-performance accelerators (Jon Peddie; Digitimes).
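To put the reported headline figures in perspective, a back-of-envelope calculation shows the per-core and per-socket throughput they imply. The inputs below are the counts reported above; note that the reporting does not specify which numeric precision (FP64 vs mixed precision) underlies the 2-exaflop claim, so this is an illustration of scale, not a validation:

```python
# Back-of-envelope check of the reported LineShine figures.
# Inputs come from the reporting cited above; the precision behind
# the 2-exaflop claim is unspecified, so treat these as scale
# illustrations only.

TARGET_FLOPS = 2e18      # claimed sustained performance, FLOP/s
CPUS = 47_000            # reported LX2 processor count
CORES_PER_CPU = 304      # reported cores per LX2 package

total_cores = CPUS * CORES_PER_CPU
per_core = TARGET_FLOPS / total_cores   # implied sustained FLOP/s per core
per_cpu = TARGET_FLOPS / CPUS           # implied sustained FLOP/s per socket

print(f"total cores:      {total_cores:,}")
print(f"implied per core: {per_core / 1e9:.0f} GFLOP/s")
print(f"implied per CPU:  {per_cpu / 1e12:.1f} TFLOP/s")
```

Roughly 140 GFLOP/s of sustained throughput per core would be a demanding figure at FP64, which is one reason independent benchmarking and a statement of precision matter for interpreting the claim.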
Editorial analysis - technical context
All-CPU exascale designs are an uncommon architectural choice at present; most recent exascale and large AI-training systems rely on GPU or accelerator fabrics to concentrate parallel floating-point throughput. As a general industry pattern, CPU-first builds require far larger core counts and much higher memory and interconnect bandwidth to approach comparable floating-point throughput, which raises the demands on system-level networking, memory-hierarchy design, and software optimization for vector/SIMD extensions and NUMA awareness. Practitioners familiar with HPC-to-AI convergence will note that an all-CPU system shapes software and workload trade-offs differently than GPU-accelerated clusters do.
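The memory-bandwidth pressure described above can be sketched with the classic roofline model, in which attainable throughput is the lesser of compute peak and arithmetic intensity times memory bandwidth. The peak and bandwidth numbers below are hypothetical placeholders, not disclosed LX2 specifications:

```python
# Minimal roofline sketch showing why memory bandwidth dominates
# CPU-dense designs. PEAK, HBM_BW, and DDR_BW are hypothetical
# placeholder values, not LX2 specifications.

def roofline(peak_flops, mem_bw, intensity):
    """Attainable FLOP/s for a kernel with the given arithmetic
    intensity (FLOPs per byte moved), per the roofline model."""
    return min(peak_flops, intensity * mem_bw)

PEAK = 40e12     # hypothetical per-socket peak, FLOP/s
HBM_BW = 1.0e12  # hypothetical on-package HBM bandwidth, B/s
DDR_BW = 0.3e12  # hypothetical off-package DDR5 bandwidth, B/s

for name, bw in [("HBM", HBM_BW), ("DDR5", DDR_BW)]:
    for ai in (1, 10, 100):
        att = roofline(PEAK, bw, ai)
        bound = "compute" if att == PEAK else "memory"
        print(f"{name} AI={ai:>3}: {att / 1e12:5.1f} TFLOP/s ({bound}-bound)")
```

The sketch makes the trade-off concrete: low-intensity kernels sit on the bandwidth slope regardless of core count, which is why HBM tiering and NUMA-aware data placement figure so prominently in CPU-first designs.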
Context and significance
Editorial analysis: Reporting frames LineShine as both a technical achievement in domestic capabilities and a strategic response to supply-chain restrictions. For the global HPC and AI communities, the claim of a 2-exaflop all-CPU system raises questions about comparative efficiency: peak FLOPS numbers do not directly translate to AI training throughput or energy efficiency, and independent benchmarks are required to validate sustained performance on standard suites such as Linpack and AI-specific workloads. Observers should treat the reported target as an unverified system-level claim until third-party results appear.
What to watch
- Whether LineShine is submitted to the TOP500 and other benchmark suites, and what sustained Linpack numbers are reported (claimed performance vs verified result).
- Publications or preprints detailing the LX2 microarchitecture, memory subsystem, and LingQi interconnect; Jon Peddie reports HBM + DDR5 packaging and a 1.6 Tb/s per-node interconnect figure, which will affect application scaling.
- Software ecosystem support: availability of optimized MPI, compilers, and libraries that exploit the LX2's SVE/SME vector units and HBM tiers, which will determine real-world HPC and AI performance.
- Power and cooling metrics or efficiency numbers, which are central to comparing CPU-dense exascale systems with GPU-accelerated designs.
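The reported I/O and interconnect figures above also permit a quick scaling estimate. Unit conversion matters here (the 1.6 Tb/s per-node figure is in bits); the checkpoint size is a hypothetical workload parameter for illustration, not a reported number:

```python
# Rough scaling figures from the reported I/O and interconnect numbers.
# STORAGE_BW and NODE_LINK come from the reporting cited above; the
# checkpoint size is a hypothetical workload parameter.

STORAGE_BW = 10e12       # reported aggregate storage bandwidth, B/s
NODE_LINK = 1.6e12 / 8   # reported 1.6 Tb/s per node, converted to B/s

checkpoint_bytes = 2e15  # hypothetical 2 PB system-wide checkpoint
t = checkpoint_bytes / STORAGE_BW

print(f"per-node link:   {NODE_LINK / 1e9:.0f} GB/s")
print(f"2 PB checkpoint: {t:.0f} s at full aggregate bandwidth")
```

Numbers like these (200 GB/s per node, minutes-scale checkpoints under ideal conditions) are the kind of derived metrics that independent disclosures would let observers verify.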
Editorial analysis: For practitioners, LineShine underscores how export controls and supply-chain constraints can push alternate architectural paths. Developers and system architects should expect an increased role for cross-stack optimization (compiler/vectorization, NUMA-aware memory management, and interconnect tuning) when porting large-scale workloads to CPU-first exascale machines. Until independent benchmarks and more detailed technical disclosures appear, the story is best read as a notable domestic engineering effort with important implications for procurement, software portability, and international HPC competition.
Scoring Rationale
The reported **2-exaflop** all-CPU system is a notable infrastructure development with implications for HPC procurement and software optimization. Its impact depends on independent benchmarks and efficiency data, making it a significant but not paradigm-shifting story for practitioners.