Astera unveils Scorpio X PCIe fabric switch

Astera Labs unveiled an AI fabric switch codenamed Scorpio X, which "crams 320 lanes of PCIe 6.0 connectivity into a single ASIC with 5.12 TB/s of bidirectional bandwidth," The Register reports. The switch incorporates in-network compute features and a multicast operation called Hypercast, which Astera describes as optimised for mixture-of-experts inference. Ahmad Danesh, AVP of product management at Astera, told The Register: "One of the limitations of the standard multicast is the number of groups you can actually support, as well as the dynamic nature of needing to change those groups on the fly for mixture-of-experts models." The Register frames Scorpio X as an alternative to Nvidia's NVSwitch, noting NVSwitch 6 offers 14.4 TB/s of bandwidth as announced at CES.
What happened
Astera Labs unveiled an AI fabric switch codenamed Scorpio X, The Register reports. Per The Register, Scorpio X integrates 320 lanes of PCIe 6.0 into one ASIC and provides 5.12 TB/s of bidirectional bandwidth. The Register also reports Scorpio X includes in-network compute features to accelerate collective communications and a multicast operation called Hypercast. Ahmad Danesh, AVP of product management at Astera, told The Register: "One of the limitations of the standard multicast is the number of groups you can actually support, as well as the dynamic nature of needing to change those groups on the fly for mixture-of-experts models."
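Danesh's point about dynamic group membership can be illustrated with a minimal sketch. This is not Astera's API or Hypercast itself; it is a hypothetical toy model (expert count, device sharding, and the randomized router stand in for a real gating network) showing why MoE inference strains fixed multicast groups: each batch's token-to-expert routing determines which devices must receive the broadcast, so group membership changes batch to batch.

```python
import random

# Hypothetical setup: 8 experts sharded across 4 accelerator devices.
NUM_EXPERTS = 8
DEVICE_OF_EXPERT = {e: e % 4 for e in range(NUM_EXPERTS)}
TOP_K = 2  # experts consulted per token

def multicast_group_for_batch(tokens):
    """Return the set of devices that must receive this batch's activations."""
    devices = set()
    for _tok in tokens:
        # Stand-in for a gating network: pick top-k experts per token at random.
        experts = random.sample(range(NUM_EXPERTS), TOP_K)
        devices.update(DEVICE_OF_EXPERT[e] for e in experts)
    return devices

random.seed(0)
for batch in range(3):
    group = multicast_group_for_batch(range(4))
    print(f"batch {batch}: multicast group = {sorted(group)}")
```

Because the receiving set is recomputed every batch, a switch that supports only a small, statically configured number of multicast groups cannot keep up; this is the limitation the quote describes.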
Technical details
Per The Register, Scorpio X targets rack-scale AI connectivity by placing a large PCIe switch in the fabric and adding switch-side collective-communication primitives. The Register contrasts Scorpio X's 5.12 TB/s with Nvidia's NVSwitch 6 bandwidth of 14.4 TB/s announced at CES, and notes Astera positions Scorpio X for broad accelerator compatibility rather than matching NVSwitch raw bandwidth.
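The reported 5.12 TB/s figure is consistent with PCIe 6.0's raw signaling rate. A back-of-the-envelope check, assuming 64 GT/s per lane (roughly 8 GB/s per direction) and ignoring FLIT encoding and protocol overhead:

```python
# Back-of-the-envelope check of the reported bandwidth (raw signaling
# rate only; real throughput is lower after protocol overhead).
LANES = 320
GT_PER_LANE = 64              # PCIe 6.0 signaling rate, GT/s per lane
GB_PER_LANE = GT_PER_LANE / 8  # ~8 GB/s per lane, per direction

per_direction_tb = LANES * GB_PER_LANE / 1000  # TB/s, one direction
bidirectional_tb = 2 * per_direction_tb

print(per_direction_tb)   # 2.56
print(bidirectional_tb)   # 5.12
```

The same arithmetic applied to NVSwitch 6's 14.4 TB/s shows the roughly 2.8x bandwidth gap The Register highlights.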
Editorial analysis
Industry observers note a pattern where larger PCIe switches and switch-side compute are being used to build vendor-agnostic AI fabrics that reduce dependence on proprietary interconnects. Such fabrics can simplify integration across diverse accelerators but trade off against custom interconnects that deliver higher peak bandwidth and tighter accelerator coupling.
What to watch
For practitioners: monitor benchmarked collective performance, MoE inference latency with Hypercast, vendor interoperability tests, and system-level power and cabling tradeoffs reported in independent evaluations.
Scoring rationale
Notable infrastructure news: a vendor-agnostic high-density PCIe fabric with switch-side compute could affect system designs and integration choices, but Scorpio X's bandwidth is lower than Nvidia NVSwitch 6, limiting near-term displacement.