Networks Struggle to Handle Growing AI Traffic

AI compute growth has outpaced networking readiness, creating a new bottleneck for enterprises and GPU cloud providers. Analyst firm Omdia warns that many "neocloud" providers scaled GPUs aggressively but left networking underbuilt, making latency, resilience, and sovereign data controls the next gating factors for AI performance. Operators with origins in crypto mining, CDN, or web hosting have widely varying network maturity, and some are scrambling to partner, acquire, or build backbone and edge connectivity. Global providers such as Lumen are pushing enterprises to upgrade networks, calling the network the "nervous system" of AI deployments. For practitioners, procurement must move beyond raw GPU counts to include topology, bandwidth, peering, data locality, and security.
What happened
AI compute capacity has ballooned, but networking is now a systemic bottleneck. Analyst firm Omdia finds many "neocloud" and GPU-as-a-service providers scaled GPU fleets faster than they hardened network stacks, creating real-world performance and sovereignty constraints. Camille Mendler warns that "network infrastructure will make or break neoclouds." Lumen CEO Kate Johnson frames the network as the "nervous system" for AI, and the industry is responding with upgrades and vendor repositioning.
Technical details
The mismatch shows up across several dimensions: throughput, latency, resilience, and data locality. AI workloads move large datasets and parameter updates between clouds, datacenters, and edge endpoints, so bandwidth alone is not sufficient. Practitioners should evaluate the following (a rough measurement sketch follows the list):
- network topology and peering quality between sites, not just raw bandwidth
- latency and jitter for synchronous training and inference workloads
- secure, auditable paths for cross-border data movement and sovereignty controls
- edge connectivity and last-mile performance for low-latency inference
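As a starting point, a simple round-trip probe can surface latency and jitter differences between candidate sites before committing to a provider. The sketch below is a minimal illustration under assumptions, not a benchmarking tool: the endpoint names are hypothetical, and a serious evaluation would use dedicated tooling (iperf3, perfSONAR, or provider-supplied tests) over representative paths and under load.

```python
import socket
import statistics
import time

# Hypothetical candidate endpoints; replace with real provider test hosts.
CANDIDATE_SITES = {
    "neocloud-a": ("a.example-gpu-cloud.net", 443),
    "neocloud-b": ("b.example-gpu-cloud.net", 443),
}

def tcp_rtt_samples(host: str, port: int, samples: int = 10, timeout: float = 3.0):
    """Measure TCP connect round-trip times (seconds) as a crude latency proxy."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append(time.perf_counter() - start)
        except OSError:
            pass  # unreachable or filtered; skip this sample
        time.sleep(0.1)
    return rtts

for name, (host, port) in CANDIDATE_SITES.items():
    rtts = tcp_rtt_samples(host, port)
    if not rtts:
        print(f"{name}: unreachable")
        continue
    avg_ms = statistics.mean(rtts) * 1000
    jitter_ms = statistics.pstdev(rtts) * 1000
    print(f"{name}: avg {avg_ms:.1f} ms, jitter {jitter_ms:.1f} ms over {len(rtts)} samples")
```

Connect time is only a proxy; for training traffic, sustained throughput and loss under load matter more, which is why measured tests belong in the RFP itself (see "What to watch").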
Practical mitigations
Providers and customers are using a mix of approaches: colocating datasets with compute, private backbones or direct interconnects, WAN optimization and compression, RDMA/GPUDirect-style transports where supported, and hybrid topologies that keep gradient exchange local and minimize cross-site traffic. Origin stories matter: firms that evolved from CDN or hosting may already own peering and backbone assets, while those that started in crypto mining often lack sophisticated network operations.
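To make the cross-site traffic point concrete, the back-of-envelope sketch below compares per-step parameter exchange with periodic, local-SGD-style synchronization between two sites. All figures (model size, step time, sync interval, WAN bandwidth) are illustrative assumptions, not measurements from any provider.

```python
# Rough traffic model for cross-site training; all numbers are assumptions.
PARAMS = 70e9          # assumed 70B-parameter model
BYTES_PER_PARAM = 2    # fp16/bf16 gradients or weights
STEP_TIME_S = 1.0      # assumed time per optimizer step
WAN_GBPS = 100         # assumed cross-site link, gigabits per second

def cross_site_traffic_gbps(sync_every_n_steps: int) -> float:
    """Average WAN bandwidth needed if sites exchange full parameters/gradients
    once every `sync_every_n_steps` optimizer steps."""
    bytes_per_sync = PARAMS * BYTES_PER_PARAM
    bits_per_second = bytes_per_sync * 8 / (sync_every_n_steps * STEP_TIME_S)
    return bits_per_second / 1e9

for n in (1, 10, 100):
    need = cross_site_traffic_gbps(n)
    verdict = "fits" if need <= WAN_GBPS else "exceeds"
    print(f"sync every {n:>3} steps: ~{need:,.0f} Gbps needed ({verdict} a {WAN_GBPS} Gbps link)")
```

The takeaway matches the mitigation list above: either keep synchronous gradient exchange inside one well-connected site, or relax synchronization frequency (accepting the algorithmic trade-offs) when traffic must cross a WAN.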
Context and significance
This is a structural shift in how AI infrastructure is evaluated. The market previously emphasized GPU count and accelerator flops; the next wave privileges integrated network engineering, data governance, and cross-site orchestration. That rebalances competition: hyperscalers and network-savvy providers gain an advantage, and neoclouds without robust connectivity face pressure to partner, buy, or build. Enterprises need to revise procurement checklists to include network SLAs and data flow architecture.
What to watch
Expect vendor differentiation around private interconnects, managed backhaul, and sovereign networking features. Procurement teams should require topology diagrams, measured latency/throughput tests, and clear data sovereignty controls as part of any AI compute RFP.
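One lightweight way to operationalize that requirement is to encode the RFP's network criteria as explicit, machine-checkable thresholds rather than prose. The sketch below is a hypothetical checklist structure with made-up thresholds; real values should come from the workload's actual latency, throughput, and residency requirements.

```python
from dataclasses import dataclass

@dataclass
class NetworkRfpResponse:
    """Figures a bidder reports for a specific site pair; all fields hypothetical."""
    provider: str
    topology_diagram_provided: bool
    measured_inter_site_rtt_ms: float
    measured_throughput_gbps: float
    data_residency_region: str
    sovereign_controls_documented: bool

# Illustrative acceptance thresholds; derive real ones from workload requirements.
MAX_RTT_MS = 10.0
MIN_THROUGHPUT_GBPS = 100.0
REQUIRED_REGION = "EU"

def evaluate(resp: NetworkRfpResponse) -> list[str]:
    """Return a list of failed criteria (an empty list means the response passes)."""
    failures = []
    if not resp.topology_diagram_provided:
        failures.append("no topology diagram")
    if resp.measured_inter_site_rtt_ms > MAX_RTT_MS:
        failures.append(f"RTT {resp.measured_inter_site_rtt_ms} ms > {MAX_RTT_MS} ms")
    if resp.measured_throughput_gbps < MIN_THROUGHPUT_GBPS:
        failures.append(f"throughput {resp.measured_throughput_gbps} Gbps < {MIN_THROUGHPUT_GBPS} Gbps")
    if resp.data_residency_region != REQUIRED_REGION or not resp.sovereign_controls_documented:
        failures.append("sovereignty requirements not met")
    return failures

bid = NetworkRfpResponse("neocloud-a", True, 8.5, 120.0, "EU", True)
print(evaluate(bid) or "meets all listed network criteria")
```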
Scoring Rationale
This story highlights a tangible infrastructure bottleneck that affects model training and inference at scale. It is a notable industry signal that procurement and architecture must shift from focusing only on GPUs to include network design and sovereignty, influencing provider competition and enterprise deployment plans.