Mallaby Urges US to Align with China on AI Safety

Sebastian Mallaby argues the United States should pivot from containment to cooperation with China on AI safety. He reports Chinese firms have sidestepped US chip export controls by renting offshore data-center capacity, stacking older accelerators, and rapidly copying research, leaving Chinese models only months behind US counterparts and ahead in many industrial deployments. Mallaby proposes an AI analogue of the Nuclear Nonproliferation Treaty: trade away certain export bans in exchange for a binding US-China safety framework to keep advanced AI away from rogue states and nonstate actors. He found Chinese executives and scholars receptive to safety collaboration. The op-ed reframes export controls as a blunt instrument that may exacerbate risks by driving inadvertent proliferation and reducing channels for joint governance.
What happened
Sebastian Mallaby argues the United States should shift from trying to slow China to negotiating AI safety cooperation. After reporting in Beijing, Shanghai, Shenzhen, and Hangzhou, Mallaby finds Chinese capability has outpaced the effect of US export controls and that China is months, not years, behind US frontier models. He calls for an AI analogue of the Nuclear Nonproliferation Treaty as a practical way to keep advanced capabilities out of the hands of rogue states and terrorists.
Technical details
Mallaby documents practical workarounds and diffusion paths that undermine export controls. Chinese firms have used a combination of tactics to keep pace:
- renting foreign data-center capacity to bypass local hardware constraints
- stacking older accelerators and optimizing software to compensate for limited cutting-edge chips
- rapidly reproducing research and iterating industrial applications
Context and significance
This is not a technical paper, but it matters to practitioners because export controls and decoupling policies change the incentives and operational constraints for R&D, deployment, and collaboration. If American policy emphasizes denial over engagement, it can slow lawful collaboration on safety research while leaving offensive or clandestine actors unimpeded. Mallaby reports Chinese industry voices expressing genuine safety concerns, which creates a narrow political opening for bilateral safety agreements.
Policy implications for practitioners: Negotiated frameworks could restore channels for joint safety audits, benchmark sharing, red-teaming exchange, and cross-border research on alignment. Conversely, entrenched export controls risk fragmenting supply chains, increasing operational complexity for multinational teams, and reducing transparency into model provenance and deployment modes.
What to watch
Watch whether US and Chinese policymakers, not just academics and executives, treat Mallaby's NPT analogy as a policy prescription rather than rhetorical framing. Track signals from export-control reviews, multinational consortiums on AI safety, and any bilateral safety dialogues that include technical terms for verification and access to compute and models.
Scoring Rationale
This op-ed advances a high-impact policy argument that, if adopted, would materially change international AI governance and researcher collaboration. It is speculative rather than enacted policy, however, so its near-term practical impact remains uncertain.