
AI Governance Reveals Compliance Design Vulnerabilities Under Political Turnover
A new formal paper by Andrew J. Peterson models how embedding probabilistic AI in a compliance layer for public administration creates tradeoffs between oversight and exploitability. The model frames three design choices: the scale of automation, the degree of codification, and safeguards on iterative use. It shows that making systems more usable and reviewable can paradoxically produce a stable approval boundary that political successors learn to navigate, so reforms that strengthen detectability can, over time, increase strategic manipulation. The paper also highlights path dependence: expansions of automation are hard to unwind once institutional actors learn to exploit approval rules. For practitioners deploying AI in government, the work reframes alignment as a socio-technical design problem in which compliance mechanics interact with political turnover.
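The core dynamic can be illustrated with a toy sketch (not the paper's actual model; every function and parameter here is hypothetical): a compliance layer approves actions whose score clears a threshold, and a strategic actor probes it iteratively. When the rule is fully codified and deterministic, a simple binary search recovers the approval boundary almost exactly, which is the exploitability side of the tradeoff.

```python
import random

def approve(score, threshold=0.7, noise=0.0, rng=None):
    # Toy compliance layer: approve if the score clears the threshold.
    # `noise` > 0 models a less codified, harder-to-predict review.
    rng = rng or random.Random(0)
    return score + rng.gauss(0, noise) >= threshold

def probe_boundary(noise, trials=200, seed=1):
    # A strategic actor binary-searches for the lowest score that
    # still passes review, i.e. learns the approval boundary.
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0
    for _ in range(trials):
        mid = (lo + hi) / 2
        if approve(mid, noise=noise, rng=rng):
            hi = mid  # passed: the boundary is at or below mid
        else:
            lo = mid  # rejected: the boundary is above mid
    return hi  # estimated approval boundary

# With a fully codified (noise-free) rule, the probe pins down the
# threshold precisely; adding review noise makes the estimate unreliable.
print(probe_boundary(noise=0.0))  # close to the 0.7 threshold
print(probe_boundary(noise=0.1))  # noisy, much less informative
```

In this caricature, making the rule more legible (noise toward zero) is exactly what makes the boundary learnable, which parallels the paper's claim that usability and reviewability can feed strategic manipulation.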
