Five Eyes Agencies Publish Agentic AI Security Guidance

A coalition of Five Eyes cybersecurity agencies has published joint guidance urging cautious, incremental adoption of agentic AI. The guidance, co-authored by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the United Kingdom National Cyber Security Centre (NCSC), the Australian Signals Directorate/Australian Cyber Security Centre, the Canadian Centre for Cyber Security, and New Zealand's National Cyber Security Centre, warns that agentic systems expand attack surfaces, can behave unpredictably, and are already operating in critical infrastructure, according to reporting by CyberScoop and The Register. The document defines five broad risk categories: privilege, design and configuration, behavioral, structural, and supply-chain risks. It recommends folding agentic controls into existing cybersecurity frameworks built on principles such as least privilege and defense in depth.
What happened
A coalition of Five Eyes cybersecurity agencies released joint guidance on the safe deployment of agentic artificial intelligence systems, as reported by CyberScoop and The Register. The document was co-authored by CISA, the United Kingdom National Cyber Security Centre (NCSC), the Australian Signals Directorate/Australian Cyber Security Centre, the Canadian Centre for Cyber Security, and New Zealand's National Cyber Security Centre, according to CyberScoop. The guidance warns that "Agentic artificial intelligence (AI) systems increasingly operate across critical infrastructure and defense sectors and support mission-critical capabilities," and states, "Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly," per the guidance text quoted by The Register.
Technical details
The guidance describes agentic AI as software built on large language models that can plan, make decisions, and take actions autonomously by connecting to external tools, databases, memory stores, and automated workflows, a characterization reported by CyberScoop. It identifies five broad categories of risk: privilege, design and configuration, behavioral, structural, and supply-chain risks, according to CyberScoop. The document highlights attack-surface expansion from multi-component stacks and gives concrete examples, such as an agent with broad write permissions deleting firewall logs after receiving a crafted prompt, an example highlighted in The Register's coverage.
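To make the privilege risk concrete, below is a minimal sketch of a deny-by-default permission gate placed in front of agent tool calls, in the spirit of the least-privilege controls the guidance recommends. The names here (ToolCall, ALLOWED_ACTIONS, execute_tool) are hypothetical illustrations, not APIs from the guidance or any particular framework.

```python
# Minimal sketch: a deny-by-default, least-privilege gate around agent tool
# calls. All names are illustrative assumptions for this example.
from dataclasses import dataclass

@dataclass
class ToolCall:
    agent_id: str
    action: str        # e.g. "read_logs", "delete_logs"
    resource: str      # e.g. "firewall/logs"

# Explicit allow-list per agent: anything not listed is denied by default.
ALLOWED_ACTIONS = {
    "log-triage-agent": {("read_logs", "firewall/logs")},
}

def execute_tool(call: ToolCall) -> str:
    permitted = ALLOWED_ACTIONS.get(call.agent_id, set())
    if (call.action, call.resource) not in permitted:
        # Deny and surface the attempt for review rather than failing silently.
        return f"DENIED: {call.agent_id} may not {call.action} on {call.resource}"
    return f"EXECUTED: {call.action} on {call.resource}"

# A crafted prompt that convinces the agent to delete logs still hits the gate:
print(execute_tool(ToolCall("log-triage-agent", "delete_logs", "firewall/logs")))
# -> DENIED: log-triage-agent may not delete_logs on firewall/logs
```

The design point is that authorization lives outside the model: even if a crafted prompt convinces the agent to attempt a destructive action, the gate denies the call and records the attempt.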
Editorial analysis - technical context: Organizations deploying autonomous agents typically integrate LLMs, tool connectors, orchestration layers, and persistent state. That architecture amplifies traditional security failure modes: privileged credentials become higher-value targets, failures chain more easily across services, and emergent agent behaviors can produce unexpected side effects. Observed patterns in comparable deployments suggest that instrumentation, monitoring, and fine-grained access control are the hardest controls to retrofit once agents are live.
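As one illustration of instrumentation that is easier to build in than to retrofit, here is a minimal sketch of structured audit logging wrapped around agent tool functions. It assumes nothing beyond the Python standard library; the decorator and tool names are invented for the example.

```python
# Minimal sketch: structured audit logging around every agent tool call,
# designed in from the start rather than retrofitted. Names are illustrative.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent.audit")

def audited(tool_name: str):
    """Decorator that emits one structured record per tool invocation."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            record = {"ts": time.time(), "tool": tool_name,
                      "args": repr(args), "kwargs": repr(kwargs)}
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                # Emit the record whether the call succeeded or failed.
                audit_log.info(json.dumps(record))
        return inner
    return wrap

@audited("search_tickets")
def search_tickets(query: str) -> list[str]:
    return [f"ticket matching {query!r}"]

search_tickets("vpn outage")  # emits one JSON audit record per call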
Context and significance
Industry context
This guidance represents an intergovernmental attempt to place agentic AI within established cybersecurity practice rather than create a separate regulatory regime. By recommending that operators apply principles such as least privilege and defense in depth, the document aligns agentic risk management with mainstream operational security controls, a framing noted in CyberScoop and OpenGovAsia reporting. For practitioners, the immediate implication is that security teams and architects will need to treat agentic components as high-risk assets during threat modeling and change management.
What to watch
- Whether sector-specific regulators incorporate the guidance into compliance and procurement rules, especially in critical infrastructure sectors.
- Adoption of vendor-built technical controls for agent governance, including credential vaulting, attested execution environments, and policy enforcement hooks.
- Development of formal evaluation methods and standards; the guidance explicitly calls for evaluation practices to mature before broad rollout, per The Register.
For practitioners: Monitor agent telemetry for behavioral drift, treat agent interfaces as privileged endpoints in IAM systems, and require staged rollouts that limit access and downstream dependencies. Industry observers and vendors will likely iterate on tooling for observability and access containment as organizations balance automation benefits against systemic risk.
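As one illustration of monitoring agent telemetry for behavioral drift, the sketch below compares an agent's recent tool-call mix against a recorded baseline and flags newly used tools or usage spikes. The thresholds and tool names are illustrative assumptions, not values prescribed by the guidance.

```python
# Minimal sketch: flagging behavioral drift by comparing an agent's recent
# tool-call distribution against a baseline captured during staged rollout.
from collections import Counter

def drift_alerts(baseline: Counter, recent: Counter,
                 ratio_limit: float = 3.0) -> list[str]:
    alerts = []
    base_total = sum(baseline.values()) or 1
    recent_total = sum(recent.values()) or 1
    for tool, count in recent.items():
        base_share = baseline.get(tool, 0) / base_total
        recent_share = count / recent_total
        if baseline.get(tool, 0) == 0:
            # The agent is calling a tool it never used during baselining.
            alerts.append(f"new tool in use: {tool}")
        elif recent_share / base_share > ratio_limit:
            alerts.append(f"usage spike: {tool} "
                          f"({recent_share:.0%} vs {base_share:.0%})")
    return alerts

baseline = Counter({"read_logs": 90, "summarize": 10})
recent = Counter({"read_logs": 40, "summarize": 5, "delete_logs": 3})
print(drift_alerts(baseline, recent))
# -> ['new tool in use: delete_logs']
```

In practice such checks would feed an alerting pipeline; the underlying point is that drift detection requires a behavioral baseline, which is one reason staged rollouts matter.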
Limitations in reporting
The guidance is advisory rather than prescriptive: it does not mandate regulatory action or establish a binding international standard. Several outlets (CyberScoop, The Register, OpenGovAsia) report its examples and recommended controls, but the document itself contains no government-mandated compliance schedule.
Bottom line
The Five Eyes guidance frames agentic AI primarily as a cybersecurity challenge and urges cautious, incremental adoption while existing security controls, evaluation methods, and standards mature, according to CyberScoop and The Register. Practitioners should treat agentic components as high-impact elements during threat modeling and prioritize containment, monitoring, and least-privilege access controls as immediate mitigations.
Scoring Rationale
This joint guidance from Five Eyes security agencies elevates agentic AI to a core cybersecurity concern for critical infrastructure and defense sectors. It matters to practitioners because it frames controls, threat-modeling priorities, and procurement expectations, likely accelerating adoption of containment and access-control tooling.
