Five Eyes Warn on Agentic AI Risks
A coalition of Five Eyes cybersecurity agencies has published joint guidance warning about security risks from agentic artificial intelligence systems, according to reporting by The Register and Let's Data Science. The guide was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the United Kingdom's National Cyber Security Centre (NCSC), and counterpart agencies in Australia, Canada, and New Zealand. The guidance says agentic systems expand the attack surface, can behave unpredictably, and already operate across critical infrastructure and defense sectors, per The Register. It lists five risk categories and recommends folding agentic controls into established practices such as least privilege and defense in depth, according to CyberScoop as cited by Let's Data Science.
What happened
The Five Eyes coalition of cybersecurity agencies published joint guidance addressing risks from agentic artificial intelligence systems, according to reporting by Let's Data Science and The Register. The document was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the United Kingdom National Cyber Security Centre (NCSC), the Australian Signals Directorate's Australian Cyber Security Centre, the Canadian Centre for Cyber Security, and New Zealand's National Cyber Security Centre, per Let's Data Science. The guidance opens with the observation that "Agentic artificial intelligence (AI) systems increasingly operate across critical infrastructure and defense sectors and support mission-critical capabilities," a line quoted by The Register. It also states, "Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly," according to The Register.
Technical details
Per reporting attributed to CyberScoop and repeated in Let's Data Science coverage, the guidance characterizes agentic AI as systems built on large language models that can plan, make decisions, and act autonomously by connecting to external tools, databases, memory stores, and automated workflows. The document enumerates five broad risk categories, described in CyberScoop reporting as privilege, design and configuration, behavioral, structural, and supply-chain risks. It illustrates these risks with deployment examples, including a scenario, quoted in The Register, in which an agent with broad write permissions both applies patches and deletes firewall logs, showing how overly broad permissions can enable harmful actions, as sketched below.
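To make the privilege risk concrete, here is a minimal Python sketch, not taken from the guidance itself: it scopes each agent to an explicit tool allowlist so a patching agent cannot also delete firewall logs. All names (ToolCall, PERMISSIONS, dispatch) are hypothetical illustrations of least-privilege tool dispatch.

```python
# Minimal sketch (not from the guidance): per-agent tool allowlists
# instead of one broad write permission. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class ToolCall:
    agent_id: str
    tool: str        # e.g. "apply_patch", "delete_logs"
    target: str      # resource the tool acts on

# Each agent gets only the tools its task requires.
PERMISSIONS: dict[str, set[str]] = {
    "patch-agent": {"apply_patch"},   # can patch, nothing else
    "log-admin":   {"rotate_logs"},   # log maintenance only
}

def dispatch(call: ToolCall) -> None:
    """Refuse any tool call outside the agent's explicit allowlist."""
    allowed = PERMISSIONS.get(call.agent_id, set())
    if call.tool not in allowed:
        raise PermissionError(
            f"{call.agent_id} is not permitted to run {call.tool}"
        )
    print(f"{call.agent_id} -> {call.tool} on {call.target}")

dispatch(ToolCall("patch-agent", "apply_patch", "web-01"))  # allowed
try:
    dispatch(ToolCall("patch-agent", "delete_logs", "fw-01"))
except PermissionError as err:
    print(f"blocked: {err}")  # the risky action never reaches the tool
```

Under this pattern, the reported scenario of one agent both patching systems and deleting logs would fail at the dispatch layer rather than relying on the model to behave.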
Industry context
Editorial analysis: The agencies' decision to frame agentic AI as a distinct operational class highlights practical security gaps that standard vulnerability management and incident-response workflows were not designed to address. Industry reporting places the guidance in a broader pattern: autonomous orchestration layers combine multiple services and credentials, widening attack surfaces and complicating attribution. Organizations running distributed automation, CI/CD, or infrastructure-as-code will find these patterns familiar, because the same privilege and dependency interactions drive cascading failures in non-AI automation.
Controls and recommended practices
Per CyberScoop reporting cited by Let's Data Science, the guidance recommends integrating agentic-specific controls into established security architectures such as least-privilege access models and defense-in-depth. It emphasizes evaluating component trust boundaries, reducing unnecessary connectivity between agents and critical systems, and applying rigorous configuration and supply-chain scrutiny, according to coverage in Let's Data Science and The Register.
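One way to read "defense-in-depth" for agents is as a series of independent authorization layers, so no single control failure grants access. The following Python sketch is an assumption about how such layering could look, not a control prescribed by the guidance; every function and policy name is hypothetical.

```python
# Hypothetical sketch: three independent layers around an agent action —
# trust boundary, least privilege, and human approval for high impact.
HIGH_IMPACT = {"delete_logs", "modify_firewall", "transfer_funds"}
EGRESS_ALLOWLIST = {"internal-patch-repo", "cve-feed"}

def within_trust_boundary(target: str) -> bool:
    # Layer 1: the agent may only reach pre-approved endpoints.
    return target in EGRESS_ALLOWLIST

def least_privilege_ok(agent_tools: set[str], tool: str) -> bool:
    # Layer 2: per-agent tool allowlist (least privilege).
    return tool in agent_tools

def needs_human_approval(tool: str) -> bool:
    # Layer 3: high-impact actions require out-of-band sign-off.
    return tool in HIGH_IMPACT

def authorize(agent_tools: set[str], tool: str, target: str) -> str:
    if not within_trust_boundary(target):
        return "deny: target outside trust boundary"
    if not least_privilege_ok(agent_tools, tool):
        return "deny: tool not in agent allowlist"
    if needs_human_approval(tool):
        return "hold: escalate for human approval"
    return "allow"

print(authorize({"apply_patch"}, "apply_patch", "internal-patch-repo"))
print(authorize({"apply_patch"}, "delete_logs", "internal-patch-repo"))
```

The design choice here mirrors the guidance's theme: reducing connectivity (layer 1) and privilege (layer 2) limits blast radius even when an agent behaves unexpectedly.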
What to watch
For practitioners: three observable indicators will matter to defenders and auditors. First, inventory where autonomous agents hold write or privileged access to infrastructure, identity stores, or financial systems. Second, track emergent cross-service dependencies in which one agent's outputs feed other automated workflows, creating potential for cascading failures. Third, watch for supply-chain exposure where third-party tools, connectors, or datasets grant agents unintended capabilities. Observers should also follow whether national regulators or sectoral bodies reference the Five Eyes guidance in compliance or audit frameworks, since guidance from major cyber agencies often informs standards. A sketch of how such an inventory audit could look follows.
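As a starting point for the three indicators above, here is a minimal Python sketch over a hypothetical agent inventory; the data model (AGENTS, SENSITIVE) and field names are invented for illustration and do not come from the guidance or any particular tool.

```python
# Hypothetical agent inventory audit: flag privileged writes,
# cross-agent dependency edges, and third-party connectors.
AGENTS = {
    "ticket-bot": {
        "writes_to": ["jira"], "feeds": ["deploy-agent"], "third_party": [],
    },
    "deploy-agent": {
        "writes_to": ["prod-k8s", "iam"], "feeds": [],
        "third_party": ["vendor-connector"],
    },
}
SENSITIVE = {"prod-k8s", "iam", "billing"}  # systems needing extra scrutiny

for name, meta in AGENTS.items():
    risky = SENSITIVE.intersection(meta["writes_to"])
    if risky:  # indicator 1: privileged write access
        print(f"{name}: privileged write access to {sorted(risky)}")
    for downstream in meta["feeds"]:  # indicator 2: cascading dependencies
        print(f"{name}: output consumed by {downstream} (cascading risk)")
    for dep in meta["third_party"]:  # indicator 3: supply-chain exposure
        print(f"{name}: third-party connector {dep} (supply-chain exposure)")
```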
Limitations and reporting notes
This summary is based on the guidance text quoted by The Register and the risk taxonomy reported by CyberScoop, as aggregated in Let's Data Science coverage. The agencies did not release new technical standards or prescriptive testing protocols in the publicly reported guidance; coverage highlights conceptual risk categories and deployment examples. No single, consolidated technical standards document from the Five Eyes agencies appears in the sources cited here.
Scoring Rationale
Joint guidance from Five Eyes agencies elevates agentic AI from a niche research topic to a practical security concern for critical infrastructure and enterprise automation, making it relevant to security engineers and platform teams. The guidance is high-profile but not a technical standards release, so impact is notable but not paradigm-shifting.