Google Deploys New AI Security Agents to Hunt Threats

Google Cloud announced three preview AI security agents and complementary controls at Cloud Next 2026, pushing an "AI-led defense" model for enterprise security. The agents expand threat hunting and routine security work by leveraging Google Threat Intelligence, Mandiant best practices, and the company's own model and chip stack. Google positions an agentic fleet model where AI handles routine security tasks at machine speed while humans remain in oversight. New governance and safety services aim to reduce the operational risk of agentic automation, reflecting industry tension between defensive automation and new AI-driven attack surfaces.
What happened
Google Cloud, led by Francis deSouza, announced three new AI security agents in preview and a set of supporting governance services at Cloud Next 2026. Google framed the shift as an "AI-led defense" driven by an agentic fleet that performs large-scale threat hunting and other routine cyber security work while humans oversee decisions. The company highlighted its vertical integration across chips, models, and cloud tooling as a differentiator for deploying these agents at scale.
Technical details
The new agents build on last year's security agents and the earlier Wiz integrations. The announced lineup includes three new agents:
- Threat Hunting agent, which performs continuous, large-scale hunting for stealthy behaviors using Google Threat Intelligence and Mandiant best practices.
- Two other agents were introduced in preview; Google did not detail their specific names or functions in this report.
Google emphasized integrating model outputs with existing telemetry and Mandiant playbooks rather than replacing them. The company also announced governance and safety-focused services to secure the fleet, without detailed public descriptions of the specific controls. Google repeatedly referenced its internal model pipeline and chip design as reasons it can adapt to new model capabilities quickly.
Context and significance
Enterprises and MSSPs already struggle with alert fatigue and staffing shortages, so automating hunting and routine response is a logical next step. Google is competing with other cloud and security vendors that are embedding large language models and agents into security operations. The key differentiator is Google's claim of tighter model-to-product feedback and access to its global telemetry and Mandiant threat intelligence. However, agentic automation introduces new risks: runaway automation, adversarial attempts to manipulate agents, and expanded attack surfaces if agents are compromised. Google's simultaneous release of governance tooling acknowledges these risks but leaves open how robust those controls are in adversarial scenarios.
What to watch
Evaluate these agents in a controlled environment before broad deployment, focusing on agent decision boundaries, auditing, and escalation paths. Watch for technical detail releases about model architectures, fine-tuning, and the specific remediation APIs that agents can call, plus third-party red-teaming results and early customer case studies.
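As a concrete illustration of the evaluation criteria above, the sketch below shows one way to model an agent decision boundary with auditing and a human escalation path. Everything here is hypothetical: the names (`AgentAction`, `dispatch`, `AUTO_APPROVE_THRESHOLD`) and the risk-scoring scheme are assumptions for illustration, not part of any announced Google API.

```python
from dataclasses import dataclass

# Hypothetical sketch: AgentAction, dispatch, and AUTO_APPROVE_THRESHOLD are
# illustrative names, not part of any announced Google Cloud API.

@dataclass
class AgentAction:
    name: str
    risk: float  # assumed score: 0.0 (benign) .. 1.0 (destructive)

AUTO_APPROVE_THRESHOLD = 0.3  # an assumed decision boundary, set per deployment

audit_log: list[str] = []

def dispatch(action: AgentAction) -> str:
    """Auto-run low-risk actions; escalate anything above the boundary."""
    audit_log.append(action.name)          # every action is audited, both paths
    if action.risk <= AUTO_APPROVE_THRESHOLD:
        return "executed"                  # machine-speed path
    return "escalated_to_human"            # human-in-the-loop path

print(dispatch(AgentAction("enrich_alert", 0.1)))   # low risk: executed
print(dispatch(AgentAction("isolate_host", 0.8)))   # high risk: escalated
```

The point of a structure like this in evaluation is that the boundary, the audit trail, and the escalation path are all explicit and testable before the agent is trusted with broad automation.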
Scoring Rationale
Product announcements materially affect enterprise security operations and SOC automation strategy, but they are incremental in the broader AI frontier. The simultaneous focus on governance raises the story above a routine product update because it acknowledges and begins to address operational and adversarial risks.