Google identifies AI-developed zero-day bypassing 2FA

According to Google's Threat Intelligence Group (GTIG), researchers have identified what they believe is the first real-world instance of an AI-assisted zero-day: a Python exploit that bypassed two-factor authentication in a "popular open-source, web-based system administration tool" (report shared with The Register). GTIG said the exploit bore machine-made hallmarks, including polished, educational-style docstrings and a fabricated CVSS score, and that Google worked with the unnamed vendor to quietly patch the flaw before a planned mass-exploitation campaign could proceed (The Register). Tom's Hardware adds that GTIG also found self-morphing malware, AI-generated obfuscation, and Android backdoors such as PROMPTSPY that leverage cloud LLM services including Google Gemini.
What happened
In a report shared with The Register, Google's Threat Intelligence Group (GTIG) described what it believes is the first real-world case of an AI-assisted zero-day. The exploit was a Python script that bypassed two-factor authentication in a "popular open-source, web-based system administration tool," and GTIG said the code bore hallmarks of machine generation, including educational-style docstrings and a hallucinated CVSS score. Google worked with the unnamed vendor to patch the issue before the planned mass-exploitation campaign could gain traction (The Register). Tom's Hardware reports that the GTIG dossier also documents malware that self-modifies, generates decoy code, and uses multilayer obfuscation.
Technical details
Editorial analysis - technical context: GTIG characterizes frontier LLMs as particularly capable of reasoning about high-level program logic and developer intent, which lets them find semantic flaws that traditional fuzzers and static analysis miss, per the report shared with The Register. Tom's Hardware highlights examples of attacker tooling that uses AI to dynamically alter payloads, add filler or indirection that hinders signature-based detection, and generate exploit scaffolding. The Tom's Hardware piece also notes an Android backdoor family called PROMPTSPY that, according to the report, leverages Google Gemini cloud services for parts of its workflow.
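The failure mode described for signature-based detection can be illustrated with a toy sketch (a benign stand-in, not the reported malware): a byte-exact signature such as a file hash breaks as soon as semantically inert filler is inserted, which is exactly the kind of indirection the report says AI tooling adds automatically.

```python
import hashlib

def signature(payload: bytes) -> str:
    """A naive signature: the SHA-256 hash of the exact byte sequence."""
    return hashlib.sha256(payload).hexdigest()

# A benign stand-in for attacker code.
base = b"print('hello')\n"

# Identical behavior, but with semantically inert filler prepended --
# enough to defeat any byte-exact signature match.
mutated = b"_unused = 0  # filler line\n" + base

# The two payloads do the same thing, yet their signatures differ.
assert signature(base) != signature(mutated)
print("base:   ", signature(base)[:16])
print("mutated:", signature(mutated)[:16])
```

This is why the report's emphasis shifts toward behavioral and intent-aware detection rather than static signatures alone.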
Context and significance
GTIG frames this incident as evidence that AI-assisted vulnerability discovery and exploit synthesis have moved beyond proof-of-concept. For defenders, that shift expands the risk surface in two ways, per GTIG: attackers can find high-level logic bugs faster, and they can produce more evasive, rapidly changing payloads. John Hultquist, chief analyst at GTIG, is quoted saying, "There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun" (The Register).
What to watch
For practitioners: monitor vendor advisories and coordinated disclosure notices for the affected administration tool, track telemetry for rapidly morphing payloads and atypical docstring or metadata patterns in exploit artifacts, and follow tooling efforts that complement static analysis with behavioral and intent-aware techniques aimed at semantic flaws.
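As a rough illustration of what flagging "atypical docstring or metadata patterns" might look like, here is a minimal, hypothetical heuristic scanner. The specific thresholds, regex, and label names are assumptions for illustration, not anything GTIG published; they key off the two hallmarks the report mentions, tutorial-style docstrings and embedded CVSS scores.

```python
import re

# Hypothetical pattern for a CVSS reference with a numeric score nearby,
# e.g. "CVSS v3.1 score 9.8". Purely illustrative, not from the report.
CVSS_RE = re.compile(r"CVSS[:\s]*v?\d(\.\d)?.{0,20}\b\d{1,2}\.\d\b", re.IGNORECASE)

def flag_artifact(source: str) -> list[str]:
    """Return labels for machine-generation hallmarks found in script text."""
    flags = []
    docstrings = re.findall(r'"""(.*?)"""', source, re.DOTALL)
    # Long, tutorial-style docstrings are unusual in hand-written exploit code.
    # The 30-word threshold is an arbitrary assumption for this sketch.
    if any(len(d.split()) > 30 for d in docstrings):
        flags.append("verbose-docstring")
    if CVSS_RE.search(source):
        flags.append("embedded-cvss-score")
    return flags

sample = '''
"""
This module demonstrates a step-by-step walkthrough of the authentication
flow, explaining each request, header, and token exchange in detail so that
readers can follow along and understand exactly how the session is established
and why each stage matters for the overall process being described here.
"""
# Severity: CVSS v3.1 score 9.8
'''

print(flag_artifact(sample))
```

A heuristic like this would only be one weak signal among many; polished docstrings also appear in legitimate code, so any real pipeline would combine such flags with behavioral telemetry.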
Scoring Rationale
This is a notable inflection point: GTIG attributes a real-world zero-day to AI-assisted discovery and exploit development, which materially raises attacker capability. The story matters for practitioners building detection and patching workflows, though the immediate incident was reportedly mitigated before wide abuse.

