AI Vendors Avoid Fixing Critical Security Flaws

AI vendors are increasingly treating serious vulnerabilities as "expected behavior" rather than root-cause defects, paying token bug bounties and updating documentation instead of assigning CVEs or issuing public security advisories. Recent disclosures show researchers exploited AI agents that integrate with GitHub Actions (including Claude Code Security Review, Gemini CLI Action, and GitHub Copilot) to steal API keys and tokens; vendors paid small rewards ($100, $1,337, and $500, for example) but did not publish CVEs. A separate disclosure around MCP stdio shows a protocol-level design flaw left unpatched, exposing as many as 200,000 servers and packages with more than 150 million combined downloads. The pattern signals vendor immaturity in vulnerability triage and remediation, and it creates systemic downstream risk for engineers, security teams, and software supply chains.
What happened
AI vendors are deflecting responsibility for security flaws, labeling exploitable behaviors as "working as intended" and choosing documentation changes or token bounties over systemic fixes. Researchers demonstrated hijacks against Claude Code Security Review, Gemini CLI Action, and GitHub Copilot that can exfiltrate API keys and tokens. Vendors paid small rewards ($100, $1,337, and $500) but assigned no CVEs and published no full advisories.
Technical details
The incidents split into two classes. First, agent integrations with CI/CD (GitHub Actions) allow malicious inputs or crafted workflows to trigger credential leaks; remediation requires changes to how agents handle action context, secrets, and runner permissions. Second, a protocol-level issue in MCP stdio was classified as "expected behavior," yet researchers argue a root fix would have reduced exposure for packages with more than 150 million combined downloads and for as many as 200,000 servers. Vendors cited design tradeoffs rather than issuing patches, leaving high- and critical-severity flaws at the open-source tooling layer with no upstream fix for integrators to consolidate around.
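To make the first class of leak concrete, here is a minimal sketch, not any vendor's implementation, of one commonly recommended mitigation: redacting credential-shaped strings from agent tool output before it is logged or fed back into model context. The pattern list is illustrative and incomplete; real scanners (and the token formats they match) are far more extensive.

```python
import re

# Hypothetical example patterns for common credential formats.
# Real-world detectors maintain much larger, regularly updated sets.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # classic GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic "sk-" style API key prefix
]

def redact_secrets(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace any substring matching a known credential pattern."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    leaked = "env dump: GITHUB_TOKEN=ghp_" + "a" * 36
    print(redact_secrets(leaked))
```

Filters like this reduce accidental exposure but are not a root-cause fix: they do nothing about over-broad runner permissions or an agent that can be prompted into exfiltrating data through other channels, which is why the disclosures above call for changes to how agents handle action context and secrets.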
Context and significance
This is not isolated tech drama; it reflects a broader maturity gap in how AI product teams handle security. AI systems operate as runtime components in developer pipelines and infrastructure, so deferring fixes amplifies supply-chain risk. The behavior also undercuts responsible disclosure norms: small bounties and documentation edits do not compensate for absent CVE triage, coordinated advisories, or mitigations that reduce attack surface for downstream integrators.
What to watch
Security teams should treat vendor statements of "expected behavior" as a red flag, perform independent threat modeling for agent integrations, and push vendors for CVE assignments and coordinated disclosures. Incident-driven fixes will remain partial unless the industry adopts stronger remediation standards and shared protocols for agent safety and secret handling.
Scoring Rationale
The pattern of vendors downplaying vulnerabilities and skipping CVE triage raises material security and supply-chain risk for practitioners, but it does not represent a paradigm shift on its own. The story is notable for its operational impact on engineering and security teams.