Agentic AI Enables Memory Attacks Across Sessions and Users

Agentic AI systems are exposing a new attack surface: persistent memory objects that can be poisoned and propagate across sessions, users, and subagents. Cisco researcher Idan Habler disclosed the MemoryTrap technique that compromised Claude Code memory, demonstrating how a single malicious memory object can be injected, persist in a shared memory store, and be reloaded by unrelated sessions and subagents. Most organizations lack governance, observability, and isolation controls for agent memory, leaving deployments vulnerable. Defenders must treat agent memory like secrets and identities, enforce access controls, implement immutability and provenance checks, and instrument memory stores for telemetry and rollback. The risk affects any deployment that uses shared or long-lived memory for agentic reasoning and tool use, and it elevates supply-chain and insider threat vectors for AI-driven automation.
What happened
Idan Habler, AI Security Researcher at Cisco, detailed a practical attack surface named MemoryTrap that compromises agentic systems by poisoning persistent memory objects. The disclosed exploit targeted Claude Code, showing a single crafted memory object can survive and propagate across sessions, users, and subagents, turning memory into a lateral attack channel and persistent implant.
Technical details
MemoryTrap leverages weaknesses in how agentic architectures persist and share memory. Key mechanics demonstrated include:
- •malicious memory object creation that encodes instructions or corrupts state
- •reuse of the memory object by different sessions or subagents due to shared stores
- •lack of provenance and access controls allowing untrusted writes to memory
Practitioners should note the following technical implications
first, memory stores used for planning, context, or tool state become high-value attack surfaces; second, typical data protection controls do not cover model memory. Defenses must include access control lists, immutability or versioning, provenance metadata, and runtime telemetry to detect unusual memory reads/writes. Where applicable, treat memory stores with the same guards as secrets and identities and instrument agent orchestration frameworks to validate memory before execution.
Context and significance
Agentic systems are widening the attack surface because they chain models, tools, and memory to act semi-autonomously. This class of risk is distinct from prompt injection because it persists beyond a single session and can escalate across tenants and subagents. The MemoryTrap disclosure highlights the intersection of classic security concerns, like supply-chain and insider risk, with emergent AI behaviors. Organizations adopting agentic automation without explicit governance for memory are likely to experience stealthy persistence and cross-user contamination that standard SIEMs will miss.
What to watch
Short term, expect vendors to add memory governance features, immutability options, and memory-scanning tools. Longer term, standardization of memory provenance and access control APIs for agent platforms will be necessary to make agentic deployments enterprise-ready. Security teams should prioritize discovery of persistent memory stores, threat modeling for agentic workflows, and integrating memory telemetry into incident response pipelines.
Scoring Rationale
The disclosure reveals a practical, cross-session persistence vector that elevates agentic systems' risk profile. It is a high-priority operational security issue for organizations deploying agentic automation. Freshness reduces the score slightly.
Practice with real Telecom & ISP data
90 SQL & Python problems · 15 industry datasets
250 free problems · No credit card
See all Telecom & ISP problemsStep-by-step roadmaps from zero to job-ready — curated courses, salary data, and the exact learning order that gets you hired.



