Microsoft Entra Blocks Prompt Injection at Network Layer

Microsoft Entra's network-level Prompt Injection Protection, delivered via the AI Gateway and Global Secure Access, inspects and blocks malicious prompts in real time before they reach LLMs. The capability enforces consistent guardrails across devices, browsers, and applications without code changes by routing traffic through the Global Secure Access client and performing TLS inspection and prompt scanning. It is positioned inside Microsoft's Security Service Edge and ties into Defender, Purview, and the emerging Agent 365 control plane, enabling centralized visibility and policy enforcement for agentic AI. Deployment requires Microsoft Entra licensing, device enrollment (Entra-joined or hybrid-joined), and correct traffic routing through the AI Gateway. For enterprises, this shifts prompt-injection defense from per-app controls to a single network chokepoint, reducing engineering overhead while introducing operational dependencies like TLS inspection and traffic routing.
What happened
Microsoft introduced network-level prompt injection defenses as part of the AI Gateway / Global Secure Access stack, exposing a Prompt Injection Protection capability that inspects user prompts in transit and blocks malicious or adversarial inputs before they reach models. The feature is integrated with the Microsoft Entra suite and the larger Security Service Edge architecture, and is rolling out in preview, with general-availability signals from recent conferences and product announcements.
Technical details
The protection works by routing enterprise internet traffic through the Global Secure Access client and the AI Gateway, where inline analysis inspects prompts and related content. Deployments require Entra ID tenant configuration, device enrollment (Entra-joined or hybrid-joined Windows VMs and endpoints), and enabling an Internet Access traffic forwarding profile plus TLS inspection to decrypt and scan encrypted flows. Administrators create prompt policies to define what is blocked, logged, or quarantined.
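Conceptually, a prompt policy of this kind maps each inbound prompt to an action (block, log, or quarantine) as it transits the gateway. The sketch below is illustrative only: the patterns, action names, and `evaluate_prompt` function are hypothetical and do not represent Microsoft's actual detection logic or any Entra API.

```python
import re

# Hypothetical policy rules: (pattern, action) pairs a gateway might apply
# inline. Real detection is far more sophisticated than keyword matching.
POLICY_RULES = [
    (re.compile(r"ignore (all )?(previous|prior) instructions", re.I), "block"),
    (re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I), "block"),
    (re.compile(r"exfiltrate", re.I), "quarantine"),
]

def evaluate_prompt(prompt: str) -> str:
    """Return the first matching policy action, or 'allow' if none match."""
    for pattern, action in POLICY_RULES:
        if pattern.search(prompt):
            return action
    return "allow"
```

The key point the example captures is that enforcement happens once, in transit, rather than inside each AI application.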
Key operational and technical aspects:
- Centralized, network-layer inspection prevents adversarial prompts and jailbreak attempts across applications without any code changes to individual AI apps.
- Real-time blocking and logging, enabling security teams to quarantine risky requests and generate telemetry for investigation.
- Integration points with existing Microsoft security tooling, including Microsoft Defender and Purview, and with the forthcoming Agent 365 control plane for agent governance.
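To make the telemetry point concrete, the sketch below shows the kind of event a gateway might emit when it blocks or quarantines a prompt. All field names are illustrative assumptions; they do not match any actual Defender or Purview schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def telemetry_record(prompt: str, action: str, policy_id: str) -> str:
    """Hypothetical sketch of a blocked-prompt event for SIEM ingestion.
    Only metadata is logged here, not the raw prompt content."""
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,  # e.g. "block", "log", "quarantine"
        "policyId": policy_id,
        # A hash prefix lets analysts correlate repeats without storing text.
        "promptSha256Prefix": hashlib.sha256(prompt.encode()).hexdigest()[:16],
        "promptLength": len(prompt),
    }
    return json.dumps(event)
```

Emitting structured records like this is what makes centralized investigation workflows possible once enforcement moves to the network layer.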
Context and significance
Prompt injection is a top operational risk for generative AI because malicious inputs can cause models to ignore safety constraints, exfiltrate secrets, or take unauthorized actions. Shifting enforcement to a network chokepoint changes the tradeoffs: instead of instrumenting each AI client and model, organizations can apply uniform policies across the estate. That reduces developer friction and accelerates consistent rollout of guardrails, which is critical for enterprises scaling agentic AI.
However, this design also creates new operational dependencies. TLS inspection requires certificate management and may raise privacy and compliance considerations for some environments. Enforcing routing through the AI Gateway requires endpoint clients to be managed and up to date, and the protection's efficacy depends on policy tuning and the gateway's ability to correctly parse diverse prompt formats and nested content (including indirect prompt injection embedded in web content).
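Indirect prompt injection is worth a concrete illustration: adversarial instructions can be hidden in web content that a user or agent never sees, for example inside an element styled `display:none`. The sketch below, using only the standard-library HTML parser, extracts such hidden text; it is a simplified assumption about one hiding technique, not how the AI Gateway actually parses nested content.

```python
from html.parser import HTMLParser

class HiddenTextExtractor(HTMLParser):
    """Illustrative: collect text inside elements hidden with display:none,
    a common place for indirect prompt injections to lurk in web pages."""
    def __init__(self):
        super().__init__()
        self._hidden_depth = 0
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "")
        # Track nesting so everything inside a hidden element counts as hidden.
        if self._hidden_depth or "display:none" in style:
            self._hidden_depth += 1

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        if self._hidden_depth and data.strip():
            self.hidden_text.append(data.strip())
```

A gateway would then run policy checks over both the visible and hidden text, which is why parsing fidelity directly bounds the protection's efficacy.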
What to watch
Evaluate the feature in a staged pilot: verify TLS inspection readiness, measure false positives on legitimate prompts, validate telemetry and incident workflows, and test integration with Defender/Purview and your agent governance process. Expect Microsoft to extend policy controls, richer telemetry, and tighter integration with Agent 365 over the next quarters.
Scoring Rationale
This is a notable enterprise security advance: centralized, network-level prompt protection materially reduces engineering work and increases consistent enforcement across AI apps. It is not a research breakthrough, but it meaningfully changes operational security posture for organizations deploying generative or agentic AI.