Anthropic releases Claude Opus 4.7 with safeguards
Anthropic launches `claude-opus-4-7`, a generally available model tuned for long-horizon agentic workflows, advanced software engineering, and higher-fidelity vision. Opus 4.7 raises the image input limit to 2576px / 3.75MP, supports 128k max output tokens, introduces a new xhigh effort level and beta task budgets, and preserves prior pricing at $5 per million input tokens and $25 per million output tokens. Importantly, Anthropic has integrated automated cybersecurity safeguards that detect and block prohibited or high-risk cyber requests, and it intentionally curtailed Opus 4.7's offensive cyber capabilities relative to the restricted Mythos preview. The release is positioned as a safer, broadly available workhorse for autonomous coding and multimodal agent tasks while Anthropic studies real-world safeguard effectiveness under Project Glasswing.
What happened
Anthropic released its new generally available model, `claude-opus-4-7`, positioned as the most capable GA Claude to date for long-horizon, agentic work and advanced software engineering. The model increases image resolution support to 2576px / 3.75MP, supports up to 128k output tokens, and is available across Claude products, the public API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Anthropic kept pricing at $5 per million input tokens and $25 per million output tokens and added automated safeguards that detect and block prohibited or high-risk cybersecurity uses. Anthropic also notes Opus 4.7 is intentionally "less broadly capable" on cyber-offense tasks than its restricted Mythos preview, as part of an incremental safety deployment strategy.
Technical details
`claude-opus-4-7` introduces several practitioner-facing changes that affect cost, latency, and harness design. The release adds a new xhigh effort level that trades token spend and latency for capability; Anthropic recommends xhigh for coding and agentic scenarios and at least high for intelligence-sensitive work. The model supports up to 128k output tokens and a higher-resolution image pipeline that maps coordinates to actual pixels, removing scale-factor math from image localization. Opus 4.7 also adds beta task budgets, which give the model a running token countdown so it can manage multi-step agent loops and terminate tasks gracefully. Anthropic updated the tokenizer for Opus 4.7, so expect higher token counts for some prompts and run cost tests before migrating. The model also improves low-level perception, image localization, long-run instruction fidelity, and file-system-based memory for multi-session workflows.
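As an illustration of how the new knobs might surface in a request, here is a minimal sketch that assembles a Messages-style payload. The `effort` field name and its value set are assumptions for illustration only; the release notes name the xhigh level but do not confirm the exact API parameter.

```python
# Hypothetical request builder for claude-opus-4-7. The `effort` field
# and its allowed values are assumed names, not a confirmed API surface;
# `model` and `max_tokens` follow the conventional Messages-style shape.

def build_request(prompt: str, effort: str = "xhigh", max_tokens: int = 128_000) -> dict:
    """Assemble a hypothetical request body with the new effort control."""
    allowed_efforts = {"low", "medium", "high", "xhigh"}
    if effort not in allowed_efforts:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": max_tokens,  # release supports up to 128k output tokens
        "effort": effort,          # assumed field name for the new effort level
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Refactor the billing module and add tests.")
```

Validating the effort value client-side, as above, gives a fast failure before spending tokens on a malformed request.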
Feature summary
- High-resolution image support (up to 2576px / 3.75MP) with pixel-accurate coordinates
- New xhigh effort level for tuning capability against cost and speed
- Beta task budgets to bound agentic loops and prioritize outputs under token constraints
- Improved long-horizon instruction following, verification behaviors, and file-system memory
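The task-budget idea above can be sketched as a plain loop: an agent tracks a shrinking token budget and stops gracefully before exhausting it. This is a conceptual stand-in only; per the release notes the real budgets are a beta feature of the model itself, and the step/reserve mechanics here are assumptions.

```python
# Minimal sketch of a token-budget countdown for an agent loop.
# Each `step` is a stand-in for one model call and reports the tokens
# it consumed; the loop stops early once only `reserve` tokens remain.

def run_with_budget(steps, budget_tokens: int, reserve: int = 500):
    """Run agent steps until done or until only `reserve` tokens remain."""
    transcript = []
    for step in steps:
        if budget_tokens <= reserve:
            transcript.append("[budget exhausted: summarizing and stopping]")
            break
        output, spent = step(budget_tokens)  # step reports tokens consumed
        budget_tokens -= spent
        transcript.append(output)
    return transcript, budget_tokens

# Toy steps, each consuming a fixed 400 tokens.
steps = [lambda b, i=i: (f"step-{i} done", 400) for i in range(5)]
log, remaining = run_with_budget(steps, budget_tokens=1500)
```

With a 1,500-token budget and a 500-token reserve, only three of the five steps run before the loop terminates gracefully, which is the behavior task budgets are meant to encourage.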
Safety and policy changes
Anthropic deployed automated cybersecurity safeguards on Opus 4.7 that detect and block requests indicating prohibited or high-risk cybersecurity uses. The company says it experimented with "differentially reducing" cyber capabilities during training so Opus 4.7 is less able to expose novel vulnerabilities than the restricted Mythos preview. This rollout is part of Project Glasswing, Anthropic's initiative to collaborate with security teams and partners to validate cyber-capability controls before any wider release of Mythos-class models.
Context and significance
Opus 4.7 is a tactical move in Anthropic's product ladder: provide a broadly available, high-utility model for real-world agentic workloads while keeping frontier offensive-security capabilities limited to a controlled Mythos cohort. The release responds to two market trends: increased demand for autonomous coding agents that can run long sessions reliably, and heightened scrutiny over models that can find or weaponize software vulnerabilities. By baking in safeguards and task-level controls, Anthropic is signaling a deployment-first, learn-in-production approach that prioritizes monitored scaling over immediate capability parity with restricted research models.
What to watch
Monitor the real-world effectiveness of the automated cyber safeguards, the token-cost delta from the updated tokenizer, and benchmark behavior versus GPT-5 and Gemini-class models on sustained agentic coding tasks. Also watch Project Glasswing outcomes and whether Anthropic relaxes Mythos restrictions as safeguards mature. Organizations should test Opus 4.7 in a staging environment to retune prompts, effort levels, and task budgets before migrating mission-critical agentic pipelines.
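One way to size the tokenizer cost delta before migrating is to compare token counts for a representative prompt corpus under the old and new tokenizers. The two counter functions below are crude character-ratio stand-ins, not real tokenizers; in practice you would use the provider's token-counting facilities for each model version.

```python
# Hedged sketch of a pre-migration cost check: aggregate token and dollar
# deltas across a prompt set under two tokenizers. The counters here are
# stand-ins (~4 vs. ~3.8 chars/token), labeled as assumptions.

def estimate_cost_delta(prompts, count_old, count_new,
                        price_per_mtok: float = 5.0) -> dict:
    """Report aggregate token and input-cost deltas across a prompt set."""
    old_total = sum(count_old(p) for p in prompts)
    new_total = sum(count_new(p) for p in prompts)
    return {
        "old_tokens": old_total,
        "new_tokens": new_total,
        "delta_pct": 100.0 * (new_total - old_total) / old_total,
        "input_cost_delta_usd": (new_total - old_total) / 1_000_000 * price_per_mtok,
    }

prompts = ["summarize the incident report", "draft a migration plan"] * 100
report = estimate_cost_delta(
    prompts,
    count_old=lambda p: max(1, len(p) // 4),      # stand-in old tokenizer
    count_new=lambda p: max(1, round(len(p) / 3.8)),  # stand-in new tokenizer
)
```

Run the same comparison over real production prompts to decide whether the tokenizer change materially shifts spend at the $5-per-million-input-token price.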
Scoring Rationale
This is a significant model release with practical tooling for long-horizon agentic use and integrated cybersecurity safeguards. It materially affects developer workflows and safety practices but is not a paradigm-shifting frontier-model launch.