Goldman Collaborates with Anthropic to Address AI Cyber Risks
Goldman Sachs is actively working with Anthropic after the release of Anthropic's advanced model, Mythos, triggered alarm among officials and security experts. Goldman CEO David Solomon said the bank is doubling down on cyber protections and assessing new risks that powerful generative models create for financial institutions. The collaboration focuses on threat modeling, red-teaming, and deployment controls to limit misuse vectors such as automated phishing, vulnerability discovery, and high-fidelity social engineering. For practitioners, this is a signal that frontline financial firms will push vendors for stronger safety controls, monitoring, and operational safeguards before wider integration of frontier models.
What happened
Goldman Sachs is working directly with Anthropic following concerns about the capabilities of Anthropic's new model, Mythos. David Solomon, Goldman Sachs CEO, said the bank is doubling down on cyber protections as the model's advanced abilities raised alarm among officials and experts.
Technical details
The collaboration targets model-driven attack surfaces that matter to financial institutions. Key risk classes include automated social engineering, high-quality phishing content generation, code and exploit synthesis, and tools that accelerate vulnerability discovery. Practical defensive levers likely under evaluation include:
- •robust red-teaming and adversarial testing against Mythos-style capabilities
- •API access controls, rate limits, and fine-grained usage policies
- •content filtering, prompt engineering constraints, and response sanitization
- •model watermarking and forensic telemetry for provenance and output detection
- •deployment segregation such as private-hosted models or vetted on-prem instances
Context and significance
This is a material shift in how large financial institutions engage with frontier-model vendors. Banks have long been high-value targets for adversaries using automation. The arrival of models with improved reasoning and code synthesis increases the attack automation multiplier. Institutional responses will shape vendor product requirements: hardened APIs, contractual security SLAs, expanded red-team programs, and stronger detection tooling. For the AI vendor ecosystem, enterprise adoption will increasingly depend on demonstrable safety engineering, transparent risk assessments, and joint incident playbooks.
What to watch
Monitor whether collaborations produce standardized safety checklists or new contractual controls for enterprise model access, and whether regulators ask for formal risk assessments for models used in critical infrastructure.
Bottom line: The Goldman-Anthropic engagement signals that threat modeling and operational safeguards are now a procurement requirement for frontier models in finance. Security teams should prioritize adversarial testing, telemetry, and access governance when evaluating large generative models.
Scoring Rationale
This is notable for AI practitioners because it shows a major financial institution engaging directly with a frontier model developer to manage emergent cyber risks. It will influence vendor requirements and operational controls for enterprise model deployment.
Practice interview problems based on real data
1,500+ SQL & Python problems across 15 industry datasets — the exact type of data you work with.
Try 250 free problemsStep-by-step roadmaps from zero to job-ready — curated courses, salary data, and the exact learning order that gets you hired.



