AI-Related Conduct Risk Reshapes Corporate Risk Management Agenda

Risk intelligence strategies are shifting as firms confront faster, costlier and less predictable conduct risks driven by AI, data integrity failures and climate transition exposures. A RepRisk and Oxford Economics survey of more than 500 C-suite executives across banks, asset managers and asset owners finds limited trust in AI-only approaches and growing demand for explainability, defensibility and human oversight. Firms now treat conduct risk as a commercial threat to balance sheets and reputation, not just a compliance tick-box. Practical implications include reweighting investments toward explainable models, audit trails, hybrid human-AI workflows and clearer accountability for model-driven decisions.
What happened
A new analysis led by RepRisk with Oxford Economics, based on a survey of more than 500 C-suite executives across banks, asset managers and asset owners, finds that AI-related conduct risk is rising rapidly and reshaping enterprise risk priorities. The report concludes that AI-only risk solutions lack sufficient explainability and defensibility for material financial exposures, and that firms must combine machine scale with human judgement.
Technical details
The study highlights that traditional conduct risk categories remain but are now joined by fast-moving exposures that evade historical taxonomies. Executives pointed to three shifting risk vectors:
- AI-related conduct risk, where automated decision processes create opaque, hard-to-defend outcomes
- Data integrity failures, where poor data provenance undermines model outputs
- Climate and energy transition issues, which introduce cross-jurisdictional and model-driven exposures
The practical requirements emerging from the findings are explicit: accuracy, explainability and auditability become non-negotiable when the downside is financial. That implies investments in model documentation, provenance tracing, explainability toolchains, robust monitoring, and human-in-the-loop checkpoints that can pause or override automated actions.
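A human-in-the-loop checkpoint of the kind described above can be sketched in a few lines. The code below is illustrative, not drawn from the report: the `HumanInTheLoopGate` class, its `confidence_threshold` parameter, and the `review_fn` callback are all hypothetical names chosen for this example. The pattern is the point — automated actions below a confidence threshold are paused and routed to a human reviewer, and every decision is appended to an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List, Optional

@dataclass
class AuditRecord:
    """One immutable entry in the decision audit trail."""
    timestamp: str
    decision_id: str
    model_score: float
    action: str                      # e.g. "auto_approved", "approved_by_human"
    reviewer: Optional[str] = None   # None for fully automated decisions

@dataclass
class HumanInTheLoopGate:
    """Pause automated actions whose model confidence falls below a
    threshold and defer them to a human reviewer, logging every step."""
    confidence_threshold: float
    audit_log: List[AuditRecord] = field(default_factory=list)

    def decide(self, decision_id: str, model_score: float,
               review_fn: Callable[[str, float], bool]) -> bool:
        ts = datetime.now(timezone.utc).isoformat()
        if model_score >= self.confidence_threshold:
            # High-confidence path: act automatically, but still record it.
            self.audit_log.append(
                AuditRecord(ts, decision_id, model_score, "auto_approved"))
            return True
        # Low-confidence path: pause and escalate to a human reviewer.
        approved = review_fn(decision_id, model_score)
        action = "approved_by_human" if approved else "rejected_by_human"
        self.audit_log.append(
            AuditRecord(ts, decision_id, model_score, action, reviewer="human"))
        return approved

# Example usage: a gate that auto-approves only at >= 0.9 confidence.
gate = HumanInTheLoopGate(confidence_threshold=0.9)
gate.decide("txn-001", 0.97, review_fn=lambda i, s: False)  # auto path
gate.decide("txn-002", 0.55, review_fn=lambda i, s: True)   # escalated
```

In practice the audit log would be written to append-only storage and the reviewer identity captured explicitly; the structure here simply shows how override and traceability requirements translate into a checkpoint rather than a post-hoc report.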
Context and significance
This is a risk-management inflection point, not just another compliance memo. Firms that treated automation as a productivity project now face the reality that opaque automation can amplify losses and reputational harm. The shift parallels broader governance trends: regulators and stakeholders are demanding traceable decisioning, and legal defensibility increasingly depends on transparent model governance. For risk teams, this elevates skills needs toward interpretability methods, causal validation, scenario testing and incident forensics.
What to watch
Expect procurement and vendor assessments to prioritize explainability and SLAs for model governance, and for internal risk frameworks to formalize hybrid human-AI workflows and accountable owners. The open questions are how standards for explainability will be operationalized and how regulators will treat model-driven conduct failures.
Scoring Rationale
The report highlights a notable, practical shift in enterprise risk priorities that matters to practitioners but does not introduce a new technology or regulation. It signals a material change in governance and operational requirements, so it is notable for risk managers and ML engineers.