ISACA Launches Advanced AI Risk Certification AAIR
ISACA has launched the Advanced in AI Risk (AAIR) credential to equip experienced IT risk professionals with practical skills for governing, assessing, and managing enterprise AI risk. The credential targets three core practice areas: AI risk governance and framework integration, AI lifecycle risk management, and AI risk program management. Candidates must have prior IT risk experience plus one of 25 prerequisite certifications (examples include CISA, CISM, CRISC, CISSP). ISACA will support candidates with an online review course, a questions-and-answers database, and a review manual. The launch addresses widening gaps between rapid AI adoption and existing control frameworks and ties into ISACA guidance on structured AI risk capability and practical vulnerability frameworks for AI coding assistants.
What happened
ISACA announced the launch of the Advanced in AI Risk certification, AAIR, on 15 April 2026 to formalize practical expertise for IT risk professionals managing AI across the enterprise. The credential is aimed at practitioners who already hold established risk credentials and require applied capability to govern systems that evolve post-deployment. ISACA CEO Erik Prusch said, "The launch of AAIR gives risk professionals the practical skills they need now to understand, assess and manage AI risk." The program requires proven IT risk experience plus one of 25 prerequisite certifications.
Technical details
The AAIR exam and competency model center on three practice domains that reflect governance and operational control needs:
- AI risk governance and framework integration
- AI lifecycle risk management
- AI risk program management
Candidates are expected to translate technical uncertainty into board-level evidence, recommend responses to AI-specific vulnerabilities, and design lifecycle monitoring and third-party oversight. ISACA provides study materials including the AAIR Online Review Course, the AAIR Questions, Answers & Explanations Database, and the AAIR Review Manual. Prerequisite certifications cited as qualifying include CISA, CISM, CRISC, CGEIT, CDPSE, CRMP, CRMA, CGRC, CISSP, CERP, and CRCM among others.
Context and significance
The certification arrives as a practical response to a recurring problem: AI adoption outpacing organizational controls and governance. ISACA's parallel content explains why mere awareness is insufficient and argues for structured capability that can operate under pressure during go-live decisions, vendor model failures, or regulatory scrutiny. The organization has also published operational frameworks, such as a four-phase approach to assessing AI coding assistant vulnerabilities that delivered a 36% reduction in remediation time in deployment case studies. AAIR therefore bundles governance theory with applied, auditable practice, positioning holders to bridge technical, security, and board-level conversations.
For practitioners, this matters because it standardizes a set of expectations and artifacts that auditors, boards, and regulators will begin to recognize. The credential is not an entry-level certificate; it presumes experience and an existing certification footprint, making it a signal that a practitioner can operationalize AI-specific controls, monitor model drift, and manage third-party model risk in production.
What to watch
Adoption by large regulated sectors (financial services, healthcare, and government) will determine AAIR's influence. Expect organizations to reference AAIR competencies in vendor oversight, procurement requirements, and internal hiring for model risk and AI governance roles. Also watch for vendor training tie-ins and whether regulators cite AAIR as a recognized professional standard in enforcement or guidance documents.
Bottom line
AAIR codifies applied AI risk capability for experienced practitioners, aligning ISACA educational assets and practical frameworks to close a growing operational gap between AI deployment velocity and enterprise controls.
Scoring Rationale
This is a notable development for practitioners: a major professional body is standardizing applied AI risk capability. It is not a frontier research breakthrough, but it materially affects hiring, vendor oversight, and internal control expectations for AI deployments.