The AI Risk Manager training course equips participants with the essential knowledge and skills to identify, assess, mitigate, and manage AI-related risks. Drawing on leading frameworks and regulations such as the NIST AI Risk Management Framework and the EU AI Act, as well as insights from the MIT AI Risk Repository, this course provides a structured approach to AI risk governance, regulatory compliance, and ethical risk management.
Participants will also analyse real-world AI risk scenarios from the MIT AI Risk Repository, gaining practical insights into AI risk challenges and effective mitigation strategies.
COURSE OUTLINES
Course Agenda
- Day 1: Introduction to AI risk management
- Day 2: AI risk identification, assessment, and measurement
- Day 3: AI risk mitigation, governance, and incident response
- Day 4: AI risk monitoring and continual improvement
- Day 5: Certification exam
Examination
The “PECB Certified AI Risk Manager” exam meets all the requirements of the PECB Examination and Certification Program (ECP). It covers the following competency domains:
- Domain 1: Fundamental principles and concepts of AI risk management
- Domain 2: AI risk identification and assessment
- Domain 3: AI risk measurement
- Domain 4: AI risk mitigation, governance, and incident response
- Domain 5: AI risk monitoring and continual improvement strategies
COURSE DETAILS
Duration and Access
- Duration: Up to 6 months
- Starts: Upon Registration
- Ends: After Examination
- You will be enrolled on the PECB platform KATE, where you will have access to all relevant training materials.
- Certification fees are included in the price of the exam.
- Training material containing over 450 pages of information and practical examples will be provided.
- If you do not pass the exam, you may retake it once within 12 months at no additional cost, as the first retake is included in the original training fee. Subsequent retakes are subject to additional fees.
Who Should Attend?
- Professionals responsible for identifying, assessing, and managing AI-related risks within their organisations
- IT and security professionals seeking expertise in AI risk management
- Data scientists, data engineers, and AI developers working on AI system design, deployment, and maintenance
- Consultants advising organisations on AI risk management and mitigation strategies
- Legal and ethical advisors specialising in AI regulations, compliance, and societal impacts
- Managers and leaders overseeing AI implementation projects and ensuring responsible AI adoption
- Executives and decision-makers aiming to understand and address AI-related risks at a strategic level
Learning Objectives
Upon successfully completing the training course, participants will be able to:
- Understand AI risk management fundamentals, including key concepts, approaches, and techniques for identifying, assessing, and mitigating AI-related risks
- Apply established AI risk management frameworks, such as the NIST AI Risk Management Framework and the EU AI Act, to ensure governance, compliance, and ethical AI use
- Identify and assess AI risks, such as bias, security vulnerabilities, transparency issues, and ethical concerns
- Develop and implement risk mitigation strategies and incident response measures to address AI-related threats and vulnerabilities
- Integrate AI risk management into business strategy, ensuring AI initiatives align with organisational objectives while maintaining compliance with industry regulations
- Continually monitor, evaluate, and improve AI risk management processes to adapt to emerging risks and evolving AI technologies
- Advise key stakeholders on responsible AI adoption, providing guidance on ethical AI deployment, regulatory compliance, and best practices in AI risk governance