Building Trust and Transparency in Intelligent Systems
As artificial intelligence becomes increasingly integrated into decision-making processes across industries, the need for transparency and accountability in these systems has never been more critical. This course explores the principles and practices of Explainable AI (XAI)—a vital field that seeks to make AI decisions understandable and interpretable by humans.
Participants will gain insight into the challenges posed by so-called “black-box” models, where complex algorithms produce outcomes that are difficult to trace or justify. Through practical examples and real-world case studies, learners will examine the risks of opacity, including potential impacts on fairness, regulatory compliance, and public trust.
By the end of the course, participants will be equipped with the knowledge to assess the value of explainability in AI, communicate model behaviour to stakeholders, and support responsible AI adoption within their organisations.
This course is ideal for professionals working in data science, compliance, risk management, AI development, or any role where ethical and transparent AI deployment is essential.
COURSE OUTLINE
Learning Outcomes
By the end of this course, participants will be able to:
- Define explainable AI and differentiate it from non-transparent ("black-box") models.
- Understand the legal, operational, and reputational risks posed by non-explainable AI systems.
- Evaluate various explainability techniques and their appropriateness for different AI models.
- Assess how explainability supports due diligence, auditability, and compliance in AI deployments.
- Formulate strategies for ensuring accountability from internal technical teams and third-party AI vendors.
- Interpret AI outputs using accessible explanation frameworks, supporting more transparent communication with stakeholders.
- Advocate for the integration of explainability requirements in AI procurement and implementation policies.
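To give a flavour of the model-agnostic techniques surveyed in the course, the sketch below implements permutation importance: a simple explainability method that measures how much a model's error grows when each input feature is shuffled. The `predict` function and data here are purely hypothetical stand-ins for any opaque model; this is an illustrative sketch, not course material.

```python
import random

# A toy "black-box" model: we only call predict(), never inspect its internals.
def predict(row):
    # Hypothetical scoring function; feature 0 dominates, feature 2 is ignored.
    return 3.0 * row[0] + 0.5 * row[1] + 0.0 * row[2]

def permutation_importance(predict, rows, targets, n_features, seed=0):
    """For each feature, shuffle its column and record how much the
    mean squared error rises: a larger rise means the model relies
    more heavily on that feature."""
    rng = random.Random(seed)

    def mse(data):
        return sum((predict(r) - t) ** 2 for r, t in zip(data, targets)) / len(data)

    baseline = mse(rows)
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)
        shuffled = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, col)]
        importances.append(mse(shuffled) - baseline)
    return importances

rows = [[i, i % 5, 7] for i in range(20)]       # feature 2 is constant
targets = [predict(r) for r in rows]
imps = permutation_importance(predict, rows, targets, 3)
# Shuffling the dominant feature hurts most; the ignored one not at all.
```

Because the method treats the model as a black box, the same few lines work unchanged for any predictor, which is why model-agnostic techniques of this kind are a common starting point for explaining deployed systems.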
COURSE DETAILS
Duration: 2 Days
Tutor: Dr. Graziella De Martino
Course Scheduling
These courses are scheduled based on demand and will run once the minimum of 6 participants is reached. We accept a maximum of 15 participants per course to ensure quality and engagement. Interested? Please email us at training@nouv.com for more information or to express your interest.
In-House Training Available
All our courses can be delivered in-house for your team or organisation. Whether you're looking to upskill a single department or roll out training across your organisation, this flexible option allows us to tailor content to your specific needs and deliver it at a time and location that works for you. Contact us to discuss how we can support your team's development goals or to arrange a session.
PREREQUISITES
A basic understanding of machine learning models and their applications is recommended. This course is suitable for professionals involved in AI deployment, risk management, compliance, or related fields.