Making AI Explainable and Trustworthy

John Willis

ABOUT THE SESSION

In this insightful session from the Artificial Unintelligence Conference 2025, John Willis explores what it really takes to build AI systems that people can trust. Moving beyond technical jargon, he breaks down the difference between transparency and explainability, and shows how the two must work together to create responsible, reliable, human-centered AI.

John argues that trust in AI isn’t built by algorithms alone — it’s engineered across the entire lifecycle, from data collection to deployment. He also highlights the human and cultural dimensions of AI governance, emphasizing that ethical behavior, communication, and education matter as much as any model architecture.

Key themes include:

The difference between transparency and explainability

Embedding trust across the AI lifecycle

The human and cultural side of responsible AI

Why “contextual clarity” is the next frontier of explainability

Building organizational systems for ethical accountability

Tags: AI, ExplainableAI, ResponsibleAI, TrustworthyAI, ArtificialIntelligence, Ethics, Governance, MachineLearning, AUI2025, ArtificialUnintelligence, JohnWillis