Hallucinated Justice: Tracking AI Errors in Legal Proceedings
Damien Charlotin
ABOUT THE SESSION
In this thought-provoking session from the Artificial Unintelligence Conference 2025, Damien Charlotin (Lecturer, HEC Paris & Sciences Po) explores how hallucinations in AI systems expose a fundamental weakness in legal reasoning: our reliance on authority as a proxy for truth.
Damien argues that hallucinated citations and fabricated precedents are not just factual mistakes; they are epistemic breaches that challenge the very foundations of legal argumentation. As AI tools increasingly participate in legal drafting and analysis, the risk lies not only in false outputs but in false trust: the transfer of legal authority to systems that simulate certainty without genuine understanding.
Through the lenses of jurisprudence, epistemology, and technology ethics, this session examines how human oversight, interpretive judgment, and source transparency must evolve to preserve the integrity of law in the age of generative AI.
Key themes include:
How AI hallucinations disrupt the authority-based structure of legal reasoning
The epistemic risks of false citations and synthetic sources
Why human interpretive judgment remains irreplaceable
Transparency and verification as the new due diligence
Rebuilding trust in the age of AI-assisted legal practice
AI, LegalAI, ArtificialIntelligence, Ethics, Hallucinations, Law, Governance, Epistemology, ResponsibleAI, AUI2025, ArtificialUnintelligence, DamienCharlotin