Causal Models for LLMs

Abi Aryan

ABOUT THE SESSION

In this technically rich and forward-looking session from the Artificial Unintelligence Conference 2025, Abi Aryan explores how causal reasoning can transform the capabilities and reliability of Large Language Models. While today’s LLMs excel at predicting text through patterns and correlations, Abi explains why this approach breaks down in domains that require grounded logic, counterfactuals, and causal understanding.

She outlines how integrating causal models with LLMs can improve robustness, reduce hallucinations, and support more trustworthy decision-making. Through clear frameworks and real-world examples, Abi makes the case for hybrid architectures that combine generative fluency with structural reasoning.

Key themes include:

The limitations of correlation-based LLMs

How causal models enable richer, more stable reasoning

The role of counterfactuals in scientific and strategic tasks

Hybrid architectures that combine LLMs with causal graphs

Reducing hallucinations and improving generalization through causal grounding

CausalAI, LLM, ArtificialIntelligence, MachineLearning, Counterfactuals, Reasoning, ResponsibleAI, AUI2025, ArtificialUnintelligence, AbiAryan