LLMs - Where They Shine, Where They Fall Short

Lily Chen

ABOUT THE SESSION

In this sharp and clarifying session from the Artificial Unintelligence Conference 2025, Lily Chen breaks down the true capabilities and limitations of Large Language Models using a simple but powerful quadrant framework. She shows how LLMs excel at fluency, pattern completion, and language generation — while also revealing where they consistently struggle: grounded understanding, factual accuracy, and causal reasoning.

Lily explains why hallucinations aren’t just bugs, but structural artifacts of prediction-based systems, and why human oversight remains essential in high-stakes environments. She also outlines the path forward: hybrid architectures that combine LLMs with retrieval, structured data, and validation layers to create systems that are reliable as well as expressive.

Key themes include:

Understanding LLM strengths through a quadrant framework

Why LLMs are fluent but not grounded

Hallucinations as structural limitations, not rare errors

Ideal use cases for LLMs — and where humans must stay in the loop

The future of hybrid, grounded, and trustworthy AI systems

Tags: LLM, AI, ArtificialIntelligence, MachineLearning, GenerativeAI, Hallucinations, ModelLimitations, ResponsibleAI, AUI2025, ArtificialUnintelligence, LilyChen