1. Trust Challenge
What is the core risk to user trust, and when does it matter most?
LLMs are confident storytellers. When data is missing, unclear, or contradictory, they often fill gaps with fluent but incorrect answers instead of admitting uncertainty.
From a user's perspective, this is worse than "I don't know." It feels like being lied to by a system that sounds authoritative.
Critical moments where this pattern matters most:
- The answer depends on live or factual data (records, schedules, prices, policies, status).
- The system is expected to be a source of truth, not just a brainstorming tool.
- Mistakes have real consequences (money, access, health, legal, safety).
Without a verification layer, users are forced to double-check every single output manually, negating the efficiency gains of using AI in the first place.
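One minimal form such a verification layer can take is a gate that only passes answers backed by a retrievable source and abstains otherwise. This is a hedged sketch, not a prescribed design: the `Answer` type, `verify` function, and the `orders.db#42` source ID are all hypothetical illustrations.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    """A model answer paired with its supporting evidence (hypothetical shape)."""
    text: str
    source: Optional[str]  # e.g. a record ID or document URL; None means unsupported

def verify(answer: Answer) -> str:
    """Gate model output: pass only answers backed by a source, otherwise abstain."""
    if answer.source is None:
        # Abstaining explicitly preserves trust better than a fluent guess.
        return "I don't know: no source found for this claim."
    return f"{answer.text} (source: {answer.source})"

# Unsupported claim is blocked; sourced claim passes with attribution.
print(verify(Answer("The price is $10", None)))
print(verify(Answer("The price is $10", "orders.db#42")))
```

The point of the gate is that abstention becomes a first-class output: the user sees "I don't know" instead of a confident fabrication, and sourced answers carry their provenance so spot-checking replaces checking everything.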