Why AI Hasn’t Taken Off in Law: A Counterintuitive Reason

Posted on: February 26, 2025 at 09:00 AM

When large language models (LLMs) entered the spotlight, many predicted they would replace lawyers outright. After all, if a machine can pass the bar exam, draft contracts, or summarize case law, why not let it handle matters end to end?

The reality has turned out differently. Adoption is high — Bloomberg’s 2025 report shows most attorneys already use AI — but transformation remains rare. Tools assist with drafts, memos, and research, yet they almost never carry a case through to completion.

Why? Beyond ethics, compliance, and hallucinations, there is one counterintuitive reason: to use LLMs effectively, one must understand both AI and the legal domain deeply.

At first glance, this seems unnecessary — aren’t models supposed to replace expertise? In truth, they are powerful but brittle tools. Using them responsibly demands dual literacy: knowing what AI can and cannot do, and checking its outputs against doctrine, precedent, and procedure.

The paradox is clear: the promise of less expertise has only made expertise more valuable. The firms that succeed are those that combine legal mastery with an informed understanding of how AI actually works.


Sources