Legal AI is having a moment.
In the past 12 months alone, we’ve seen a surge in new startups, high-profile funding rounds, and bold claims about the future of legal work. From contract review bots to precedent search engines, the message is clear: AI is here to help lawyers work faster and smarter.
And yet, behind the hype, there’s a quiet reality setting in for many legal teams. These tools promise a lot, but are they really solving the problems lawyers care most about?
What’s Actually Under the Hood
Most Legal AI startups today are not building foundational models from scratch. Instead, they’re layering legal interfaces on top of large language models (LLMs) like OpenAI’s GPT, Anthropic’s Claude, or Mistral. This is a fast, cost-effective way to bring products to market, and it’s helped drive much of the innovation we’re seeing.
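To make that concrete, here’s roughly what a “thin wrapper” legal product looks like in code: a legal-sounding interface over a general-purpose model. This is a deliberately simplified sketch using the OpenAI Python SDK; the prompt, function name, and model choice are illustrative, not any particular vendor’s product.

```python
# A minimal sketch of a "thin wrapper" Legal AI product: a legal interface
# layered over a general-purpose LLM. Assumes the OpenAI Python SDK
# (pip install openai); the prompt and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_nda(contract_text: str) -> str:
    """Ask a general-purpose model to flag risky clauses in an NDA."""
    response = client.chat.completions.create(
        model="gpt-4o",  # a general-purpose model, not a legal one
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a contract review assistant. Identify clauses "
                    "that are unusual or risky in a standard NDA and explain why."
                ),
            },
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content
```

Notice what’s missing. Nothing in this call knows the firm’s playbook, the client’s risk appetite, or the governing jurisdiction. All of that context has to come from somewhere else.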
But it comes with trade-offs.
These models are general-purpose by design. They’ve been trained on massive amounts of public data, from Wikipedia to Reddit to publicly available legal documents. That breadth gives them impressive fluency, but it doesn’t give them depth in any one firm’s approach, any jurisdiction’s nuances, or any client’s preferences.
So, when AI tools built on these models try to tackle legal tasks, they’re often doing it from the outside in. They can spot patterns in language, but not always in context. They can generate summaries and suggestions, but not necessarily ones that reflect your standards, your risk profile, or your preferred terms.
Standard Problems, Standard Results
The logic behind many of these tools is to identify “common” legal problems and build standard solutions. Contract review is the classic use case. If every NDA follows a similar format, the thinking goes, why not use AI to mark up the red flags?
The problem is, legal work isn’t just about spotting boilerplate. It’s about knowing why something matters, how your client feels about it, and what the commercial context demands.
What looks like a red flag in one situation might be entirely acceptable in another.
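Reduced to its simplest form, context-free red-flag spotting works something like the sketch below. The patterns and labels here are invented for illustration:

```python
import re

# Hypothetical, context-free "red flag" rules. Each pattern is treated as
# risky everywhere it appears, which is exactly the problem.
RED_FLAG_PATTERNS = {
    r"in perpetuity|perpetual": "Unlimited confidentiality term",
    r"unlimited liability": "No liability cap",
    r"sole discretion": "One-sided discretion clause",
}


def flag_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (matched text, reason) pairs, with no commercial context."""
    flags = []
    for pattern, reason in RED_FLAG_PATTERNS.items():
        for match in re.finditer(pattern, contract_text, re.IGNORECASE):
            flags.append((match.group(0), reason))
    return flags
```

Rules like these fire the same way for every client and every deal. A perpetual confidentiality term is often standard where trade secrets are involved and alarming elsewhere, but the pattern can’t tell the difference.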
The result? AI that technically works, but practically underwhelms. It generates outputs lawyers still have to review, second-guess, and often rewrite. The risk of hallucinations or missed nuance means the tools add a new layer of work rather than taking one away.
Why This Isn’t Enough
When I ask lawyers what they want from AI, the answer isn’t a clever draft. It’s a faster path to the right answer. That means tools that know how their firm works. Tools that understand the difference between “acceptable” and “preferred.” Tools that surface relevant client history without anyone needing to search for it.
This is where the current generation of Legal AI falls short. It’s not that these are bad tools. It’s that they’re not built for the specific ways law is practiced inside individual firms. And they require the lawyer to do more, not less, to make them usable.
The Adoption Paradox
There’s a reason why many legal tech pilots fail to make it past a few enthusiastic users. Lawyers are busy. They won’t adopt tools that force them to change how they work, switch between interfaces, or second-guess the results.
And yet, this is exactly what most AI tools ask them to do. The tool might be smart, but if it requires a new workflow, or if the outputs still need to be checked, it’s just one more thing on the to-do list.
The paradox is that the more powerful the tool, the more dangerous its mistakes. So lawyers can end up spending more time reviewing AI outputs than they would have spent doing the task themselves.
Getting to the Right Model
Another challenge with many Legal AI tools is that they only tackle one isolated part of the workflow: reviewing a clause, suggesting a redline, or generating a draft.
The problem is, legal work doesn’t happen in isolation. Every review, every mark-up, every negotiation, every compliance check sits within a broader context.
If the AI doesn’t understand that bigger picture, it can’t make the right decisions at each step.
That’s why solving just one piece of the workflow isn’t enough.
What works instead is a model where AI is built into the legal workflow, not bolted on top.
That’s what we’ve done at Avantia. Our AI agent, Ava, isn’t limited to a single interaction – it understands the entire workflow. It draws on structured client data, historical preferences, deal context, and past outcomes to inform every decision point along the way. Final outputs arrive faster and are better aligned, because the AI understands from the start what needs to be achieved, not just what tasks it’s been asked to complete.
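In code terms, the difference between bolted-on and built-in looks something like the sketch below. To be clear, this is a generic illustration of the pattern, not Ava’s actual architecture; the data structure and field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DealContext:
    """Hypothetical workflow context that a task-only tool never sees."""
    client_playbook: str       # the firm's negotiated positions
    risk_appetite: str         # e.g. "conservative on liability caps"
    past_outcomes: list[str]   # how similar points resolved in prior deals


def build_review_prompt(clause: str, ctx: Optional[DealContext]) -> str:
    """Contrast a task-only prompt with one informed by the wider workflow."""
    if ctx is None:
        # Bolted-on AI: the model sees the clause and nothing else.
        return f"Review this clause and flag any risks:\n{clause}"
    # Built-in AI: the same task, informed by firm and deal context.
    return (
        f"Review this clause against our playbook:\n{ctx.client_playbook}\n"
        f"Risk appetite: {ctx.risk_appetite}\n"
        f"Relevant past outcomes: {'; '.join(ctx.past_outcomes)}\n"
        f"Clause:\n{clause}"
    )
```

The underlying model can be the same in both branches. What changes is that the second prompt carries the workflow context in, so the output starts from what needs to be achieved rather than from a bare task.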
When AI is built around the entire legal process, not just a single task, you don’t just create better individual tools. You transform the whole workflow, driving the faster, safer, more predictable outcomes that clients and lawyers actually want.
That’s the difference. Legal AI shouldn’t add steps. It should remove them.
Where We’re Going Next
This wave of Legal AI innovation is exciting, but it’s only the beginning. The next generation of tools won’t just help lawyers write. They’ll help them decide. But to get there, they’ll need to be embedded in the way legal teams already operate. And they’ll need access to the right data, not just the right models.
In the next post in this series, we’ll dig into why high-quality, firm-specific data is the key to unlocking real legal AI, and why most vendors underestimate how hard that is to get right.