Dan Hunter posted recently about a quick test you can do to tell whether your legal department is really AI-first: ask to expand the function, and if the reply is "hire two paralegals" rather than "build an AI solution", you are not AI-first.
Comparing tech adoption against paralegal recruitment was a common challenge in the pre-GenAI era of legal ops and legal tech. It served as a loose benchmark when considering funding and ROI, and in an effort to be totally pompous, I referred to it as the “Paralegal Parity Theory”.
The Theory
Technology spend in a legal team is almost always competing against a paralegal.
C(Tech) + M < C(P × n) - V(A)
Where:
C(Tech) = total technology cost: licence and implementation
M = ongoing maintenance cost: human time to administer, monitor, validate outputs, fix issues, and keep the system effective
n = number of paralegals required to match the technology’s output at required scale
C(P × n) = cost of those n paralegals
V(A) = value of adjacent work those paralegals would also perform
Technology spend is only justified when its total cost, including the ongoing human effort required to keep it running, comes in below what you would pay for the paralegals who could do the same work, minus the value of everything else those paralegals would do on top.
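The inequality above can be sketched as a small function. This is a minimal illustration, not anything from the original post: the function name and parameter names are my own, and V(A) is treated as a single total for the n paralegals, per the definition above.

```python
def tech_wins(c_tech, m, c_paralegal, n, v_adjacent):
    """Return True when the parity inequality favours technology:
    C(Tech) + M < C(P x n) - V(A).

    c_tech      -- total technology cost (licence + implementation)
    m           -- ongoing maintenance cost in human time
    c_paralegal -- cost of one paralegal
    n           -- paralegals needed to match the tech's output
    v_adjacent  -- total value of adjacent work those n paralegals do
    """
    tech_total = c_tech + m
    paralegal_net = (c_paralegal * n) - v_adjacent
    return tech_total < paralegal_net


# One paralegal at 25k with 15k of adjacent value beats an 85k deployment:
print(tech_wins(70_000, 15_000, 25_000, 1, 15_000))   # False
```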
A Worked Example
Your legal team is considering document automation for a standard suite of commercial contracts. Licence: £40,000 a year. Implementation: £30,000 in year one. Maintaining it, fixing edge cases, validating outputs: another £15,000 annually in lawyer time. Total year one cost: £85,000.
A UK graduate paralegal costs around £25,000 a year, handles the same contracts, and also picks up NDAs, supplier onboarding, admin, and whatever else lands on the pile. Call that adjacent work worth £15,000 of equivalent output. Net paralegal cost: £10,000.
The paralegal wins. Year two and beyond, the paralegal still wins. Only when volume requires ten paralegals to match the technology’s output does the comparison flip decisively. But volume rarely acts alone. You also need the task to be simple enough to automate robustly, and the need for speed to be a genuine driver rather than a nice-to-have. Most in-house legal teams do not have all three at once. Which is why the paralegal kept winning the argument in the room.
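Plugging the worked-example figures in makes the break-even headcount explicit. A sketch only: the helper names are mine, and scaling the £15,000 of adjacent-work value per paralegal is an assumption the post does not spell out.

```python
import math

LICENCE = 40_000         # annual licence
IMPLEMENTATION = 30_000  # one-off, year one
MAINTENANCE = 15_000     # annual lawyer time to keep it running
PARALEGAL_COST = 25_000  # UK graduate paralegal, per year
ADJACENT_VALUE = 15_000  # adjacent work per paralegal (assumed to scale)


def tech_cost(year):
    # Implementation only bites in year one.
    return LICENCE + MAINTENANCE + (IMPLEMENTATION if year == 1 else 0)


def paralegal_net_cost(n):
    # Net cost of n paralegals after crediting their adjacent work.
    return n * (PARALEGAL_COST - ADJACENT_VALUE)


def break_even_headcount(year):
    # Smallest n at which n paralegals cost at least as much as the tech.
    return math.ceil(tech_cost(year) / (PARALEGAL_COST - ADJACENT_VALUE))


print(tech_cost(1))             # 85000
print(paralegal_net_cost(1))    # 10000
print(break_even_headcount(1))  # 9
print(break_even_headcount(2))  # 6
```

On these figures the comparison flips at around nine paralegals in year one and six thereafter, consistent with the "ten paralegals... flip decisively" claim in the text.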
Why This Held Up
In the UK the comparator has always been particularly sharp. Graduate-level hires working toward qualification are available at modest cost, capable, motivated, and trying to impress to secure training opportunities. They handle edge cases no deterministic system was ever configured for, improve without a change request, and do not require a security review before you hand them a task.
It is also worth noting that a paralegal compounds. Every week they absorb context, learn your counterparties, and pick up the institutional knowledge that makes good legal judgment possible. That learning generalises across tasks and sticks. Current AI systems do not do this in any meaningful sense. The knowledge accumulated in one deployment does not necessarily carry over. There is no equivalent of the second-year paralegal who already knows how your procurement team operates.
What GenAI Changes
GenAI shifts things meaningfully. A well-configured large language model handles variation that would have broken a deterministic system, drafts across document types, and pivots between tasks in a way that starts to compress the adjacent work gap. Deployment costs less. Implementation is lighter.
The external pressure has shifted too. Technology adoption used to be internally driven. Now it is coming from the C-suite, from boards, from industry peers. Being seen to adopt AI carries its own imperative independent of the economics.
Where Agentic AI Comes In
Agentic AI is where the theory faces its most serious challenge. An agent handling workflows end to end, across multiple systems, with meaningful autonomy, starts to look less like a tool and more like a colleague. The status dynamic of many direct reports may invert too. A roster of deployed agents may become the new signal of modern management, replacing headcount as the metric that confers influence inside an organisation.
But from where I sit, agentic AI has not yet consistently cleared the bar in an enterprise setting. The technology may be capable in isolation, but safe, secure, autonomous deployment in a legal context remains ahead of most teams, although it is tantalisingly close for some. The security posture is not yet stable enough for most organisations to grant agents the access and permissions they would need to operate fully. Until that changes, agents remain constrained to narrower, supervised tasks rather than the end-to-end workflows that would genuinely challenge paralegal economics.
Similarly, the ongoing human attention required to keep AI reliable and governed is a cost that gets understated in the sales process and is discovered late in the deployment. It belongs in the equation.
But the Inflection Point is Coming
The theory may be approaching its own turning point. If your lawyers are already running with capable AI assistance (tools that accompany a lawyer in their existing workflows), the question before approving a paralegal hire or new tool adoption is no longer “could a paralegal do this?” It is “have we genuinely exhausted what the lawyers using AI can do first?”
That feels like it could be the new test. And that moment feels closer than it ever has.