Legal AI’s Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now
Exploring the difference between creating faster LLM-driven workflows and designing coordinated systems for agentic AI.
Earlier this week, I read Sangeet Paul Choudary’s thought-provoking essay The Problem with Agentic AI in 2025 and it reshaped some of my thinking on agentic AI in legal.
In a twist for a piece about AI, he opened by talking about canals. In the early nineteenth century, canals were the infrastructure marvel of their time. They connected towns, cut costs, and expanded trade opportunities. When railroads arrived, many people saw them as just faster canals: a more efficient way to move the same goods along similar routes.
But railroads didn’t just move things faster; they changed how everything connected. Trains running hundreds of miles forced towns to standardise time zones, to coordinate schedules, to build new systems of governance. Railroads created national markets because they required, and enabled, coordination on a scale that canals never did.
Choudary argues that the current thinking around the application of “agentic AI” risks repeating the same mistake. He worries that many experts still think like canal engineers. They see agents as a more powerful way to automate tasks instead of recognising that the real opportunity lies in re-architecting how work is coordinated across systems.
That idea struck a chord because I've been wrestling with the material difference between super-charging existing workflows with LLM-powered features and implementing true agentic AI.
His article has helped me think more deeply about what AI could really mean for Legal workflows, and made me consider if my recent mindset has been more canal builder than railroad revolutionary.
The Canal Phase of Legal Automation
For the past decade, legal operations and legal technology have been in what you could fairly call the canal era.
The big themes have been process mapping, efficiency, and automation. We’ve built workflows that look like neat flowcharts: intake → review → approval → filing. We’ve looked for bottlenecks we could remove and repetitive steps we could script away.
Most of those projects used rules-based logic, Robotic Process Automation (RPA), or structured workflow tools. They worked well when the inputs were tidy and definable: standard forms, structured data, known document types.
But the reality of legal work has always been that most inputs aren't tidy. Emails, contracts, negotiation notes, and regulatory updates are all full of nuance and ambiguity. Every time the process hit an unstructured input or an exception, the automation failed or wasn't suitable, and the task bounced back to a human.
After a while, the returns on that type of automation start to flatten out. The easy wins get captured, and the demand to handle wider and more nuanced use cases strains existing builds with little extra return. In essence, we built the canals and made them as efficient as their inherent constraints would allow, but at heart the underlying systems still run on human coordination: legal engineers maintain and tinker to keep things online and moving, and lawyers plug the gaps.
The LLM Bolt-On Era
Now we’ve hit what I see as the LLM bolt-on era of automation.
Large Language Models (LLMs) have come along to solve two of classic automation’s biggest headaches: they can read unstructured inputs and write fluent outputs. This opens up a whole new layer of possibilities for legal teams.
Indeed, many of us are now experimenting with, piloting, or integrating these powerful new capabilities into our existing workflows, processes, and tech stacks to figure out just how far this latest leap can take us. The pattern for many of these augmented 'classic' legal workflows looks something like this:
An LLM at the front end handles natural-language intake or classification.
Rules-based systems or RPA steps handle structured execution.
Another LLM connection works with the final output (a summary, an email, a draft response).
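The pattern above can be sketched in a few lines of code. Everything here is illustrative: `call_llm` is a stand-in for any model API (stubbed so the sketch runs), and the routing table and category names are invented.

```python
# Sketch of a linear "LLM bolt-on" pipeline: LLM intake, rules-based
# routing, LLM-drafted output. call_llm is a placeholder for a real
# model API call, stubbed here with deterministic behaviour.

def call_llm(prompt: str) -> str:
    # Placeholder: a real build would call a model API here.
    if "Classify" in prompt:
        return "NDA"
    return "Drafted summary for the requester."

# Rules-based execution step: a fixed routing table.
ROUTES = {"NDA": "nda_queue", "DPA": "privacy_queue"}

def handle_request(free_text: str) -> dict:
    category = call_llm(f"Classify this request: {free_text}")  # LLM front end
    queue = ROUTES.get(category)                                # deterministic routing
    if queue is None:
        # Anything outside the table fails and bounces back to a human.
        raise ValueError(f"Unhandled category: {category}")
    summary = call_llm(f"Summarise the outcome for: {free_text}")  # LLM back end
    return {"category": category, "queue": queue, "summary": summary}

result = handle_request("Please review this mutual NDA with Acme Corp.")
```

Note how the routing step fails hard on anything outside its table; that hard failure on unfamiliar inputs is exactly the brittleness of the linear sequence underneath.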
Often these builds are an improvement on what went before. The AI features are becoming another weapon in the arsenal of legal engineers and process builders, allowing them to automate more complex scenarios. They can deliver quick wins and help teams get comfortable with AI without taking unacceptable ‘hands-off-the-wheel’ type risk. And it works up to a point. Intake can be faster and better. Routine tasks can feel smoother and more complete. There’s progress.
But it’s still a canal.
The workflow underneath is the same linear, deterministic sequence we’ve always used. Each step still depends on the previous one completing successfully. If the context changes halfway through or something doesn’t quite fit, the system doesn’t adapt; it fails and grinds to a halt.
Why Linear Systems Don’t Scale Intelligence
Up until reading Choudary's essay, I was struggling to conceptualise the difference between a polished, well-built, LLM-powered linear system and a truly agentic system. In my eyes, they broadly do the same thing, or at least seemingly achieve the same outcomes.
The former will be more than sufficient for many use cases and able to deliver great results, but I think the major difference between the two lies in how these systems handle change.
A linear automation, even one augmented by AI, follows a fixed route. It’s efficient when everything goes as planned, but brittle when it doesn’t. It can’t reorganise itself on the fly or coordinate with other systems in real time.
Agentic systems, by contrast, are designed to coordinate. Multiple agents with specific roles can perceive context, communicate, and adapt their behaviour as conditions shift. They can make local decisions that align with overall policy rather than waiting for a human to rewrite the workflow.
It’s the difference between an assembly line and an air-traffic-control tower. The first optimises repetitive sequences; the second orchestrates interactions and controls sequences according to a set of rules and regulations.
Legal practice operates heavily in the space of exceptions, context, and negotiation, so the ability to flex beyond rigid linear constructs is essential if we're going to get close to the returns those hyping AI have promised.
Why We’re Not There Yet
From what I’m seeing and hearing, we’re not ready for a fully agentic world in legal just yet. Several practical and cultural constraints keep us closer to canals for now.
1. Context Windows Are Still Narrow
Even the best models struggle to hold the full context of a complex legal matter. They can’t yet maintain long-term memory across multiple documents, conversations, and versions. That limits how deeply an agent can reason about a case or policy end-to-end.
2. Systems Access Is Fragmented
Agents need visibility across CLMs, matter systems, document repositories, finance tools, and sometimes external data sources. Most of those are still locked behind legacy architecture, inconsistent permissions, and siloed data models.
3. Knowledge Isn’t Machine-Readable
Legal SMEs know their domain deeply, but most of that knowledge lives either in narrative form (playbooks, PDFs, decks, advice notes) or as tacit knowledge in the minds of the legal practitioners themselves. Until more of it is expressed as structured data or policy logic, agents won't have the material they need to act fully or responsibly.
4. Governance Maturity and Reliability
Few teams have clear frameworks for defining what an AI agent is allowed to decide, when it must escalate, or how accountability is assigned. Without that, the “autonomy” aspect of agentic AI looks more like “risk” for many.
That risk element is further compounded by mixed reliability. Even where models outperform humans in a range of test environments, teams are not yet culturally or psychologically ready to deploy systems that they know can make mistakes for reasons they cannot explain.
5. Ongoing Security Development and Risks
Agentic AI opens new attack surfaces: prompt injection, data leakage, and model manipulation. This is still an immature technology, and safeguards are uneven across tools.
For legal teams handling privileged or regulated data, that makes extra vigilance essential until security standards and testing practices mature.
Given these realities, the sensible path for most legal teams is evolutionary, not revolutionary.
The Transitional Architecture: Smarter Canals
For the next few years, the most productive approach is likely to be a hybrid one. Use LLMs to make existing automations less brittle and more human-friendly.
Examples already emerging include:
LLM-assisted intake: reading free-form requests and categorising them correctly.
Policy interpretation: summarising relevant rules or precedents for a workflow step.
Quality control: flagging inconsistencies or omissions before submission.
Human-like output: converting structured data into readable emails, reports, or filings.
These use cases don’t demand continuous multi-agent coordination. They fit neatly into current workflows but solve painful edge cases and the messy bits between structured steps.
That’s valuable. It builds confidence, trains teams, and surfaces where coordination really breaks down. Those pain points will eventually tell us where to invest in full agentic redesign.
The Shift from Execution to Coordination
Once the hurdles above are overcome, I see the future coordinated agentic system as one where multiple specialised agents operate simultaneously, each with partial context but shared governance.
An intake agent interprets the request and identifies relevant policies.
A risk agent evaluates exposures and thresholds.
A clause agent retrieves or proposes language options.
A compliance agent checks alignment with regulations.
A governance agent ensures actions stay within approved parameters and escalates when needed.
But they won’t work in a straight line. They will interact and adapt based on live data. The human lawyer or business user will see one coordinated output rather than a string of disconnected actions.
The system’s primary value will lie not in the speed of individual steps, although that will be fast, but in how coherently it manages decisions across steps.
Governance as the New Infrastructure
All of that coordination will depend on a new kind of infrastructure: governance as a product.
Howard Yu, commenting on Choudary’s essay, captured this well:
“The real work is to treat governance as a product. Who sets the guardrails for agents, what data contracts bind functions, how exceptions escalate, which decision rights move to the edge.”
For legal teams, that is going to require a great deal of introspection, supported by legal ops, to design the policies, controls, accountability, and escalation paths that will serve as the framework for the delivery of legal services by the team's agents.
Governance will have to move from being a document that describes what should happen (if that even exists in most teams) to a layer of logic that defines what can happen.
That’ll involve things like:
Policy registries written in a structured form that agents can reference directly.
Identity and permissions systems that define agent roles and trust levels.
Real-time audit trails showing why decisions were made.
Feedback loops that allow continuous learning from exceptions.
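To make "governance as a layer of logic" concrete, here is a minimal sketch of a policy registry that agents consult before acting, with a real-time audit entry recorded for every decision. All agent names, permissions, and thresholds are invented for illustration.

```python
# Hypothetical sketch: a policy registry as data, consulted by agents,
# with an audit trail and automatic escalation when a check fails.
from datetime import datetime, timezone

# Policy registry in structured form that agents can reference directly.
POLICY_REGISTRY = {
    "clause_agent": {"may_propose_language": True, "max_liability_cap": 100_000},
    "intake_agent": {"may_propose_language": False},
}

AUDIT_LOG = []  # real-time audit trail: why each decision was made

def check_action(agent: str, action: str, liability_cap: int = 0) -> bool:
    """Return True if the agent may take the action; log every decision."""
    policy = POLICY_REGISTRY.get(agent, {})
    allowed = (
        policy.get("may_propose_language", False)
        and liability_cap <= policy.get("max_liability_cap", 0)
    )
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "allowed": allowed,
        "escalate": not allowed,  # blocked actions route to a human
    })
    return allowed
```

In this shape, changing what an agent may do means editing data, not rewriting a workflow, and every refusal becomes an escalation with a recorded reason.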
In this model, compliance isn’t an afterthought; it’s an essential foundation of the operating system.
Measuring the Right Things
This shift will change what we measure when assessing performance.
If, like now, execution is the focus, you measure speed, cost, and accuracy.
If coordination is the focus, you also start to consider:
Coordination quality: how many matters resolve without manual escalation.
Governance fidelity: how closely agent decisions align with policy intent.
Adaptiveness: how quickly the system learns from new matter types or rule changes.
Those are the measures that will distinguish canal systems from railroad ones.
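As a rough illustration of how those coordination metrics might be computed, here is a sketch over a hypothetical matter log; the field names and figures are invented.

```python
# Illustrative only: computing coordination quality and governance
# fidelity from a hypothetical log of matters handled by agents.
matters = [
    {"escalated": False, "policy_aligned": True},
    {"escalated": True,  "policy_aligned": True},
    {"escalated": False, "policy_aligned": False},
    {"escalated": False, "policy_aligned": True},
]

# Coordination quality: share of matters resolved without manual escalation.
coordination_quality = sum(not m["escalated"] for m in matters) / len(matters)

# Governance fidelity: share of agent decisions aligned with policy intent.
governance_fidelity = sum(m["policy_aligned"] for m in matters) / len(matters)
```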
Building Toward the Rails
So what might the journey from hybrid to fully agentic look like in practice?
Map Coordination Failures: identify where work currently stalls (hand-offs, rework, missing context).
Create Shared Context Layers: start integrating data across systems so agents (and humans) operate from the same source of truth.
Codify Policies as Data: create and collate playbooks, approval limits, and compliance checks into structured formats.
Pilot Multi-Agent Experiments: use contained, low-risk scenarios like NDA processing to test how agents coordinate.
Iterate Governance: treat every exception as feedback for improving policies and boundaries.
Educate the Organisation: legal SMEs will need to learn not just how to use AI, but how to teach it, expressing their expertise as rules, signals, and training data rather than locking it in Word files, emails, or tacit knowledge.
Develop Capability in Evaluation (Evals): understand how to test and interpret model behaviour. Teams will need a deeper, more scientific grasp of how different models perform, drift, and evolve (especially if they plan to build or fine-tune internally).
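A minimal eval harness is simpler than it sounds: run a model function against labelled cases and score the results. In this sketch, `classify` is a placeholder keyword model standing in for a real LLM call, and the labelled cases are invented.

```python
# Minimal eval harness sketch: score any model function against a
# labelled set. classify is a deterministic stand-in for an LLM call.

def classify(text: str) -> str:
    # Placeholder model: a keyword rule standing in for a real model.
    return "NDA" if "confidential" in text.lower() else "OTHER"

# Labelled evaluation set: (input, expected label) pairs.
EVAL_SET = [
    ("Mutual confidentiality agreement with Acme", "NDA"),
    ("Invoice dispute with a supplier", "OTHER"),
]

def run_evals(model, cases):
    """Run the model over labelled cases and return accuracy."""
    results = [(model(text), expected) for text, expected in cases]
    return sum(got == expected for got, expected in results) / len(cases)

score = run_evals(classify, EVAL_SET)
```

Re-running the same harness whenever a model, prompt, or version changes is how a team starts to see drift rather than guess at it.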
Each of these elements builds part of the rail network before you try to freely run trains across it. The practicalities and formats of the above will vary by organisation and technology stack.
The Cultural Shift
There’s also a mindset change involved.
Legal has long prided itself on precision and control. That’s a strength, but it can also lead to over-engineering workflows for predictability rather than adaptability.
The agentic model requires a bit of the opposite: defining guardrails, then letting systems coordinate within them. It’s less about designing perfect processes and more about designing resilient networks.
That can feel uncomfortable. But it’s also closer to how legal reasoning works in reality: principles, precedents, judgment rather than rigid scripts.
Why This Matters
The next frontier won’t be about squeezing more time out of individual tasks, although plenty of that will happen along the way; it’ll be more about building systems that can grow in capability without needing to be rebuilt each time the context shifts.
It's a move away from hardcoding intelligence into linear processes, where every new rule or exception forces a redesign, and towards systems that learn and coordinate in ways that make them more like people. The intelligence compounds: they can take on new situations, apply what they've already learned, and improve the whole network's performance without rewriting the underlying flow.
It’s a move from local optimisation to systemic excellence.
The Realistic Horizon
It’s important to stay grounded. At the time of writing, it appears that the technology isn’t yet mature enough for most legal teams to go fully agentic.
There’s also a deeper reason many teams will stay in hybrid mode for now: predictability. Much of legal practice currently depends on determinism: knowing that a clause, citation, or decision will appear the same way every time. Lawyers don’t just want accuracy; they want repeatability.
When a system produces a contract, lawyers need to recognise it, trust it, and be confident that they can defend and explain it later. Agentic AI, by nature, introduces a layer of probabilistic behaviour.
That's why this generation of "smarter canals" holds such appeal. They're faster and more capable, but ultimately grounded in enough deterministic logic that people can easily audit, recognise, and understand the outputs.
Until agentic systems can demonstrate predictable reliability at scale, most legal teams will prefer hybrid designs: more flexible than ever, but still tightly controlled.
And that’s fine. Every iteration teaches us where flexibility adds value and where predictability is essential. Each experiment lays the groundwork technically, culturally, and psychologically for the potential moment when governance, reliability, and trust may finally make full coordination possible.


