Thinking out loud
Some lucid thinking on the future of legal ops; some ranting about the value of what we measure and the missed law firm opportunity!
Ahead of the weekend.
I’m mulling this over..
Legal Ops emerged as a response to a structural failure: legal departments designed around the intuitions of lawyers rather than the needs of the businesses they serve. That was a real problem, and smart and adept Legal Ops teams and professionals have provided real solutions — CLM, matter management, e-billing, data analytics, and a generation of people who could speak both law and operations.
But I wonder if the movement is now at risk of becoming what it was created to disrupt - when AI systems can run workflows end to end, initiate actions without prompts, and interact with other systems autonomously - it feels like the job naturally evolves to become less about process improvement and more about something far more challenging — governing capability itself.
I don’t think this is unique to LegalOps per se, and I know it is something our peers in FinOps, RevOps, ProductOps are all grappling with..
I guess there is a prediction in there (based off previous intuitions, I wouldn’t rush to go “all-in” on some of my lucid thoughts), maybe that over the next 24 (or 2-4) months, we could see Legal Ops unbundle into two distinct paths:
one that doubles down on optimisation and gets quietly commoditised, and
one that moves upstream into AI governance, system design, and the kind of strategic oversight that will still require judgment.
The former feels likely it will eventually be done by the tools themselves. The latter feels to me like it could come to define the next generation of legal leadership.
Data… metrics.. maintaining their value?
Another thing I’m wrangling with - provoked by my participation on a panel next Tuesday in Barcelona (as part of the GLTH Spring Edition) - is the utility and significance of data to legal teams and law firms. Thinking about my position on a couple of different aspects, I’ve noticed my blood begin to boil in relation to the following distinct but related points:
We’ve spent years building a culture around measurement (well some of us have) — dashboards, reports, SLAs, KPIs — and now that AI has arrived, I feel that many are at serious risk of just automating the theatre instead of questioning why we’re performing it in the first place. What am I rambling on about? The actual things we measure - cost per matter, cycle time, SLA adherence etc. - are measurements of a system many seem to have decided is worth keeping. Doesn’t AI erode the entire value (as we transition to a world of on demand super intelligence)? I sometimes feel that not enough people are stopping to ask whether the legacy system / way of working itself is worth measuring at all (moving forward). I did ask for Alex’s opinion over Whatsapp on this point, and he replied: “well it’s like the Olympics being the sort of measure of human athleticism, and then you get Superman in your society, at that point, who cares how fast Superman can run the 100 meters?”.
The thing that really gets under my skin right now: this idea that the essential first step to getting value from AI is curating your data. You’ve probably heard it under various different guises at every legal tech conference you’ve been to. I playfully embellish the sentiment but it goes along the lines of “Get everything perfectly structured and ordered before you adopt or do anything real”. I want to be blunt about what that actually is - it’s a way for lawyers to keep doing homework and call it progress. It feels rigorous. It feels responsible. But it’s a way for many to keep comfortable. To contextualise a little, if you’re a SaaS company built out of Palo Alto, roughly 80% of what you do looks identical to the next SaaS company down the street. Your commercial contracts look the same. Your employment setup is the same. Your compliance obligations are basically the same. So this idea that spending six months tidying up folders full of outdated legal advice is going to unlock something uniquely valuable or deliver organisational impact doesn’t feel grounded in reality? Particularly in a world of ever increasing legal standardisation and a growing capability to fetch legislation, borrow templates, horizon scan against known or anticipated risk, and absorb it all through your organisation’s context, style and tone in minutes.
Rant now escalating to law firms…
Why aren’t major law firms getting serious about advocating about fundamental flaws and weaknesses in large language model architecture (prompt injection risk; model training by illegitimate means etc.)? I had this conversation with Rok Popov Ledinski this week for an upcoming podcast conversation (teaser clip 👇), and the more I think about it the more I think they are missing a huge marketing and AI literacy opportunity (i.e. communicate outwardly with confidence and clarity - at a moment of geopolitical divergence and self-interest - what is actually happening, and at the same time present before your client base or prospective one - you understand the technology and because of this literacy you are advocating for more secure and equitable ways to leverage its full potential).
Going even more stratospheric…
I’m sure you’ve all been reading about this week’s showdown between Anthropic and the Pentagon. Will Anthropic allow the government to have a version of the model that can be used for mass surveillance and autonomous killing? As ever, I enjoyed Nicholas Thompson’s coverage on what’s at stake in his “Most Interesting Things in Tech” update on LinkedIn, it is worth you checking out 👀
Thought exchange and rant over 😀 (let me know if any of this resonates OR you feel I should strongly alter my views ahead of public appearances 😉)
I hope you all have a great weekend! 👋
Tom



