
AI Governance: Ethics, Agents & the Human Question

A General Counsel, an Enterprise AI governance lead, and a “big law” firm Partner walk into a podcast… and unanimously agree on what matters.

A quick reminder: Law://WhatsNext is our vehicle to explore through dialogue (or occasional reflection) how leading lawyers, educators and technologists are using emerging tech to evolve how we practice and administer legal services. No hype; just practical conversations.


🎙️ We step aside for this one. No hosts, no scripts — just three people who spend their working lives making AI governance actually function inside large, complex organisations.

Our friend Catie Sheret (General Counsel at Cambridge University Press & Assessment) hosts a rich three-way conversation with Oliver Patel (Head of Enterprise AI Governance at AstraZeneca) and Peter Lee (Partner at Simmons & Simmons). Three very different vantage points — converging on the same question: how do you actually make AI governance work in practice?


Listen Now

Available here or on Spotify, Apple Podcasts, or wherever you enjoy your podcasts.


What You’ll Learn

AI governance ≠ compliance theatre — Oliver makes a passionate case that governance is fundamentally change management. You can’t scale a model where a committee reviews every use case — not when anyone in the organisation can spin up AI capabilities in seconds. The real work is in education, literacy and empowering people to make good decisions autonomously. And he’d really like you to stop calling ethical principles “compliance theatre” — the signalling effect of those documents on shaping behaviour across a 100,000-person company is more powerful than people give it credit for.

The golden thread — Peter describes the most beautiful thing about well-designed AI governance: you can trace a single line from an organisation’s corporate philosophy and board-level purpose all the way down to the tools people use at their desks. When it works, it’s elegant.

Context is everything — Catie makes the point that standard frameworks (your OECD guidelines, your ISO 42001) give you the skeleton, but the organisation has to do the hard, sometimes abstract work of figuring out what AI ethics means for them specifically. At Cambridge, that’s content IP. At AstraZeneca, it’s medical ethics with very human consequences.

The agentic mindset shift — The conversation really heats up when they get to agentic AI. Peter frames it as a shift from asking “can we trust the output?” to “what actions can this system initiate, and under what constraints?” Oliver goes further and is admirably blunt: the core purpose of agentic AI is to take the human out of the loop. No matter what anyone says, that’s what’s happening. The question for organisations is: which domains should remain fundamentally human? Where is ethical, moral, strategic or financial judgment non-delegable — no matter how capable the agent becomes?

Why philosophers have never mattered more — Peter observes that the biggest existential threat to the profession right now is the impact of AI on critical thinking. Oliver, a trained philosopher himself, makes the connection between human flourishing and the governance function: if your AI literacy programme isn’t also helping people feel agency and purpose in their work, you’re missing the point.


About Our Guests

Catie Sheret is General Counsel at Cambridge University Press & Assessment — the department of the University of Cambridge responsible for research publishing, educational content publishing, and assessments worldwide. She’s led the organisation’s AI governance programme for over two and a half years and recently brought it under the legal function. A friend of the show who proves here she could host her own podcast any day of the week.

Oliver Patel is Head of Enterprise AI Governance at AstraZeneca, where he’s led AI governance across the nearly 100,000-person organisation for over three years. A philosopher and public policy specialist by background, Oliver is also an educator, having taught approximately 15 cohorts of AI governance professionals through the IAPP. His book, Fundamentals of AI Governance, publishes this year, and he also writes an extremely popular Substack 👇

Peter Lee is a Partner at Simmons & Simmons and heads their AI Governance Advisory practice. A marine biologist and soldier before becoming a technology lawyer, Peter founded Wavelength — the world’s first legal engineering business — ten years ago; it was acquired by Simmons in 2019. In his spare time, if he’s not on the guitar, he’s likely conducting research into responsible AI at the University of Cambridge.


Book Recommendations from the Episode

Every good podcast episode needs a book recommendation. In this one we get three:

📖 Richard Susskind’s How to Think About AI — Catie’s pick. A framework for getting into the right mindset before you tackle the big ethical questions.

📖 Jenny Odell’s How to Do Nothing — In a conversation about AI governance, Oliver recommends a book about stepping away from technology entirely. His argument: if you cultivate the parts of your life that aren’t mediated by screens — creativity, relationships, nature, the ability to be bored — you’ll probably be a more thoughtful and effective person in a work context too.

📖 Governing the Machine by Ray Eitel-Porter, Paul Dongha, Miriam Vogel — Peter’s recommendation. Balances the big philosophical questions with practical frameworks and ideas you can actually use.


We hope you enjoy this conversation as much as we did. It’s a masterclass in how to think about governance as something that enables rather than constrains — hosted with warmth and real expertise by Catie.
