Translating AI Models into Business Value: From Governance to Deployment - Thomas Holmes of Akur8 (27 min)
- Release date: 2026-01-21
- Listen on Spotify
- Episode description:
Today's guest is Thomas Holmes, Chief Actuary, North America at Akur8. Holmes works at the intersection of actuarial pricing, AI governance, and operationalizing analytics inside regulated insurance environments. Thomas joins Emerj Editorial Director Matthew DeMello to discuss how insurers move from AI experimentation to enterprise deployment — particularly when models must satisfy regulators, executives, and the realities of day-to-day actuarial and IT workflows. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast! This episode is sponsored by Akur8. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
Summary
- 🔄 Stakeholder Alignment: Regulators and executives converge on needing clear AI understanding; translation layers bridge actuarial, IT, and leadership languages for seamless adoption.
- 🔍 Transparency First: In pricing, explainable models prevent rate shocks and churn, trumping black boxes even where regulations allow complexity.
- 🛡️ Opinionated Guardrails: Structured frameworks with checks enable scaling from experiments to enterprise, embedding governance and risk detection.
- ⚖️ Build vs Buy ROI: Buy standard tools to cut maintenance debt; build only competitive edges, rigorously vetting vendors for long-term fit.
- 📊 Single Truth Source: One unified rate object eliminates rework, errors, and versioning chaos across actuarial analysis and IT production.
Insights
How can insurers align regulators and executives through ‘translation layers’ in AI deployment?
Time: 3:53 – 5:31
Category: AI Governance & Laws, AI-Driven Innovation Economy
Answer: Regulators want to understand AI processes, their rationale, and their correctness, mirroring executives' needs along with business fit. Tools that transform actuarial outputs into formats that resonate with IT and leadership reduce communication barriers across diverse teams.
Why do transparent AI models outperform black boxes in insurance pricing despite regulatory flexibility?
Time: 7:42 – 11:01
Category: AI Governance & Laws, AI Bias & Fairness
Answer: Even where Europe's more flexible pricing rules allow complex models, transparency prevents unexplained rate hikes, regulatory scrutiny, customer churn, and retention losses. High-consequence decisions demand local explainability, not just accurate aggregate predictions.
What signals readiness to scale AI from experimentation to enterprise-wide deployment in insurance?
Time: 11:46 – 14:23
Category: AI Governance & Laws
Answer: After experimentation, leaders identify risks and desired outcomes, then implement opinionated frameworks with guardrails to standardize processes. This ensures governance, early issue detection, and scalability without sacrificing flexibility.
How do opinionated frameworks customize AI governance to industry-specific needs like actuarial soundness?
Time: 14:44 – 16:03
Category: AI Governance & Laws
Answer: Tailored to insurance, these frameworks define variable selection, model types, and explainability standards, ensuring regulatory compliance and transparency. They replace generic checklists with practical processes that can evolve.
How should insurers decide between building in-house AI tools and buying vendor solutions?
Time: 16:32 – 19:01
Category: AI Investment Trends, AI-Driven Innovation Economy
Answer: Prioritize ROI by building only proprietary, differentiating capabilities; buy commoditized tools (e.g., Excel alternatives) to avoid wasted actuarial time, technical debt, and maintenance costs. Evaluate vendors for support quality and alignment with how the product will evolve.
Why is a single source of truth essential for bridging actuarial modeling and IT implementation?
Time: 19:59 – 23:19
Category: AI-Driven Innovation Economy, AI Governance & Laws
Answer: Fragmented sources of truth (Excel, Python, rating engines) cause reimplementation errors and rework; a single source synchronizes re-rating, production, and governance, simplifying versioning and actuarial-to-IT handoffs.