AI Adoption and Skepticism in Regulated Industries - with Ylan Kazi of Blue Cross Blue Shield of North Dakota (21 min)
ai-for-personalized-medicine ai-governance-laws ai-human-identity ai-in-everyday-life ai-moral-decision-making
- Release date: 2026-01-27
- Listen on Spotify
- Episode description:
Today's guest is Ylan Kazi, Chief Data and AI Officer at Blue Cross Blue Shield of North Dakota. Ylan joins Emerj Client Narrative & Content Strategy Lead Nick Gertsch to explore how to balance AI innovation with risk governance in regulated healthcare sectors. Ylan also shares practical takeaways: forming cross-functional teams to develop realistic policies, applying an "AI Hippocratic Oath" by starting with low-risk use cases to refine processes, and leveraging AI to improve patient experiences through lab result explanations and wait time predictions. Want to share your AI adoption story with executive peers? Visit emerj.com/expert2 for more information and to become a potential future guest on the 'AI in Business' podcast!
Summary
- ⚖️ Realistic Risk Framing: Compare AI risks to medical errors and driving, using safeguards like governance to enable innovation without paralysis.
- 🤝 Cross-Functional Governance: Involve legal, innovators, and frontline staff to craft practical policies balancing safety and real-world AI deployment.
- 🚀 Low-Risk Experimentation: Begin with manageable use cases to iron out issues, standardize processes, and build toward high-impact applications.
- 🛡️ AI Hippocratic Oath: Embrace 'do no harm' by acting thoughtfully—in patient care, inaction can cause more harm than well-governed progress.
- 👥 Patient-Driven Adoption: Leverage cultural AI familiarity, like ChatGPT for lab explanations, to enhance experiences and meet rising expectations.
Insights
Why should healthcare leaders tolerate calculated AI risks when medical errors already harm hundreds of thousands annually?
Time: 3:19 – 5:23
Category: AI Governance & Laws
Answer: Medical errors cause significant harm, yet medicine continues with safeguards; AI can follow suit, delivering benefits while minimizing risks through governance. This realistic framing counters excessive risk aversion. (Start at 3:19)
How does everyday driving reveal the ideal balance of risk-taking and safeguards for AI adoption?
Time: 6:17 – 7:06
Category: AI in Everyday Life
Answer: Driving is risky but managed with seatbelts, speed limits, and signs; AI needs similar governance guardrails that unlock patient benefits without over-constraining innovation. This analogy shifts mindsets from fear to practical caution. (Start at 6:17)
Why do we accept human errors and lies more readily than AI’s 95% accuracy?
Time: 7:18 – 8:17
Category: AI & Human Identity
Answer: Society holds AI to unrealistically high standards compared to fallible humans, who err or deceive yet collaborate effectively. Recognizing this double standard encourages fair AI evaluation and adoption. (Start at 7:18)
How can cross-functional teams prevent impractical AI policies in regulated sectors?
Time: 9:59 – 11:44
Category: AI Governance & Laws
Answer: Including legal, finance, innovators, and frontline staff ensures policies are realistic and balanced, avoiding overly academic rules written by safety experts without implementation experience. This fosters innovation alongside compliance. (Start at 9:59)
What does an ‘AI Hippocratic Oath’ teach about experimentation in healthcare?
Time: 12:24 – 14:15
Category: AI & Moral Decision-Making
Answer: Like 'do no harm' in medicine, it urges action over inaction—starting with low-risk use cases to learn, standardize processes, and scale safely without fearing minor errors. Quantifying risks distinguishes manageable downsides from catastrophes. (Start at 12:24)
How is patient-led AI use like ChatGPT for lab results driving healthcare transformation?
Time: 15:29 – 17:11
Category: AI for Personalized Medicine
Answer: Patients organically use LLMs to simplify clinical notes, demanding better experiences akin to retail; providers can embrace this for explanations, wait time predictions, and affordability to reduce friction. (Start at 15:29)