Funding Agentic AI in HR Without Losing Control - with Carey Smith of Blue Cross and Blue Shield (15 min)
- Release date: 2026-03-06
- Listen on Spotify: Open episode
- Episode description:
Today's guest is Carey Smith, Former President and CIO of XcelerateHealth and Chief Technology Innovation Officer (CTIO) of Blue Cross and Blue Shield of Minnesota. XcelerateHealth is a health-tech startup and business unit of Blue Cross and Blue Shield of Minnesota focused on AI-driven digital products that transform healthcare insurance experiences. Carey joins Emerj's Nick Gertsch to discuss how leaders can structure talent and workforce AI so decisions are consistent, reviewable, and aligned with organizational controls. Smith also shares practical steps for tightening decision rights, improving data readiness, and designing workflows where AI accelerates hiring and mobility without increasing risk. This episode is sponsored by Eightfold AI. Learn how brands work with Emerj and other Emerj Media options at go.emerj.com/partner. Want to share your AI adoption story with executive peers? Visit emerj.com/expert for more information and to be a potential future guest on the 'AI in Business' podcast!
Summary
- 🔒 Governance First: Prioritize defining decision rights, bias thresholds, and audit mechanisms before deploying AI tools to make talent systems defensible and trusted.
- 🎯 Narrow Use Cases: Start with low-risk, high-impact areas like skills adjacency, internal mobility, and workforce planning to build momentum and value quickly.
- 🤝 Human-in-the-Loop: Agentic AI augments human judgment as a force multiplier, providing auditable insights while humans make final decisions within policy guardrails.
- ⚠️ Mitigate Bias Risks: Address black box issues, fragmented data, and regulatory scrutiny to avoid legal liabilities and workforce skepticism in talent decisions.
- 📈 Scale Beyond Pilots: Shift from experimental demos to integrated architectures with unified data and compliance, ensuring AI drives performance without existential risks.
Insights
How can enterprises bridge the black box accountability gap in AI for talent decisions?
Time: 3:17 – 4:05
Category: AI Bias & Fairness
Answer: Leaders treat AI as a magic wand for efficiency, but bias in talent decisions creates legal and cultural liabilities due to fragmented HR data, regulatory scrutiny, and a lack of explainability, leading to workforce skepticism. (Start at 3:17)
How does human-in-the-loop transform agentic AI into a force multiplier for HR?
Time: 6:47 – 8:18
Category: AI in Workforce Disruption
Answer: Agentic AI should surface insights, provide auditable reasoning trails, and augment human judgment rather than replace it, functioning like a chief workforce analyst within policy guardrails. (Start at 6:47)
What shifts enterprises from AI pilots to scalable, defensible systems?
Time: 9:21 – 11:16
Category: AI Governance & Laws
Answer: Stop endless piloting and architect governance-first with narrow use cases, human oversight, and compliance audits to ensure scalability without increasing organizational risk. (Start at 9:21)
Why must governance precede AI tools in scaling talent management systems?
Time: 9:49 – 10:26
Category: AI Governance & Laws
Answer: A governance-first framework defines decision rights, bias thresholds, explainability standards, and audit mechanisms before deployment to mitigate regulatory and reputational risks. (Start at 9:49)
Why integrate HR data silos to build trustworthy AI systems?
Time: 10:26 – 10:34
Category: AI Bias & Fairness
Answer: Fragmented data across systems hinders accurate AI decisions; a single source of truth enables reliable insights while reducing bias and compliance risks. (Start at 10:26)
What low-risk use cases unlock high-value AI applications in HR?
Time: 10:34 – 11:01
Category: AI in Workforce Disruption
Answer: Prioritize workforce planning, internal mobility, and skills adjacency mapping over complex hiring decisions, as these offer strategic value with lower regulatory exposure. (Start at 10:34)