Creating a Single Source of Truth for Enterprise Legal Work - with Christo Siebrits of AbbVie (21 min)
ai-bias-fairness
ai-driven-innovation-economy
ai-governance-laws
ai-literacy-public-awareness
privacy-in-the-ai-era
- Release date: 2026-03-31
- Listen on Spotify: Open episode
- Episode description:
Enterprise legal departments are currently navigating a breakdown in AI adoption caused by scattered data, inconsistent global regulations, and a lack of clear governance for grading automated workflows. In this episode, Christo Siebrits, Senior Associate and General Counsel at AbbVie, outlines how a validated internal large language model environment, combined with a forced-ranking strategy for use cases, can mitigate risk while focusing technical resources on high-value initiatives. The discussion covers practical frameworks for cross-functional training, aligning with the EU AI Act, and integrating legal oversight into early-stage technical development to ensure scalable and compliant innovation. Want to share your AI adoption story with executive peers? Visit go.emerj.com/expert for more information and to be a potential future guest on the 'AI in Business' podcast!
Summary
- 🔒 Secure Internal LLMs: Enterprises like AbbVie deploy validated internal LLM platforms with model options to confidently process sensitive data within secure boundaries.
- 📊 Forced Ranking Use Cases: Ranking top AI proofs-of-concept organization-wide eliminates redundancy, channeling investments into highest-value initiatives.
- 🤝 Early Team Integration: Involving legal, cyber, and privacy experts from day one crafts compliant roadmaps and mitigates deployment risks.
- 📈 Boosting Adoption: Targeted training and success stories overcome employee reluctance, fostering AI literacy and participation.
- ⚖️ Navigating Regulations: The EU’s strict risk tiers contrast with the US’s innovation-first approach, demanding consistent definitions and multinational compliance strategies.
Insights
- How can enterprises build secure internal AI environments for sensitive data?
- Time: 0:23 – 6:40
- Answer: AbbVie developed an internal large language model platform allowing employees to select from validated models like ChatGPT variants, ensuring it handles the most sensitive information within established security and compliance frameworks. This approach builds confidence for widespread use while maintaining risk controls.
- Why is forced ranking essential for prioritizing AI use cases?
- Time: 0:32 – 18:29
- Answer: In large organizations, multiple teams often duplicate efforts on similar proofs-of-concept. Forced ranking identifies top priorities across departments, preventing redundant investments and focusing resources on high-ROI applications, ensuring efficient allocation of time and budget.
- What role does early legal and cybersecurity involvement play in AI deployment?
- How can organizations overcome employee resistance to AI adoption?
- Why does ‘how you use AI’ matter more than just access to tools?
- In what ways do EU and US AI regulations philosophically diverge?