From Demos to Defensible in Financial Services: Copyright & Compliance for Enterprise AI - Naveen Kumar of TD Bank (19 min)
- Release date: 2026-02-10
- Listen on Spotify
- Episode description:
Today's guest is Naveen Kumar, Head of AI Governance at TD Bank. With extensive experience in AI risk management and governance, he provides actionable strategies for secure AI scaling in regulated environments. Naveen joins Emerj Editorial Director Matthew DeMello to discuss foundational challenges blocking AI adoption in banking, including data leakage, prompt injection, shadow AI, and hallucinations. Naveen also shares practical takeaways, such as role-based AI guardrails for data access, safe sandboxes for experimentation, hybrid deployments to protect sensitive data, and treating AI agents as de-risked employees with human oversight for compliance and ROI. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!
Summary
- ⚠️ Core AI Risks: Copyright reproduction, compliance violations, data leakage to vendors, and poor auditability plague banking AI adoption, shifting the conversation from experimentation to exposure.
- 🛡️ Guardrails & Oversight: Implement output filters, human-in-the-loop reviews, and role-based policies to prevent non-compliant content, treating AI agents like de-risked employees.
- 📊 Data Transparency: Use GPS-like, end-to-end tracking pipelines for data sources, ensuring visibility into whether each AI output draws on internal, licensed, or public inputs.
- 🏠 Safe Experimentation: Deploy sandboxes to prevent shadow AI, and use hybrid deployments that keep sensitive data internal while routing low-risk tasks to the cloud.
- 📚 Build Literacy & Audit: Train teams on safe AI usage (think driver's ed), adopt real-time monitoring, and log everything for regulators to enable productive, defensible AI.
Insights
How does AI ‘know too much’ and accidentally reproduce copyrighted content in banking outputs?
Time: 3:38 – 4:16
Category: AI & Intellectual Property
Answer: AI risks generating outputs that mirror phrasing from copyrighted internal documents, training data, or online sources, akin to a child reciting book paragraphs. This creates legal issues for teams like marketing drafting reports from proprietary market research. It underscores the need for vigilance in high-stakes financial environments. (Start at 3:38)
What compliance pitfalls arise when AI violates regulations or privacy policies unintentionally?
Time: 4:18 – 4:31
Category: AI Governance & Laws
Answer: AI can produce outputs breaching regulatory, privacy, or internal policies, even accidentally, posing significant firm-wide risks. Examples include non-compliant content generation that regulators scrutinize. This shifts focus from experimentation to exposure in banking workflows. (Start at 4:18)
Why is feeding proprietary data to AI vendors a hidden licensing nightmare for banks?
Time: 4:44 – 5:02
Category: Privacy in the AI Era
Answer: Using confidential internal reports to fine-tune vendor models creates copies of proprietary data accessible to others, leading to legal headaches. Banks must track data usage meticulously to avoid such exposures. This highlights data handling as a core governance challenge. (Start at 4:44)
How does AI’s lack of traceability create auditability headaches for regulators and legal teams?
Time: 5:04 – 5:29
Category: AI Governance & Laws
Answer: AI-generated content often can't be traced to exact sources, frustrating auditors demanding origins of reports. This 'I don't remember' issue amplifies risks in regulated sectors like banking. Robust logging is essential for defensibility. (Start at 5:04)
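As a rough illustration of the audit-logging idea above, here is a minimal Python sketch (the `log_generation` helper and its field names are hypothetical, not from the episode) that writes one append-only record per AI generation, so a reviewer can later trace any output back to its inputs:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, output: str, sources: list[str],
                   model: str, user_role: str,
                   log_path: str = "ai_audit.jsonl") -> None:
    """Append one audit record per AI generation so legal teams and
    regulators can trace any output back to its inputs."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "user_role": user_role,
        # Hashes prove what was said without storing sensitive text verbatim.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        # e.g., ["internal:market-research-2025", "licensed:vendor-feed"]
        "sources": sources,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

The append-only JSONL format is deliberate: it gives auditors an immutable-by-convention trail that answers the 'where did this come from?' question the episode highlights.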
Can transparent data pipelines act like GPS trackers to make AI safe and productive in enterprises?
Time: 7:20 – 7:47
Category: AI Governance & Laws
Answer: Tracking data end-to-end (internal, licensed, public) ensures visibility into what AI accesses for every output. This prevents copyright issues and builds regulatory trust. It's a foundational step toward defensible AI adoption. (Start at 7:20)
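To make the GPS analogy concrete, here is a toy Python sketch (hypothetical names; naive keyword matching stands in for a real retrieval-plus-model call) that tags each document with a provenance label and carries those labels through to the output, so every answer arrives with its lineage attached:

```python
from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    INTERNAL = "internal"   # proprietary bank documents
    LICENSED = "licensed"   # third-party data under contract
    PUBLIC = "public"       # openly available sources

@dataclass
class TrackedDocument:
    doc_id: str
    text: str
    provenance: Provenance

@dataclass
class TrackedOutput:
    text: str
    lineage: list[str] = field(default_factory=list)

def answer_with_lineage(question: str, corpus: list[TrackedDocument]) -> TrackedOutput:
    # Naive retrieval: keep documents sharing any word with the question.
    hits = [d for d in corpus
            if any(w.lower() in d.text.lower() for w in question.split())]
    lineage = [f"{d.provenance.value}:{d.doc_id}" for d in hits]
    # A real system would call a model here; we return a stub answer.
    return TrackedOutput(text=f"[draft based on {len(hits)} documents]",
                         lineage=lineage)

corpus = [TrackedDocument("mr-2025", "Internal research on mortgage rates",
                          Provenance.INTERNAL)]
print(answer_with_lineage("What are mortgage rates doing?", corpus).lineage)
# -> ['internal:mr-2025']
```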
Why are output guardrails and human-in-the-loop reviews non-negotiable for compliant AI in banking?
Time: 7:49 – 8:34
Category: AI Governance & Laws
Answer: Guardrails flag copyrighted or non-compliant phrases in AI suggestions, while humans validate outputs before external use. This hybrid approach balances innovation with safety, avoiding self-driving AI pitfalls. Most companies prioritize this over full autonomy. (Start at 7:49)
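A minimal sketch of the output-guardrail idea, assuming a hypothetical denylist of protected phrases (a production filter would use fuzzy or embedding-based matching, but the routing logic is the point: fail closed and escalate to a human reviewer rather than publish):

```python
import re

# Hypothetical denylist: verbatim phrases from licensed or internal
# documents that must never appear in customer-facing output.
PROTECTED_PHRASES = [
    "proprietary market outlook 2025",
    "confidential credit scoring methodology",
]

def guardrail_check(draft: str) -> tuple[bool, list[str]]:
    """Return (passed, flagged_phrases); a failed check should route the
    draft to human-in-the-loop review, not silently publish or discard it."""
    flagged = [p for p in PROTECTED_PHRASES
               if re.search(re.escape(p), draft, re.IGNORECASE)]
    return (len(flagged) == 0, flagged)

passed, flags = guardrail_check("Per our proprietary market outlook 2025, ...")
if not passed:
    print(f"Escalating to reviewer; flagged phrases: {flags}")
```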
How can sandboxes and AI literacy training turn ‘teenage AI’ rebellion into safe enterprise innovation?
Time: 11:19 – 12:29
Category: AI in Workforce Disruption
Answer: Safe sandboxes allow experimentation without sensitive data exposure, countering shadow AI from unapproved tools. Training builds literacy on copyright and compliance, like driver's ed for AI users. Monitoring tools flag risks in real time. (Start at 11:19)
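One way to picture the role-based sandbox idea: a small policy table (the roles, data classes, and tiers below are invented for illustration) that decides which data classifications each role may send to which deployment tier, denying anything outside the table by default:

```python
# Hypothetical policy: which data classifications each role may use,
# and which deployment tier (cloud, hybrid, on_prem) serves that role.
SANDBOX_POLICY = {
    "marketing":  {"allowed_data": {"public"},                           "tier": "cloud"},
    "analyst":    {"allowed_data": {"public", "internal"},               "tier": "hybrid"},
    "compliance": {"allowed_data": {"public", "internal", "restricted"}, "tier": "on_prem"},
}

def can_use(role: str, data_class: str) -> bool:
    """Deny by default: unknown roles and unlisted data classes fail."""
    policy = SANDBOX_POLICY.get(role)
    return policy is not None and data_class in policy["allowed_data"]

assert can_use("analyst", "internal")          # internal data stays in the hybrid tier
assert not can_use("marketing", "restricted")  # low-risk roles never see sensitive data
```

The deny-by-default check mirrors the episode's hybrid-deployment point: sensitive data stays internal, while low-risk roles and tasks can use cloud tooling.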
Does customer consent for data purposes extend to AI repurposing, and how can transparency avoid privacy breaches?
Time: 13:34 – 16:46
Category: Privacy in the AI Era
Answer: Banks must ensure data collected under a specific consent (e.g., for a home equity line) isn't repurposed for unrelated AI tasks like marketing without fresh approval, which risks privacy breaches. Clear user agreements and ongoing transparency foster trust, paralleling healthcare consent models. Avoiding vague, cookie-style opt-ins is key. (Start at 13:34)