Navigating the Ethics of AI (37 min)
- Release date: 2026-01-10
- Listen on Spotify: Open episode
- Episode description:
Summary
In this podcast episode, Chad Burmeister interviews Ben Roome, co-founder of Ethical Resolve, about the ethical implications of AI across sectors. They explore how AI transforms customer experiences, the importance of governance and risk management, and the potential for bias in AI systems. The conversation also touches on generative AI's opportunities and challenges, the future of human-AI interaction, and the skills sales professionals need to thrive in an AI-driven world.

Takeaways
- AI is transforming customer experiences across all sectors.
- Ethical procurement of AI tools is crucial for organizations.
- Bias in AI can be mitigated but requires careful governance.
- Legal risks are significant when deploying AI systems.
- Generative AI presents both opportunities and challenges.
- Trustworthiness in AI is rooted in organizational culture.
- AI companions may have psychological impacts on users.
- Sales professionals must critically evaluate AI outputs.
- The future of work will involve retraining and adapting to AI tools.
- Emerging AI technologies require ongoing ethical consideration.

Chapters
- 00:00 Introduction to Ethical AI
- 02:05 Transforming Customer Experience with AI
- 07:10 Risks and Governance in AI
- 10:19 Legal and Trust Risks in AI Deployment
- 13:38 Generative AI: Opportunities and Challenges
- 19:23 The Future of AI and Human Interaction
- 25:28 Emerging Trends in AI Ethics
- 28:58 Skills for Sales Professionals in the AI Era

The AI for Sales Podcast is brought to you by BDR.ai, Nooks.ai, and ZoomInfo, the go-to-market intelligence platform that accelerates revenue growth. Skip the forms and website hunting: Chad will connect you directly with the right person at any of these companies. 👉 Visit www.SDR.ai/intro to unlock your direct line.
Summary
- 🔍 Ethical AI Procurement Boom: Non-AI-first companies are procuring powerful tools, shifting focus to diligence and governance to align with values and avert risks in sales, HR, and beyond.
- ⚖️ Bias and Legal Pitfalls: AI hiring tools like HireVue expose biases and legal violations, as in CVS’s settlement, urging systematic risk identification and mitigation.
- 🚨 Hallucinations’ Real Harms: Unchecked GenAI outputs, like deadly mushroom advice, highlight misinformation dangers, demanding human fact-checking and oversight.
- 🧠 Boosted Critical Thinking: AI enhances productivity but requires sharper scrutiny of outputs, akin to calculators evolving math skills, fostering collaborative human-AI reasoning.
- 🤖 Future Companion Risks: Agentic AI and companions pose psychological threats via anthropomorphism, necessitating mental health safeguards and value-aligned decisions.
Insights
How is the rise of AI procurement shifting ethical responsibilities from builders to buyers?
Time: 3:48 – 5:09
Category: AI Governance & Laws
Answer: Companies are increasingly licensing AI tools rather than building them, requiring procurement teams to conduct due diligence to ensure alignment with organizational values and mitigate risks like bias. This trend affects sales, HR, and other departments, demanding governance to maximize productivity without causing harm. (Start at 3:48)
Can AI truly eliminate human biases in hiring, or does it just perpetuate them?
Time: 5:16 – 7:43
Category: AI Bias & Fairness
Answer: While AI can process resumes at scale, it inherits biases from training data that reflect past human decisions, necessitating precision and recall thresholds to mitigate unfair impacts on vulnerable groups. With proper tuning, such systems aim to outperform the biases of individual human reviewers. (Start at 5:16)
What governance failures led CVS to settle a lawsuit over an AI hiring tool?
Time: 12:41 – 14:19
Category: AI Governance & Laws
Answer: CVS used HireVue's affective-computing features, which resembled the lie-detector tests banned in Massachusetts, highlighting the legal risk of uncritically adopting third-party AI without due diligence. The case underscores the need to identify and mitigate emergent risks across all deployed systems. (Start at 12:41)
How can AI hallucinations turn deadly in unchecked content like mushroom articles?
Time: 14:30 – 17:19
Category: AI Bias & Fairness
Answer: Generative AI produced the dangerous advice to 'taste all mushrooms' in a published article; without human oversight to catch such falsehoods, readers could have been seriously harmed. This illustrates the risk of disseminating unverified AI outputs, especially without domain expertise. (Start at 14:30)
Does relying on AI calculators for math mean we stop learning reasoning?
Time: 23:15 – 25:13
Category: AI Literacy & Public Awareness
Answer: Just as calculators shifted math education from mental arithmetic toward higher-order reasoning, AI demands enhanced critical thinking to scrutinize outputs, improve content, and engage collaboratively, as seen in evolving higher-ed assignments. (Start at 23:15)
What psychological dangers lurk in forming emotional bonds with AI companions?
Time: 29:11 – 31:19
Category: AI in Mental Health
Answer: AI chatbots are becoming personal supports and even therapists, risking anthropomorphism and mental health crises, as in a suicide linked to ChatGPT conversations that went unflagged. Emerging discussions call for rules on reporting and alignment with human values. (Start at 29:11)