#209: Claude Mythos, Project Glasswing, Claude Code Leak, OpenAI Raises $122B & the End of Middle Management (1h 46m)
ai-driven-innovation-economy
ai-global-economic-shifts
ai-governance-laws
ai-in-cybersecurity
ai-in-everyday-life
ai-in-workforce-disruption
ai-intellectual-property
ai-moral-decision-making
existential-ai-risks
post-work-ai-society
privacy-in-the-ai-era
- Release date: 2026-04-14
- Listen on Spotify
- Episode description:

An Anthropic AI model powerful enough to trigger emergency government briefings. A source code leak. A $122 billion OpenAI funding round. A Ronan Farrow exposé. Physical attacks on Sam Altman. Paul and Mike are back with two weeks of AI news and the analysis you need to make sense of it all.

Show Notes: Access the show notes and show links here

Timestamps:
- 00:00:00 — Intro
- 00:05:44 — Claude Mythos and Project Glasswing
- 00:32:03 — Claude Code Leak + Anthropic Subscription Shakeup
- 00:42:35 — Major OpenAI Updates
- 00:59:30 — AI for Writers Summit
- 01:01:41 — Mercor Breach
- 01:06:25 — Karpathy's LLM Knowledge Bases Go Viral
- 01:10:20 — AI and Jobs Update
- 01:19:34 — AI and Politics Update
- 01:25:32 — HubSpot Shifts to Outcome-Based AI Pricing
- 01:30:51 — SmarterX AI Use Case Spotlight
- 01:36:25 — AI Academy Spotlight
- 01:40:23 — AI Product and Funding Updates

This episode is brought to you by AI Academy by SmarterX. AI Academy is your gateway to personalized AI learning for professionals and teams. Discover our new on-demand courses, live classes, certifications, and a smarter way to master AI. Learn more here.

Links: Visit our website · Receive our weekly newsletter · Join our community: Slack Community, LinkedIn, Twitter, Instagram, Facebook, YouTube

Looking for content and resources? Register for a free webinar · Come to our next Marketing AI Conference · Enroll in our AI Academy
Summary
- 🔒 Cybersecurity Wake-Up Call: Claude Mythos’s hacking prowess exposed AI’s dual role as defender and threat, prompting elite access programs but highlighting urgent needs for containment and industry collaboration.
- ⚖️ Power Concentration Risks: Withholding advanced models from public release risks centralizing AI control among big tech and governments, potentially widening societal divides and enabling scaled fraud.
- 💼 Workforce Transformation Accelerates: AI-driven job cuts are mounting and economists are revising long-held views; companies like Block are flattening hierarchies, prompting calls for new skills rubrics and policy safeguards such as public wealth funds.
- 📜 Policy Push for Resilience: OpenAI’s blueprint and California’s order signal a ‘New Deal’ for AI, emphasizing public stakes, portable benefits, and safety to navigate economic shifts and public anxiety.
- 🧠 Emerging AI Capabilities: From emotional emulation in models to dynamic "second brains," AI is evolving toward autonomous, proactive systems, which demands caution in integration and sustained ethical oversight.
Insights
- How might the non-release of powerful AI models like Mythos lead to dangerous power centralization in the hands of big tech and governments?
- Time: 0:00 – 0:20
- Answer: Anthropic withheld Mythos from public release due to its hacking prowess, limiting access to select entities like Apple, Amazon, and banks via Project Glasswing. This could exacerbate inequalities by concentrating AI control among powerful corporations and institutions, sidelining smaller players and individuals. The discussion warns of broader implications for software security, fraud scaling, and societal equity in an AI-driven world.
- What if frontier AI models like Claude Mythos could autonomously discover thousands of zero-day vulnerabilities, revolutionizing cybersecurity but also risking unprecedented cyberweapons?
- Time: 5:44 – 8:37
- Answer: Anthropic’s Claude Mythos demonstrated exceptional ability to identify and exploit vulnerabilities in major operating systems and software, prompting emergency meetings with government and bank leaders. This capability highlights AI’s potential to enhance defensive patching through initiatives like Project Glasswing, but it also raises alarms about misuse by bad actors, potentially centralizing power among a few elite companies. The transcript underscores the urgency for industry-wide consortia to manage these risks responsibly.
- Could AI’s ability to emulate human emotions make it even harder to align models with human values, amplifying risks in autonomous systems?
- What happens when rapid AI company growth, like Anthropic’s, leads to operational mishaps such as source code leaks, accelerating open-source threats?
- Can policy proposals like OpenAI’s ‘Intelligence Age’ framework truly democratize AI benefits through public wealth funds and portable benefits?
- What if LLMs evolve into persistent ‘second brains’ for knowledge workers, transforming how we maintain and interact with personal wikis?
- How is AI accelerating job displacements, forcing economists to rethink long-held views on technology’s net positive employment effects?
- How might state-level AI regulations, like California’s executive order, set precedents for federal governance amid growing existential risks?