#203: Anthropic vs. Pentagon Round 3, NYT AI vs. Humans Writing Test, Atlassian’s AI-Era Layoffs & Grammarly’s Expert Cloning Scandal (1h 41m)
ai-antitrust-concerns ai-bias-fairness ai-driven-innovation-economy ai-generated-content-in-academia ai-governance-laws ai-in-art-music-creation ai-in-cybersecurity ai-in-everyday-life ai-in-national-security ai-in-workforce-disruption ai-intellectual-property ai-investment-trends ai-literacy-public-awareness ai-monetization-strategies ai-storytelling-media ai-surveillance-privacy ai-utopias-vs-dystopias cultural-creativity-with-ai existential-ai-risks post-work-ai-society
- Release date: 2026-03-17
- Listen on Spotify: Open episode
- Episode description:
Anthropic has filed two federal lawsuits to block the Pentagon's supply chain risk designation, and the back-and-forth on X between the Pentagon CTO and AI policy experts is revealing what this fight is really about. Paul and Mike unpack the politics, the implications, and why a deal is inevitable. Then: 86,000 people took the NYT's AI writing quiz, and most preferred the machine. Paul shares his human-to-machine writing scale and asks the question that actually matters: not whether AI can write, but when should we let it? Plus Atlassian's 1,600 AI-driven layoffs, Amazon's AI-caused outages, McKinsey's chatbot getting hacked in two hours, and more.
- Show Notes: Access the show notes and show links here. Click here to take this week's AI Pulse.
- Timestamps:
  - 00:00:00 — Intro
  - 00:03:11 — AI Pulse Survey Results
  - 00:07:48 — Anthropic vs. Pentagon Round 3
  - 00:30:02 — New York Times Releases Controversial "AI Writing Quality" Quiz
  - 00:46:18 — Atlassian Layoffs and Job Loss Dashboard
  - 00:58:49 — Adobe CEO Stepping Down
  - 01:07:14 — Amazon AI-Related Outages and Engineering Struggles
  - 01:14:28 — McKinsey AI Chatbot Hacked
  - 01:19:49 — AI Politics Update
  - 01:24:06 — Grammarly AI "Expert Review" Controversy
  - 01:30:51 — Andrej Karpathy's Autoresearch Agent
  - 01:34:47 — AI Product and Funding Updates
- Sponsor: This week’s episode is sponsored by our 2026 State of AI Report. This year, we’re going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization, and we need your help to create the most comprehensive report yet. It’s a quick seven-minute lift. In return, you’ll get the full report for free when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery Membership. Go to smarterx.ai/survey to share your input.
- Links: Visit our website | Receive our weekly newsletter | Join our community: Slack, LinkedIn, Twitter, Instagram, Facebook, YouTube
- Looking for content and resources? Register for a free webinar, come to our next Marketing AI Conference, or enroll in our AI Academy.
Summary
- ⚔️ Anthropic-Pentagon Standoff: Escalating lawsuits and amicus briefs expose political rifts over AI safety in defense, risking US edge against China while Palantir confirms ongoing Claude use.
- 📝 AI Writing Beats Humans?: NYT quiz reveals 54% prefer AI prose mimicking literary giants, fueling debates on creativity and when humans should wield the pen.
- 💼 AI Layoffs Surge: Atlassian axes 10% of staff for the ‘AI era’; 76k AI-linked job losses tracked globally, with entry-level young workers hit hardest amid the rise of agents.
- 🔒 AI Risks Explode: Amazon outages caused by bad AI advice and a McKinsey chatbot hack via open APIs highlight governance failures in rushed adoption.
- 🛡️ Urgent Contingency Calls: Hosts demand contingency planning for joblessness and energy strains; groups like the Windfall Trust push safety nets as disruption looms.
Insights
Should businesses plan contingencies for mass AI joblessness now?
Time: 0:12 – 0:34
Category: Post-Work AI Society, AI Governance & Laws, AI Utopias vs. Dystopias
Answer: Hosts urge scenario planning for AI’s economic disruption, criticizing overconfident assurances from leaders. The Windfall Trust and Brookings advocate safety nets and planning on par with national security threats, as underemployment for recent grads hits 42.5%. Ignoring mid-to-worst-case outcomes risks societal harm amid accelerating automation. (Start at 0:12)
Are multi-model strategies becoming the norm for AI users?
Time: 3:30 – 6:23
Category: AI in Everyday Life, AI Literacy & Public Awareness, AI Monetization Strategies
Answer: A podcast poll shows 92% of listeners regularly use two or more AI models (60% use 2–3, 32% use 4+), and 67% are reconsidering their tools after the Anthropic-Pentagon dispute; 54% report wide-open AI access at work. This reflects diversification amid politics, with capabilities driving choices over single-vendor lock-in. (Start at 3:30)
Will political clashes between AI labs and governments undermine national security efforts?
Time: 7:36 – 30:02
Category: AI Governance & Laws, AI in National Security, AI & Antitrust Concerns
Answer: Anthropic’s lawsuits against the Pentagon’s supply chain risk label highlight tensions over AI safety guardrails, with support from Microsoft, AI researchers, and former officials warning of weakened US AI capabilities against adversaries like China. The precedent could force labs to strip safety features, risking unreliable models in defense while exposing political rather than technical motivations. (Start at 7:36)
Can AI writing fool readers into preferring it over literary masters?
Time: 30:04 – 46:18
Category: AI in Art & Music Creation, AI-Generated Content in Academia, AI & Storytelling/Media
Answer: The New York Times quiz showed 54% of 86,000 readers preferring AI-generated passages mimicking authors like Cormac McCarthy and Ursula K. Le Guin, sparking debate over methodology but signaling that the quality gap is closing at the sentence level. This challenges notions of human creativity’s uniqueness, pushing writers to redefine when AI assists versus replaces an authentic voice. (Start at 30:04)
Are AI-driven layoffs signaling the end of traditional software jobs?
Time: 46:18 – 58:02
Category: AI in Workforce Disruption, Post-Work AI Society, AI Investment Trends
Answer: Atlassian cut 10% of its workforce (1,600 jobs, mostly engineers) explicitly for the ‘AI era’ despite strong growth, topping jobloss.ai’s tally of 76,800 AI-linked global losses; ServiceNow’s CEO predicts unemployment in the mid-30-percent range for new grads as agents displace entry-level roles. This marks a shift in which profitable firms restructure for AI efficiency, pressuring legacy SaaS models. (Start at 46:18)
Is over-reliance on AI causing enterprise failures like Amazon’s outages?
Time: 67:14 – 79:50
Category: AI in Cybersecurity, AI Surveillance & Privacy, AI Governance & Laws
Answer: Amazon’s four high-severity incidents, including a six-hour checkout outage, stemmed from engineers following flawed AI advice drawn from outdated wikis amid pressure to reach 80% AI tool adoption; similar risks surfaced in McKinsey’s chatbot hack, which exposed millions of chats via open APIs. This underscores governance gaps in rushed AI integration that amplify human error. (Start at 67:14)
Will unauthorized AI clones of experts erode trust in tools like Grammarly?
Time: 84:05 – 90:49
Category: AI & Intellectual Property, AI Bias & Fairness, Cultural Creativity with AI
Answer: Grammarly’s ‘Expert Review’ mimicked figures like Stephen King using public data without consent, prompting a lawsuit and a swift shutdown; the backlash highlights ‘take first, apologize later’ ethics in AI and damages brands built on augmentation. It also raises IP and likeness-rights issues as firms exploit reputations commercially. (Start at 84:05)
Could autonomous AI agents accelerate innovation across all domains?
Time: 90:56 – 94:48
Category: AI-Driven Innovation Economy, AI in Workforce Disruption, Existential AI Risks
Answer: Andrej Karpathy’s Auto Research script autonomously improved a language model by 11% via 20 changes in 48 hours, looping through hypotheses, code, and tests; this previews agentic R&D in marketing, business, and domains beyond coding. Yet ‘intelligence brownouts’ from model downtime reveal the risks of over-reliance. (Start at 90:56)