AI Podcast Insights 13 Mar 2026 to 19 Mar 2026
Non-Technical Podcast Preview
This is a preview of the weekly OPML package: episode descriptions, quick summaries, and insight prompts (questions only).
AI + a16z
What’s Missing Between LLMs and AGI – Vishal Misra & Martin Casado (48 min)
- Release date: 2026-03-17
- Listen on Spotify: Open episode
- Episode description:
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn’t mean they’re conscious, and describes what’s actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect. Resources: Follow Vishal Misra on X: https://x.com/vishalmisra Follow Martin Casado on X: https://x.com/martin_casado Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Summary
- 🔍 Matrix Model of LLMs: LLMs approximate a vast, sparse matrix of prompt-to-next-token probabilities, enabling efficient generation from training data.
- 📈 Bayesian Updating Proven: In-context learning is mathematically precise Bayesian inference, validated empirically and via ‘Bayesian wind tunnel’ experiments across architectures.
- 🧠 No Consciousness, Just Prediction: LLMs lack inner life or agency, mimicking behaviors from training data while excelling at correlations but not causation.
- 🚧 AGI Barriers: Plasticity & Causality: Frozen weights prevent continual learning; correlation-based models can’t simulate or invent like humans (e.g., Einstein test).
- 🔮 Future: Causal + Continual AI: Next breakthroughs need architectures blending LLMs with lifelong learning and causal reasoning for true generality.
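As a toy illustration of the Bayesian-updating claim above (this is not code from the episode; the candidate "sources" and token stream are invented for the sketch), in-context learning can be pictured as a posterior over candidate generators that sharpens with each token of context:

```python
# Toy sketch of the claim that in-context learning behaves like Bayesian
# inference: a posterior over candidate "generators" sharpens as each new
# observation arrives. All hypotheses and data here are invented.

def bayes_update(prior, likelihoods):
    """One Bayesian step: posterior ∝ prior × likelihood, renormalized."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypothetical sources a model might be inferring from context:
# one emits token 'a' 90% of the time, the other only 10% of the time.
sources = {"mostly_a": 0.9, "mostly_b": 0.1}
posterior = {"mostly_a": 0.5, "mostly_b": 0.5}  # uninformative prior

for token in ["a", "a", "b", "a"]:  # the in-context "prompt"
    likelihoods = {
        h: (p_a if token == "a" else 1 - p_a) for h, p_a in sources.items()
    }
    posterior = bayes_update(posterior, likelihoods)

# After seeing mostly 'a', belief concentrates on the 'mostly_a' source.
assert posterior["mostly_a"] > 0.95
```

The episode's "Bayesian wind tunnel" experiments reportedly show transformer prediction shifts matching this kind of precise posterior update; the sketch only conveys the shape of the idea, not the actual experimental setup.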
Insights
- Are LLMs implicitly performing Bayesian inference during in-context learning?
- Why can’t current LLMs achieve true AGI like inventing relativity?
- Do LLMs lack consciousness because they prioritize next-token prediction over survival?
- What separates human intelligence from LLMs beyond Bayesian updating?
- Will scaling LLMs alone deliver AGI, or do we need causal architectures?
- How did building a RAG system for cricket stats reveal the ‘magic’ inside the LLM black box?
- Can tools like TokenProbe demystify LLM probability shifts?
OpenClaw: Why the Internet Isn’t Built for AI Agents (47 min)
- Release date: 2026-03-19
- Listen on Spotify: Open episode
- Episode description:
Yoko Li, Guido Appenzeller, and Joel de la Garza discuss OpenClaw, the open source personal AI assistant that’s forcing a rethink of how identity, permissions, and security work on the internet. They cover why setting up Gmail integration took seven hours, what happens when an agent asks for domain-wide access to every email in your company, and why consumer websites like DoorDash and Amazon have no incentive to make their services agent-friendly. Resources: Follow Yoko Li on X: https://twitter.com/stuffyokodraws Follow Guido Appenzeller on X: https://twitter.com/appenz Follow Joel de la Garza on LinkedIn: https://www.linkedin.com/in/3448827723723234/ Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Summary
- 🚀 OpenClaw Unleashed: OpenClaw is an extensible open-source AI agent for personal tasks like email, calendars, and cat tracking, capable of self-coding integrations but hard to set up.
- 🔒 Security Genie: The agent’s power outpaces containment; risks include over-privileged access and social engineering, shifting limits from capability to secure ‘bottling’.
- 🔌 Integration Nightmares: Consumer sites like DoorDash offer no APIs and deploy bot detection against agents, demanding new proxies, agent accounts, and bot-welcome UIs.
- 🎛️ UI Revolution: Natural language replaces drag-and-drop RPA; agents abstract cron jobs and offer visibility, blending autonomy with human-in-loop for decisions.
- 💼 Adopt or Perish: Executives must embrace agents despite discomfort, as ignoring them risks going the way of Barnes & Noble; opportunities abound in proxies, credential vaults, and enterprise sandboxes.
Insights
- What if AI agents like OpenClaw are limited not by their intelligence, but by our ability to securely contain them?
- How vulnerable are self-extending AI agents to social engineering attacks?
- Will incumbents like Amazon and DoorDash build agent-friendly APIs, or will new startups dominate?
- What does the future UI for AI agents look like—natural language dreams or persistent human oversight?
- Can dedicated hardware and fine-grained permissions make enterprise AI agents safe enough for high-value tasks?
- How might AI agents finally enforce best security practices humans ignore?
- Are separate agent identities and proxies the key to unlocking everyday automation?
AI For Humans: Weekly AI News, Tools & Trends
AI Can Improve Itself Now. We’re Sure That’s Fine. (59 min)
- Release date: 2026-03-13
- Listen on Spotify: Open episode
- Episode description:
AI just learned how to make itself smarter. That’s not a hypothetical anymore. Recursive self-learning is here, and it’s changing everything about how AI develops. This week on AI For Humans, we break down Andrej Karpathy’s new AutoResearch project and what recursive self-improvement actually means for the rest of us. Plus, Anthropic’s massive Time magazine profile reveals just how fast Claude is writing its own code, Meta quietly acquired an AI agent social network called MoltBook, Replit drops V4, Perplexity launches computer use, Gemini finally shows up in Google Docs and Maps, Cloudflare does a full 180 on web scraping, Figure’s robot cleans an entire living room, and there’s a robot horse. We’re sure that’s fine. AI IS IMPROVING ITSELF AND WE’RE JUST SITTING HERE WATCHING. #ai #artificialintelligence #aiforhumans Come to our Discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Karpathy’s AutoResearch: Recursive Self-Learning https://x.com/karpathy/status/2031135152349524125?s=20 AutoResearch GitHub Repository https://github.com/karpathy/autoresearch Sam Altman on Multi-Day and Multi-Week AI Agent Work https://youtu.be/sTnl8O_BuuE?si=xaWYyqYbVJYzOvYZ HBR: When Using AI Leads to Brain Fry https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry Anthropic’s Big Time Magazine Profile: Claude, the Pentagon, and Disruption https://time.com/article/2026/03/11/anthropic-claude-disruptive-company-pentagon/ Claude’s Rapid Shipping Pace https://x.com/claudeai/status/2032124273587077133?s=20 Paperclip Open Sourced: AI-Powered Company Management https://x.com/dotta/status/2029239759428780116?s=20 Meta Acquires MoltBook AI Agent Social Network 
https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network Replit V4 Launch https://x.com/amasad/status/2031755113694679094?s=20 Perplexity Computer Use https://x.com/perplexity_ai/status/2031790180521427166?s=20 Claude Code Makes Videos Now https://x.com/josephdviviano/status/2031196768424132881?s=20 Gavin’s Claude Code Video Experiment https://x.com/gavinpurcell/status/2031487595717226955?s=20 Gavin’s Claude Code Bio Video https://x.com/gavinpurcell/status/2031620238689898770?s=20 Gemini Comes to Google Docs and More https://x.com/OfficialLoganK/status/2031374503599567113?s=20 Gemini in Google Maps: Ask Maps with Immersive Navigation https://blog.google/products-and-platforms/products/maps/ask-maps-immersive-navigation/ Gemini Embeddings https://x.com/OfficialLoganK/status/2031411916489298156?s=20 Runway Characters https://x.com/runwayml/status/2031028120971571687?s=20 Cloudflare Launches /Crawl So All Sites Can Be Scraped https://x.com/CloudflareDev/status/2031488099725754821?s=20 Figure Robot Does Full Autonomous Living Room Cleanup https://x.com/Figure_robot/status/2031038981333565949?s=20 Deep Robotics Robot Horse https://x.com/DeepRobotics_CN/status/2031910951465992535?s=20 Real-Time Skeletal Visualization with Three.js https://x.com/nick_bisesi/status/2031728629592289591?s=20 Taking Halo ISO and Getting It to Play on Mac https://x.com/JasonBotterill/status/2031855986303254926?s=20 AI Tennis Prediction https://x.com/phosphenq/status/2031400355167117498 Green Code YouTube Channel: AI Explainers https://www.youtube.com/@Green-Code LotR x Pawn Stars AI Video Mashup https://www.reddit.com/r/aivideo/comments/1rqgolw/wrong_universe_lotr_vs_pawn_stars_ai_mashup/
Summary
- 🚀 Self-Improving AI Now: Recursive self-learning is live in labs and demos like Karpathy’s, with agent task horizons stretching from hours to weeks.
- 🤖 Agentic Revolution: Tools like Claude Code and Paperclip let non-coders build and manage agent fleets for custom software.
- 💼 Agent Ecosystems Emerging: Meta’s MoltBook buy and Cloudflare’s scrape API signal infrastructure for AI-to-AI worlds.
- 🎨 Creative AI Surges: Agents craft glitch-art videos and games, blending code, video, and humor autonomously.
- 🏠 Robots Enter Homes: Figure’s autonomous cleanup and robot horses show physical AI advancing toward everyday utility.
Insights
- Is recursive self-improvement already happening in AI labs today?
- Can AI agents soon handle multi-week software tasks like senior employees?
- Will everyday people soon build custom software businesses with AI agents?
- Is Meta positioning itself to dominate AI agent-to-agent communication?
- Why is Cloudflare now helping bots scrape the web it once protected?
- Are household robots finally ready to clean your living room autonomously?
- Can AI agents create glitch-art videos expressing their ‘inner life’?
NVIDIA’s Jensen Huang Wants It All (GTC 2026) (22 min)
- Release date: 2026-03-17
- Listen on Spotify: Open episode
- Episode description:
Jensen Huang just stood on stage and said $1 trillion. He wasn’t joking. NVIDIA’s GTC 2026 keynote was a masterclass in flexing, and we’re breaking down every layer of the cake. We walk through Jensen Huang’s massive GTC 2026 keynote, from NVIDIA’s $1 trillion business projection to the inference inflection point that’s reshaping the entire AI industry. We dig into DLSS 5 and why AI-powered neural rendering is about to change gaming forever (sorry, gamers), NVIDIA’s deep integration with OpenClaw and the launch of NemoClaw for enterprise agents, chips in space, and what it all means when every company becomes an agentic-as-a-service company. Plus the Dwarkesh podcast with Dylan Patel on the real bottlenecks in compute that nobody’s talking about. JENSEN HUANG SAID ONE TRILLION DOLLARS AND DIDN’T BLINK. WE BLINKED. PS, we’re now coming to you TWICE a week (both a little shorter). // Show Links // NVIDIA GTC 2026 Full Keynote with Jensen Huang https://www.youtube.com/live/jw_o0xr8MWU?si=VZAIG3E7vuUCwz6N DLSS 5: Breakthrough in Visual Fidelity for Games https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelityfor-games/ DLSS 5 Official Trailer https://youtu.be/dJACkKbN-Eo?si=fIJvsV52—bOyTr Digital Foundry Deep Dive on DLSS 5 https://youtu.be/4ZlwTtgbgVA?si=g8TMgNlOWknKnqHo Good Til’ Cancelled: The GTC Game https://x.com/SAlexashenko/status/2033585849586331985?s=20 Dwarkesh Podcast: Dylan Patel on Compute Bottlenecks and Chips https://youtu.be/mDG_Hx3BSUE?si=YnLEIVhsaCpdVQgi
Summary
- 💰 Trillion-Dollar Ambition: NVIDIA projects $1T+ business by 2026, dominating AI across industries from healthcare to robotics.
- 🎮 DLSS 5 Gaming Revolution: AI upscaling adds hyper-real details to games, sparking debate but promising low-poly to realistic transformations.
- 🔓 NemoClaw Boosts Open Source: NVIDIA integrates OpenClaw for secure enterprise use, accelerating open AI on their GPUs.
- 🤖 Simulation-Powered Robots: Robots like Olaf train in virtual worlds for real tasks, blending cuteness with future scalability.
- 📈 Inference Drives GPU Boom: Shift to productive AI inference fuels endless demand for NVIDIA hardware, win-win for all models.
Insights
- How is NVIDIA poised to generate over a trillion dollars in business by 2026 across every major industry?
- What does the surge in AI inference demand mean for the future of GPU usage?
- Will DLSS 5’s AI upscaling transform gaming despite backlash from purist gamers?
- Why does branding graphics tech as ‘AI’ provoke such strong negative reactions from gamers?
- How is NVIDIA’s NemoClaw integration supercharging OpenClaw for enterprise adoption?
- Can simulated environments like those for the Olaf robot fast-track real-world AI robotics?
AI and I
Kate Lee on Taste, Hiring, and Running Editorial at Every (57 min)
- Release date: 2026-03-18
- Listen on Spotify: Open episode
- Episode description:
Kate Lee has spent her career working with words—first as a literary agent, then in roles at Medium, WeWork, and Stripe. As Every’s editor in chief, she’s been the quiet force behind the newsletter for more than three years. Lately, something has shifted in Kate’s work. After years of watching her colleague Dan Shipper evangelize AI from the front lines, Kate has started rewiring how she works and is integrating more and more AI tools into her workflow. We had Kate on to talk about her career path from book deals to tech startups, what it really means to run a newsletter as a small team in the age of AI, and what she thinks the bottleneck to automating copyediting is. Plus: the story of pulling off reviews of two major model releases in 24 hours, and how she’s using her AI-powered browser to help her hire. To hear more from Dan Shipper: Subscribe to Every: https://every.to/subscribe Follow him on X: https://twitter.com/danshipper Timestamps: (0:01) Introduction and Kate’s early career as a literary agent; (4:45) From book publishing to tech: Medium, WeWork, and Stripe Press; (12:00) How Kate joined Every and what made the role click; (27:00) What it’s like to be a knowledge worker at the frontier of AI; (31:00) The “aha” moment: using AI to manage hundreds of applicants; (36:24) How Every’s editorial team uses AI to enforce standards and train taste; (45:06) Publishing two reviews of major model releases on the same day; (51:39) What automating copy editing requires. Links to resources mentioned in the episode: Proof: https://www.proofeditor.ai/
Summary
- 📝 Custom AI Editors: Trained on style guides and successes, AI lifts editorial floors for consistency across teams, reducing manual fixes.
- 🤖 Agentic Ops Boost: AI browsers automate Notion hiring and settings, freeing knowledge workers from tedious admin.
- 💡 Aha Moments Drive Adoption: Practical wins in ops and reliability shift skeptics to AI enthusiasts.
- 🔄 Feedback Loops Train AI: Weekly reviews of hits/misses refine custom tools for tailored content suggestions.
- 🚀 Small Team Superpowers: AI enables rapid, multimedia breaking coverage rivaling big newsrooms.
Insights
- How can AI enforce consistent editorial standards across diverse writers and editors?
- What administrative tasks can AI agents automate to free up knowledge workers?
- Why do savvy knowledge workers adopt AI only after practical ‘aha’ moments?
- How do small media teams train AI on proprietary data for better content creation?
- What new skills define editorial leadership in AI-augmented newsrooms?
- How is AI enabling small teams to rival larger media operations?
Agents of Scale
How Wistia started shipping nearly 10x feature releases per year (38 min)
- Release date: 2026-03-19
- Listen on Spotify: Open episode
- Episode description:
Most companies are scrambling to figure out AI. Wistia did the hard part first — a total culture reset that made a 188-person company operate with the efficiency of a 30-person startup. Then AI poured gasoline on it. Chris Savage co-founded Wistia nearly 20 years ago, grew it to serve hundreds of thousands of businesses, and took on $17 million in debt to buy out investors and stay independent. He joins Wade Foster to unpack what it actually takes to rewire a company’s operating system — and why doubling headcount didn’t make them ship any faster. One mandate changed everything: ship value to customers every two weeks. Features that had been sitting on six-month roadmaps launched in two weeks. Wistia went from 12 major product updates a year to over 100 — same team size. Chris explains why the bottleneck in software is shifting to taste, how Wistia’s new agentic video editor Remix is turning 45-minute sales calls into 3-minute shareable highlights, and what the “ChatGPT moment for video” means for trust in the workplace. Plus: Wade and Chris riff on Block’s AI-driven layoffs. Linked resources: Chris Savage on the Economics of AI Avatars (Sacra); Wistia “Complete Control” — AI-Generated Ad Campaign Deep Dive; “The ChatGPT Moment for Video” — Chris Savage on LinkedIn; Wistia; Chris Savage on LinkedIn
Summary
- 🚀 Cultural Velocity Shift: Abandoning rigid roadmaps for bi-weekly shipping unlocked 10x product updates, proving culture drives speed over headcount.
- ⚡ AI Bottleneck Migration: AI coding drives the cost of writing code toward zero, shifting the constraint to taste, vision, and rapid customer validation via prototypes.
- 🎥 Authentic Video Renaissance: In the era of generated video, trust favors human-centric footage; tools like Remix blend real recordings with agents for pro workflows.
- 💰 Usage-First Monetization: Evolving from seats to AI-output billing (e.g., remixes) aligns pricing with value in all-in-one platforms.
- 🌟 Bootstrapped Longevity Edge: Self-funding enables patient quality focus, outlasting VC churn amid AI acceleration and productivity booms.
Insights
- What if doubling your engineering team didn’t speed up shipping, but a cultural shift to bi-weekly customer value did?
- How is AI coding transforming engineering bottlenecks from scarce resources to human taste and vision?
- In the post-Sora era, why will video trust hinge more on human authenticity than production polish?
- How are AI agents unlocking trapped value in enterprise video like sales demos and meetings?
- Why does bootstrapped patience outperform VC-fueled speed in the AI product race?
- Will AI spark a Cambrian explosion of independent, expertise-embedded software companies?
- Are AI productivity gains prompting headcount cuts or ambitious reallocation?
Lenny’s Podcast: Product | Career | Growth
The tactical playbook for getting 20-40% more comp (without sounding greedy) | Jacob Warwick (Executive Negotiator) (1h 55m)
- Release date: 2026-03-15
- Listen on Spotify: Open episode
- Episode description:
Jacob Warwick is an executive negotiation coach who helps senior operators negotiate better salary, equity, titles, and severance packages. He has worked with leaders across tech and Hollywood, was previously a founder and CEO himself, and has helped clients secure millions in additional compensation. His approach focuses on collaboration over confrontation, understanding motivations, and treating job searches like enterprise sales processes.
We discuss:
- Why a simple “What’s the chance there’s a little more here?” often unlocks a 20% bump
- Why Jacob sees 40% average movement when negotiations are run well
- When negotiation actually starts (hint: it’s much earlier than you think)
- Why information + timing create power
- The biggest mistakes people make when negotiating
- How to navigate the important “What’s your comp expectation?” question without anchoring too low
- Why the best interviews feel more like discovery calls than interrogations
Brought to you by: Orkes (the enterprise platform for reliable applications and agentic workflows), Mercury (radically different banking), and Omni (AI analytics your customers can trust).
Episode transcript: https://www.lennysnewsletter.com/p/the-tactical-playbook-for-getting-more-comp
Archive of all Lenny’s Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
Where to find Jacob Warwick:
- Substack: https://www.execsandthecity.com
- YouTube: https://www.youtube.com/@ExecsandtheCity
- Website: https://www.thinkwarwick.com
- Complete Job Search Course: https://www.execsandthecity.com/p/complete-job-search-course
Where to find Lenny:
- Newsletter: https://www.lennysnewsletter.com
- X: https://twitter.com/lennysan
- LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover: (00:00) Introduction to Jacob Warwick; (04:12) How much comp people leave on the table; (07:52) Why you shouldn’t feel greedy asking for more; (09:45) What founders should know about negotiation; (13:03) How Jacob works behind the scenes; (15:35) The biggest mistakes people make when negotiating; (19:30) Home-field advantage and controlling the conversation; (23:02) The step-by-step approach to negotiating an offer; (30:17) Jacob’s passion and why these tips don’t work on kids; (32:04) Who should speak first about compensation; (35:36) Understanding power; (39:52) Breaking out of salary bands by focusing on pain points; (45:45) Brief summary; (47:20) Selling the vacation: How to visualize success; (50:07) Controlling the narrative and planting seeds; (59:01) Jacob’s role as hype man; (01:01:05) Positioning yourself like a product; (01:02:49) Making the process frictionless for hiring managers; (01:06:20) Flipping the interview to extract information; (01:12:17) Five tactical tips for negotiating comp; (01:21:45) What to do when negotiations fall apart; (01:25:05) Why negotiation is different for every individual; (01:28:55) Why outcomes aren’t predetermined; (01:32:52) Wild Hollywood negotiation stories; (01:37:35) The first step you should take after getting an offer; (01:40:30) Jacob’s personal mission; (01:44:42) Lightning round and final thoughts
Referenced:
- The ultimate guide to negotiating your comp: https://www.lennysnewsletter.com/p/the-ultimate-guide-to-negotiating
- Sam Altman on X: https://x.com/sama
- Tom Brady on X: https://x.com/TomBrady
- Career Huddle: Interview & Negotiation Master Class with Jacob Warwick: https://www.youtube.com/watch?v=TgjWTiSj8E8
- References continued at: https://www.lennysnewsletter.com/p/the-tactical-playbook-for-getting-more-comp
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed.
Summary
- 💰 Always Negotiate Simply: A gentle ‘What’s the chance for more?’ nets 20%+ bumps routinely, proving companies expect and budget for it without greed.
- 📹 Ditch Email for Live Talks: Video/in-person controls tone and body language, avoiding misreads from bad timing that kill deals.
- 🔍 Sell Future Value, Not Past: Probe pains, visualize success (‘sell the vacation’), treat interviews as sales discovery for massive uplifts.
- ⏳ Slow Down to Win Big: Delay responses to build scarcity and gather intel; haste costs leverage, while patience yields 40%+ average gains.
- 🤝 Collaborate to Expand Pie: Frame negotiations as ‘we’ with shared incentives; collaborative deals attract and retain AI-era talent amid exponential innovation curves.
Insights
- What’s the simplest way to boost a job offer by 20% without seeming greedy?
- Why is negotiating compensation over email a common and costly mistake?
- When should you reveal your salary expectations in an interview process?
- How can treating job interviews like enterprise sales unlock higher comp?
- In the era of AI-driven exponential growth, why must companies get creative with comp to retain top talent?
- How does slowing down the negotiation process create leverage?
- Why focus on collaboration over confrontation in high-stakes tech deals?
The AI for Sales Podcast
Augmenting Human Experience with AI (29 min)
- Release date: 2026-03-14
- Listen on Spotify: Open episode
- Episode description:
In this episode of the AI for Sales podcast, host Chad Burmeister speaks with Chirag Kulkarni, co-founder and CEO of Hobbes, about the evolving role of AI in sales and customer experience. They discuss how AI is transforming customer interactions, the balance between automation and human touch, and the misconceptions surrounding AI’s capabilities. The conversation also touches on emerging technologies, ethical considerations, and the importance of emotional intelligence in sales.
Takeaways:
- AI is shifting from automation to augmenting human capabilities.
- Customers expect software to simplify their tasks.
- The human element in sales remains crucial despite AI advancements.
- AI can resolve issues faster, enhancing customer experience.
- Finding the right balance between AI and human interaction is essential.
- Misconceptions about AI’s intelligence can lead to unrealistic expectations.
- Humans possess emotional intelligence that AI cannot replicate.
- Local AI models are set to revolutionize the industry.
- Transparency in AI interactions fosters trust with customers.
- Ethical considerations in AI are becoming increasingly important.
Chapters: (00:00) Introduction to AI in Sales; (02:03) The Evolution of Customer Experience with AI; (05:28) AI’s Role in Onboarding and Customer Retention; (09:10) Misconceptions About AI and Its Capabilities; (12:48) Balancing AI Automation with Human Touch; (16:38) Emerging Technologies in AI; (20:09) Ethical Considerations in AI
The AI for Sales Podcast is brought to you by BDR.ai, Nooks.ai, and ZoomInfo—the go-to-market intelligence platform that accelerates revenue growth. Skip the forms and website hunting—Chad will connect you directly with the right person at any of these companies. 👉 Visit www.SDR.ai/intro to unlock your direct line.
Summary
- 🚀 AI Speeds Onboarding: AI copilots like Hobbes deliver personalized, high-touch onboarding at scale, fixing unprofitable customer acquisition by automating integrations and reducing churn without added human effort.
- 🤝 Humans Build Trust: Sales thrives on human relationships and EQ, which AI can’t replicate; balance AI for efficiency with personal touch to maintain customer connection and brand authenticity.
- ❌ Busting AI Myths: Agents aren’t superhuman replacements—they excel in prompts but lack nuance, consequences, and neuroplasticity, making humans superior for critical sales decisions.
- 🔮 Future Tech Horizons: Local models enable edge computing for speed and cost savings, while fine-tuning customizes AI for sales tasks, signaling shifts as cloud subsidies fade.
- ⚖️ Ethics First: Transparency in AI interactions is key to trust; regulations requiring disclosure in sales calls protect users while fostering responsible innovation.
Insights
- How is AI enabling high-touch onboarding at scale without human intervention?
- Why do customers still prefer humans for building sales relationships despite AI’s speed?
- What misconceptions about AI agents are hindering effective sales strategies?
- How does emotional intelligence give humans an edge in AI-augmented sales?
- Why are local models and fine-tuning poised to revolutionize AI in sales tools?
- What transparency rules are essential for ethical AI use in sales interactions?
The Artificial Intelligence Show
#203: Anthropic vs. Pentagon Round 3, NYT AI vs. Humans Writing Test, Atlassian’s AI-Era Layoffs & Grammarly’s Expert Cloning Scandal (1h 41m)
- Release date: 2026-03-17
- Listen on Spotify: Open episode
- Episode description:
Anthropic has filed two federal lawsuits to block the Pentagon’s supply chain risk designation and the back-and-forth on X between the Pentagon CTO and AI policy experts is revealing what this fight is really about. Paul and Mike unpack the politics, the implications, and why a deal is inevitable. Then: 86,000 people took the NYT’s AI writing quiz and most preferred the machine. Paul shares his human-to-machine writing scale and asks the question that actually matters: not whether AI can write, but when should we let it? Plus Atlassian’s 1,600 AI-driven layoffs, Amazon’s AI-caused outages, McKinsey’s chatbot getting hacked in two hours, and more. Show Notes: Access the show notes and show links here Click here to take this week’s AI Pulse. Timestamps: 00:00:00 — Intro 00:03:11 — AI Pulse Survey Results 00:07:48 — Anthropic vs. Pentagon Round 3 00:30:02 — New York Times Releases Controversial “AI Writing Quality” Quiz 00:46:18 — Atlassian Layoffs and Job Loss Dashboard 00:58:49 — Adobe CEO Stepping Down 01:07:14 — Amazon AI-Related Outages and Engineering Struggles 01:14:28 — McKinsey AI Chatbot Hacked 01:19:49 — AI Politics Update 01:24:06 — Grammarly AI “Expert Review” Controversy 01:30:51 — Andrej Karpathy’s Autoresearch Agent 01:34:47 — AI Product and Funding Updates This week’s episode is sponsored by our 2026 State of AI Report. This year, we’re going beyond marketing-specific research to uncover how AI is being adopted and utilized across the organization, and we need your help to create the most comprehensive report yet. It’s a quick seven-minute lift. In return, you’ll get the full report for free when it drops, plus a chance to win or extend a 12-month SmarterX AI Mastery Membership. Go to smarterx.ai/survey to share your input. That’s smarterx.ai/survey Visit our website Receive our weekly newsletter Join our community: Slack Community LinkedIn Twitter Instagram Facebook YouTube Looking for content and resources? 
Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy
Summary
- ⚔️ Anthropic-Pentagon Standoff: Escalating lawsuits and amicus briefs expose political rifts over AI safety in defense, risking US edge against China while Palantir confirms ongoing Claude use.
- 📝 AI Writing Beats Humans?: NYT quiz reveals 54% prefer AI prose mimicking literary giants, fueling debates on creativity and when humans should wield the pen.
- 💼 AI Layoffs Surge: Atlassian cuts 10% of staff in ‘AI-era’ restructuring; 76k global job losses tracked, with entry-level workers hit hardest amid the rise of agents.
- 🔒 AI Risks Explode: Amazon outages from bad AI advice, McKinsey hack via APIs highlight governance failures in rushed adoption.
- 🛡️ Urgent Contingency Calls: Hosts demand planning for joblessness, energy strains; groups like Windfall Trust push safety nets as disruption looms.
Insights
- Will political clashes between AI labs and governments undermine national security efforts?
- Can AI writing fool readers into preferring it over literary masters?
- Are AI-driven layoffs signaling the end of traditional software jobs?
- Should businesses plan contingencies for mass AI joblessness now?
- Is over-reliance on AI causing enterprise failures like Amazon’s outages?
- Will AI clones of experts, created without permission, erode trust in tools like Grammarly?
- Could autonomous AI agents accelerate innovation across all domains?
- Are multi-model strategies becoming the norm for AI users?
#204: AI Answers – What Should Stay Human, AI Pricing vs. Labor Cost, Leapfrogging Digitalisation, Getting Legal On Board & Do Reasoning Models Actually Reason? (59 min)
- Release date: 2026-03-19
- Listen on Spotify: Open episode
- Episode description:
Billable hours are in the past, human creativity gets its strongest case yet, and Paul explains what happens when ten AI agents start collaborating like a marketing team. Paul and Cathy tackle 16 real questions on career pivots into AI, the risks of over-reliance on productivity gains, enterprise training personalization, labor replacement pricing, whether AI actually reasons, and what leaders should do with the time AI is giving back. 00:00:00 — Intro 00:05:05 — How do you transition into AI without a coding background? 00:06:03 — What are the best AI skills to learn while job searching? 00:08:56 — Should consultants bill for time spent experimenting with AI? 00:11:44 — How do we make sure AI productivity isn’t quietly weakening our thinking? 00:14:17 — What’s the best reframe for creatives who see AI as a threat? 00:19:04 — How do you wrangle a Wild West AI free-for-all at your company? 00:20:45 — How do you personalize AI training at the enterprise level? 00:23:41 — How do you get legal stakeholders to enable AI adoption instead of blocking it? 00:28:06 — How will AI adoption pick up in traditional industries like manufacturing? 00:31:24 — Can companies behind on digitalisation leapfrog ahead with AI? 00:34:33 — Will AI companies eventually price based on the labor they replace? 00:37:55 — What is a swarm of agents and why does it matter? 00:43:34 — Do reasoning models actually reason or just predict the next word? 00:46:54 — Should AI companies be regulated to preserve diversity of thought? 00:49:34 — If AI can solve advanced math, why can’t it solve technological unemployment? 00:52:40 — How do we make sure AI gives us time back instead of just more work? Show Notes: Access the show notes and show links here. This episode is brought to you by Google Cloud: Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow.
Google Cloud offers a powerful, fully integrated and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner. Learn more about Google Cloud here: https://cloud.google.com/
Summary
- 🚀 Democratized AI Access: Non-coders can thrive in AI via prompting and no-code tools, opening roles in strategy and marketing without traditional coding skills.
- 💼 Value Over Hours: Ditch billable hours for outcome-based pricing; AI efficiency demands focus on results, benefiting clients and providers alike.
- 🧠 Guard Human Skills: Balance AI speed with training to prevent skill atrophy, especially for juniors; intentional leadership is key amid productivity pressures.
- 🎨 Human Creativity Endures: Stories, imperfections, and authenticity give human work enduring appeal over flawless AI outputs, sparking a creative renaissance.
- 🏢 Structured Adoption Wins: C-suite-driven guardrails, surveys, and stakeholder alignment tame Wild West AI use, enabling safe, scalable transformation.
Insights
- How can non-coders like marketers transition into high-impact AI roles?
- What skills make job seekers stand out in an AI-driven job market?
- Why is billing by the hour obsolete in the AI era?
- How can businesses avoid overreliance on AI eroding critical thinking?
- What reframes AI as an ally rather than threat for creatives?
- How can organizations tame a ‘Wild West’ of unchecked AI tool use?
- What’s the key to scalable, personalized AI training?
- How can legal teams be turned into AI innovation enablers?
- Will ‘swarms’ of AI agents redefine team structures by year’s end?
- Should leaders intentionally reclaim time from AI productivity gains?
The Next Wave – AI and The Future of Technology
Meta Replacing Creators? + Sam Altman’s Mistake & 3 Big AI Updates (1h 26m)
- Release date: 2026-03-17
- Listen on Spotify: Open episode
- Episode description:
Get Matt’s favorite AI tools: https://clickhubspot.com/hfnb Episode 101: Are AI social media agents replacing real creators on platforms like Meta? Matt Wolfe (https://x.com/mreflow) and Joe Fier (https://www.youtube.com/@joefier) dive deep into this week’s major AI releases and the evolving role of autonomous agents. This episode explores cutting-edge updates from OpenAI, Anthropic, and Canva—including interactive visual explainers and Canva’s Magic Layers—and how these tools transform user experience and content creation. The hosts also unpack Meta’s acquisition of Maltbook, the viral social media network for AI agents, and discuss the broader implications for creators, businesses, and marketers. Plus, with clips from Sam Altman and Jensen Huang, the conversation turns to the future of intelligence as a commodity, the rise of agent-optimized marketing, and the changing value of intuition versus IQ. Robots, new prompt formulas, and the “deader internet” theory round out this lively, unpredictable episode. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) AI Tools and Tech Trends (06:39) Obsessed with AI Tools (10:20) Interactive Cone Visualization Comparison (17:16) Key Players and Stakes Overview (22:47) Transistors & Abstraction in Computing (29:17) Photoshop Layers vs. AI Output (31:33) Customizable Screen Display Options (39:17) AI Usage Metering Explained (44:12) OpenAI’s Competitive AI Position (50:54) AI’s Role vs Human Intuition (53:37) Intuition Over IQ for Wealth (59:31) AI Bots Go Viral Briefly (01:06:40) Rise of Autonomous Spending Agents (01:08:27) Agent Influence in AI Commerce (01:15:43) Humanoid Robots: Practical or Optimal? (01:20:49) OpenAI’s Monetization Strategy Explained (01:24:44) Subscribe for More Fun!
— Mentions: Joe Fier: https://www.youtube.com/@joefier Manus: https://manus.im/ Claude: https://claude.ai/ Gemini: https://gemini.google.com/app Cursor: https://cursor.com/ Canva: https://www.canva.com/ Maltbook: https://maltbook.com/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt’s Stuff: • Future Tools – https://futuretools.beehiiv.com/ • Blog – https://www.mattwolfe.com/ • YouTube – https://www.youtube.com/@mreflow — Check Out Nathan’s Stuff: Newsletter: https://news.lore.com/ Blog – https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Summary
- 🔍 Visual Explainers Evolve: ChatGPT’s pre-built and Claude’s generated interactive visuals make math and concepts hands-on, with key prompts turning chats into diagrams.
- 🎨 Magic Layers Unlock Edits: Canva decomposes AI images into movable layers, revolutionizing thumbnails and infographics without Photoshop skills.
- 💡 AI as Metered Utility: Altman predicts token-based billing like electricity, spurring local AI alternatives amid commoditized intelligence.
- 🧠 Intuition Trumps IQ: Huang redefines smartness as EQ and foresight, as AI handles technical tasks, valuing human ‘vibe’ over raw intellect.
- 🤖 Agents and Robots Advance: Meta’s Maltbook buy hints at agent-driven social; cleaning bots like Helix tidy homes, blurring human-AI lines.
Insights
- How are interactive visuals transforming math and concept learning in AI chats?
- What prompting formula unlocks interactive diagrams from any AI conversation?
- Can Magic Layers make AI-generated images fully editable like Photoshop?
- Will AI intelligence become a metered utility like electricity?
- Is human ‘smartness’ shifting from IQ to EQ and intuition in an AI era?
- What does Meta’s Maltbook acquisition signal for AI agents on social media?
- Are humanoid robots or purpose-built alternatives better suited to household chores?
Y Combinator Startup Podcast
Building A Global AI Startup From India (40 min)
- Release date: 2026-03-16
- Listen on Spotify: Open episode
- Episode description:
In this episode of The Lightcone, we talk with Mukund and Madhav Jha, the founders of Emergent – an AI platform that lets anyone build and ship production-ready software. In just eight months, users have created more than 7 million apps on Emergent, with the number doubling in the last 45 days. We discuss how they built one of the most powerful AI coding agents, why they focused on non-technical users, and what it’s like building in India for a global audience. Apply to Y Combinator: https://www.ycombinator.com/apply Chapters: 00:00 – Intro 01:06 – What Is Emergent? 01:18 – Founder Backstory 02:09 – From AI Testing to General Coding Agents 02:52 – Getting Ahead of the Market 04:18 – The Pivot to Non-Technical Users 05:22 – Why Second Movers Can Win in AI 09:04 – Building for Production, Not Just Prototypes 18:21 – Live Demo: Building Apps with Emergent 24:40 – How Emergent Hires and Runs a Lean Team 29:04 – Is SaaS Dead? The Rise of Personalized Software 34:04 – The Future: Niche Apps, Solo Builders and AI Agency
Summary
- 🚀 Explosive Growth: Emergent hit 7M apps in 8 months post-YC, with 80% non-technical users from 190 countries building business tools like CRMs and niche apps.
- 🔧 Production-First Pivot: From testing agents to the #1 coding agent on SWE-Bench, Emergent prioritizes full-stack production-readiness with its own infra, multi-agents, and memory, going beyond prototypes.
- 💡 Domain Expert Unlock: Non-coders bypass dev shops, slashing costs 100x and enabling ‘niche-of-niches’ like psychology-equestrian apps, empowering solopreneurs globally.
- 📉 SaaS Disruption: Custom AI clones kill generic SaaS; 20% of apps are agentic, signaling a shift to personalized, autonomous software over rigid workflows.
- 🌐 Agent Future: Swarms, 24-hour horizons, and verifiers herald a Cambrian explosion of software, expanding markets as a Jevons paradox of more tools fuels more ideas.
Insights
- How is Emergent empowering non-technical domain experts to build and ship production-ready apps that power real businesses?
- What pivotal insight allowed Emergent to automate full software engineering cycles?
- Why does Emergent’s long-term memory from user trajectories outperform static agent skills?
- How is second-mover advantage reshaping AI no-code platforms like Emergent?
- Is traditional SaaS facing extinction from customizable AI agentic software?
- How are AI tools accelerating a solopreneur revolution and niche-of-niches innovation?
- What future do agent swarms and 24-hour horizons promise for software creation?
