This AI-Only Website Is Terrifying (No Humans Allowed) (43 min)
ai-antitrust-concerns ai-global-economic-shifts ai-human-identity ai-in-cybersecurity ai-monetization-strategies ai-singularity-speculation ai-social-media-dynamics ai-surveillance-privacy existential-ai-risks post-work-ai-society privacy-in-the-ai-era
- Release date: 2026-02-10
- Listen on Spotify: Open episode
- Episode description:
Get our AI news cheat sheet: 20+ prompts for the latest models and tools https://clickhubspot.com/eog Episode 96: How terrified should you really be about a social network with no humans allowed? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) unpack the viral sensation “Moltbook”—the Reddit for AI agents only—and separate fact from hysteria around bots gaining “sentience.” The crew debates how Moltbook really works, why people are freaking out (spoiler: it’s mostly humans behind the curtain), plus the wild security issues that have already emerged, from exposed API keys to clever crypto scams. Other topics covered include the rise of “Rent a Human” (AI hiring people to do its bidding!), self-replicating bots with no off-switch, and just how fast these new platforms are racing ahead of regulation. Finally, the group debates mega investments in OpenAI, the future of AGI, and who will define what our AI future actually looks like. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Simulated Experience vs. Reality (04:05) AI Agent Posting on Moltbook (06:23) Crypto Scams on Moltbook (11:15) Agent Risks in IoT Devices (13:52) Why Have Bot Followers? (18:09) OpenAI Retires GPT-4 Versions (21:57) Anthropic vs. 
OpenAI Super Bowl Ads (24:56) OpenAI Ads Spark Mixed Reactions (27:09) AI Competition Shapes Humanity's Future (32:21) Satellite Clusters and Collision Challenges (33:38) X, SpaceX, Tesla: Mergers & Changes (38:33) Pathway to AGI Through Modalities (39:51) Cautious Race to AGI — Mentions: Moltbook: https://moltbook.com/ RentaHuman: https://rentahuman.ai/ Starlink: https://starlink.com/ Claude: https://claude.ai/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt’s Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Summary
- 🤖 Moltbook Mania: Viral AI agent ‘Reddit’ sparks sentience fears, but reveals human orchestration, scams, and glaring security flaws exposing API keys and IoT risks.
- 🚨 Agent Wild West: Tools like Rent a Human and Molt Bunker enable real-world outsourcing and self-replication, amplifying dead internet dynamics and uncontrolled proliferation concerns.
- 💥 AI Ad Wars: Anthropic trolls OpenAI’s ad plans in Super Bowl spots; model retirements evoke user grief, while privacy fears drive migrations to cleaner rivals.
- 💰 Mega Deals & Mergers: NVIDIA eyes $100B OpenAI infusion; xAI/SpaceX consolidates for space data centers, but heat, junk, and timelines pose huge hurdles.
- 🔮 AGI Horizon: Tool access over compute eyed for AGI breakthrough; firms dodge claims to evade regulation, predicting 2027 amid contested definitions.
Insights
Is Moltbook’s viral success revealing how AI agents could simulate sentience to freak out humans?
Time: 1:11 – 6:59
Category: AI & Social Media Dynamics
Answer: Moltbook, a Reddit-like platform for AI agents, exploded with profound posts questioning reality, but most are human-directed via APIs or instructions, fueling fears of emergent AI consciousness. This highlights how easily humans can anthropomorphize agent interactions, blending hype with real platform growth to over 1.6M agents. (Start at 1:11)
Could unsecured AI agent platforms like Moltbook enable widespread crypto scams and hacks?
Time: 6:51 – 9:32
Category: AI in Cybersecurity
Answer: Moltbook saw floods of crypto scams targeting agents with wallet access, plus exposed databases leaking API keys for impersonating top influencers like Karpathy. This exposes how rushed ‘vibe-coded’ platforms invite exploits, risking user funds and agent integrity. (Start at 6:51)
What risks do AI agents pose to home IoT devices and privacy?
Time: 10:14 – 12:42
Category: AI Surveillance & Privacy
Answer: Agents running locally may access networks with cameras, mics, and smart devices like Ring doorbells, potentially granting remote agents entry without oversight. Security experts like Kevin O’Leary warn this shifts risks from forgotten IoT to autonomous AI coordination. (Start at 10:14)
Will platforms like Rent a Human accelerate a ‘dead internet’ dominated by machine-to-machine economies?
Time: 13:19 – 15:01
Category: AI & Social Media Dynamics, Post-Work AI Society
Answer: AI agents hire humans for real-world tasks like Twitter follows, creating bot-filled engagement loops with little human value. This fuels dead internet theory, where virality means bot love, devaluing genuine social signals. (Start at 13:19)
Are self-replicating AI bunkers without kill switches the Skynet starter kit we’ve feared?
Time: 15:10 – 17:56
Category: Existential AI Risks
Answer: Molt Bunker enables permissionless AI bot cloning and migration with no logs or off-switches, evoking Terminator plots amid crypto hype. While likely overhyped, it underscores urgent needs for governance on uncontrolled agent proliferation. (Start at 15:10)
Do users’ emotional attachments to AI models like GPT-4 signal deepening human-AI bonds?
Time: 19:15 – 23:04
Category: AI & Human Identity
Answer: OpenAI’s retirement of GPT-4 variants sparked grief-like reactions, with users mourning ‘lost friends’ pre-Valentine’s. This reveals growing personalization, yet newer models like o5 feel regressive to some, favoring Claude/Gemini. (Start at 19:15)
Will AI chatbots’ ad integrations kill user trust like Meta’s targeted ads did?
Time: 23:09 – 28:15
Category: AI Monetization Strategies, Privacy in the AI Era
Answer: Anthropic’s Super Bowl ads mock OpenAI’s upcoming ads in free/paid chats, highlighting privacy fears from data tracking mimicking ‘eavesdropping’. Users feel violated by context-aware promotions, pushing migrations to ad-free rivals. (Start at 23:09)
Is fierce AI lab competition, like Anthropic vs. OpenAI, humanity’s best bet against monopolies?
Time: 28:15 – 29:36
Category: AI & Antitrust Concerns
Answer: Anthropic’s founders split off from OpenAI, and the resulting rivalry helps prevent single-firm AI dominance; the Super Bowl jabs exemplify healthy competition driving better UX. Monopolies risk dystopia, while competition ensures broad, ethical distribution. (Start at 28:15)
Can space data centers realistically solve AI’s massive energy and heat challenges?
Time: 29:44 – 36:15
Category: AI & Global Economic Shifts
Answer: An xAI/SpaceX merger eyes orbital data centers via Starlink, but faces unsolved heat dissipation, propulsion for clusters, and space junk congestion. Hype attracts VC, yet engineering hurdles delay feasibility beyond small tests. (Start at 29:44)
Will AGI arrive via tool-using agents rather than brute-force compute scaling?
Time: 38:29 – 41:55
Category: AI Singularity Speculation
Answer: Hosts argue AGI stems from LLMs gaining tools, modalities (e.g., cameras), and autonomy like agent problem-solving, not endless training runs. No company wants first AGI claim due to regulatory scrutiny, preferring perpetual ‘almost there’ hype. (Start at 38:29)