How to Build the Future: Demis Hassabis (41 min)
ai-driven-innovation-economy
ai-for-personalized-medicine
ai-human-identity
ai-in-everyday-life
ai-in-gaming-virtual-worlds
ai-in-workforce-disruption
ai-moral-decision-making
ai-singularity-speculation
cultural-creativity-with-ai
- Release date: 2026-04-29
- Listen on Spotify
- Episode description:
Demis Hassabis has had one of the most extraordinary careers in tech. He started as a chess prodigy and video game designer at 17 before getting a PhD in neuroscience and going on to found DeepMind. His lab cracked Go, solved protein structure prediction with AlphaFold, and then gave it away free to every scientist on earth. That work won him the 2024 Nobel Prize in Chemistry. Today he leads Google DeepMind, pushing toward the same goal he set as a teenager: AGI. On this special live episode of How to Build the Future, he sat down with YC's Garry Tan to talk about what still needs to happen to get us to AGI, his advice for founders on how to stay ahead of the curve, and what the next big scientific breakthroughs might be.
- Chapters:
  - 00:00 — Intro
  - 00:46 — Demis Hassabis: From Chess Prodigy to DeepMind
  - 01:48 — What’s Missing Before We Get To AGI?
  - 03:36 — Why Memory Is Still Unsolved
  - 06:14 — How AlphaGo Shaped Gemini
  - 08:06 — Why Smaller Models Are Getting So Powerful
  - 10:46 — The 1000x Engineer
  - 12:40 — Continual Learning and the Future of Agents
  - 13:32 — Why AI Still Fails at Basic Reasoning
  - 15:33 — Are Agents Overhyped or Just Getting Started?
  - 18:31 — Can AI Become Truly Creative?
  - 20:26 — Open Models, Gemma, and Local AI
  - 22:26 — Why Gemini Was Built Multimodal
  - 24:08 — What Happens When Inference Gets Cheap?
  - 25:24 — From AlphaFold to Virtual Cells
  - 28:24 — AI as the Ultimate Tool for Science
  - 30:43 — Advice for Founders
  - 33:30 — The AlphaFold Breakthrough Pattern
  - 35:20 — Can AI Make Real Scientific Discoveries?
  - 37:59 — What to Build Before AGI Arrives
- Apply to Y Combinator: https://www.ycombinator.com/apply
- Work at a startup: https://www.ycombinator.com/jobs
Summary
- 🧠 Path to AGI: Current paradigms form the core of an AGI architecture, but continual learning, memory, and reasoning still need real innovations; agents are key for active problem-solving, and Hassabis puts AGI timelines around 2030.
- 🔬 AI in Science: AlphaFold exemplifies AI’s transformative role in biology, with virtual cells an estimated 10 years away; domains that pair vast combinatorial search spaces with learned evaluation functions are ripe for breakthroughs in materials, math, and drug discovery (a toy sketch of this pattern follows the list).
- ⚡ Efficiency Gains: Distillation packs frontier-model power into small models for edge devices and billions of users, boosting productivity toward the "1000x engineer" while prioritizing privacy and speed in robotics and assistants (a minimal distillation sketch also follows the list).
- 🤖 Agent Potential: Agents are just getting started, already enabling prototypes and workflows but still lacking the creativity to produce genuine hits; multimodal Gemini advances real-world applications like gaming and physical AI.
- 🚀 Deep Tech Advice: Pursue interdisciplinary hard problems that combine AI with atoms-world domains such as biology, materials, and robotics; plan for where AGI will be midway through a startup's journey, and build defensible tools that general models can leverage.
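
The "AlphaFold pattern" in the science bullet above is, at its core, a vast combinatorial search space plus a learned evaluation function that prunes it. Below is a minimal, illustrative Python sketch of that pattern, using beam search over bit strings with a stand-in scorer; the problem and all names here (`beam_search`, `toy_score`) are hypothetical, not anything DeepMind ships.

```python
import heapq

def beam_search(score_fn, n_bits: int, beam_width: int = 8):
    """Search a 2^n combinatorial space, keeping only the
    beam_width candidates the (learned) scorer ranks highest."""
    beam = [""]  # start from the empty prefix
    for _ in range(n_bits):
        # Expand every candidate by one bit, then prune with the scorer.
        candidates = [prefix + bit for prefix in beam for bit in "01"]
        beam = heapq.nlargest(beam_width, candidates, key=score_fn)
    return max(beam, key=score_fn)

# Stand-in for a trained evaluator (e.g., a value network or a
# structure-quality predictor): rewards alternating bits.
def toy_score(s: str) -> float:
    return sum(1.0 for a, b in zip(s, s[1:]) if a != b)

print(beam_search(toy_score, n_bits=16))  # e.g. '0101010101010101'
```

In AlphaGo and AlphaFold the scorer is a deep network trained on real data, which is what makes searching an otherwise intractable space feasible.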
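Distillation, referenced in the efficiency bullet, trains a small "student" model to match the softened output distribution of a large "teacher". A minimal PyTorch sketch, assuming toy linear models and an illustrative temperature; it shows the standard soft-target KL objective, not Gemini's or Gemma's actual recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher = torch.nn.Linear(32, 10)   # stand-in for a frontier model
student = torch.nn.Linear(32, 10)   # small model to deploy at the edge
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)  # unlabeled inputs are enough
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)
    log_probs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence between teacher and student, scaled by T^2
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2
    opt.zero_grad()
    loss.backward()
    opt.step()
```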
Insights
- What key capabilities like continual learning and long-term reasoning are still missing for achieving AGI?
- Time: 0:00 – 3:25
- Answer: Demis Hassabis highlights that while current paradigms like large-scale pre-training and RLHF form the foundation, solving continual learning, memory, and consistent reasoning remains essential for AGI. Scaling existing techniques alone may not close these gaps; he expects one or two major innovations will be needed on the path to general intelligence.
- How can neuroscience-inspired techniques like experience replay enhance AI memory and learning?
- Time: 3:47 – 5:04
- Answer: Drawing on his PhD work on the hippocampus, Hassabis explains how the brain consolidates memories during sleep, a mechanism DeepMind borrowed via experience replay in its early DQN agent for Atari games. Current AI instead relies on brute-force context windows; innovations in efficient memory retrieval could mimic these biological processes for better adaptability (a minimal replay-buffer sketch follows).
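
For context, experience replay as used in DQN stores past transitions and trains on randomly re-sampled minibatches, loosely echoing hippocampal replay during sleep. A minimal sketch of just the buffer; the surrounding agent and environment code is assumed.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions; sampling uniformly
    breaks the temporal correlation of online experience."""
    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))  # columns: states, actions, ...

# During training (agent/env objects are hypothetical):
#   buffer.push(s, a, r, s2, done) after every environment step
#   states, actions, rewards, next_states, dones = buffer.sample(32)
#   ...then compute Q-learning targets on the shuffled batch
```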
- Why are reinforcement learning and search methods from AlphaGo still relevant to modern foundation models like Gemini?
- How does model distillation enable efficient deployment of powerful AI across billions of users?
- What limitations in current reasoning prevent AI from achieving consistent, human-like intelligence?
- Are AI agents overhyped, or just at the dawn of transformative potential?
- Can AI truly invent novel concepts, like creating the game of Go from a high-level description?
- How will multimodal models like Gemini revolutionize robotics and real-world assistants?
- When might we achieve virtual cells for drug discovery, and what challenges remain?
- What patterns make scientific domains ripe for AI breakthroughs like AlphaFold?