The AI Model Built for What LLMs Can’t Do (54 min)
ai-driven-innovation-economy
ai-human-identity
ai-in-workforce-disruption
ai-investment-trends
ai-moral-decision-making
- Release date: 2026-04-15
- Listen on Spotify: Open episode
- Episode description:
Most AI companies are racing to build bigger LLMs. Eve Bodnia thinks that's the wrong approach.

Eve is the founder and CEO of Logical Intelligence, which is developing an alternative to the transformer-based models dominating the industry. Her argument: LLMs’ architecture makes them fundamentally unsuited for some mission-critical tasks. A system that generates output one token at a time, with no ability to inspect its own reasoning mid-process or guarantee its results, shouldn't be trusted to design chips, analyze financial data, or even fly a plane.

Her alternative is the energy-based model (EBM), a form of AI rooted in the physics principle of energy minimization, not language prediction. Rather than guessing the next probable word, an EBM maps every possible outcome across a mathematical landscape, where likely states settle into valleys and improbable ones sit on peaks.

Dan Shipper talked with Bodnia for AI & I about why she believes LLM progress is plateauing, what it means for AI to actually understand data rather than just pattern-match across it, and how her team is building toward formally verified code generated in plain English, no C++ required.

If you found this episode interesting, please like, subscribe, comment, and share!

Head to http://granola.ai/every and get 3 months free with the code EVERY

To hear more from Dan Shipper:
- Subscribe to Every: https://every.to/subscribe
- Follow him on X: https://twitter.com/danshipper

Timestamps:
- 00:00:51 - Introduction
- 00:02:09 - Why correctness and verifiability matter in AI
- 00:09:33 - What an energy-based model is
- 00:14:21 - How EBMs construct energy landscapes to understand data
- 00:19:00 - Why modeling intelligence through language alone is a flawed approach
- 00:26:54 - What it means for a model to "understand" data
- 00:37:21 - How EBMs solve the vibe coding problem and enable formally verified code
- 00:43:21 - Why LLM progress is plateauing
- 00:49:54 - Mission-critical industries haven't adopted LLMs, and how EBMs could fill that gap
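The valleys-and-peaks picture can be made concrete with a toy sketch. This is an illustration of energy minimization in general, not Logical Intelligence's actual model: gradient descent on a one-dimensional double-well energy function, where each starting state rolls downhill into the nearest valley (a low-energy, "likely" state).

```python
# Toy 1-D "energy landscape": a double-well function whose valleys
# stand in for likely states and whose peaks stand in for improbable
# ones. An illustrative sketch only, not an actual EBM.
def energy(x):
    return (x**2 - 1.0)**2  # valleys (minima) at x = -1 and x = +1

def grad_energy(x):
    return 4.0 * x * (x**2 - 1.0)  # derivative of the double well

def minimize(x0, lr=0.05, steps=500):
    """Plain gradient descent: the state rolls downhill into a valley."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_energy(x)
    return x

# Starting points on either side of the central peak settle into the
# nearest valley.
print(round(minimize(-2.0), 3))  # -> -1.0
print(round(minimize(0.5), 3))   # -> 1.0
```

Note the contrast with autoregressive generation: there is no token-by-token sampling here, only the settling of a whole candidate state toward a minimum of the energy.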
Summary
- 🔋 Energy Minimization Core: EBMs draw on the physics principle of energy minimization, settling into low-energy states across a landscape of possible outcomes; this token-free processing avoids LLM-style hallucinations and supports verifiable AI outputs in critical systems.
- 🛡️ Verifiability Edge: Unlike black-box LLMs, EBMs allow internal inspection and external proof checks, ensuring deterministic results for tasks like code generation and autonomous navigation.
- ⚡ Efficiency Gains: By handling sparse data and non-language tasks directly, EBMs reduce compute costs and inference times, making them suitable for real-time applications in data analysis and engineering.
- 🧠 Latent Knowledge Storage: Latent variables capture underlying rules in energy forms, fostering abstract understanding akin to human cognition, free from language biases that limit LLMs.
- 💰 Investment Challenges: Heavy LLM investments create ecosystem inertia, but EBM-LLM hybrids offer a path to integrate innovations, potentially unlocking B2B AI for privacy-sensitive industries.
Insights
- How can energy-based models revolutionize AI reliability in mission-critical systems like autonomous vehicles?
- Time: 2:29 – 3:18
- Answer: Energy-based models (EBMs) offer internal verifiability and constraint enforcement, preventing hallucinations that plague LLMs in high-stakes environments such as self-driving cars or planes. By minimizing energy landscapes instead of predicting tokens, EBMs ensure deterministic outputs, making them safer for applications where errors could be catastrophic. This shift addresses the growing need for trustworthy AI as automation permeates industries.
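The "constraint enforcement" idea can be sketched with a standard penalty method. This is a toy illustration of energy shaping in general, not the episode's actual EBM machinery: a hard requirement is folded into the energy as a steep penalty term, so minimization can only settle in valleys that satisfy it.

```python
# Toy sketch of constraint enforcement via energy shaping (a standard
# penalty method; an illustration only, not the product's actual
# approach). A requirement x >= 0.5 is folded into the energy as a
# steep penalty, so descent can only settle in feasible valleys.
def energy(x):
    base = (x**2 - 1.0)**2                   # valleys at x = -1 and x = +1
    penalty = 100.0 * max(0.0, 0.5 - x)**2   # positive only when x < 0.5
    return base + penalty

def grad(x, h=1e-5):
    # Central-difference numeric gradient, so the penalty needs no
    # hand-derived derivative.
    return (energy(x + h) - energy(x - h)) / (2.0 * h)

def minimize(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting deep inside the forbidden valley at x = -1, descent still
# ends up in the feasible valley near x = +1.
print(round(minimize(-2.0), 2))  # -> 1.0
```

Because the constraint lives inside the energy itself, the minimizer never needs an external filter to reject invalid outputs; infeasible states are simply high-energy and repel the solution.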
- Why might token-free architectures like EBMs outperform LLMs in efficiency for tasks like code generation and data analysis?
- Time: 5:34 – 6:57
- Answer: EBMs avoid the computational expense of autoregressive token prediction, enabling real-time inspection and self-alignment during processing, which reduces costs and speeds up inference. They excel with sparse data by constructing energy landscapes that capture underlying rules, unlike LLMs that require vast training datasets and external verifiers. This makes EBMs ideal for resource-intensive fields like chip design and financial modeling, where LLMs fall short on verifiability and speed.
- In what ways do EBMs mimic human-like understanding through latent variables, surpassing language-dependent reasoning?
- Could integrating EBMs with LLMs bridge the gap between creative generation and verifiable execution in software development?
- How is the AI investment landscape biased toward LLMs, potentially hindering breakthroughs in alternative models like EBMs?