How Should AI Be Regulated? Use vs. Development (47 min)
- Release date: 2026-01-20
- Listen on Spotify
- Episode description:
To Regulate AI Effectively, Focus on How It's Used. A conversation with Martin Casado on learning from past computing platform shifts, understanding marginal risk in AI, and why open source matters for US competitiveness.

One of the core pillars of our roadmap for federal AI legislation makes clear that AI should not excuse wrongdoing. When people or companies use AI to break the law, existing criminal, civil rights, consumer protection, and antitrust frameworks should still apply, and enforcement agencies should have the resources they need to enforce the law. If existing bodies of law fall short in accounting for certain AI use cases, any new laws should be evidence-based, clearly defining marginal risks and the optimal approach to target harms directly.

In this conversation, we go deeper on what that principle means in practice with Martin Casado, general partner at a16z, where he leads the firm's infrastructure practice and invests in advanced AI systems and foundational compute. Martin has lived through multiple platform shifts: as a researcher, he worked on large-scale simulations for the Department of Defense before working with the intelligence community on networking and cybersecurity; he was a pioneer of software-defined networking at Stanford; and he was the cofounder and CTO of Nicira, which was acquired by VMware. That history gives him a rare perspective on how breakthrough technologies are governed as they develop and scale.

Martin joins Jai Ramaswamy and Matt Perault to discuss how decades of technology policy can inform addressing harmful uses of AI, defining marginal risk in AI, the importance of open source for long-term competitiveness, and more.

Follow Jai Ramaswamy on X: https://twitter.com/jai_ramaswamy
Follow Matt Perault on X: https://twitter.com/MattPerault
Follow Martin Casado on X: https://twitter.com/martin_casado
Read the a16z AI Policy Brief here: https://a16zpolicy.substack.com/

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Please note that the content here is for informational purposes only; it should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security, and it is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see a16z.com/disclosures.
Summary
- 🚫 Regulate Uses, Not Development: Target harmful behaviors with existing tech-neutral laws, avoiding the loopholes created by unstable AI definitions and rapid evolution.
- 🇨🇳 Uncertainty Boosts Chinese Open-Source: US hesitation lets China dominate the open models used by academics and startups, risking US innovation leadership and spreading models with embedded biases.
- 📖 Follow Software Regulation History: Past successes regulating misuse (malware, encryption) without curbing creation provide a proven, balanced model for AI.
- 🆚 Incumbents Gain, Startups Suffer: Compliance burdens and uncertainty slow startups' funding and hiring, advantaging well-resourced big tech.
- ⚖️ Demand Evidence-Based Balance: Fill legal gaps only after assessing marginal risks through robust discourse, avoiding the precautionary overreach that set the EU back.
Insights
How is regulatory uncertainty handing open-source AI leadership to China?
Time: 0:14 – 0:26
Category: AI-Driven Innovation Economy
Answer: US companies hesitate to release strong open-source models due to legal risks like copyright lawsuits and unclear regulations, pushing hobbyists, academics, and startups to Chinese alternatives. This chills the innovation ecosystems critical for future breakthroughs, as seen with Linux. Already, 80% of AI startups pitch with Chinese open-source models. (Start at 0:14)
Why should AI regulation target harmful uses rather than model development?
Time: 0:47 – 1:04
Category: AI Governance & Laws
Answer: Regulating development is prone to loopholes because AI lacks a stable definition and evolves rapidly, while existing laws already address bad behaviors like malware transmission or discrimination. Historical software precedents, such as the Computer Fraud and Abuse Act, focus on misuse, enabling innovation while curbing harms. This approach avoids obsolescence and fosters effective policy. (Start at 0:47)
What historical software regulation lessons apply to AI policy?
Time: 4:14 – 7:37
Category: AI Governance & Laws
Answer: Past approaches regulated harmful uses rather than invention (e.g., punishing malware distribution, not creation; permitting encryption without mandated backdoors), balancing innovation and safety. This prevented stifling fields like e-commerce while addressing cybercrime through evolving enforcement tools. AI demands similar evidence-based, use-focused rules rather than speculative development bans. (Start at 4:14)
How has imbalanced discourse skewed AI policy debates?
Time: 8:47 – 10:53
Category: AI Governance & Laws
Answer: Unlike the robust past debates over the internet and crypto, which included pro-innovation academics and VCs, recent AI discussions have lacked those voices, with some VCs oddly anti-open-source. The result is precautionary overreach without evidence of marginal risk, even as experts like Dawn Song note that key research questions remain open. Restoring equilibrium requires all stakeholders engaging in evidence-based policy. (Start at 8:47)
What geopolitical risks arise from China’s open-source AI dominance?
Time: 34:31 – 37:06
Category: AI & Global Economic Shifts
Answer: Chinese models embed values (e.g., biases around Tiananmen), shaping how the world perceives information and extending soft power as AI becomes the interface to computing. Adoption creates network effects, like VHS winning over Betamax, that reward release cadence. US regulatory uncertainty lets China penetrate hobbyist and startup communities, risking long-term US leadership. (Start at 34:31)
Why do development-focused AI regs favor incumbents over startups?
Time: 39:03 – 41:51
Category: AI Investment Trends
Answer: Uncertain rules like FLOPS thresholds burden resource-poor startups with compliance costs that halt funding, hiring, and customer adoption, while big tech absorbs those costs. VCs pull term sheets amid 1,200 state bills and federal flux, killing early-stage innovation. Use-based regulation lets startups build freely under existing laws. (Start at 39:03)
Why trust existing laws to handle AI harms initially?
Time: 42:29 – 44:56
Category: AI Governance & Laws
Answer: General, tech-neutral laws already prohibit discrimination, fraud, and unfair practices carried out with AI; there is no 'AI pass' exemption. Gaps can be filled once evidence of marginal risk emerges, avoiding AI-specific rules that would quickly become obsolete. Enforcement has adapted successfully to the evolution of cybercrime. (Start at 42:29)