
Future of Life Institute Podcast

Future of Life Institute

489 episodes

  • How to Rebuild the Social Contract After AGI (with Deric Cheng)

    27/01/2026 | 1h 4 mins.
    Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.
    LINKS:
    Deric Cheng personal website
    AGI Social Contract project site
    Guiding society through the AI economic transition
    CHAPTERS:
    (00:00) Episode Preview
    (01:01) Introducing Deric and AGI
    (04:09) Automation, power, and inequality
    (08:55) Inequality, unrest, and time
    (13:46) Bridging futurists and economists
    (20:35) Future of work scenarios
    (27:22) Jobs resisting AI automation
    (36:57) Luxury, land, and inequality
    (43:32) Designing and testing solutions
    (51:23) Taxation in an AI economy
    (59:10) Envisioning a post-AGI society
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • How AI Can Help Humanity Reason Better (with Oly Sourbut)

    20/01/2026 | 1h 17 mins.
    Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.
    LINKS:
    FLF organization site
    Oly Sourbut personal site
    CHAPTERS:
    (00:00) Episode Preview
    (01:03) FLF and human reasoning
    (08:21) Agents and epistemic virtues
    (22:16) Human use and atrophy
    (35:41) Abstraction and legible AI
    (47:03) Demand, trust and Wikipedia
    (57:21) Map of human reasoning
    (01:04:30) Negotiation, institutions and vision
    (01:15:42) How to get involved
  • How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)

    07/01/2026 | 1h 20 mins.
    Nora Ammann is a technical specialist at the Advanced Research and Invention Agency (ARIA) in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees, and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.
    LINKS:
    Nora Ammann site
    ARIA safeguarded AI program page
    AI Resilience official site
    Gradual Disempowerment website
    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Slow takeoff expectations
    (08:13) Domination versus chaos
    (17:18) Human-AI coalitions vision
    (28:14) Scaling oversight and agents
    (38:45) Formal specs and guarantees
    (51:10) Resilience in AI era
    (01:02:21) Defense-favored cyber systems
    (01:10:37) AI-enabled bargaining and trade
  • How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)

    23/12/2025 | 1h 18 mins.
    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.
    LINKS:
    David Duvenaud academic homepage
    Gradual Disempowerment
    The Post-AGI Workshop
    Post-AGI Studies Discord
    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introducing gradual disempowerment
    (06:06) Obsolete labor and UBI
    (14:29) Property, power, and control
    (23:38) Culture shifts toward AIs
    (34:34) States misalign without people
    (44:15) Competition and preservation tradeoffs
    (53:03) Building post-AGI studies
    (01:02:29) Forecasting and coordination tools
    (01:10:26) Human values and futures
  • Why the AI Race Undermines Safety (with Steven Adler)

    12/12/2025 | 1h 28 mins.
    Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.
    LINKS:
    Steven Adler's Substack: https://stevenadler.substack.com
    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Race Dynamics And Safety
    (18:03) Chatbots And Mental Health
    (30:42) Models Outsmart Safety Tests
    (41:01) AI Swarms And Work
    (54:21) Human Bottlenecks And Oversight
    (01:06:23) Animals And Superintelligence
    (01:19:24) Safety Capabilities And Governance

About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
