
Doom Debates

Liron Shapira

123 episodes

  • Doom Debates

    DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder

    17/12/2025 | 1h 17 mins.

    Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat ⚫. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away.

    We spar over the hard limits of current models: Devin insists LLMs are hitting a wall and rely entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities.

    Devin also argues for decentralization, claiming that nuclear proliferation is safer than centralized control.

    We end on a massive timeline split: I see superintelligence within a decade, while he believes we’re a thousand years away from being able to “grow” computers that are truly intelligent.

    Timestamps
    00:00:00 Episode Preview
    00:01:03 Intro: Snowboarder to Coder
    00:03:30 "I Do Not Have a P(Doom)"
    00:06:47 Nuclear Proliferation & Centralized Control
    00:10:11 The "Spotify Quality" House Analogy
    00:17:15 Ideal Geopolitics: Decentralized Power
    00:25:22 Why AI Can't "Fly Away"
    00:28:20 The Long Addition Test: Native or Tool?
    00:38:26 Is Non-Determinism a Feature or a Bug?
    00:52:01 The Impossibility of Mind Uploading
    00:57:46 "Growing" Computers from Cells
    01:02:52 Timelines: 10 Years vs. 1,000 Years
    01:11:40 "Plastic Bag Ghosts" & Builder Intuition
    01:13:17 Summary of the Debate
    01:15:30 Closing Thoughts

    Links
    Devin’s Twitter — https://x.com/devinjelliot

    Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

  • Doom Debates

    PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett

    11/12/2025 | 1h 52 mins.

    Dr. Michael Timothy Bennett is an award-winning young researcher who has developed a new formal framework for understanding intelligence. He has a TINY P(Doom) because he claims superintelligence will be resource-constrained and will tend toward cooperation.

    In this lively debate, I stress-test Michael’s framework and debate whether its theorized constraints will actually hold back superintelligent AI.

    Timestamps
    * 00:00 Trailer
    * 01:41 Introducing Michael Timothy Bennett
    * 04:33 What’s Your P(Doom)?™
    * 10:51 Michael’s Thesis on Intelligence: “Abstraction Layers”, “Adaptation”, “Resource Efficiency”
    * 25:36 Debate: Is Einstein Smarter Than a Rock?
    * 39:07 “Embodiment”: Michael’s Unconventional Computation Theory vs. Standard Computation
    * 48:28 “W-Maxing”: Michael’s Intelligence Framework vs. a Goal-Oriented Framework
    * 59:47 Debating AI Doom
    * 1:09:49 Debating Instrumental Convergence
    * 1:24:00 Where Do You Get Off The Doom Train™ — Identifying the Cruxes of Disagreement
    * 1:44:13 Debating AGI Timelines
    * 1:49:10 Final Recap

    Links
    Michael’s website — https://michaeltimothybennett.com
    Michael’s Twitter — https://x.com/MiTiBennett
    Michael’s latest paper, “How To Build Conscious Machines” — https://osf.io/preprints/thesiscommons/wehmg_v1?view_only

  • Doom Debates

    Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University

    05/12/2025 | 1h 11 mins.

    My guest today achieved something EXTREMELY rare and impressive: coming onto my show with an AI-optimist position, then admitting he hadn’t thought of my counterarguments before, and updating his beliefs in real time! Also, he won the 2013 Nobel Prize in Chemistry for his work in computational biology.

    I’m thrilled that Prof. Levitt understands the value of raising awareness about imminent extinction risk from superintelligent AI, and the value of debate as a tool to uncover the truth — the dual missions of Doom Debates!

    Timestamps
    0:00 — Trailer
    1:18 — Introducing Michael Levitt
    4:20 — The Evolution of Computing and AI
    12:42 — Measuring Intelligence: Humans vs. AI
    23:11 — The AI Doom Argument: Steering the Future
    25:01 — Optimism, Pessimism, and Other Existential Risks
    34:15 — What’s Your P(Doom)™
    36:16 — Warning Shots and Global Regulation
    55:28 — Comparing AI Risk to Pandemics and Nuclear War
    1:01:49 — Wrap-Up
    1:06:11 — Outro + New AI Safety Resource

    Show Notes
    Michael Levitt’s Twitter — https://x.com/MLevitt_NP2013

  • Doom Debates

    Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

    29/11/2025 | 2h 15 mins.

    Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father’s nuclear war warnings and today’s race to AGI.

    We discuss Michael’s 99% probability of doom, his personal experience of being “obsoleted” by AI, and the urgent moral duty of insiders to blow the whistle on AI’s outsize risks.

    Timestamps
    0:00 Intro
    1:29 Introducing Michael Ellsberg, His Father Daniel Ellsberg, and the Pentagon Papers
    5:49 Vietnam War Parallels to AI: Lies and Escalation
    25:23 The Doomsday Machine & Nuclear Insanity
    48:49 Mutually Assured Destruction vs. Superintelligence Risk
    55:10 Evolutionary Dynamics: Replicators and the End of the “Dream Time”
    1:10:17 What’s Your P(Doom)?™
    1:14:49 Debating P(Doom) Disagreements
    1:26:18 AI Unemployment Doom
    1:39:14 Doom Psychology: How to Cope with Existential Risk
    1:50:56 The “Joyless Singularity”: Aligned AI Might Still Freeze Humanity
    2:09:00 A Call to Action for AI Insiders

    Show Notes
    Michael Ellsberg’s website — https://www.ellsberg.com/
    Michael’s Twitter — https://x.com/MichaelEllsberg
    Daniel Ellsberg’s website — https://www.ellsberg.net/
    The upcoming book, “Truth and Consequence” — https://geni.us/truthandconsequence
    Michael’s AI-related Substack, “Mammalian Wetware” — https://mammalianwetware.substack.com/
    Daniel’s debate with Bill Kristol in the run-up to the Iraq War — https://www.youtube.com/watch?v=HyvsDR3xnAg

  • Doom Debates

    Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?

    21/11/2025 | 1h 50 mins.

    Today’s debate: Should we ban the development of artificial superintelligence until scientists agree it is safe and controllable?

    Arguing FOR banning superintelligence until there is scientific consensus that it can be done safely and controllably, and with strong public buy-in: Max Tegmark. He is an MIT professor, bestselling author, and co-founder of the Future of Life Institute whose research has focused on artificial intelligence for the past eight years.

    Arguing AGAINST banning superintelligent AI development: Dean Ball. He is a Senior Fellow at the Foundation for American Innovation who served as a Senior Policy Advisor at the White House Office of Science and Technology Policy under President Trump, where he helped craft America’s AI Action Plan.

    Two of the leading voices on AI policy engage in a high-quality, high-stakes debate for the benefit of the public! This is why I got into the podcast game: I believe debate is an essential tool for humanity to reckon with the creation of superhuman thinking machines.

    Timestamps
    0:00 - Episode Preview
    1:41 - Introducing the Debate
    3:38 - Max Tegmark’s Opening Statement
    5:20 - Dean Ball’s Opening Statement
    9:01 - Designing an “FDA for AI” and Safety Standards
    21:10 - Liability, Tail Risk, and Biosecurity
    29:11 - Incremental Regulation, Timelines, and AI Capabilities
    54:01 - Max’s Nightmare Scenario
    57:36 - The Risks of Recursive Self-Improvement
    1:08:24 - What’s Your P(Doom)?™
    1:13:42 - National Security, China, and the AI Race
    1:32:35 - Closing Statements
    1:44:00 - Post-Debate Recap and Call to Action

    Show Notes
    Statement on Superintelligence released by Max’s organization, the Future of Life Institute — https://superintelligence-statement.org/
    Dean’s reaction to the Statement on Superintelligence — https://x.com/deanwball/status/1980975802570174831
    America’s AI Action Plan — https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/
    “A Definition of AGI” by Dan Hendrycks, Max Tegmark, et al. — https://www.agidefinition.ai/
    Max Tegmark’s Twitter — https://x.com/tegmark
    Dean Ball’s Twitter — https://x.com/deanwball


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
