
Doom Debates

Liron Shapira

127 episodes

  • Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

    13/01/2026 | 30 mins.

    I joined Steve Bannon’s War Room Battleground to talk about AI doom. Hosted by Joe Allen, we cover AGI timelines, raising kids with a high p(doom), and why improving our survival odds requires a global wake-up call.

    Timestamps:
    00:00:00 — Episode Preview
    00:01:17 — Joe Allen opens the show and introduces Liron Shapira
    00:04:06 — Liron: What’s Your P(Doom)?
    00:05:37 — How Would an AI Take Over?
    00:07:20 — The Timeline to AGI
    00:08:17 — Benchmarks & AI Passing the Turing Test
    00:14:43 — Liron Is Typically a Techno-Optimist
    00:18:00 — Raising a Family with a High P(Doom)
    00:23:48 — Mobilizing a Grassroots AI Survival Campaign
    00:26:45 — Final Message: A Wake-Up Call
    00:29:23 — Joe Allen’s Closing Message to the War Room Posse

    Links:
    Joe’s Substack — https://substack.com/@joebot
    Joe’s Twitter — https://x.com/JOEBOTxyz
    Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
    WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

  • Noah Smith vs. Liron Shapira — Will AI spare our lives AND our jobs?

    05/01/2026 | 1h 55 mins.

    Economist Noah Smith is the author of Noahpinion, one of the most popular Substacks in the world. Far from worrying about human extinction from superintelligent AI, Noah is optimistic AI will create a world where humans still have plentiful, high-paying jobs!

    In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill more confidence in us about humanity’s rapidly approaching AI future.

    Timestamps:
    00:00:00 - Episode Preview
    00:01:41 - Introducing Noah Smith
    00:03:19 - What’s Your P(Doom)™
    00:04:40 - Good vs. Bad Transhumanist Outcomes
    00:15:17 - Catastrophe vs. Total Extinction
    00:17:15 - Mechanisms of Doom
    00:27:16 - The AI Persuasion Risk
    00:36:20 - Instrumental Convergence vs. Peace
    00:53:08 - The “One AI” Breakout Scenario
    01:01:18 - The “Stoner AI” Theory
    01:08:49 - Importance of Reflective Stability
    01:14:50 - Orthogonality & The Waymo Argument
    01:21:18 - Comparative Advantage & Jobs
    01:27:43 - Wealth Distribution & Robot Lords
    01:34:34 - Supply Curves & Resource Constraints
    01:43:38 - Policy of Reserving Human Resources
    01:48:28 - Closing: The Case for Optimism

    Links:
    Noah’s Substack — https://noahpinion.blog
    “Plentiful, high-paying jobs in the age of AI” — https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
    “My thoughts on AI safety” — https://www.noahpinion.blog/p/my-thoughts-on-ai-safety
    Noah’s Twitter — https://x.com/noahpinion

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

  • I Debated Beff Jezos and His "e/acc" Army

    30/12/2025 | 3h 52 mins.

    In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists” a.k.a. “e/acc” folks on an X Space.

    E/accs think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that would be an unjustified exception to the rule that accelerating new technology always has the highest expected value for humanity. As you know, I disagree — I think doom is an extremely likely and imminent possibility.

    This debate took place 9 months before I started Doom Debates, and was one of the experiences that made me realize debating AI doom was my calling. It’s also the only time Beff Jezos has ever not been too chicken to debate me.

    Timestamps:
    00:00:00 — Liron’s New Intro
    00:04:15 — Debate Starts Here: Litigating FOOM
    00:06:18 — Defining the Recursive Feedback Loop
    00:15:05 — The Two-Part Doomer Thesis
    00:26:00 — When Does a Tool Become an Agent?
    00:44:02 — The Argument for Convergent Architecture
    00:46:20 — Mathematical Objections: Ergodicity and Eigenvalues
    01:03:46 — Bayeslord Enters: Why Speed Doesn’t Matter
    01:12:40 — Beff Jezos Enters: Physical Priors vs. Internet Data
    01:13:49 — The 5% Probability of Doom by GPT-5
    01:20:09 — Chaos Theory and Prediction Limits
    01:27:56 — Algorithms vs. Hardware Constraints
    01:35:20 — Galactic Resources vs. Human Extermination
    01:54:13 — The Intelligence Bootstrapping Script Scenario
    02:02:13 — The 10-Megabyte AI Virus Debate
    02:11:54 — The Nuclear Analogy: Noise Canceling vs. Rubble
    02:37:39 — Controlling Intelligence: The Roman Empire Analogy
    02:44:53 — Real-World Latency and API Rate Limits
    03:03:11 — The Difficulty of the Off Button
    03:24:47 — Why Liron is “e/acc at Heart”

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

  • Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more

    24/12/2025 | 2h 54 mins.

    AGI timelines, offense/defense balance, evolution vs. engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!

    Timestamps:
    00:00 Trailer
    03:10 Is My P(Doom) Lowering?
    11:29 First Caller: AI Offense vs Defense Balance
    16:50 Superintelligence Skepticism
    25:05 Agency and AI Goals
    29:06 Communicating AI Risk
    36:35 Attack vs Defense Equilibrium
    38:22 Can We Solve Outer Alignment?
    54:47 What is Your P(Pocket Nukes)?
    1:00:05 The “Shoggoth” Metaphor Is Outdated
    1:06:23 Should I Reframe the P(Doom) Question?
    1:12:22 How YOU Can Make a Difference
    1:24:43 Can AGI Beat Biology?
    1:39:22 Agency and Convergent Goals
    1:59:56 Viewer Poll: What Content Should I Make?
    2:26:15 AI Warning Shots
    2:32:12 More Listener Questions: Debate Tactics, Getting a PhD, Specificity
    2:53:53 Closing Thoughts

    Links:
    Support PauseAI — https://pauseai.info/
    Support PauseAI US — https://www.pauseai-us.org/
    Support LessWrong / Lightcone Infrastructure — LessWrong is fundraising!
    Support MIRI — MIRI’s 2025 Fundraiser

    About the show: Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

  • DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder

    17/12/2025 | 1h 17 mins.

    Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat zero. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away.

    We spar over the hard limits of current models: Devin insists LLMs are hitting a wall, relying entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities. Devin also argues for decentralization, claiming that nuclear proliferation is safer than centralized control.

    We end on a massive timeline split: I see superintelligence in a decade, while he believes we’re a thousand years away from being able to “grow” computers that are truly intelligent.

    Timestamps:
    00:00:00 Episode Preview
    00:01:03 Intro: Snowboarder to Coder
    00:03:30 “I Do Not Have a P(Doom)”
    00:06:47 Nuclear Proliferation & Centralized Control
    00:10:11 The “Spotify Quality” House Analogy
    00:17:15 Ideal Geopolitics: Decentralized Power
    00:25:22 Why AI Can’t “Fly Away”
    00:28:20 The Long Addition Test: Native or Tool?
    00:38:26 Is Non-Determinism a Feature or a Bug?
    00:52:01 The Impossibility of Mind Uploading
    00:57:46 “Growing” Computers from Cells
    01:02:52 Timelines: 10 Years vs. 1,000 Years
    01:11:40 “Plastic Bag Ghosts” & Builder Intuition
    01:13:17 Summary of the Debate
    01:15:30 Closing Thoughts

    Links:
    Devin’s Twitter — https://x.com/devinjelliot

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe


About Doom Debates

It's time to talk about the end of the world!
lironshapira.substack.com
