
Doom Debates

Liron Shapira

Available Episodes (5 of 74)
  • Gary Marcus vs. Liron Shapira — AI Doom Debate
    Prof. Gary Marcus is a scientist, bestselling author, and entrepreneur, well known as one of the most influential voices in AI. He is Professor Emeritus of Psychology and Neuroscience at NYU. He was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary co-authored the 2019 book Rebooting AI: Building Artificial Intelligence We Can Trust and the 2024 book Taming Silicon Valley: How We Can Ensure That AI Works for Us. He played an important role in the 2023 Senate Judiciary Subcommittee Hearing on Oversight of AI, testifying alongside Sam Altman. In this episode, Gary and I have a lively debate about whether P(doom) is approximately 50% or less than 1%!

    00:00 Introducing Gary Marcus
    02:33 Gary’s AI Skepticism
    09:08 The Human Brain is a Kluge
    23:16 The 2023 Senate Judiciary Subcommittee Hearing
    28:46 What’s Your P(Doom)™
    44:27 AI Timelines
    51:03 Is Superintelligence Real?
    01:00:35 Humanity’s Immune System
    01:12:46 Potential for Recursive Self-Improvement
    01:26:12 AI Catastrophe Scenarios
    01:34:09 Defining AI Agency
    01:37:43 Gary’s AI Predictions
    01:44:13 The NYTimes Obituary Test
    01:51:11 Recap and Final Thoughts
    01:53:35 Liron’s Outro
    01:55:34 Eliezer Yudkowsky’s New Book!
    01:59:49 AI Doom Concept of the Day

    Show Notes
    Gary’s Substack — https://garymarcus.substack.com
    Gary’s Twitter — https://x.com/garymarcus
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Hope to see you there!
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    2:04:01
  • Mike Israetel vs. Liron Shapira — AI Doom Debate
    Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!

    00:00 Introducing Mike Israetel
    12:19 What’s Your P(Doom)™
    30:58 Timelines for Artificial General Intelligence
    34:49 Superhuman AI Capabilities
    43:26 AI Reasoning and Creativity
    47:12 Evil AI Scenario
    01:08:06 Will the AI Cooperate With Us?
    01:12:27 AI’s Dependence on Human Labor
    01:18:27 Will AI Keep Us Around to Study Us?
    01:42:38 AI’s Approach to Earth’s Resources
    01:53:22 Global AI Policies and Risks
    02:03:02 The Quality of Doom Discourse
    02:09:23 Liron’s Outro

    Show Notes
    Mike’s Instagram — https://www.instagram.com/drmikeisraetel
    Mike’s YouTube — https://www.youtube.com/@MikeIsraetelMakingProgress

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    2:15:10
  • Doom Scenario: Human-Level AI Can't Control Smarter AI
    I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…

    00:00 Introduction
    07:59 The Dangerous Threshold to Runaway Superintelligence
    18:57 Superhuman Goal Optimization = Infinite Time Horizon
    21:21 Goal-Completeness by Analogy to Turing-Completeness
    26:53 Intellidynamics
    29:13 Goal-Optimization Is Convergent
    31:15 Early AIs Lose Control of Later AIs
    34:46 The Superhuman Threshold Is Real
    38:27 Expecting Rapid FOOM
    40:20 Rocket Alignment
    49:59 Stability of Values Under Self-Modification
    53:13 The Way to Heaven Passes Right By Hell
    57:32 My Mainline Doom Scenario
    01:17:46 What Values Does The Goal Optimizer Have?

    Show Notes
    My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g
    The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:24:12
  • The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
    What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions. Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes. This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.

    00:00 Introducing Jim Babcock
    01:29 The Evolution of LessWrong Doom Scenarios
    02:22 LessWrong’s Mission
    05:49 The Rationalist Community and AI
    09:37 What’s Your P(Doom)™
    18:26 What Are Yudkowskians Surprised About?
    26:48 Moral Philosophy vs. Goal Alignment
    36:56 Sandboxing and AI Containment
    42:51 Holding Yudkowskians Accountable
    58:29 Understanding Next Word Prediction
    01:00:02 Pre-Training vs. Post-Training
    01:08:06 The Rocket Alignment Problem Analogy
    01:30:09 FOOM vs. Gradual Disempowerment
    01:45:19 Recapping the Mainline Doom Scenario
    01:52:08 Liron’s Outro

    Show Notes
    Jim’s LessWrong — https://www.lesswrong.com/users/jimrandomh
    Jim’s Twitter — https://x.com/jimrandomh
    The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
    Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth
    Doom Debates episode about the research paper discovering AI’s utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:53:28
  • AI Could Give Humans MORE Control — Ozzie Gooen
    Ozzie Gooen is the founder of the Quantified Uncertainty Research Institute (QURI), a nonprofit building software tools for forecasting and policy analysis. I’ve known him through the rationality community since 2008, and we have a lot in common.

    00:00 Introducing Ozzie
    02:18 The Rationality Community
    06:32 What’s Your P(Doom)™
    08:09 High-Quality Discourse and Social Media
    14:17 Guesstimate and Squiggle Demos
    31:57 Prediction Markets and Rationality
    38:33 Metaforecast Demo
    41:23 Evaluating Everything with LLMs
    47:00 Effective Altruism and FTX Scandal
    56:00 The Repugnant Conclusion Debate
    01:02:25 AI for Governance and Policy
    01:12:07 PauseAI Policy Debate
    01:30:10 Status Quo Bias
    01:33:31 Decaf Coffee and Caffeine Powder
    01:34:45 Are You Aspie?
    01:37:45 Billionaires in Effective Altruism
    01:48:06 Gradual Disempowerment by AI
    01:55:36 LessOnline Conference
    01:57:34 Supporting Ozzie’s Work

    Show Notes
    Quantified Uncertainty Research Institute (QURI) — https://quantifieduncertainty.org
    Ozzie’s Facebook — https://www.facebook.com/ozzie.gooen
    Ozzie’s Twitter — https://x.com/ozziegooen
    Guesstimate, a spreadsheet for working with probability ranges — https://www.getguesstimate.com
    Squiggle, a programming language for building Monte Carlo simulations — https://www.squiggle-language.com
    Metaforecast, a prediction market aggregator — https://metaforecast.org
    Open Annotate, AI-powered content analysis — https://github.com/quantified-uncertainty/open-annotate/

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:59:13


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com

v7.18.2 | © 2007-2025 radio.de GmbH
Generated: 5/16/2025 - 3:02:35 PM