
Machine Learning Street Talk (MLST)


Available Episodes

5 of 224
  • Superintelligence Strategy (Dan Hendrycks)
    Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang.

    *** SPONSOR MESSAGES ***
    Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli
    Prolific: Quality data. From real people. For faster breakthroughs.
    https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
    ***

    Hendrycks argues that society is making a fundamental mistake in how it views artificial intelligence. We often compare AI to transformative but ultimately manageable technologies like electricity or the internet. He contends a far better and more realistic analogy is nuclear technology. Like nuclear power, AI has the potential for immense good, but it is also a dual-use technology that carries the risk of unprecedented catastrophe.

    The Problem with an AI "Manhattan Project":
    A popular idea is for the U.S. to launch a "Manhattan Project" for AI - a secret, all-out government race to build a superintelligence before rivals like China. Hendrycks argues this strategy is deeply flawed and dangerous for several reasons:
    - It wouldn't be secret. You cannot hide a massive, heat-generating data center from satellite surveillance.
    - It would be destabilizing. A public race would alarm rivals, causing them to start their own desperate, corner-cutting projects, dramatically increasing global risk.
    - It's vulnerable to sabotage. An AI project can be crippled in many ways, from cyberattacks that poison its training data to physical attacks on its power plants. This is what the paper refers to as a "maiming attack."

    This vulnerability leads to the paper's central concept: Mutual Assured AI Malfunction (MAIM). This is the AI-era version of the nuclear era's Mutual Assured Destruction (MAD). In this dynamic, any nation that makes an aggressive, destabilizing bid for a world-dominating AI must expect its rivals to sabotage the project to ensure their own survival. This deterrence, Hendrycks argues, is already the default reality we live in.

    A Better Strategy: The Three Pillars
    Instead of a reckless race, the paper proposes a more stable, three-part strategy modeled on Cold War principles:
    - Deterrence: Acknowledge the reality of MAIM. The goal should not be to "win" the race to superintelligence, but to deter anyone from starting such a race in the first place through the credible threat of sabotage.
    - Nonproliferation: Just as we work to keep fissile materials for nuclear bombs out of the hands of terrorists and rogue states, we must control the key inputs for catastrophic AI. The most critical input is advanced AI chips (GPUs). Hendrycks makes the powerful claim that building cutting-edge GPUs is now more difficult than enriching uranium, making this strategy viable.
    - Competitiveness: The race between nations like the U.S. and China should not be about who builds superintelligence first. Instead, it should be about who can best use existing AI to build a stronger economy, a more effective military, and more resilient supply chains (for example, by manufacturing more chips domestically).

    Dan says the stakes are high if we fail to manage this transition:
    - Erosion of Control
    - Intelligence Recursion
    - Worthless Labor

    Hendrycks maintains that while the risks are existential, the future is not set.

    TOC:
    1. Measuring the Beast [00:00:00]
    2. Defining the Beast [00:11:34]
    3. The Core Strategy [00:38:20]
    4. Ideological Battlegrounds [00:53:12]
    5. Mechanisms of Control [01:34:45]

    TRANSCRIPT:
    https://app.rescript.info/public/share/cOKcz4pWRPjh7BTIgybd7PUr_vChUaY6VQW64No8XMs
    <truncated, see refs and larger description on YT version>
    --------  
    1:45:38
  • DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)
    This episode features Shlomi Fruchter and Jack Parker Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored).

    Imagine you could create a video game world just by describing it. That's what Genie 3 does. It's an AI "world model" that learns how the real world works by watching massive amounts of video. Unlike a normal video game engine (like Unreal, or the one behind Doom) that needs to be programmed manually, Genie generates a realistic, interactive 3D world from a simple text prompt.

    *** SPONSOR MESSAGES ***
    Prolific: Quality data. From real people. For faster breakthroughs.
    https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
    ***

    Here's a breakdown of what makes it so revolutionary:
    - From Text to a Virtual World: You can type "a drone flying by a beautiful lake" or "a ski slope," and Genie 3 creates that world for you in about three seconds. You can then navigate and interact with it in real time.
    - It's Consistent: The worlds it creates have a reliable memory. If you look away from an object and then look back, it will still be there, just as it was. The guests explain that this consistency isn't explicitly programmed in; it's a surprising, "emergent" capability of the powerful AI model.
    - A Huge Leap Forward: The previous version, Genie 2, was a major step, but it wasn't fast enough for real-time interaction and ran at a much lower resolution. Genie 3 is 720p, interactive, and photorealistic, running smoothly for several minutes at a time.
    - The Killer App - Training Robots: Beyond entertainment, the team sees Genie 3 as a game-changer for training AI. Instead of training a self-driving car or a robot in the real world (which is slow and dangerous), you can create infinite simulations. You can even prompt rare events to happen, like a deer running across the road, to teach an AI how to handle unexpected situations safely (a toy version of this prompt-and-act loop is sketched at the end of this description).
    - The Future of Entertainment: This could lead to a "YouTube version 2" or a new form of VR, where users can create and explore endless, interconnected worlds together, like the experience machine from philosophy.

    While the technology is still a research prototype and not yet available to the public, it represents a monumental step towards creating true artificial worlds from the ground up.

    Jack Parker Holder [Research Scientist at Google DeepMind in the Open-Endedness Team]
    https://jparkerholder.github.io/
    Shlomi Fruchter [Research Director, Google DeepMind]
    https://shlomifruchter.github.io/

    TOC:
    [00:00:00] - Introduction: "The Most Mind-Blowing Technology I've Ever Seen"
    [00:02:30] - The Evolution from Genie 1 to Genie 2
    [00:04:30] - Enter Genie 3: Photorealistic, Interactive Worlds from Text
    [00:07:00] - Promptable World Events & Training Self-Driving Cars
    [00:14:21] - Guest Introductions: Shlomi Fruchter & Jack Parker Holder
    [00:15:08] - Core Concepts: What is a "World Model"?
    [00:19:30] - The Challenge of Consistency in a Generated World
    [00:21:15] - Context: The Neural Network Doom Simulation
    [00:25:25] - How Do You Measure the Quality of a World Model?
    [00:28:09] - The Vision: Using Genie to Train Advanced Robots
    [00:32:21] - Open-Endedness: Human Skill and Prompting Creativity
    [00:38:15] - The Future: Is This the Next YouTube or VR?
    [00:42:18] - The Next Step: Multi-Agent Simulations
    [00:52:51] - Limitations: Thinking, Computation, and the Sim-to-Real Gap
    [00:58:07] - Conclusion & The Future of Game Engines

    REFS:
    World Models [David Ha, Jürgen Schmidhuber]
    https://arxiv.org/abs/1803.10122
    POET
    https://arxiv.org/abs/1901.01753
    The Fractured Entangled Representation Hypothesis [Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley]
    https://arxiv.org/pdf/2505.11581

    TRANSCRIPT:
    https://app.rescript.info/public/share/Zk5tZXk6mb06yYOFh6nSja7Lg6_qZkgkuXQ-kl5AJqM
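    The episode describes Genie 3 only at a high level, but the robot-training workflow is easy to picture in code. Below is a deliberately toy Python sketch of the prompt/step/inject-event loop discussed above; Genie 3 has no public API, so ToyWorldModel, generate, step, and inject_event are all invented names for illustration, not DeepMind's interface.

    ```python
    # Hypothetical sketch of the text-to-world training loop described in
    # the episode. Genie 3 has no public API; every name here is invented.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class WorldState:
        frame: np.ndarray   # e.g. a 720p RGB frame of the generated world
        done: bool

    class ToyWorldModel:
        """Stand-in for a learned world model: prompt in, interactive world out."""

        def generate(self, prompt: str) -> WorldState:
            print(f"generating world for prompt: {prompt!r}")
            return WorldState(frame=np.zeros((720, 1280, 3), np.uint8), done=False)

        def step(self, state: WorldState, action: str) -> WorldState:
            # A real world model would predict the next frame consistently
            # with everything the "camera" has already observed.
            return WorldState(frame=state.frame, done=False)

        def inject_event(self, state: WorldState, event: str) -> WorldState:
            # "Promptable world events": rare scenarios on demand.
            print(f"injecting event: {event!r}")
            return state

    # A policy trained this way never touches the real world:
    world = ToyWorldModel()
    state = world.generate("a quiet suburban road at dusk")
    for t in range(100):
        if t == 50:
            state = world.inject_event(state, "a deer runs across the road")
        action = "brake" if t == 50 else "drive forward"  # placeholder policy
        state = world.step(state, action)
    ```

    The point of the sketch is the workflow, not the model: because the simulator is generated from text, rare and dangerous cases can be produced on demand rather than waited for in the real world.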
    --------  
    58:22
  • Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)
    Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI.

    He defines true intelligence as the ability to do more with less - to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing less with more: they require astounding amounts of data to perform tasks that don't necessarily demonstrate true understanding or adaptation. He humorously calls this "really shit programming".

    David challenges the popular notion of "emergence" in Large Language Models (LLMs). He explains that the tech community's definition - seeing a sudden jump in a model's ability to perform a task like three-digit math - is superficial. True emergence, from a complex systems perspective, involves a fundamental change in the system's internal organization, allowing for a new, simpler, and more powerful level of description. He gives the example of moving from tracking individual water molecules to using the elegant laws of fluid dynamics (a toy version of this coarse-graining appears in code after this description). For LLMs to be truly emergent, we'd need to see them develop new, efficient internal representations, not just get better at memorizing patterns as they scale.

    Drawing on his background in evolutionary theory, David explains that systems like brains, and later culture, evolved to process information that changes too quickly for genetic evolution to keep up. He calls culture "evolution at light speed" because it allows us to store our accumulated knowledge externally (in books, tools, etc.) and build upon it without corrupting the original.

    This leads to his concept of "exbodiment", where we outsource our cognitive load to the world through things like maps, abacuses, or even language itself. We create these external tools, internalize the skills they teach us, improve them, and create a feedback loop that enhances our collective intelligence.

    However, he ends with a warning. While technology has historically complemented our deficient abilities, modern AI presents a new danger. Because we have an evolutionary drive to conserve energy, we will inevitably outsource our thinking to AI if we can. He fears this is already leading to a "diminution and dilution" of human thought and creativity. Just as our muscles atrophy without use, he argues, our brains will too, and we risk becoming mentally dependent on these systems.

    TOC:
    [00:00:00] Intelligence: Doing more with less
    [00:02:10] Why brains evolved: The limits of evolution
    [00:05:18] Culture as evolution at light speed
    [00:08:11] True meaning of emergence: "More is Different"
    [00:10:41] Why LLM capabilities are not true emergence
    [00:15:10] What real emergence would look like in AI
    [00:19:24] Symmetry breaking: Physics vs. Life
    [00:23:30] Two types of emergence: Knowledge In vs. Out
    [00:26:46] Causality, agency, and coarse-graining
    [00:32:24] "Exbodiment": Outsourcing thought to objects
    [00:35:05] Collective intelligence & the boundary of the mind
    [00:39:45] Mortal vs. Immortal forms of computation
    [00:42:13] The risk of AI: Atrophy of human thought

    David Krakauer
    President and William H. Miller Professor of Complex Systems
    https://www.santafe.edu/people/profile/david-krakauer

    REFS:
    Large Language Models and Emergence: A Complex Systems Perspective
    David C. Krakauer, John W. Krakauer, Melanie Mitchell
    https://arxiv.org/abs/2506.11135

    Filmed at the Diverse Intelligences Summer Institute:
    https://disi.org/
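    Krakauer's water-molecules example can be made concrete with a short simulation. The following Python sketch (illustrative, not from the episode) tracks thousands of individual random walkers, then shows that their collective behavior collapses to a single macroscopic parameter, a diffusion coefficient: the kind of simpler, more powerful level of description he means by genuine emergence.

    ```python
    # Coarse-graining demo: many microscopic random walkers are summarized
    # by one macroscopic parameter D, with MSD(t) ~ 2*D*t in one dimension.
    import numpy as np

    rng = np.random.default_rng(0)
    n_walkers, n_steps, dt, step_std = 10_000, 500, 1.0, 1.0

    # Micro level: track every "molecule" individually.
    steps = rng.normal(0.0, step_std, size=(n_walkers, n_steps))
    positions = np.cumsum(steps, axis=1)      # shape: (walkers, time)

    # Macro level: one number describes the whole ensemble.
    t = np.arange(1, n_steps + 1) * dt
    msd = (positions ** 2).mean(axis=0)       # mean squared displacement
    D_fit = np.polyfit(t, msd, 1)[0] / 2.0    # slope of MSD = 2*D*t
    D_theory = step_std ** 2 / (2.0 * dt)

    print(f"fitted D = {D_fit:.3f}, theoretical D = {D_theory:.3f}")
    # 10,000 trajectories reduce to a single effective parameter: a new,
    # simpler level of description, which is Krakauer's bar for emergence.
    ```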
    --------  
    49:48
  • Pushing compute to the limits of physics
    Dr. Maxwell Ramstead grills Guillaume Verdon (AKA "Beff Jezos"), the founder of the thermodynamic computing startup Extropic.

    Guillaume shares his unique path - from dreaming about space travel as a kid, to becoming a physicist, to working on quantum computing at Google, to developing a radically new form of computing hardware for machine learning. He explains how he hit roadblocks with traditional physics and computing, which led him to start his company building "thermodynamic computers." These are based on a new design for super-efficient chips that use the natural chaos of electrons (think noise and heat) to power AI tasks, promising to speed up AND lower the costs of modern probabilistic techniques like sampling (a minimal sampling example follows the refs below). He is driven by the pursuit of building computers that work more like your brain, which (by the way) runs on a banana and a glass of water!

    Guillaume talks about his alter ego, Beff Jezos, and the "Effective Accelerationism" (e/acc) movement that he initiated. Its objective is to speed up tech progress in order to "grow civilization" (as measured by energy use and innovation), rather than "slowing down out of fear". Guillaume argues we need to embrace variance, exploration, and optimism to avoid getting stuck or outpaced by competitors like China. He and Maxwell discuss big ideas like merging humans with AI, decentralizing intelligence, and why boundless growth (with smart constraints) is "key to humanity's future".

    REFS:
    1. John Archibald Wheeler - "It From Bit" Concept [00:04:45]
    Foundational work proposing that physical reality emerges from information at the quantum level.
    https://cqi.inf.usi.ch/qic/wheeler.pdf
    2. AdS/CFT Correspondence (Holographic Principle) [00:05:15]
    Theoretical physics duality connecting quantum gravity in Anti-de Sitter space with conformal field theory.
    https://en.wikipedia.org/wiki/Holographic_principle
    3. Renormalization Group Theory [00:06:15]
    Mathematical framework for analyzing physical systems across different length scales.
    https://www.damtp.cam.ac.uk/user/dbs26/AQFT/Wilsonchap.pdf
    4. Maxwell's Demon and Information Theory [00:21:15]
    Thought experiment linking information processing to thermodynamics and entropy.
    https://plato.stanford.edu/entries/information-entropy/
    5. Landauer's Principle [00:29:45]
    Fundamental limit establishing the minimum energy required for information erasure.
    https://en.wikipedia.org/wiki/Landauer%27s_principle
    6. Free Energy Principle and Active Inference [01:03:00]
    Mathematical framework for understanding self-organizing systems and perception-action loops.
    https://www.nature.com/articles/nrn2787
    7. Max Tegmark - Information Bottleneck Principle [01:07:00]
    Connections between information theory and renormalization in machine learning.
    https://arxiv.org/abs/1907.07331
    8. Fisher's Fundamental Theorem of Natural Selection [01:11:45]
    Mathematical relationship between genetic variance and evolutionary fitness.
    https://en.wikipedia.org/wiki/Fisher%27s_fundamental_theorem_of_natural_selection
    9. Tensor Networks in Quantum Systems [00:06:45]
    Computational framework for simulating many-body quantum systems.
    https://arxiv.org/abs/1912.10049
    10. Quantum Neural Networks [00:09:30]
    Hybrid quantum-classical models for machine learning applications.
    https://en.wikipedia.org/wiki/Quantum_neural_network
    11. Energy-Based Models (EBMs) [00:40:00]
    Probabilistic framework for unsupervised learning based on energy functions.
    https://www.researchgate.net/publication/200744586_A_tutorial_on_energy-based_learning
    12. Markov Chain Monte Carlo (MCMC) [00:20:00]
    Sampling algorithm fundamental to modern AI and statistical physics.
    https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo
    13. Metropolis-Hastings Algorithm [00:23:00]
    Core sampling method for probability distributions.
    https://arxiv.org/abs/1504.01896

    *** SPONSOR MESSAGE ***
    Google Gemini 2.5 Flash is a state-of-the-art language model in the Gemini app. Sign up at https://gemini.google.com
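    Since several of the refs above (MCMC, Metropolis-Hastings, energy-based models) are exactly the sampling workloads Extropic's thermodynamic chips aim to accelerate, here is a minimal, textbook Python sketch of Metropolis-Hastings on a toy bimodal distribution; this is the standard software algorithm for context, not Extropic's hardware implementation.

    ```python
    # Minimal Metropolis-Hastings sampler (textbook version). Thermodynamic
    # hardware aims to do this kind of sampling physically, using noise.
    import numpy as np

    def log_target(x):
        # Unnormalized log-density: mixture of two Gaussians at -2 and +2.
        return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

    def metropolis_hastings(n_samples=50_000, step=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = 0.0
        samples = np.empty(n_samples)
        for i in range(n_samples):
            proposal = x + rng.normal(0.0, step)  # symmetric random-walk proposal
            # Accept with probability min(1, p(proposal) / p(x)).
            if np.log(rng.random()) < log_target(proposal) - log_target(x):
                x = proposal
            samples[i] = x
        return samples

    samples = metropolis_hastings()
    # Symmetric modes, so the mean is near 0 and the spread straddles both.
    print(f"mean = {samples.mean():.2f}, std = {samples.std():.2f}")
    ```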
    --------  
    1:23:32
  • The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)
    Are the AI models you use today imposters?

    Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg

    In this episode, hosts Dr. Tim Scarfe and Dr. Keith Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akarsh Kumar to discuss their fascinating paper, "Questioning Representational Optimism in Deep Learning."

    Imagine you ask two people to draw a perfect skull. One is a brilliant artist who understands anatomy; the other is a machine that just traces the image. Both drawings look identical, but the artist understands what a skull is - they know where the mouth is, how the jaw works, and that it's symmetrical. The machine just has a tangled mess of lines that happens to form the right picture. An AI with an elegant representation has the building blocks to generate truly new ideas (see the code sketch at the end of this description).

    The Path Is the Goal: As Kenneth Stanley puts it, "it matters not just where you get, but how you got there". Two students can ace a math test, but the one who truly understands the concepts, instead of just memorizing formulas, is the one who will go on to make new discoveries.

    The show is a mixture of three separate recordings: the original Patreon warmup with Tim and Kenneth, the Tim/Keith "Steakhouse" recorded after the main interview, and then the main interview with Kenneth, Akarsh, Keith, and Tim. Feel free to skip around. We had to edit this in a rush as we are travelling next week, but it's reasonably cleaned up.

    TOC:
    00:00:00 Intro: Garbage vs. Amazing Representations
    00:05:42 How Good Representations Form
    00:11:14 Challenging the "Bitter Lesson"
    00:18:04 AI Creativity & Representation Types
    00:22:13 Steakhouse: Critiques & Alternatives
    00:28:30 Steakhouse: Key Concepts & Goldilocks Zone
    00:39:42 Steakhouse: A Sober View on AI Risk
    00:43:46 Steakhouse: The Paradox of Open-Ended Search
    00:47:58 Main Interview: Paper Intro & Core Concepts
    00:56:44 Main Interview: Deception and Evolvability
    01:36:30 Main Interview: Reinterpreting Evolution
    01:56:16 Main Interview: Impostor Intelligence
    02:11:15 Main Interview: Recommendations for AI Research

    REFS:
    Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis
    Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley
    https://arxiv.org/pdf/2505.11581

    Why Greatness Cannot Be Planned: The Myth of the Objective
    Kenneth O. Stanley, Joel Lehman
    https://amzn.to/44xLaXK

    Original show with Kenneth from 4 years ago:
    https://www.youtube.com/watch?v=lhYGXYeMq_E

    Kenneth Stanley is SVP Open Endedness at Lila Sciences
    https://x.com/kenneth0stanley
    Akarsh Kumar (MIT)
    https://akarshkumar.com/

    AND... Kenneth is HIRING (this is an OPPORTUNITY OF A LIFETIME!)
    Research Engineer: https://job-boards.greenhouse.io/lila/jobs/7890007002
    Research Scientist: https://job-boards.greenhouse.io/lila/jobs/8012245002

    TRANSCRIPT:
    https://app.rescript.info/public/share/W_T7E1OC2Wj49ccqlIOOztg2MJWaaVbovTeyxcFEQdU
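    The skull analogy maps directly onto code. Here is an illustrative Python sketch (ours, not from the paper): two programs emit the identical symmetric image, but one encodes the regularity (build half, mirror it) while the other is an opaque lookup. Only the first has reusable building blocks, which is the difference the Fractured Entangled Representation hypothesis is about.

    ```python
    # Same output, different representation (illustrative, not from the paper).
    import numpy as np

    def draw_structured(width=8):
        """'Understands' the image: builds one half, then exploits symmetry."""
        half = np.triu(np.ones((width, width // 2), dtype=int))
        return np.concatenate([half, half[:, ::-1]], axis=1)  # mirror the half

    MEMORIZED = draw_structured().copy()  # pretend this was rote-learned

    def draw_memorized():
        """Same picture, but as an opaque lookup: no symmetry concept inside."""
        return MEMORIZED

    # Indistinguishable by output alone...
    assert np.array_equal(draw_structured(), draw_memorized())

    # ...but only the structured version generalizes under variation:
    print(draw_structured(12).shape)  # (12, 12): the building blocks reuse
    # draw_memorized() has no analogous knob; it can only replay one image.
    ```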
    --------  
    2:16:22


About Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).