
The MAD Podcast with Matt Turck

Matt Turck

111 episodes

  • Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

    12/03/2026 | 46 mins.
    The era of the simple AI wrapper is officially dead, and the entire software infrastructure layer is being completely rebuilt. Live from the Daytona COMPUTE Conference in San Francisco, Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why this massive shift is happening. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep under the hood of this new, post-cloud architecture to deconstruct harnesses, sub-agents, context compaction, observability, memory, and the critical need for secure compute sandboxes. For anyone building in AI today, this episode cuts through the noise to reveal the new infrastructure required to make autonomous agents work in the real world.

    (00:00) Intro - meet Harrison Chase
    (01:32) What changed in agents over the last year
    (03:57) Why coding agents are ahead
    (06:26) Do models commoditize the framework layer?
    (08:27) Harnesses, in plain English
    (10:11) Why system prompts matter so much
    (13:11) The upside — and downside — of subagents
    (15:31) Why a useful agent needs a filesystem
    (18:13) The core primitives of modern agents
    (19:12) Skills: the new primitive
    (20:19) What context compaction actually means
    (23:02) How memory works in agents
    (25:16) One mega-agent or many specialized agents?
    (27:46) Has MCP won?
    (29:38) Why agents need sandboxes
    (32:35) How sandboxes help with security
    (33:32) How Harrison Chase started LangChain
    (37:24) LangChain vs LangGraph vs Deep Agents
    (40:17) Why observability matters more for agents
    (41:48) Evals, no-code, and continuous improvement
    (44:41) What LangChain is building next
    (45:29) Where the real moat in AI lives
  • AI That Can Prove It’s Right: Verification as the Missing Layer in AI — Carina Hong

    26/02/2026 | 1h 3 mins.
    What if AI didn’t just sound right — but could prove it? In this episode of the MAD Podcast, Matt Turck sits down with Carina Hong, a 24-year-old former math olympiad competitor and Rhodes Scholar, and the founder/CEO of Axiom Math, to unpack how AxiomProver earned a perfect 12/12 on the Putnam 2025 and why formal verification (via Lean) may be the missing layer for reliable reasoning. Carina argues we’re entering a “math renaissance” where verified reasoning systems can tackle problems that currently take researchers months — and potentially push beyond math into verified code, hardware, and high-stakes software. They go inside the “generation + verification” loop, what it means to build AI that can be trusted, and what this approach could unlock on the road to superintelligent reasoning.

    (00:00) Intro
    (01:25) Why the World Needs an AI Mathematician
    (02:57) Scoring 12/12 on the World's Hardest Math Test (Putnam)
    (04:05) The First AI to Solve Open Research Conjectures
    (06:59) Does AI Solve Math in "Alien" Ways? (The Move 37 Effect)
    (08:59) "Lean": The Programming Language of Proofs Explained
    (10:51) How Axiom's Approach Differs from DeepMind & OpenAI
    (16:06) Formal vs. Informal Reasoning (And Auto-Formalization)
    (17:37) The AI "Reward Hacking" Problem
    (20:18) Building an AI That is 100% Correct, 100% of the Time
    (23:23) Beyond Math: Verified Code & Hardware Verification
    (25:12) The Brutal Reality of Competitive Math Olympiads
    (29:30) From Neuroscience to Stanford Law to Dropout Founder
    (33:57) How Axiom Actually Works Under the Hood (The Architecture)
    (37:51) The Secret to Generating Perfect Synthetic Data
    (40:14) Tokens, Proof Length, and Inference Cost
    (42:58) The "Everest" of Mathematics: Scaling Reasoning Trees
    (46:32) Can an AI Win a Fields Medal?
    (47:25) "Math Renaissance": What Changes if This Works
    (55:47) How Mathematicians React to AI (And Why Proof Certificates Matter)
    (57:30) Becoming a CEO: Dropping Ego and Building Culture
    (1:00:42) Recruiting World-Class Talent & Building the Axiom "Tribe"
  • Voice AI’s Big Moment: Why Everything Is Changing Now (ft. Neil Zeghidour, Gradium AI)

    19/02/2026 | 1h 22 mins.
    Voice used to be AI’s forgotten modality — awkward, slow, and fragile. Now it’s everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.
    We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.
    Neil breaks down today’s dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it’s popular: it’s modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.
    We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.
    Finally, we tackle voice cloning: where it’s genuinely useful, what it means for deepfakes and privacy, and why watermarking isn’t a silver bullet.
    If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.

    Neil Zeghidour
    LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/
    X/Twitter - https://x.com/neilzegh

    Gradium
    Website - https://gradium.ai
    X/Twitter - https://x.com/GradiumAI

    Matt Turck (Managing Director)
    Blog - https://mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    FirstMark
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    (00:00) Intro
    (01:21) Voice AI’s big moment — and why we’re still early
    (03:34) Why voice lagged behind text/image/video
    (06:06) The convergence era: transformers for every modality
    (07:40) Beyond Her: always-on assistants, wake words, voice-first devices
    (11:01) Voice vs text: where voice fits (even for coding)
    (12:56) Neil’s origin story: from finance to machine learning
    (18:35) Neural codecs (SoundStream): compression as the unlock
    (22:30) Kyutai: open research, small elite teams, moving fast
    (31:32) Why big labs haven’t “won” voice AI
    (34:01) On-device voice: where it works, why compact models matter
    (41:35) Benchmarking voice: why metrics fail, how they actually test
    (46:37) The last mile: real-world robustness, pronunciation, uptime
    (47:03) Cascades vs speech-to-speech: trade-offs + what’s next
    (54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
    (1:00:50) New languages + dialects: what transfers, what doesn’t
    (1:02:54) Hardware & compute: why voice isn’t a 10,000-GPU game
    (1:07:27) What data do you need to train voice models?
    (1:09:02) Deepfakes + privacy: why watermarking isn’t a solution
    (1:12:30) Voice + vision: multimodality, screen awareness, video+audio
    (1:14:43) Voice cloning vs voice design: where the market goes
    (1:16:32) Paris/Europe AI: talent density, underdog energy, what’s next
  • Mistral AI vs. Silicon Valley: The Rise of Sovereign AI

    12/02/2026 | 58 mins.
    While Silicon Valley obsesses over AGI, Timothée Lacroix and the team at Mistral AI are quietly building the industrial and sovereign infrastructure of the future. In his first-ever appearance on a US podcast, the Mistral AI Co-Founder & CTO reveals how the company has evolved from an open-source research lab into a full-stack sovereign AI power—backed by ASML, running on their own massive supercomputing clusters, and deployed in nation-state defense clouds to break the dependency on US hyperscalers.

    Timothée offers a refreshing, engineer-first perspective on why the current AI hype cycle is misleading. He explains why "Sovereign AI" is not just a geopolitical buzzword but a necessity for any enterprise that wants to own its intelligence rather than rent it. He also provides a contrarian reality check on the industry's obsession with autonomous agents, arguing that "trust" matters more than autonomy and explaining why he prefers building robust "workflows" over unpredictable agents.

    We also dive deep into the technical reality of competing with the US giants. Timothée breaks down the architecture of the newly released Mistral 3, the "dense vs. MoE" debate, and the launch of Mistral Compute—their own infrastructure designed to handle the physics of modern AI scaling. This is a conversation about the plumbing, the 18,000-GPU clusters, and the hard engineering required to turn AI from a magic trick into a global industrial asset.

    Timothée Lacroix
    LinkedIn - https://www.linkedin.com/in/timothee-lacroix-59517977/
    Google Scholar - https://scholar.google.com.do/citations?user=tZGS6dIAAAAJ&hl=en&oi=ao

    Mistral AI
    Website - https://mistral.ai
    X/Twitter - https://x.com/MistralAI

    Matt Turck (Managing Director)
    Blog - https://mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    FirstMark
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    (00:00) — Cold Open
    (01:27) — Mistral vs. The World: From Research Lab to Sovereign Power
    (03:48) — Inside Mistral Compute: Building an 18,000 GPU Cluster
    (08:42) — The Trillion-Dollar Question: Competing Without a Big Tech Parent
    (10:37) — The Reality of Enterprise AI: Escaping "POC Purgatory"
    (15:06) — Why Mistral Hires Forward Deployed Engineers (FDEs)
    (16:57) — The Contrarian Take: Why "Agents" are just "Workflows"
    (19:35) — Trust > Autonomy: The Truth About Agent Reliability
    (21:26) — The Missing Stack: Governance and Versioning for AI
    (26:24) — When Will AI Actually Work? (The 2026 Timeline)
    (30:33) — Beyond Chat: The "Banger" Sovereign Use Cases
    (35:46) — Mistral 3 Architecture: Mixture of Experts vs. Dense
    (43:12) — Synthetic Data & The Post-Training Bottleneck
    (45:12) — Reasoning Models: Why "Thinking" is Just Tool Use
    (46:22) — Launching DevStral 2 and the Vibe CLI
    (50:49) — Engineering Lessons: How to Build Frontier AI Efficiently
    (56:08) — Timothée’s View on AGI & The Future of Intelligence
  • Dylan Patel: NVIDIA’s New Moat & Why China is “Semiconductor Pilled”

    05/02/2026 | 1h 16 mins.
    Dylan Patel (SemiAnalysis) joins Matt Turck for a deep dive into the AI chip wars — why NVIDIA is shifting from a “one chip can do it all” worldview to a portfolio strategy, how inference is getting specialized, and what that means for CUDA, AMD, and the next wave of specialized silicon startups.

    Then we take the fun tangents: why China is effectively “semiconductor pilled,” how provinces push domestic chips, what Huawei means as a long-term threat vector, and why so much “AI is killing the grid / AI is drinking all the water” discourse misses the point.

    We also tackle the big macro question: capex bubble or inevitable buildout? Dylan’s view is that the entire answer hinges on one variable—continued model progress—and we unpack the second-order effects across data centers, power, and the circular-looking financings (CoreWeave/Oracle/backstops).

    Dylan Patel
    LinkedIn - https://www.linkedin.com/in/dylanpatelsa/
    X/Twitter - https://x.com/dylan522p

    SemiAnalysis
    Website - https://semianalysis.com
    X/Twitter - https://x.com/SemiAnalysis_

    Matt Turck (Managing Director)
    Blog - https://mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    FirstMark
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    (00:00) - Intro
    (01:16) - Nvidia acquires Groq: A pivot to specialization
    (07:09) - Why AI models might need "wide" compute, not just fast
    (10:06) - Is the CUDA moat dead? (Open source vs. Nvidia)
    (17:49) - The startup landscape: Etched, Cerebras, and 1% odds
    (22:51) - Geopolitics: China's "semiconductor-pilled" culture
    (35:46) - Huawei's vertical integration is terrifying
    (39:28) - The $100B AI revenue reality check
    (41:12) - US Onshoring: Why total self-sufficiency is a fantasy
    (44:55) - Can the US actually build fabs? (The delay problem)
    (48:33) - The CapEx Bubble: Is $500B spending irrational?
    (54:53) - Energy Crisis: Why gas turbines will power AI, not nuclear
    (57:06) - The "AI uses all the water" myth (Hamburger comparison)
    (1:03:40) - Circular Debt? Debunking the Nvidia-CoreWeave risk
    (1:07:24) - Claude Code & the software singularity
    (1:10:23) - The death of the Junior Analyst role
    (1:11:14) - Model predictions: Opus 4.5 and the RL gap
    (1:14:37) - San Francisco Lore: Roommates (Dwarkesh Patel & Sholto Douglas)


About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, and Data (MAD) landscape, hosted by Matt Turck, a leading AI and data investor and Partner at FirstMark Capital.