Tres Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud
Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.

More like this: Algorithmically Cutting Benefits w/ Kevin De Liban

Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of a government context, and this week she shares her journey from incrementally improving these systems from within (boring, ineffective, hard) to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).

Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they're really about: the marginalised communities whose lives are most affected by these systems.

Further reading & resources:
- The Observatory of Public Algorithms and their Inventory
- The ongoing court case against the French welfare agency's risk-scoring algorithm
- More about Soizic
- More on the Transparency of Public Algorithms roadmap from Etalab — the task force Soizic was part of
- La Quadrature du Net
- France's Digital Inquisition — co-authored by Soizic in collaboration with Lighthouse Reports, 2023
- AI prototypes for UK welfare system dropped as officials lament 'false starts' — The Guardian, Jan 2025
- Learning from Cancelled Systems by Data Justice Lab
- The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment — by Nari Johnson et al., featured at FAccT 2024

**Subscribe to our newsletter to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!**
--------
52:22
Straight to Video: From Rodney King to Sora w/ Sam Gregory
Seeing is believing. Right? But what happens when we lose trust in the recorded media put in front of us?

More like this: The Toxic Relationship Between AI and Journalism w/ Nic Dawes

We talked to Sam Gregory, a global expert and leading voice on this issue for the past 20 years, to get his take. We started way back in 1992, when Rodney King was assaulted by four police officers in Los Angeles. Police brutality was (and is) commonplace, but something different happened in this case: someone used a camcorder and caught it on video. It changed our understanding of the role video could play in accountability. And in the past 30 years, we've gone from using video for evidence and advocacy, to AI slop threatening to seismically reshape our shared realities.

Now apps like Sora provide impersonation-as-entertainment. How did we get here?

Further reading & resources:
- More on the riots that followed the beating of Rodney King — NPR
- More about Sam and WITNESS
- ObscuraCam — a privacy-preserving camera app from WITNESS and The Guardian Project
- C2PA: the Coalition for Content Provenance and Authenticity
- Deepfakes Rapid Response Force by WITNESS

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Post Production by Sarah Myles
--------
1:00:15
The Toxic Relationship Between AI & Journalism w/ Nic Dawes
What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?

More like this: Reanimating Apartheid w/ Nic Dawes

This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom The City. In this conversation we explore journalism's newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?

Beyond just the accurate relaying of facts, journalistic organisations also represent an entire archive of valuable training data for AI companies. If you don't have the same resources as the NYT, suing for copyright infringement isn't an option — so what then? Nic says we have to break out of the false binary of 'if you can't beat them, join them!'

Further reading & resources:
- Judge allows 'New York Times' copyright case against OpenAI to go forward — NPR
- Generative AI and news report 2025: How people think about AI's role in journalism and society — Reuters Institute
- An example of The City's investigative reporting: private equity firms buying up property in the Bronx — 2022
- The Intimacy Dividend — Shuwei Fang
- Sam Altman on Twitter announcing that they've improved ChatGPT to be mindful of the mental health effects — "We realize this made it less useful/enjoyable to many users who had no mental health problems, but…"

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
--------
41:47
Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation
Mozilla Foundation wants to chart a new path in the AI era. But what is its role now, and how can it help reshape the impacts and opportunities of technology for… everyone?

More like this: Defying Datafication w/ Abeba Birhane

Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation?

As Nabiha says, "restraint is a design principle too".

Plug: We'll be at MozFest this year, broadcasting live and connecting with all kinds of folks. If you're feeling the FOMO, be on the lookout for the episodes we produce about our time there.

Further reading & resources:
- Watch this episode on YouTube
- Imaginative Intelligences — a programme of artist assemblies run by Mozilla Foundation
- Nothing Personal — a new counterculture editorial platform from the Mozilla Foundation
- More about MozFest
- Nabiha on the Computer Says Maybe live show at the 2025 AI Action Summit
- Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI — The Register
- Nabiha on why she joined MF as executive director — MF Blog

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
--------
45:59
You Seem Lonely. Have a Robot w/ Stevie Chancellor
Loneliness and mental illness are on the rise in the US while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?

More like this: The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew

Why are individuals confiding in chatbots over qualified human therapists? Stevie Chancellor explains why an LLM can't replace a therapeutic relationship — but often there's just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT. Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.

Further reading & resources:
- Stevie's paper on whether replacing therapists with LLMs is even possible (it's not)
- See the research on GitHub
- People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies — Rolling Stone (May 2025)
- Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future
- Loneliness considered a public health epidemic according to the APA
- FTC orders online therapy company BetterHelp to pay damages of $7.8m
- Delta's plans to use AI in ticket pricing draw fire from US lawmakers — Reuters, July 2025

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.