
Agentic AI Security | case studies by Microsoft, OWASP
04/01/2026 | 32 mins.
As promised, I'm back with Tania for a deep dive into the wild world of agentic AI security - how modern AI agents break, misbehave, or get exploited, and what real case studies are teaching us. We're unpacking insights from the Taxonomy of Failure Modes in Agentic AI Systems, the core paper behind today's discussion, and exploring what these failures look like in practice.

We also break down three great resources shaping the conversation right now:

Microsoft's Taxonomy of Failure Modes in Agentic AI Systems - a super clear breakdown of how agent failures emerge across planning, decision-making, and action loops: https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Taxonomy-of-Failure-Mode-in-Agentic-AI-Systems-Whitepaper.pdf

OWASP's Agentic AI Threats & Mitigations - a practical, security-team-friendly guide to common attack paths and how to defend against them: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/

Unit 42's Agentic AI Threats report - real-world examples of adversarial prompting, privilege escalation, and chain-of-trust issues showing up in deployed systems: https://unit42.paloaltonetworks.com/agentic-ai-threats/

Join us as we translate the research, sift through what's real vs. hype, and talk about what teams should be preparing for next.

A hacky Christmas message
23/12/2025 | 3 mins.
A quick end-of-year message to say thanks. Thanks for being part of the channel this year - whether you've been watching quietly, sharing, or arguing with me in the comments. I really appreciate it.

I hope you have a good Christmas and holiday period, whatever that looks like for you. Take a break if you can. See you in 2026.

Three Black Hat talks at just 18! My interview with Bandana Kaur.
21/12/2025 | 12 mins.
In this episode, I'm joined by Bandana Kaur - a cybersecurity researcher, speaker, and all-round superstar who somehow managed to do in her teens what most people are still figuring out in their thirties.

Bandana is just 18 years old, entirely self-taught in cybersecurity, already working in the field, and recently gave three talks at Black Hat. Yes, three!

We talk about how she taught herself cybersecurity as a teenager, how she broke into the industry without a traditional pathway, and what it's actually like being young (and very competent) in a field that still struggles with gatekeeping. Bandana shares what she focused on while learning, how she approached opportunities like conference speaking, and what she thinks matters most for people trying to get into security today.

This conversation is part career advice, and part reminder that you don't need permission - or a perfectly linear path - to do meaningful work in cybersecurity.

Follow Bandana: @hackwithher

Effective Altruism and AI with Good Ancestors CEO Greg Sadler | part 2
14/12/2025 | 31 mins.
Remember that time I invited myself over to Greg's place with my camera? This is part 2 from that great conversation. I'm curious to hear whether you've heard much about EA. It's something really big in the AI world, but I'm conscious a lot of people outside the bubble haven't heard of it. Let me know in the comments!

Check out Greg's work here: https://www.goodancestors.org.au/
MIT AI Risk Repository: https://airisk.mit.edu/
The Life You Can Save (book): https://www.thelifeyoucansave.org/book/
80,000 Hours: https://80000hours.org/
Learn more about AI capability and impacts: https://bluedot.org/

AI Safety with CEO of Good Ancestors Greg Sadler | part 1
07/12/2025 | 27 mins.
This week I invited myself over to Greg Sadler's place to chat about AI safety. Greg is the CEO of Good Ancestors. I brought sushi, but I hadn't had lunch, so I ate most of it myself, and then I almost made him late for his next meeting. We chat through AI capabilities, his work in policy, and building a non-profit. Greg is the kind of person who is so smart and cool that I feel like an absolute dummy interviewing him, so I know you're all going to like this episode. Stay tuned for part 2, where we dive into effective altruism and its intersection with AI!

Check out Greg's work here: https://www.goodancestors.org.au/
MIT AI Risk Repository: https://airisk.mit.edu/
The Life You Can Save (book): https://www.thelifeyoucansave.org/book/
80,000 Hours: https://80000hours.org/
Learn more about AI capability and impacts: https://bluedot.org/



The AI Security Podcast