
Why Backups Aren't Enough & Identity Recovery is Key against Ransomware
16/12/2025 | 37 mins.
Think your cloud backups will save you from a ransomware attack? Think again. In this episode, Matt Castriotta (Field CTO at Rubrik) explains why the traditional "I have backups" mindset is dangerous. He distinguishes between Disaster Recovery (business continuity for operational errors) and Cyber Resilience (recovering from a malicious attack where data and identity are untrusted).

Matt speaks about the "dirty secrets" of cloud-native recovery, explaining why S3 versioning and replication are not valid cyber recovery strategies (a short sketch of that distinction follows these notes). The conversation shifts to the critical, often overlooked aspect of Identity Recovery. If your Active Directory or Entra ID is compromised, it's "ground zero" and you can't access anything. Matt argues that identity must be treated as the new perimeter and backed up just like any other critical data source.

We also explore the impact of AI agents on data integrity: how do you "rewind" an AI agent that hallucinated and corrupted your data? Plus, practical advice on DORA compliance, multi-cloud resiliency, and the "people and process" side of surviving a breach.

Guest Socials - Matt's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions:
(00:00) Introduction
(02:20) Who is Matt Castriotta?
(03:20) Defining Cyber Resilience: The Ability to Say "No" to Ransomware
(05:00) Why "I Have Backups" is Not Enough
(06:45) The Difference Between Disaster Recovery and Cyber Recovery
(10:20) Cloud Native Risks: Versioning and Replication Are Not Backups
(12:50) DORA Compliance: Multi-Cloud Resiliency & Egress Costs
(15:10) The "Shared Responsibility Model" Trap in Cloud
(17:45) Identity is the New Perimeter: Why You Must Back It Up
(22:30) Identity Recovery: Can You Restore Your Active Directory in Minutes?
(25:40) AI and Data: The New "Oil" and "Crown Jewels"
(27:20) Rubrik Agent Cloud: Rewinding AI Agent Actions
(29:40) Top 3 Priorities for a 2026 Resiliency Program
(33:10) Fun Questions: Guitar, Family, and Italian Food
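On the S3 point, here is a minimal sketch (boto3 assumed; bucket name and retention period are hypothetical) of why versioning alone is not a cyber recovery control: versioning keeps old copies, but an attacker holding credentials can delete those versions, while S3 Object Lock in compliance mode makes recovery points immutable for a retention window. Replication has the same gap as versioning: it faithfully mirrors whatever happens at the source, including deletions and newly encrypted objects.

```python
# Sketch only: versioning vs. Object Lock as a ransomware control.
# Bucket name and retention period are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical

# With plain versioning, an attacker holding credentials can still
# destroy history with versioned deletes:
#   s3.delete_object(Bucket=BUCKET, Key="backup.tar", VersionId="...")

# Object Lock in COMPLIANCE mode is closer to a cyber recovery control:
# locked versions cannot be deleted or overwritten until retention
# expires, even by the account root user. (Note: Object Lock generally
# has to be enabled when the bucket is created.)
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```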

How to Secure Your AI Agents: A CISO's Journey
09/12/2025 | 54 mins.
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform, and how security had to evolve to keep up.

Yash speaks about the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail (an illustrative sketch follows these notes). We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials - Yash's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
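To make "Multi-Layer Trust" concrete, here is an illustrative sketch, not Sendbird's actual implementation: a sensitive agent action requires a quorum of independent signals (device, browser, identity, MFA) rather than relying on any single control. All names and thresholds here are hypothetical.

```python
# Illustrative "multi-layer trust" gate: no single control is trusted
# on its own; a sensitive agent action needs a quorum of independent
# layers to pass. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class RequestContext:
    device_managed: bool      # endpoint/MDM posture check
    browser_verified: bool    # managed-browser check
    identity_verified: bool   # valid SSO session
    mfa_passed: bool          # recent MFA challenge

def allow_sensitive_agent_action(ctx: RequestContext, required: int = 3) -> bool:
    # Assume any one layer can be bypassed or broken; require several
    # independent layers instead of a single "zero trust" gate.
    signals = [ctx.device_managed, ctx.browser_verified,
               ctx.identity_verified, ctx.mfa_passed]
    return sum(signals) >= required

ctx = RequestContext(device_managed=True, browser_verified=False,
                     identity_verified=True, mfa_passed=True)
print(allow_sensitive_agent_action(ctx))  # True: 3 of 4 layers passed
```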

AI-First Vulnerability Management: Should CISOs Build or Buy?
04/12/2025 | 61 mins.
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. Santiago also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where retrieval often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability (a toy eval harness follows these notes), the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
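A toy sketch of what "proper evals" means versus a vibe check: pin a labeled test set and score every prompt or model change against it, instead of eyeballing a few outputs. The cases, labels, and `triage_finding` stub here are hypothetical stand-ins for a real LLM pipeline.

```python
# Minimal eval-harness sketch: a fixed, labeled test set scored on
# every prompt/model change. Cases and labels are hypothetical.

EVAL_SET = [
    # (finding description, expected verdict)
    ("openssl 1.0.2k with known RCE CVE on internet-facing host", "urgent"),
    ("outdated dev dependency in an internal-only test repo", "low"),
]

def triage_finding(description: str) -> str:
    # Stand-in for the real LLM pipeline; replace with your model call.
    return "urgent" if "internet-facing" in description else "low"

def run_evals() -> float:
    correct = sum(
        1 for description, expected in EVAL_SET
        if triage_finding(description) == expected
    )
    return correct / len(EVAL_SET)

# Gate deployments on the score, e.g. in CI:
score = run_evals()
assert score >= 0.95, f"regression vs. pinned eval set: {score:.2f}"
print(f"eval accuracy: {score:.2f}")
```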

SIEM vs. Data Lake: Why We Ditched Traditional Logging?
02/12/2025 | 46 mins.
In this episode, Cliff Crosland, CEO & co-founder of Scanner.dev, shares his candid journey of trying (and initially failing) to build an in-house security data lake to replace an expensive traditional SIEM.

Cliff explains the economic breaking point where scaling a SIEM became "more expensive than the entire budget for the engineering team". He details the technical challenges of moving terabytes of logs to S3 and the painful realization that querying them with Amazon Athena was slow and costly for security use cases (a sketch of measuring that cost follows these notes).

This episode is a deep dive into the evolution of logging architecture, from SQL-based legacy tools to the modern "messy" data lake that embraces full-text search on unstructured data. We discuss the "data engineering lift" required to build your own, the promise (and limitations) of Amazon Security Lake, and how AI agents are starting to automate detection engineering and schema management.

Guest Socials - Cliff's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:25) Who is Cliff Crosland?
(03:00) Why Teams Are Switching from SIEMs to Data Lakes
(06:00) The "Black Hole" of S3 Logs: Cliff's First Failed Data Lake
(07:30) The Engineering Lift: Do You Need a Data Engineer to Build a Lake?
(11:00) Why Amazon Athena Failed for Security Investigations
(14:20) The Danger of Dropping Logs to Save Costs
(17:00) Misconceptions About Building Your Own Data Lake
(19:00) The Evolution of Logging: From SQL to Full-Text Search
(21:30) Is Amazon Security Lake the Answer? (OCSF & Custom Logs)
(24:40) The Nightmare of Log Normalization & Custom Schemas
(28:00) Why Future Tools Must Embrace "Messy" Logs
(29:55) How AI Agents Are Automating Detection Engineering
(35:45) Using AI to Monitor Schema Changes at Scale
(39:45) Build vs. Buy: Does Your Security Team Need Data Engineers?
(43:15) Fun Questions: Physics Simulations & Pumpkin Pie
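To see the Athena problem Cliff describes for yourself, here is a small sketch (boto3 assumed; database, table, and output location are hypothetical). A typical security question is a needle-in-haystack full-text search; over unpartitioned raw logs, Athena must scan every byte and bills per terabyte scanned, which the query statistics make visible.

```python
# Sketch: measuring what an Athena "grep" over raw logs actually scans.
# Database, table, and S3 output location are hypothetical.
import time
import boto3

athena = boto3.client("athena")

# A needle-in-haystack search over unpartitioned logs forces a full scan.
qid = athena.start_query_execution(
    QueryString="SELECT * FROM raw_logs WHERE line LIKE '%AKIA%' LIMIT 100",
    QueryExecutionContext={"Database": "security_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then inspect its cost/latency stats.
while True:
    q = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]
    if q["Status"]["State"] in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

stats = q.get("Statistics", {})
print("Bytes scanned:", stats.get("DataScannedInBytes"))  # billed per TB
print("Runtime (ms):", stats.get("TotalExecutionTimeInMillis"))
```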

How to Build Trust in an AI SOC for Regulated Environments
18/11/2025 | 42 mins.
How do you establish trust in an AI SOC, especially in a regulated environment? Grant Oviatt, Head of SOC at Prophet Security and a former SOC leader at Mandiant and Red Canary, tackles this head-on as a self-proclaimed "AI skeptic". Grant shares that after 15 years of being "scared to death" by high-false-positive AI, modern LLMs have changed the game.

The key to trust lies in two pillars: explainability (is the decision reasonable?) and traceability (can you audit the entire data trail, including all 40-50 queries? a minimal sketch of such a trail follows these notes). Grant talks about the critical architectural components for regulated industries, including single-tenancy, bring-your-own-cloud (BYOC) for data sovereignty, and model portability.

The episode also compares AI SOC offerings to traditional MDRs and digs into real-world "bake-off" results, where an AI SOC reached 99.3% agreement with a human team across 12,000 alerts while being 11x faster, with an average investigation time of just four minutes.

Guest Socials - Grant's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Grant Oviatt?
(02:30) How to Establish Trust in an AI SOC for Regulated Environments
(03:45) Explainability vs. Traceability: The Two Pillars of Trust
(06:00) The "Hard SOC Life": Pre-AI vs. AI SOC
(09:00) From AI Skeptic to AI SOC Founder: What Changed?
(10:50) The "Aha!" Moment: Breaking Problems into Bite-Sized Pieces
(12:30) What Regulated Bodies Expect from an AI SOC
(13:30) Data Management: The Key for Regulated Industries (PII/PHI)
(14:40) Why Point-in-Time Queries are Safer than a SIEM
(15:10) Bring-Your-Own-Cloud (BYOC) for Financial Services
(16:20) Single-Tenant Architecture & No Training on Customer Data
(17:40) Bring-Your-Own-Model: The Rise of Model Portability
(19:20) AI SOC vs. MDR: Can it Replace Your Provider?
(19:50) The 4-Minute Investigation: Speed & Custom Detections
(21:20) The Reality of Building Your Own AI SOC (Build vs. Buy)
(23:10) Managing Model Drift & Updates
(24:30) Why Prophet Avoids MCPs: The Lack of Auditability
(26:10) How Far Can AI SOC Go? (Analysis vs. Threat Hunting)
(27:40) The Future: From "Human in the Loop" to "Manager in the Loop"
(28:20) Do We Still Need a Human in the Loop? (95% Auto-Closed)
(29:20) The Red Lines: What AI Shouldn't Automate (Yet)
(30:20) The Problem with "Creative" AI Remediation
(33:10) What AI SOC is Not Ready For (Risk Appetite)
(35:00) Gaining Confidence: The 12,000 Alert Bake-Off (99.3% Agreement)
(37:40) Fun Questions: Iron Mans, Texas BBQ & Seafood

Thank you to Prophet Security for sponsoring this episode.
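The traceability pillar is easy to picture in code. A minimal sketch, with a hypothetical structure that is not Prophet Security's actual implementation: every query the AI analyst runs during an investigation is recorded with its inputs and outputs, so an auditor can replay the full trail behind a verdict.

```python
# Sketch of the "traceability" pillar: record every query an AI SOC
# agent makes during an investigation so the trail can be audited.
# Structure is illustrative, not any vendor's actual implementation.
import json
from datetime import datetime, timezone

class InvestigationTrail:
    def __init__(self, alert_id: str):
        self.alert_id = alert_id
        self.steps = []

    def record(self, source: str, query: str, result_summary: str) -> None:
        # One entry per query (a single investigation may log 40-50).
        self.steps.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "source": source,            # e.g. EDR, identity provider, SIEM
            "query": query,              # exactly what was asked
            "result_summary": result_summary,
        })

    def export(self) -> str:
        # Auditors can replay the full data trail behind the verdict.
        return json.dumps({"alert": self.alert_id, "trail": self.steps},
                          indent=2)

trail = InvestigationTrail("alert-1234")
trail.record("edr", "process tree for host WS-042", "no suspicious children")
trail.record("idp", "sign-ins for user j.doe, last 24h", "all from known ASN")
print(trail.export())
```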


