
Pondering AI

Kimberly Nevala, Strategic Advisor - SAS

Available Episodes (5 of 84)
  • Your Digital Twin Is Not You with Kati Walcott
    Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.
    Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.
    Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.
    Related Resources:
    The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn article)
    A transcript of this episode is here.
    --------  
    53:17
  • No Community Left Behind with Paula Helm
    Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.
    Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.
    Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.
    Related Resources:
    Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
    Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6
    A transcript of this episode is here.
    --------  
    52:06
  • What AI Values with Jordan Loewen-Colón
    Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.
    Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.
    Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.
    Related Resources:
    HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
    AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication
    A transcript of this episode is here.
    --------  
    51:41
  • Agentic Insecurities with Keren Katz
    Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.
    Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI and Governance report; ransomware in 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.
    Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable, and a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.
    Related Resources:
    The Silent Breach: Why Agentic AI Demands New Oversight (article)
    State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
    The LLM Top 10: https://genai.owasp.org/llm-top-10/
    A transcript of this episode is here.
    --------  
    49:19
  • To Be or Not to Be Agentic with Maximilian Vogel
    Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.
    Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule, not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs, not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and implications for human work.
    Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.
    Related Resources:
    Medium: https://medium.com/@maximilian.vogel
    A transcript of this episode is here.
    --------  
    51:19


About Pondering AI

How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
v8.0.4 | © 2007-2025 radio.de GmbH