Pondering AI

Kimberly Nevala, Strategic Advisor - SAS
How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, ad...

Available Episodes

Showing 5 of 59 episodes
  • Relating to AI with Dr. Marisa Tschopp
    Dr. Marisa Tschopp explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR, and trusting in humanity. Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gimmicks, reasons for optimism, and retaining trust in human connections. A transcript of this episode is here.
    Dr. Marisa Tschopp is a psychologist, a human-AI interaction researcher at scip AG, and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.
    Additional resources:
      • The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration (publication)
      • How do users perceive their relationship with conversational AI? (publication)
      • KI als Freundin: Funktioniert eine Chatbot-Beziehung? (“AI as a Friend: Does a Chatbot Relationship Work?”; TV show, German, SRF)
      • Friends with AI? It’s complicated! (TEDxBoston talk)
    --------  
    41:55
  • Technical Morality with John Danaher
    John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself. John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects. Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating risks once a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.
    John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle and How Technology Alters Morality and Why It Matters. A transcript of this episode is here.
    --------  
    46:03
  • Artificial Empathy with Ben Bland
    Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions. Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs. explicit design, and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.
    Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems. A transcript of this episode is here.
    --------  
    46:22
  • RAGging on Graphs with Philip Rathle
    Philip Rathle traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability, and agency. Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs. Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG, including deterministic reasoning, fine-grained access control, and explainability. He also ruminates on graphs as a bridge to human agency, as graphs can be reasoned over by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond.
    Philip Rathle is the Chief Technology Officer (CTO) at Neo4j. Philip was a key contributor to the development of the GQL standard and recently authored The GraphRAG Manifesto: Adding Knowledge to GenAI (neo4j.com), a go-to resource for all things GraphRAG. A transcript of this episode is here.
    --------  
    49:33
  • Working with AI with Matthew Scherer
    Matthew Scherer makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights. Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions. Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing. Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and the Titan submersible catastrophe when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws.
    Matthew Scherer is Senior Policy Counsel for Workers’ Rights and Technology at the Center for Democracy and Technology (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the International Center for Advocates Against Discrimination. A transcript of this episode is here.
    --------  
    58:50
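The vector- versus graph-based retrieval distinction discussed in the GraphRAG episode can be sketched minimally. This is an illustrative toy, not Neo4j's API or Philip Rathle's implementation: the chunk names, two-dimensional "embeddings", and edge list are all invented for the example.

```python
import math

# Toy corpus: each chunk has a hand-made 2-d "embedding" (hypothetical values).
chunks = {
    "acme_overview": [0.9, 0.1],
    "acme_lawsuit":  [0.8, 0.3],
    "beta_overview": [0.1, 0.9],
}

# Explicit relationships a knowledge graph would store alongside the text.
edges = {
    "acme_overview": ["acme_lawsuit"],
    "acme_lawsuit":  ["acme_overview"],
    "beta_overview": [],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def vector_retrieve(query_vec, k=1):
    """Vector RAG: return the k chunks whose embeddings are nearest the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
    return ranked[:k]

def graph_retrieve(query_vec, k=1):
    """GraphRAG-style retrieval: seed with vector search, then expand along edges."""
    seeds = vector_retrieve(query_vec, k)
    expanded = list(seeds)
    for s in seeds:
        for neighbor in edges[s]:
            if neighbor not in expanded:
                expanded.append(neighbor)
    return expanded

query = [1.0, 0.0]              # a query "about Acme"
print(vector_retrieve(query))   # nearest chunk only
print(graph_retrieve(query))    # nearest chunk plus its graph neighbors
```

The traversal step is what makes graph retrieval deterministic and explainable in the sense Philip describes: the extra context arrives via explicit, inspectable relationships rather than embedding proximity alone.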

v6.28.0 | © 2007-2024 radio.de GmbH
Generated: 11/19/2024 - 11:32:36 AM