
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

Available Episodes

5 of 449
  • Would You Trust an AI to Diagnose You? (Ep. 441)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    Bill Gates made headlines after claiming AI could outperform your doctor or your child’s teacher within a decade. The Daily AI Show explores the realism behind that timeline. The team debates whether this shift is technical, cultural, or economic, and how fast people will accept AI in high-trust roles like healthcare and education.

    Key Points Discussed
    • Gates said great medical advice and tutoring will become free and commonplace, but this change will also be disruptive.
    • The panel agreed the tech may exist in 10 years, but cultural and regulatory adoption will lag behind.
    • Trust remains a barrier. AI can outperform in diagnosis and planning, but human connection in healthcare and education still matters to many.
    • AI is already helping patients self-educate. ChatGPT was used to generate better questions before doctor visits, improving conversations and outcomes.
    • Remote surgeries, da Vinci robot arms, and embodied AI were discussed as possible paths forward.
    • Concerns were raised about skill transfer. As AI takes over simple procedures, will human surgeons get enough experience to stay sharp?
    • AI may accelerate healthcare equity by improving access, especially in underserved or rural areas.
    • Regulatory delays, healthcare bureaucracy, and slow adoption will likely drag out mass replacement of human professionals.
    • Karl highlighted Canada’s universal healthcare as a potential testing ground for AI, where cost pressures and wait times could drive faster AI adoption.
    • Long-term, AI might shift doctors and teachers into more human-centric roles while automating diagnostics, personalization, and logistics.
    • AI-powered kiosks, wearable sensors, and personal AI agents could reshape how we experience clinics and learning environments.
    • The biggest friction will likely come from public perception and emotional attachment to human care and guidance.
    • Everyone agreed that AI’s role in medicine and education is inevitable. What remains unclear is how fast, how deeply, and who gets there first.

    #BillGates #AIHealthcare #AIEducation #FutureOfWork #AItrust #EmbodiedAI #RobotDoctors #AIEquity #daVinciRobot #Gemini25 #LLMmedicine #DailyAIShow

    Timestamps & Topics
    00:00:00 📺 Gates claims AI will outperform doctors and teachers
    00:02:18 🎙️ Clip from Jimmy Fallon with Gates explaining his position
    00:04:52 🧠 The 10-year timeline and why it matters
    00:06:12 🔁 Hybrid approach likely by 2035
    00:07:35 📚 AI in education and healthcare tools today
    00:10:01 🤖 Trust in robot-assisted surgery and diagnostics
    00:11:05 ⚠️ Risk of training gaps if AI does the easy work
    00:14:08 🩺 Diagnosis vs human empathy in treatment
    00:16:00 🧾 AI explains medical reports better than some doctors
    00:20:46 🧠 Surgeons will need to embrace AI or fall behind
    00:22:03 🌍 AI could reduce travel for care and boost equity
    00:23:04 🇨🇦 Canada's system could accelerate AI adoption
    00:25:50 💬 Can AI ever replace experience-based excellence?
    00:28:11 🐢 The real constraint is slow human adoption
    00:30:31 📊 Robot vs human stats may drive patient choice
    00:32:14 💸 Insurers will push for cheaper, scalable AI options
    00:34:36 🩻 Automated intake via sensors and AI triage
    00:36:29 🧑‍⚕️ AI could adapt care delivery to individual preferences
    00:39:28 🧵 AI touches every part of the medical system
    00:41:17 🔧 AI won’t fix healthcare’s core structural problems
    00:45:14 🔍 Are we just blinded by how hard human learning is?
    00:49:02 🚨 AI wins when expert humans are no longer an option
    00:50:48 📚 Teachers will become guides, not content holders
    00:51:22 🏢 CEOs and traditional power dynamics face AI disruption
    00:53:48 ❤️ Emotional trust and the role of relationship in care
    00:55:57 🧵 Upcoming episodes: AI in fashion, OpenAI news, and more

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    57:57
  • The AI Soulmate Conundrum
    In a future not far off, artificial intelligence has quietly collected the most intimate data from billions of people. It has observed how your body responds to conflict, how your voice changes when you’re hurt, which words you return to when you’re hopeful or afraid. It has done the same for everyone else. With enough data, it claims, love is no longer a mystery. It is a pattern, waiting to be matched.

    One day, the AI offers you a name. A face. A person. The system predicts that this match is your highest probability for a long, fulfilling relationship. Couples who accept these matches experience fewer divorces, less conflict, and greater overall well-being. The AI is not always right, but it is more right than any other method humans have ever used to find love.

    But here is the twist. Your match may come from a different country, speak a language you don’t know, or hold beliefs that conflict with your own. They might not match the gender or personality type you thought you were drawn to. Your friends may not understand. Your family may not approve. You might not either, at first. And yet, the data says this is the person who will love you best, and whom you will most likely grow to love in return.

    If you accept the match, you are trusting that the deepest truth about who you are can be known by a system that sees what you cannot. But if you reject it, you do so knowing you may never experience love that comes this close to certainty.

    The conundrum: If AI offers you the person most likely to love and understand you for the rest of your life, but that match challenges your sense of identity, your beliefs, or your community, do you follow it anyway and risk everything familiar in exchange for deep connection? Or do you walk away, holding on to the version of love you always believed in, even if it means never finding it?

    This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM’s audio overview to create the conversation you are hearing. We make no claims to the validity of the information provided and see this as an experiment in deep discussions fully generated by AI.
    --------  
    18:04
  • How Google Quietly Became an AI Superpower (Ep. 440)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    With the release of Gemini 2.5, expanded integration across Google Workspace, new agent tools, and support for open protocols like MCP, Google is making a serious case as an AI superpower. The show breaks down what’s real, what still feels clunky, and where Google might actually pull ahead.

    Key Points Discussed
    • Gemini 2.5 shows improved writing, code generation, and multimodal capabilities, but responses still sometimes end early or hallucinate limits.
    • AI Studio offers a smoother, more integrated experience than regular Gemini Advanced. All chats save directly to Google Drive, making organization easier.
    • Google’s AI now interprets YouTube videos with timestamps and extracts contextual insights when paired with transcripts.
    • Google Labs tools like Career Dreamer, YouTube Conversational AI, VideoFX, and Illuminate show practical use cases from education to slide decks to summarizing videos.
    • The team showcased how Gemini models handle creative image generation, using temperature settings to control fidelity and style.
    • Google Workspace now embeds Gemini directly across tools, with a stronger push into Docs, Sheets, and Slides.
    • Google Cloud’s Vertex AI now supports a growing list of generative models, including Veo (video), Chirp (voice), and Lyria (music).
    • Project Mariner, Google’s operator-style browsing agent, adds automated web interaction features using Gemini.
    • Google DeepMind, YouTube, Fitbit, Nest, Waymo, and others create a wide base for Gemini to embed across industries.
    • Google now officially supports Model Context Protocol (MCP), allowing standardized interaction between agents and tools.
    • The Agent SDK, Agent-to-Agent (A2A) protocol, and Workspace Flows give developers the power to build, deploy, and orchestrate intelligent AI agents.

    #GoogleAI #Gemini25 #MCP #A2A #WorkspaceAI #AIStudio #VideoFX #AIsearch #VertexAI #GoogleNext #AgentSDK #FirebaseStudio #Waymo #GoogleDeepMind

    Timestamps & Topics
    00:00:00 🚀 Intro: Is Google becoming an AI superpower?
    00:01:41 💬 New Slack community announcement
    00:03:51 🌐 Gemini 2.5 first impressions
    00:05:17 📁 AI Studio integrates with Google Drive
    00:07:46 🎥 YouTube video analysis with timestamps
    00:10:13 🧠 LLMs stop short without warning
    00:13:31 🧪 Model settings and temperature experiments
    00:16:09 🧊 Controlling image consistency in generation
    00:18:07 🐻 A surprise polar bear and meta image failures
    00:19:27 🛠️ Google Labs overview and experiment walkthroughs
    00:20:50 🎓 Career Dreamer as a career discovery tool
    00:23:16 🖼️ Slide deck generator with voice and video
    00:24:43 🧭 Illuminate for short AI video summaries
    00:26:04 🔧 Project Mariner brings browser agents to Chrome
    00:30:00 🗂️ Silent drops and Google’s update culture
    00:31:39 🧩 Workspace integration, Lyria, Veo, Chirp, and Vertex AI
    00:34:17 🛡️ Unified security and AI-enhanced networking
    00:36:45 🤖 Agent SDK, A2A, and MCP officially backed by Google
    00:40:50 🔄 Firebase Studio and cross-system automation
    00:42:59 🔄 Workspace Flows for document orchestration
    00:45:06 📉 API pricing tests with OpenRouter
    00:46:37 🧪 N8N MCP nodes in preview
    00:48:12 💰 Google's flexible API cost structures
    00:49:41 🧠 Context window skepticism and RAG debates
    00:51:04 🎬 VideoFX demo with newsletter examples
    00:53:54 🚘 Waymo, DeepMind, YouTube, Nest, and Google’s reach
    00:55:43 ⚠️ Weak interconnectivity across Google teams
    00:58:03 📊 Sheets, Colab, and on-demand data analysts
    01:00:04 😤 Microsoft Copilot vs Google Gemini frustrations
    01:01:29 🎓 Upcoming SciFi AI Show and community wrap-up

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    1:02:47
  • Keeping Up With AI Without Burning Out (Ep. 439)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    The Daily AI Show team covers this week’s biggest AI stories, from OpenAI’s hardware push and Shopify’s AI-first hiring policy to breakthroughs in soft robotics and Google's latest updates. They also spotlight new tools like Higgsfield for AI video and growing traction for Model Context Protocol (MCP) as the next API evolution.

    Key Points Discussed
    • OpenAI is reportedly investing $500 million into a hardware partnership with Jony Ive, signaling a push toward AI-native devices.
    • Shopify’s CEO told staff to prove AI can’t do the job before requesting new hires, sparking debate about AI-driven efficiency vs. job creation.
    • The panel explored the limits of automation in trade jobs like plumbing and roadwork, and whether AI plus robotics will close that gap over time.
    • 11Labs and Supabase launched official Model Context Protocol (MCP) servers, making it easier for tools like Claude to interact via natural language.
    • Google announced Ironwood, its 7th-gen TPU optimized for inference, and Gemini 2.5, which adds controllable output and dynamic behavior.
    • Reddit will start integrating Gemini into its platform and feeding data back to Google for training purposes.
    • Intel and TSMC announced a joint venture, with TSMC taking a 20% stake in Intel’s chipmaking facilities to expand U.S.-based semiconductor production.
    • OpenAI quietly launched Academy, offering live and on-demand AI education for developers, nonprofits, and educators.
    • Higgsfield, a new video generation tool, impressed the panel with fluid motion, accurate physics, and natural character behavior.
    • Meta’s Llama 4 faced scrutiny over benchmarks and internal drama, but Llama 3 continues to power open models from DeepSeek, NVIDIA, and others.
    • Google’s AI search mode now handles complex queries and follows conversational context. The team debated how ads and SEO will evolve as AI-generated answers push organic results further down.
    • A Penn State team developed a soft robot that can scale down for internal medicine delivery or scale up for rescue missions in disaster zones.

    #AInews #OpenAI #ShopifyAI #ModelContextProtocol #Gemini25 #GoogleAI #AIsearch #Llama4 #Intel #TSMC #Higgsfield #11Labs #SoftRobots #AIvideo #Claude

    Timestamps & Topics
    00:00:00 🗞️ OpenAI eyes $500M hardware investment with Jony Ive
    00:04:14 👔 Shopify CEO pushes AI-first hiring
    00:13:42 🔧 Debating automation and the future of trade jobs
    00:20:23 📞 11Labs launches MCP integration for voice agents
    00:24:13 🗄️ Supabase adds MCP server for database access
    00:26:31 🧠 Intel and TSMC partner on chip production
    00:30:04 🧮 Google announces Ironwood TPU and Gemini 2.5
    00:33:09 📱 Gemini 2.5 gets research mode and Reddit integration
    00:36:14 🎥 Higgsfield shows off impressive AI video realism
    00:38:41 📉 Meta’s Llama 4 faces internal challenges, Llama 3 powers open tools
    00:44:38 📊 Google’s AI Search and the future of organic results
    00:54:15 🎓 OpenAI launches Academy for live and recorded AI education
    00:55:31 🧪 Penn State builds scalable soft robot for rescue and medicine

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    56:06
  • AI News: OpenAI's BIG Hardware Move And More! (Ep. 438)
    --------  
    1:00:35


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional.

About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
v7.15.0 | © 2007-2025 radio.de GmbH
Generated: 4/15/2025 - 6:44:49 PM