
AI's Most Dangerous Truth: We've Already Lost Control
12/01/2026 | 1h 36 mins.
What happens when the people building artificial intelligence quietly believe it might destroy us? On this episode of Digital Disruption, we’re joined by Gregory Warner, Peabody Award–winning journalist, former NPR correspondent, and host of the hit AI podcast The Last Invention.

Gregory Warner is a versatile journalist and podcaster. He has been recognized with a Peabody Award and with honors from the Edward R. Murrow Awards, New York Festivals, AP, and PRNDI. Warner's career includes serving as an East Africa correspondent, where he covered the region's economic growth and terrorism threats. He has also worked as a senior reporter for American Public Media's Marketplace, focusing on the economics of American health care. His work has been recognized with a Best News Feature award from the Third Coast International Audio Festival.

Gregory sits down with Geoff for an honest conversation about the AI race unfolding today. After years spent interviewing the architects, skeptics, and true believers behind advanced AI systems, Gregory has come away with an unsettling insight: the same people racing to build more powerful models are often the most worried about where this technology is heading. This episode explores whether we’re already living inside the AI risk window, why AI safety may be even harder than nuclear safety, and why Silicon Valley’s “move fast and fix later” mindset may not apply to superintelligence. It also examines the growing philosophical divide between AI doomers and AI accelerationists. This conversation goes far beyond chatbots and job-loss headlines. It asks a deeper question few are willing to confront: are we building something we can’t control, and doing it anyway?
In this video:
00:00 Intro
03:00 AI models that already behave like elite hackers
05:00 Why the AI risk window may already be open
06:30 What AI safety actually means (and why it’s so hard)
12:00 Human-in-the-loop: safety feature or illusion?
15:00 AI as an alien intelligence, not a human one
19:00 The Silicon Valley AI arms race explained
21:00 OpenAI, DeepMind, Anthropic, xAI: who’s racing and why
25:00 The “Compressed Century” and radical AI optimism
27:00 Can AI actually solve humanity’s biggest problems?
33:00 Capital, competition, and the pressure to deploy
37:00 Is AI more dangerous than nuclear weapons?
39:00 The problem with comparing AI to past technologies
43:00 What happens to human agency in an AI-driven world?
45:00 How AI reshapes creativity, journalism, and truth
53:00 The quiet assumptions built into AI systems
55:00 Why optimism and fear both miss the full picture
59:00 What responsibility do users have?
01:01:00 The most important question we’re not asking about AI

Connect with Gregory:
LinkedIn: https://www.linkedin.com/in/radiogrego/
Instagram: https://www.instagram.com/radiogrego/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG

AI Will End Human Jobs: Emad Mostaque on the Future of Human Work
05/01/2026 | 1h 10 mins.
What happens to jobs, money, and meaning when intelligence becomes cheaper than labor and humans are no longer the smartest ones in the room? On this episode of Digital Disruption, we’re joined by Emad Mostaque, founder of Stability AI and a leading voice in the global AI revolution.

Emad Mostaque is a businessman, mathematician, and former hedge fund manager. He is the co-founder and former CEO of Stability AI, the company behind the popular text-to-image generator Stable Diffusion. With a master’s degree in mathematics and computer science from Oxford, he has contributed significantly to artificial intelligence. His vision for Stability AI was to “build the foundation to activate humanity’s potential” through open-source generative AI.

Emad sits down with Geoff to explore a future that may arrive far sooner than most people expect. He argues that within the next 1,000 days, artificial intelligence will fundamentally reshape the global economy, upending work, capitalism, enterprise software, and even how we define human value. Drawing from his book The Last Economy, Emad lays out a stark and deeply thought-provoking framework for understanding what comes next when cognitive labor becomes economically irrelevant. This conversation explores the inevitabilities of exponential AI progress, including why intelligence is becoming “too cheap to measure,” how AI agents will replace many jobs done behind a screen, and the coming shift from human-plus-AI teams to AI-only systems. Beyond the economics, Emad also tackles the human question: where meaning comes from in a world where AI outperforms us at most cognitive tasks.
He argues that resilience in the AI age will depend less on job titles and more on community, networks, relationships, and how deeply individuals engage with the technology itself.

In this video:
00:00 Intro
04:30 What is “The Last Economy”?
08:45 Intelligence becomes too cheap to measure
13:30 The three possible AI futures
18:00 Are humans becoming the weakest link?
22:15 The rise of economic agents
27:00 Digital doubles and the end of white-collar work
31:45 Enterprises racing toward zero employees
36:30 Why AI is cheaper than human labor (by orders of magnitude)
41:15 Software, SaaS, and the collapse of enterprise moats
46:00 The internet after AI agents
50:15 Who controls the “AI next to you”?
54:30 Open-source vs. Big Tech AI
58:45 The first one-person billion-dollar company
1:03:30 What humans are still for
1:07:00 How to prepare for the AI economy now

Connect with Emad:
LinkedIn: https://www.linkedin.com/in/emad-mostaque-9840ba274/?originalSubdomain=uk
X: https://x.com/EMostaque
Facebook: https://www.facebook.com/mostaquee/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG

AI Boom or Bust? AI Boomers and Doomers Reveal Their Predictions for Our Future
29/12/2025 | 1h 15 mins.
Is artificial intelligence humanity’s greatest salvation, or the most dangerous force we’ve ever unleashed?

Artificial intelligence is no longer a future concept; it’s a force already reshaping geopolitics, economics, warfare, and the human experience itself. In this year-in-review episode of Digital Disruption, we bring together the most provocative, conflicting, and urgent ideas from the past year to confront the biggest question of our time: What does AI actually mean for humanity’s future?

Across more than 40 conversations with leading technologists, journalists, researchers, and futurists, one theme dominated every debate: AI. Some guests argue that artificial general intelligence (AGI) and superintelligence could trigger an extinction-level event. Others believe AI may usher in an era of total abundance, solving humanity’s hardest problems. And still others claim today’s AI hype is little more than marketing smoke and mirrors.

This episode puts those worldviews head-to-head.

In this episode:
00:00 The AI singularity is here
05:00 Existential threat or greatest opportunity?
10:00 Why no one agrees on AI’s future
15:00 The race toward AGI and superintelligence
20:00 The control problem nobody has solved
25:00 Intelligence has no morality
30:00 Capitalism, venture capital, and the AI arms race
35:00 Is AI just a marketing illusion?
40:00 Generative AI: power, limits, and misuse
45:00 Autonomous weapons and modern warfare
50:00 Fear as the driver of dangerous innovation
55:00 Why AI is not like nuclear weapons
1:00:00 The first and second AI dilemmas
1:05:00 Handing decisions over to machines
1:10:00 Collapse, abundance, or course correction?

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG

What AI Bubble? Top Trends in Tech and Jobs in 2026
22/12/2025 | 1h 28 mins.
Are companies preparing for an AI-powered future, or reacting out of fear of being left behind?

Looking ahead to 2026, Geoff Nielson and Jeremy Roberts sit down for an unfiltered conversation about artificial intelligence, the economy, and the future of work. As AI hype accelerates across markets, boardrooms, and headlines, they ask the hard questions many leaders and workers are quietly worrying about: Are we in an AI bubble? If so, what happens when expectations collide with reality?

This episode explores whether today’s massive investment in AI, GPUs, infrastructure, copilots, and generative tools is laying the foundation for long-term value or repeating the familiar patterns of past tech bubbles like the dot-com boom and the subprime mortgage crisis. Geoff and Jeremy break down why traditional metrics like price-to-earnings ratios matter, why Nvidia and big tech dominate the narrative, and why the real risk may not be collapse but widespread underperformance.

The conversation goes far beyond markets. They dig into the impact of AI on jobs, layoffs, and corporate restructuring, challenging the idea that AI is “taking jobs” versus being used as convenient cover for economic tightening. From IT, HR, and operations to customer-facing roles, they examine how AI could reshape workforce composition, accelerate automation, and create a new and potentially unsettling employment equilibrium. You’ll also hear a candid critique of how organizations are actually using AI today and what comes next in 2026.

Tech Trends Report 2026: https://www.infotech.com/research/ss/tech-trends-2026?utm_source=youtube&utm_medium=social&utm_campaign=research

In this video:
00:00 Just add AI to everything?
03:45 Looking ahead to 2026: Nobody knows what’s coming
07:10 Are we in an AI bubble?
12:30 Comparing AI to the dot-com and 2008 crashes
18:10 Nvidia, GPUs, and the AI gold rush
24:20 Why AI infrastructure may be ahead of real-world use cases
30:40 Markets untethered from reality
36:50 Is AI really taking jobs, or is something else happening?
43:30 The real employment question for 2026
49:40 Corporate bloat, back-office roles, and automation
56:10 Why most AI projects fail to deliver value
1:02:45 From productivity theater to real ROI
1:09:20 Faster horses vs. real cars in AI
1:15:40 AI 2.0: Agents, experiments, and what comes next
1:22:10 The real risk ahead: Underperformance, not collapse

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG

Top Neuroscientist Says AI Is Making Us DUMBER?
15/12/2025 | 1h 23 mins.
Are we using AI in a way that actually makes us smarter, or are we unknowingly making ourselves less capable, less curious, and easier to automate?

On this episode of Digital Disruption, we are joined by artificial intelligence expert and neuroscientist Dr. Vivienne Ming.

Over her career, Dr. Vivienne Ming has founded six startups, been chief scientist at two others, and founded The Human Trust, a philanthropic data trust and “mad science incubator” that explores seemingly intractable problems, from a lone child’s disability to global economic inclusion, for free. She co-founded Dionysus Health, combining AI and epigenetics to invent the first-ever biological test for postpartum depression and change the lives of millions of families. She also develops AI tools for learning at home and in school, models of bias in hiring and promotion, and neurotechnologies to treat dementia and TBI. Vivienne was named one of “10 Women to Watch in Tech” by Inc. Magazine and one of the BBC’s 100 Women in 2017. She is featured frequently for her research and inventions in The Financial Times, The Atlantic, Quartz Magazine, and The New York Times.

Dr. Ming sits down with Geoff to unpack one of the most misunderstood truths about artificial intelligence: AI isn’t here to replace your thinking; it’s here to challenge it. And whether you grow or get left behind depends entirely on how you choose to engage with it. Dr. Ming reveals why most organizations, and most individuals, are using AI in the worst possible way. Instead of creating leverage, they’re creating “work slop”: cognitive dependency, shallow automation, and declining human capability. She explains why the real competitive advantage in the AI age comes from productive friction, creative complementarity, and teams that know how to use AI to explore ill-posed problems, the ambiguous, uncertain, high-value challenges machines can’t solve on their own.
From how to robot-proof your company, to why AI tutors fail when they give answers, to the science of courage, reward systems, and organizational culture, this conversation is one of the most honest explorations of the future of human capability in an AI-saturated world.

In this video:
00:00 Intro
02:30 The real value of hybrid intelligence
05:00 Cognitive automation vs. true complementarity
08:20 Ill-posed problems: where humans still win
12:10 What elite performers really do differently
16:00 The paradox of AI: why more automation creates more work
18:30 How hybrid teams beat prediction markets
20:50 Inequality & imagination disease in AI
23:10 AI tutors & the golden rule: never give the answer
28:00 The nemesis prompt: how to robot-proof yourself
44:20 Courage, ethics & reward structures in organizations
54:00 Using AI without losing the human story
01:06:30 How to robot-proof your company

Connect with Vivienne:
Website: https://socos.org/about-vivienne
LinkedIn: https://www.linkedin.com/in/vivienneming/

Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG



Digital Disruption with Geoff Nielson