
Technology and Security

Dr Miah Hammond-Errey

Available Episodes

5 of 37
  • Human behaviour, digital twins and resilient cybersecurity with Prof Ganna Pogrebna
    In this episode of the Technology & Security podcast, host Dr Miah Hammond-Errey is joined by Professor Ganna Pogrebna. They explore the intersections of behavioural data science, AI, cybersecurity, and technology adoption. The discussion covers urban–rural technology divides and the dilemmas faced by small businesses using "off the shelf" AI tools. It examines Australia's global position in quantum algorithms and cybersecurity innovation, and looks at digital twins, showcasing their role in simulating complex systems in cybersecurity and even nuclear decision-making. The episode highlights the limits of machine learning for fighting misinformation, emphasising that humans still detect novel attacks better than algorithms. Ganna shares practical inclusion strategies that policy and industry leaders can adopt, such as "inclusion riders" in contracts to increase representation. The conversation closes on actionable ways to bridge the research-adoption gap, the evolving challenge of leading human–machine teams, and the enduring need for experimentation and resilience as technology, policy, and society evolve.
    Highlights:
    🌏 Bridging urban–rural and global divides in technology use, access and security.
    🛡️ Why digital twins are useful and how they can improve cybersecurity decision-making.
    🕵️‍♂️ Human intuition still outpaces AI for spotting new cyber threats.
    ⚖️ How business incentives and KPIs remain a leading method for behavioural change.
    🤖 Leadership now means leading human–machine teams.
    💡 Co-design, not just adoption, drives effective technology and security tools.
    --------  
    42:54
  • Counterfeit medicine, forensic frontiers and foreign interference with Dr Adrian De Grazia
    In this episode of Technology & Security, host Dr Miah Hammond-Errey is joined by Dr Adrian De Grazia, Global Intelligence Lead at Pfizer. This episode explores counterfeit pharmaceuticals and the evolving landscape of forensic science. The conversation takes listeners inside global operations, including the technologies transforming supply chain integrity, collaboration with law enforcement, and the unique challenges of detecting and disrupting the complex networks involved in medicine counterfeiting. It also explores the importance of data literacy for leaders and the role of alliances in combating security threats at national and corporate levels. Listeners hear about innovative approaches to product integrity and authentication, from advanced packaging to real-time tracking, alongside reflections on emerging security risks linked to AI, chemical profiling, and supply chain vulnerabilities. Insightful examples, including Operation Pangea and Australia's digital forensic strategy for foreign interference, highlight the real-world impact of science in protecting patients, supporting public safety, and fostering interdisciplinary cooperation at local and global scales.
    Please note: while employed by Pfizer, the views shared by Dr De Grazia are his own, drawn from studies and from personal and professional experiences past and present.
    Resources: https://www.pfizer.com/products/medicine-safety/counterfeiting
    This podcast was recorded on the lands of the Gadigal people, and we pay our respects to Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people. Music by Dr Paul Mac and production by Elliott Brennan.
    --------  
    41:52
  • Copyright, class action and cybersecurity... Shaping our digital future with Lizzie O’Shea
    In this episode of the Technology & Security podcast, host Dr Miah Hammond-Errey is joined by lawyer and digital rights activist Lizzie O’Shea. This episode explores Australia’s technology debates through a security and legal lens, addressing copyright, creativity, AI, and the legal structures, including class actions, that shape society and security. We discuss how often in the AI debate we are asked to trade off immense future potential against real, present harms. This episode breaks down why proposals to let large language models freely train on the copyrighted works of Australians have rattled artists, news media, and civil society. Lizzie explains the Productivity Commission’s push for a data mining exemption, unpacks the strong community reaction and the distinction between fair use and fair dealing, and highlights what’s at stake for creative industry sustainability and fair compensation in the digital age. We also explore recent legal action against Google and Apple in Australia, the breadth of big tech legal and enforcement action globally, and what this means. The episode also covers the changing nature of US and Chinese AI strategies and approaches to the Indo-Pacific, as well as increased big tech spending in the Australian policy and research landscape. We explore the vulnerability created by allowing mass data collection, noting that while data minimisation and strong cybersecurity are understood priorities, we question whether they are really supported by legislative regimes. We discuss the significance of incentivising feedback in AI systems to integrate them into businesses productively, and of crafting successful narratives for cautious adoption of AI. Finally, we look at why litigation has become central to holding digital giants accountable, and how Australians’ blend of healthy scepticism and tech enthusiasm might finally force smarter AI regulation. The conversation highlights how quick fixes and premature adoption risk deeper, lasting social harms and national security threats.
    Resources mentioned in the recording:
    • Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us about Digital Technology, by Lizzie O’Shea, shortlisted for the Victorian Premier’s Literary Awards 2020. https://lizzieoshea.com/future-histories/
    • Burning Platforms podcast, https://percapita.org.au/podcasts/
    • Empire of AI by Karen Hao
    • Digital Rights Watch, https://digitalrightswatch.org.au
    This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people. Thanks to the talents of those involved. Music by Dr Paul Mac and production by Elliott Brennan.
    --------  
    44:43
  • Language, meaning, human connection and the AI hype with Prof Emily M. Bender
    In this episode, Dr Miah Hammond-Errey is joined by Professor Emily M. Bender, a renowned AI commentator, professor of linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech Hype and Create the Future We Want. We explore the complex relationship between language, large language models, and the rise of “synthetic text-extruding machines.” Bender discusses the origins of the “stochastic parrots” metaphor, the risks of anthropomorphising generative AI, and what’s really at stake as automated systems permeate journalism, leadership, and collective decision making. The conversation outlines some of the social and democratic impacts of synthetic content, including on democratic discourse and journalism, the dangers of language standardisation, and how emerging tools can erode diversity and self-confidence in language users. Emily Bender offers practical advice for policymakers and leaders, emphasising transparency, recourse, and data minimisation. She shares observations from her book tour, reflecting on the ongoing need for human connection in a digital era, and outlines the importance of workers’ collective rights in navigating the future of automation.
    --------  
    38:42
  • Australia’s AI future—trust, opportunity, and human rights with Prof Ed Santow
    In this episode of the Technology & Security podcast, Dr Miah Hammond-Errey is joined by Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute at UTS. The conversation is a candid exploration of Australia’s evolving AI landscape, diving into why Australians remain sceptical of AI despite being early adopters, and how trust in technology must be earned, not demanded, through transparency, robust safeguards, and practical engagement with both risks and opportunities. Professor Santow shares insights from his recent book, "Machines in Our Image," reflecting on the dual nature of AI: its power to enhance inclusion and accessibility, but also to cause real harm. The discussion traverses global AI politics, the need for balanced regulation, and the critical role of workers and individuals in shaping responsible AI adoption. We also discuss the challenges that AI-driven information threats and misinformation pose to democracy. Listeners come away with a nuanced understanding of how Australia can chart its own path in the rapidly shifting world of technology and security.
    --------  
    42:30


About Technology and Security

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates and international developments, and to share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com
