"News From The Future" with Dr Catherine Ball


The Future Is Already Here.... Meet The Humans At The Cutting Edge

31 episodes

  • "News From The Future" with Dr Catherine Ball

    Meet Abi the Aussie Robot winning hearts (and business) in the USA

    15/01/2026 | 6 mins.
    Podcast Transcript:
    Welcome to News From The Future Special Editions with Dr Cath, working hard at the CES in Vegas. This podcast is produced using the AI voice clone of Cath by Eleven Labs.
    Cath was so happy to be in the audience today when Abi, the Aussie robot, was shown on stage in the AgeTech section of the massive trade show. Here is a summary of what was discussed.
    Abi is an innovative companion robot created by Andromeda Robotics. It was conceived by founder Grace Brown, then a mechatronics student in Australia, who felt lonely in her dorm room during the pandemic. That experience led her to research loneliness, particularly among elderly populations, which became the driving force behind Abi’s development. The robot represents a creative solution to what health experts, including the U.S. Surgeon General, have identified as a critical health issue: loneliness, which can be as damaging as smoking 15 cigarettes daily.
    The robot serves as an emotional companion, particularly in senior living facilities where residents often face long periods of isolation despite being in a communal setting. Abi can speak over 90 languages, enabling meaningful connections with residents who may have lost the ability to communicate in their second language due to cognitive decline. A powerful example shared was of a resident who could only speak Mandarin: Abi became his conversation partner, leading to him sharing Chinese poetry and drawing other curious residents to observe their interactions. This unexpected outcome addressed not just linguistic isolation but also created new social connections among residents.
    Abi’s design is intentionally approachable and child-sized, featuring colorful components and expressive eyes that invite engagement. The robot’s appearance evolved partly by chance: during initial development, Grace had access to various colored materials for 3D printing, resulting in a vibrant, multi-colored design that proved highly effective at engaging residents. The robot can both participate in group activities, leading music sessions, dancing, and blowing bubbles, and engage in one-on-one conversations. During group sessions, Abi has been known to spark impromptu dance parties, with residents and staff joining in the festivities.
    A key feature of Abi’s technology is its memory capability. The robot maintains detailed records of previous interactions, remembering personal details about residents to create more meaningful ongoing relationships. This can be achieved either through facial recognition technology or through staff input via an accompanying app. This memory function allows Abi to maintain conversation continuity and show genuine interest in residents’ stories, even when they’re repeated multiple times, something that can be challenging for human caregivers managing multiple residents.
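Andromeda has not published how this memory layer is implemented, so purely as an illustration of the idea described above (a per-resident record keyed by an ID that could come from facial recognition or a staff app), here is a toy Python sketch; all names, fields, and greetings are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ResidentMemory:
    """Hypothetical per-resident record the robot could recall between visits."""
    name: str
    language: str = "English"
    facts: list = field(default_factory=list)  # details learned in conversation

class CompanionMemory:
    """Toy store mapping resident IDs to remembered details."""
    def __init__(self):
        self._residents = {}

    def remember(self, resident_id, name, fact=None, language="English"):
        # Create the record on first meeting; append new details on later visits.
        rec = self._residents.setdefault(resident_id, ResidentMemory(name, language))
        if fact:
            rec.facts.append(fact)
        return rec

    def greet(self, resident_id):
        # Continuity comes from recalling the most recently learned detail.
        rec = self._residents.get(resident_id)
        if rec is None:
            return "Hello! I don't think we've met yet."
        detail = f" How is the {rec.facts[-1]}?" if rec.facts else ""
        return f"Hello again, {rec.name}!{detail}"

mem = CompanionMemory()
mem.remember("r1", "Mr. Chen", fact="garden", language="Mandarin")
print(mem.greet("r1"))  # Hello again, Mr. Chen! How is the garden?
```

The same lookup could be fed either by a face-recognition match or by a staff member tagging the interaction in an app; the store itself doesn't care where the ID comes from.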
    The robot operates on a subscription model, currently costing around US $5,000-6,000 per month per unit, making it more practical for institutional settings where multiple residents can benefit. While primarily focused on aged care facilities now, Andromeda has broader ambitions for future applications, including potential use in hospitals and private homes. The company has already received inquiries about personal use, particularly from families interested in providing companionship for children.
    A next-generation version called Gabby is already being deployed in some facilities. Slightly taller than Abi but still child-sized, Gabby incorporates additional sensors and enhanced capabilities aimed at enabling more autonomous operation within care facilities. These improvements allow Gabby to navigate facilities more independently and potentially make autonomous visits to residents’ rooms when directed by staff.
    The impact of these companion robots extends beyond simple entertainment or basic interaction. Staff members have reported unexpected benefits, such as learning new approaches to difficult conversations with residents. In one notable case, staff adopted Abi’s method of discussing sensitive topics, such as the passing of family members, with residents experiencing memory loss, finding the robot’s approach more effective than their previous methods.
    The technology has shown particular promise in addressing various forms of isolation: physical, mental, and linguistic. Statistics indicate that approximately 40% of nursing home residents rarely receive visitors, with many receiving none at all. Abi helps fill this gap, providing consistent companionship and engagement during the many hours when structured activities aren’t taking place.
    Currently headquartered in San Francisco for their U.S. operations, Andromeda faces high demand, with a growing waitlist for their robots. The company is taking a measured approach to expansion, learning from their current deployments while working toward making the technology more accessible for individual home use in the future. Their ambitious goal is to replace a billion hours of loneliness with companionship, recognizing that while human interaction is ideal, the demographics of an aging society make additional support tools necessary.
    The development process for Abbie has been collaborative, with the company working closely with care facilities to refine and improve the technology. Unlike traditional deep tech development, which often involves years of research and development before market entry, Andromeda has chosen to build alongside their customers, incorporating real-world feedback into their iterations. This approach, while sometimes challenging, has allowed them to create solutions that directly address the needs of both residents and care staff.
    Looking ahead, Andromeda envisions expanding Abbie’s capabilities and accessibility while maintaining focus on emotional connection rather than task-based assistance. The company emphasizes that Abbie is not designed to replace human caregivers or handle medical tasks, but rather to complement existing care by providing additional emotional support and companionship during times when human interaction might be limited.
    Please share this with someone who likes robots, works in aged care or healthcare, or who wants to get involved with emerging technologies. Thank you.
    Thanks for reading "News From The Future" with Dr Catherine Ball! This post is public so feel free to share it.



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe
  • "News From The Future" with Dr Catherine Ball

    NVIDIA CEO Jensen Huang had a chat about AI

    09/01/2026 | 10 mins.
    Hello and welcome to News From The Future, where Dr Cath is running around CES in Las Vegas and dropping the news as she goes. I am her voice clone, created by Eleven Labs. Thanks for listening.
    Here is the big one: the presentation by Jensen Huang, the CEO and founder of NVIDIA. Take notes...
    The computer industry is experiencing an unprecedented transformation, with two major platform shifts occurring simultaneously: the rise of artificial intelligence and the evolution of accelerated computing. This marks a departure from historical patterns where platform shifts happened sequentially, roughly once per decade, such as the transitions from mainframe to personal computers and then to the internet era. These transitions have historically reshaped how we interact with technology, but the current dual shift represents a fundamental reimagining of computing itself.
    What makes this current transformation particularly remarkable is its comprehensive nature. The entire computing stack is undergoing reinvention, fundamentally changing how software is created and executed. Instead of traditional programming methods, software is increasingly being trained through AI systems. Applications are no longer simply precompiled but are generated contextually, responding to specific needs and circumstances. This shift has triggered a massive reallocation of resources, with trillions of dollars being channeled into AI development and infrastructure, representing one of the largest technological investments in history.
    The evolution of large language models (LLMs) represents a crucial milestone in this transformation. The introduction of models like BERT and ChatGPT has demonstrated the powerful capabilities of AI in understanding and generating human-like text. These models have revolutionized natural language processing, enabling computers to understand context, nuance, and complex linguistic patterns in ways that were previously impossible. Perhaps even more significant is the emergence of agentic systems – AI that can reason independently and interact with various tools and environments. This development has opened new possibilities for AI applications across numerous sectors, from healthcare to finance to environmental protection.
    The democratization of AI technology has been greatly facilitated by the advancement of open models. These accessible frameworks have enabled global innovation, allowing developers and organizations worldwide to build upon existing AI capabilities and create new applications. This openness has accelerated the pace of AI development and fostered a more inclusive technological ecosystem. The availability of open models has particularly benefited smaller organizations and developing nations, providing them with access to sophisticated AI tools that would otherwise be beyond their reach.
    NVIDIA’s contribution to this transformation is particularly noteworthy through their development of AI supercomputers, especially the DGX Cloud. This platform represents a significant step forward in providing the computational power necessary for advanced AI development. The DGX Cloud combines cutting-edge hardware with sophisticated software frameworks, enabling researchers and developers to train and deploy complex AI models more efficiently than ever before. NVIDIA has demonstrated its commitment to the open model approach by building systems and libraries that support broad AI development efforts, fostering collaboration and innovation across the industry.
    The applications of these technological advances extend far beyond traditional computing domains. In digital biology, AI is being used to understand complex biological systems and accelerate drug discovery, potentially revolutionizing how we develop new treatments for diseases. Weather prediction has become more accurate and detailed through AI-powered modeling, enabling better preparation for extreme weather events and improved climate change analysis. The integration of AI into robotics has created new possibilities for automation and physical world interaction, with a particular emphasis on understanding and applying physical laws to improve AI applications.
    A significant milestone in this journey is the introduction of the Vera Rubin supercomputer. This system represents the next generation of AI computing architecture, designed to meet the escalating demands of artificial intelligence applications. The Vera Rubin system incorporates innovative chip designs and networking technology that enable high-speed data transfer and processing, essential for handling the increasingly complex requirements of AI computation. Its architecture has been specifically optimized for AI workloads, representing a departure from traditional supercomputer designs.
    The networking capabilities of modern AI systems are particularly crucial. High-speed data transfer and processing are fundamental to the performance of AI applications, and innovations in networking technology have made it possible to handle the massive data flows required for advanced AI operations. These networks must maintain extremely low latency while managing enormous amounts of data, requiring sophisticated engineering solutions and new approaches to data center design. This infrastructure supports the development of more sophisticated AI applications that can process and analyze data at unprecedented speeds.
    The impact of these developments extends across industries, creating new opportunities and transforming existing business models. AI applications are becoming more capable of complex reasoning, learning from experience, and interacting with the physical world in meaningful ways. This evolution is not just about improving computational efficiency; it’s about enabling entirely new categories of applications and solutions that were previously impossible or impractical to implement.
    The role of companies like NVIDIA in this transformation goes beyond hardware provision. Their comprehensive approach encompasses the entire AI ecosystem, from developing sophisticated hardware architectures to creating software frameworks and supporting application development. This holistic strategy is essential for advancing the field of AI and ensuring that the technology can be effectively deployed across different sectors. The integration of hardware and software development has become increasingly important as AI systems become more complex and demanding.
    The future of AI and computing appears to be moving toward increasingly sophisticated systems that can handle complex reasoning tasks while maintaining efficient interaction with the physical world. This evolution suggests a future where AI systems will become more integrated into our daily lives, supporting decision-making processes and enabling new forms of human-machine collaboration. The development of these systems requires careful consideration of both technical capabilities and ethical implications.
    The emphasis on physical world understanding in AI development is particularly significant. As AI systems become more advanced, their ability to comprehend and interact with the physical environment becomes increasingly important. This understanding is crucial for applications in robotics, autonomous systems, and other fields where AI must interface with the real world. The development of AI systems that can effectively operate in physical environments requires sophisticated sensors, advanced algorithms, and robust safety mechanisms.
    The investment in AI infrastructure and development represents a significant bet on the future of computing. The trillions of dollars being redirected toward AI development indicate the industry’s confidence in this technology’s potential to transform how we interact with computers and how computers interact with the world. This investment is funding not only hardware and software development but also research into new AI architectures and applications.
    The transformation of the computing industry through AI and accelerated computing is creating new possibilities for solving complex problems and enabling innovations that were previously impossible. These advances are particularly important in fields such as scientific research, where AI can help process and analyze vast amounts of data, leading to new discoveries and insights. The combination of AI and accelerated computing is opening new frontiers in business operations and everyday applications, suggesting that we are at the beginning of a new era in computing history.
    The impact of these technological advances extends to environmental sustainability and resource management. AI systems are being used to optimize energy consumption in data centers, improve renewable energy integration, and develop more efficient transportation systems. These applications demonstrate how AI can contribute to addressing global challenges while driving technological innovation.
    The development of AI systems also raises important considerations about data privacy, security, and ethical use of technology. As these systems become more powerful and widespread, ensuring their responsible development and deployment becomes increasingly critical. The industry’s focus on open models and collaborative development helps ensure transparency and accountability in AI development.
    The convergence of AI and accelerated computing represents a pivotal moment in technological history, comparable to the introduction of personal computers or the rise of the internet. This transformation is reshaping not only how we develop and use technology but also how we approach problem-solving across all sectors of society. As these technologies continue to evolve, their impact on our world is likely to become even more profound and far-reaching.
    WOW just a start then... I will be unpacking Jensen’s presentation for the next few weeks. Thanks for listening and please share with anyone you know who cares about AI and the future.



  • "News From The Future" with Dr Catherine Ball

    Autonomous Driving, NVIDIA, and robotics...

    08/01/2026 | 6 mins.
    Hello there and welcome to the continuing special edition podcasts from the CES in Vegas. I am the voice clone of Dr Cath. Thanks for joining me.
    Just before the big NVIDIA announcements from Jensen Huang there were some panels. Here is the second one, and it is with the CEO of Mercedes-Benz, no less.
    Enjoy, and you might need to take notes.
    The intersection of autonomous driving and robotics technology is experiencing a transformative period, as highlighted in a recent discussion between Mercedes-Benz CEO Ola Källenius and Skild AI’s Deepak Pathak. Their conversation revealed both the remarkable progress and significant challenges facing these interconnected fields.
    Mercedes-Benz’s journey in autonomous driving spans four decades, beginning with their pioneering “Prometheus” project in the 1980s. This long-term commitment has culminated in their current Level 3 autonomous system, which represents more than just technological advancement – it marks a fundamental shift in responsibility from human to machine. This transition carries profound legal and liability implications, as the computer system, not the driver, becomes legally responsible when autonomous features are engaged.
    The immediate future of autonomous driving, according to Mercedes, centers on their “Level 2++” technology. This system delivers point-to-point navigation capabilities that Ola describes as making the vehicle feel like it’s “on rails.” The technology has been successfully demonstrated in challenging environments, including San Francisco’s complex urban traffic patterns and freeway systems. This represents a strategic stepping stone toward full Level 3 and 4 autonomy, allowing for real-world deployment while more advanced systems continue development.
    A critical insight emerged regarding the “99% problem” in autonomous development. While achieving 99% functionality in controlled conditions is relatively straightforward, the remaining 1% – comprising rare edge cases and unexpected scenarios – presents the most formidable challenge. This final percentage requires extensive safety engineering, massive data collection efforts, and sophisticated decision-making algorithms capable of handling unprecedented situations.
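The "99% problem" is easy to make concrete with arithmetic. A quick back-of-envelope sketch, using illustrative figures rather than anything Mercedes disclosed: each additional "nine" of scenario coverage makes unhandled edge cases ten times rarer, which is why the last fraction of a percent dominates the engineering effort.

```python
def scenarios_per_unhandled_case(coverage: float) -> float:
    """Average number of driving scenarios encountered before one the
    system cannot handle, given the fraction of scenarios it covers."""
    return 1.0 / (1.0 - coverage)

# Illustrative figures only: each extra "nine" of coverage multiplies
# the distance between unhandled cases by ten.
for cov in (0.99, 0.999, 0.99999):
    print(f"{cov * 100:.3f}% coverage -> ~1 unhandled case per "
          f"{scenarios_per_unhandled_case(cov):,.0f} scenarios")
```

The inverse relationship also explains the data-collection demands mentioned above: observing enough of the rare cases to cover them requires exponentially more real-world mileage as coverage improves.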
    Mercedes-Benz emphasizes a comprehensive approach to autonomous system development, focusing equally on hardware and software components. Their strategy mirrors aviation industry standards, where redundancy is non-negotiable. This philosophy becomes particularly complex when scaling across different vehicle platforms, as each model requires unique sensor configurations and specialized AI model adaptations. The challenge intensifies when considering the need to maintain this redundancy while meeting commercial cost targets and managing platform proliferation.
    In the robotics domain, Skild AI presented an ambitious vision for a universal robotic “brain”: an AI system capable of controlling various robot types, from humanoid machines to industrial arms and autonomous mobile robots. This approach challenges traditional robotics programming paradigms by suggesting that a single, general-purpose AI system could learn from and adapt to different robotic platforms and tasks. The potential advantage of this approach lies in creating a data flywheel effect, where learning from diverse robot experiences contributes to overall system improvement.
    The discussion delved deep into the ongoing debate about robotics data sources, examining three primary approaches: world-model/video pretraining, sim-to-real/reinforcement learning, and direct robot data collection. Deepak argued that unlike language models, which benefit from vast internet-scale training data, robotics faces unique challenges in data acquisition. He emphasized that merely observing tasks (like watching videos) isn’t sufficient for skill development, proposing instead a hybrid approach combining human demonstration videos, simulation training, and real-world task-specific data collection.
    Manufacturing automation emerged as a particularly promising application area. Ola suggested that AI-driven robotics could deliver the most significant productivity improvements factory operations have seen in a century. Rather than pursuing full automation, the vision focuses on collaborative “robot buddies” working alongside human workers. This approach includes leveraging digital twin technology, such as NVIDIA’s Omniverse, to simulate and optimize production processes before physical implementation, potentially reducing costs and improving quality control.
    Several significant tensions emerged during the discussion. While optimism exists about achieving Level 4/5 autonomy, practical challenges around safety validation and regulatory compliance could extend development timelines. The balance between implementing robust sensor redundancy and maintaining commercial viability remains a point of contention. Questions persist about the most effective approach to robotics data acquisition and training methodologies.
    The workforce impact of increased automation presents another area of tension. While the speakers emphasized human-robot collaboration and productivity enhancement, concerns about potential job displacement remain. The “robot buddy” concept attempts to address these concerns by positioning automation as augmentation rather than replacement, though questions about long-term workforce implications persist.
    The discussion highlighted a fundamental challenge in both autonomous driving and robotics development: balancing market pressure for rapid deployment against the need for robust, safe systems. As Ola emphasized, there are “no shortcuts” in developing these technologies, yet competitive pressures often push for faster deployment schedules.
    This conversation raises crucial questions about the role of accelerated computing in autonomy, strategies for cost-effective redundancy, approaches to handling edge cases, simulation-to-reality transfer, and the practical benefits of digital twin technology. These topics represent key areas where further development and discussion are needed to advance both autonomous driving and robotics technologies. The intersection of these challenges with commercial viability, regulatory compliance, and workforce implications will likely shape the development trajectory of these technologies in the coming years.


  • "News From The Future" with Dr Catherine Ball

    Dark Data, Dark Fiber, and Sovereign AI

    07/01/2026 | 8 mins.
    Welcome to News From The Future Special Editions with Dr Cath, dialling in from the CES in Las Vegas... and today was all about Jensen Huang and the big NVIDIA announcement. But before that we had two panels chatting away, so here, summarised, are some of the main points from those chats. Enjoy, and you may want to take notes... there is a lot!
    The AI infrastructure landscape is experiencing unprecedented growth, with approximately $800 billion invested over the past three years and projections of $600 billion more by 2026. While media headlines frequently question whether this represents a bubble, industry experts argue this cycle is fundamentally different from previous tech booms for several key reasons. The seamless adoption of tools like ChatGPT, reaching billions of users instantly, combined with consistently high utilization rates and cash flow-funded expansion, suggests a more sustainable foundation than previous tech cycles.
    Unlike the dotcom era’s “dark fiber,” today’s AI infrastructure shows consistently high utilization rates. Even older GPU hardware remains fully employed, processing various workloads from traditional computing tasks to smaller AI models. This high utilization, combined with well-financed buyers funding expansion through cash flow rather than speculation, suggests a more sustainable growth pattern. The industry emphasizes watching utilization as a leading indicator, rather than focusing on abstract return on investment calculations.
    Snowflake CEO Sridhar Ramaswamy provides compelling evidence of AI’s real-world value, particularly in high-wage workflows. When AI tools enhance the productivity of well-paid professionals like developers or analysts, the return on investment becomes readily apparent. Snowflake’s implementation of data agents, allowing executives to quickly access customer insights from their phones, demonstrates how AI can deliver immediate value in enterprise settings. The company’s AI products, including Snowflake Intelligence, run on NVIDIA chips, highlighting deep collaboration between infrastructure providers and application developers.
    Enterprise adoption faces several practical challenges beyond mere interest or budget constraints. Data governance and sovereignty emerge as critical concerns, with companies increasingly sensitive about where their data is processed and stored. This has led to interesting dynamics where local GPU availability becomes a negotiating point – for instance, when German workloads might need to be processed in Swedish facilities. Change management presents another significant hurdle, as organizations struggle to drive user adoption of new AI workflows. However, widespread consumer experience with AI technologies through smartphones and laptops is making enterprise adoption easier for companies that execute well.
    The global infrastructure buildout is increasingly viewed as a feature rather than just capacity expansion. As geopolitical tensions rise, the ability to process data within specific regions becomes a competitive advantage. This has spurred infrastructure development across the Middle East and Asia, creating a more distributed computing landscape that better serves local sovereignty requirements and regulatory compliance needs.
    In the ongoing debate between open and closed AI models, a nuanced picture emerges. While frontier models from leading companies maintain significant advantages in specific use cases like coding and tool-agent loops, open models are gaining importance for large-scale applications. The open-source ecosystem’s ability to attract developers and drive innovation mirrors historical patterns in data center development. This dynamic is particularly important when considering massive-scale deployments where cost and customization flexibility become critical factors.
    Sector-specific adoption shows interesting patterns. Financial services, particularly asset managers with fewer regulatory constraints than traditional banks, are leading the charge. Healthcare emerges as a surprising second frontier, with doctors increasingly turning to AI to address overwhelming documentation requirements. Unlike previous technology waves, enterprise-specific AI applications are developing in parallel with consumer tools, rather than lagging behind. This represents a significant shift from the Google Search era, where enterprise search solutions never gained the same traction as consumer offerings.
    The concept of “dark data” – unutilized information assets within enterprises – represents a significant opportunity. Companies like Snowflake emphasize the importance of making this data accessible while maintaining strict governance controls. A practical example involves decades of contracts stored in SharePoint systems, currently requiring manual searching but prime for AI-enabled retrieval and analysis. The challenge lies in creating drag-and-drop usability while ensuring unauthorized access doesn’t create regulatory compliance issues.
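The SharePoint example above is, at its core, a retrieval problem: score stored documents against a natural-language query. Production systems would use learned embeddings, a vector database, and the governance controls the summary stresses; purely as a minimal, self-contained sketch of the retrieval step (crude bag-of-words scoring, hypothetical contract text), the idea looks like this:

```python
import math
from collections import Counter

def vectorize(text):
    """Crude bag-of-words vector; real systems would use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for missing words, so the dot product is safe.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, documents):
    """Return the stored document that best matches the query."""
    qv = vectorize(query)
    scored = [(cosine(qv, vectorize(doc)), doc) for doc in documents]
    return max(scored)[1]

# Hypothetical "dark data": old contracts sitting in a document store
contracts = [
    "supply agreement for steel delivery 2009",
    "software licensing contract renewal terms",
    "office lease agreement downtown building",
]
print(search("licensing renewal", contracts))
# -> software licensing contract renewal terms
```

The governance challenge mentioned above would sit around this core: filtering which documents a given user's query is even allowed to score against, before ranking happens.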
    Vertical-specific implementations reveal how AI adaptation varies by industry. In healthcare, companies like Abridge focus on integrating AI into existing workflows, aiming to reverse the current reality where doctors spend 80% of their time on clerical work and only 20% with patients. Their approach emphasizes fitting AI into existing processes rather than forcing workflow changes, while balancing privacy, security, and latency requirements. They utilize techniques like distillation, fine-tuning, and learning from clinician edits at scale to improve their systems.
    In software development, CodeRabbit positions itself as a trust layer between coding agents and production systems, highlighting how AI is changing the nature of software development rather than replacing developers. They argue that as code generation improves, review and intent specification become the primary bottlenecks. The platform suggests that AI is lowering barriers to entry in software development while questioning whether it truly transforms highly skilled developers into substantially more productive ones.
    The current state of AI infrastructure investment is frequently compared to early stages of previous platform shifts, such as the iPhone or PC eras. Mark Lipacis argues we’re in “early innings,” where investment must precede currently unknown workloads – though unlike previous cycles, current infrastructure already shows high utilization. This perspective suggests that current investment levels, despite their scale, may be justified by future applications and use cases that haven’t yet emerged.
    Several tensions remain unresolved in the industry. The durability of current utilization rates faces questioning, particularly whether they represent a temporary land-grab or sustainable demand. Agent reliability remains a challenge, especially for long-running or background tasks, with most successful implementations requiring human oversight. The sustainability of open-source model development, given high training costs, remains uncertain despite recent progress. The debate between centralized efficiency and data sovereignty requirements continues to shape infrastructure deployment decisions.
    The impact on workforce dynamics presents another area of debate. While some fear job displacement, evidence from the software development sector suggests AI is lowering barriers to entry and enabling more people to participate in technical fields. The panel concludes optimistically, suggesting that software creation will expand beyond traditional engineering roles, with examples of children using coding agents to build applications indicating a more democratized future for software development. This democratization of technology creation could fundamentally reshape how software is developed and who participates in its creation.
    This podcast was produced using Dr Cath’s AI Voice Clone from Eleven Labs. Thank you for listening, and please share with anyone you know who is interested in AI.


    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit drcatherineball.substack.com/subscribe

    LEGO launches Smart Brick System March 2026

    06/01/2026 | 6 mins.
    Hello and Welcome to another special edition of News From The Future with Dr Cath as she beams in live from the CES technology show in Las Vegas.
    The LEGO Group unveiled a revolutionary innovation at CES - the LEGO Smart Brick, representing the most significant advancement in LEGO technology since the Minifigure’s introduction 50 years ago. This new platform seamlessly integrates digital technology into physical LEGO play without screens or power buttons, maintaining the core essence of hands-on creative building while adding responsive interactive elements.
    The Smart Brick appears as a standard 2x4 LEGO brick but contains sophisticated sensors and technology packed into a silicon chip smaller than one of its studs. The system consists of three key components that work together: the Smart Brick itself, which can be reused across different models; Smart Tags containing code that defines how models respond to interactions; and Interactive Smart Minifigures programmed with distinct personalities and behaviors.
    The technology demonstrates remarkable capabilities in bringing LEGO models to life. The Smart Brick generates responsive sounds based on movement and interaction, with synthesized audio that adapts to how children play. Advanced position sensing allows bricks to detect their relative locations in three-dimensional space, enabling precise distance measurements and directional awareness between multiple Smart Bricks. Color sensors let models recognize their environment and respond accordingly, while networked play capabilities allow multiple Smart Bricks to communicate and coordinate their responses automatically.
    During the CES demonstration, these features were showcased through various interactive models. A car equipped with a Smart Brick produced engine sounds that responded to movement, complete with acceleration noises and tire screeching effects. The system could detect when Minifigures were placed in different positions - as drivers, passengers, or even (somewhat mischievously) under the wheels. A LEGO duck came alive with appropriate quacking and splashing sounds, while demonstrating sleep behaviors when at rest.
    The technology enables entirely new dimensions of play. Vehicles can respond realistically to steering and acceleration, while knowing their position relative to other vehicles or obstacles. Characters gain awareness of their surroundings and can react appropriately to different situations. Models understand how they’re being played with and can coordinate responses between multiple Smart Bricks. This allows for racing games where cars know who’s in the lead, creatures that respond to care and interaction, and buildings that can detect and react to events happening around them.
    A key aspect of the Smart Brick’s design is its ability to work as a platform rather than just a single product. The same brick can be moved between different models, each time taking on new behaviors based on the Smart Tags included in the build. Multiple Smart Bricks can form decentralized networks, automatically coordinating to create rich interactive experiences across entire LEGO worlds that children create.
    The first commercial implementation of this technology comes through LEGO’s partnership with Star Wars, building on their 25-year collaboration that has already produced nearly 1,500 unique minifigures and countless beloved sets. The initial launch in March 2026 will feature three Smart Play sets: Luke Skywalker with X-Wing, Darth Vader with TIE Fighter, and the Emperor’s Throne Room. These sets demonstrate how the technology can enhance storytelling and imaginative play within the Star Wars universe.
    During the presentation, Disney Chief Brand Officer Asad Ayaz and Lucasfilm Chief Creative Officer Dave Filoni emphasized how this technology represents a natural evolution in their long-standing partnership with LEGO. They drew parallels between George Lucas’s pioneering use of special effects and sound design in the original Star Wars films and this new innovation in toy technology. The Smart Brick platform aims to similarly transform how children experience and interact with their LEGO creations.
    The LEGO Group emphasized that this launch represents just the beginning of the Smart Brick platform’s potential. The technology has been designed to be open-ended and expandable, integrating seamlessly with the existing LEGO system while adding new dimensions of interactive play. The company expects the platform to evolve based on how children use and innovate with it, potentially expanding into thousands of different models and play experiences.
    The development of the Smart Brick was driven by observing how children play in today’s digital world. While kids remain naturally creative and imaginative, they increasingly engage with digital experiences through screens and devices. The Smart Brick aims to bridge this gap by bringing technological interactivity into physical play, without losing the hands-on creative building that has defined LEGO for over 70 years.
    This builds on LEGO’s fundamental principle of unlimited creativity. The Smart Brick maintains this philosophy of open-ended play while adding new possibilities for interaction and responsiveness. The technology doesn’t prescribe specific ways to play but rather provides tools that children can use to enhance their own creative storytelling and imaginative adventures.
    The presentation included practical demonstrations of the technology’s capabilities, including a simple racing game where Smart Bricks could determine which duck-on-skateboard was closest to a trophy. This showcased how the position-sensing technology can enable new forms of competitive play while maintaining the physical, hands-on nature of LEGO building. The demonstration also featured Star Wars characters like Chewbacca interacting with the Smart Brick, producing characteristic roars and responses that brought the character to life.
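    The "closest to the trophy" race demo can be pictured as a simple nearest-neighbour check over the bricks' reported positions. The sketch below is purely illustrative and assumes a hypothetical interface in which each Smart Brick reports an (x, y, z) position relative to a shared origin; none of these function or variable names come from LEGO's actual platform.

    ```python
    import math

    # Hypothetical sketch of the race demo's "closest to the trophy" check.
    # Assumes each Smart Brick can report its (x, y, z) position relative
    # to a shared origin; all names here are invented for illustration.

    def distance(a, b):
        """Euclidean distance between two 3D points."""
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def closest_to_trophy(bricks, trophy_pos):
        """Return the name of the brick nearest the trophy position."""
        return min(bricks, key=lambda name: distance(bricks[name], trophy_pos))

    # Example: two ducks-on-skateboards racing toward a trophy at the origin.
    ducks = {
        "duck_red": (0.4, 0.1, 0.0),
        "duck_blue": (0.2, 0.3, 0.0),
    }
    print(closest_to_trophy(ducks, (0.0, 0.0, 0.0)))  # → duck_blue
    ```

    In practice the networked bricks would presumably exchange position readings continuously, but the winner-selection step reduces to this kind of distance comparison.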
    Throughout the CES presentation, LEGO emphasized how the Smart Brick represents not just a new product but a platform for future innovation. By creating a system that seamlessly integrates digital interactivity with physical play, while maintaining compatibility with existing LEGO bricks and sets, they’ve laid the groundwork for potentially thousands of new play experiences. The technology’s ability to network multiple Smart Bricks together, sense their environment, and respond to children’s play patterns suggests numerous possibilities for future development and expansion of the platform.
    This podcast was produced with Dr Cath’s AI Voice Clone by Eleven Labs. Thanks for listening.




About "News From The Future" with Dr Catherine Ball

Converging and emerging technologies from today, tomorrow, and next year. Educate and entertain yourself with Dr Cath's optimistic and curious nature as we peek over the horizon. drcatherineball.substack.com