Artificial intelligence is no longer just another layer of software. It is rapidly becoming the foundation of modern computing itself.
That message was unmistakable on Wednesday night as Lenovo hosted its annual Tech World event at the Sphere, Las Vegas’s landmark venue opened in September 2023. The scale of the setting matched the ambition of the message: the Sphere stands 366 feet high and 516 feet wide, spans 875,000 square feet, seats 18,600 people, and features a sound system of 167,000 individually amplified loudspeakers. Built over five years at a cost of $2.3 billion, it has quickly become one of the most technologically advanced entertainment venues in the world.
Against this backdrop, Lenovo Chairman and CEO Yuanqing Yang welcomed an extraordinary lineup of global technology leaders and partners to the stage, including Jensen Huang, founder and CEO of NVIDIA; Lip-Bu Tan, CEO of Intel; Cristiano Amon, president and CEO of Qualcomm; Dr. Lisa Su, chair and CEO of AMD; Yusuf Mehdi, executive vice president and consumer CMO of Microsoft; Angelina Gomez, Motorola’s head of software marketing; Jennifer Koester, president and CEO of Sphere; and Gianni Infantino, president of FIFA.
The event featured multiple product announcements and partnership reveals, capped by Lenovo’s unveiling of its new personal AI platform, Qira—a unified AI system designed to deliver seamless, real-time intelligence across devices and platforms. But beyond individual products, the message from global tech leaders was clear and consistent: AI is no longer an application layer—it is becoming the operating system of the future.
From Classical Computing to AI-Native Systems
For decades, enterprise computing was built around applications written for CPUs and deployed in conventional data centers. That model is now rapidly being replaced by AI-native systems, where software is designed around large language models and executed on GPU-based, accelerated infrastructure.
Industry leaders described this transition as nothing less than a reinvention of the global IT landscape. Trillions of dollars in legacy systems, they said, will need to be modernized to support AI-driven workloads.
“This is not just a platform change,” one executive noted. “It’s a reinvention of the entire computing stack.”
AI Factories and Accelerated Infrastructure
At the core of this transformation is the emergence of what companies now call AI factories—purpose-built data centers designed specifically for AI training, inference, and deployment.
Unlike traditional data centers, these facilities are optimized for massive parallel processing, high-throughput workloads, and large-scale model operations. NVIDIA outlined multiple generations of accelerated computing platforms driving this shift, including Hopper, Blackwell, and the newly announced Vera Rubin architecture.
Each generation delivers exponential performance gains while lowering the cost of AI computation, making enterprise-scale AI deployment increasingly economically viable.
Agentic AI and the Explosion of Compute Demand
Another dominant theme was the rise of agentic AI—systems designed not merely to generate responses, but to reason, plan, reflect, and act autonomously.
These systems generate so-called “thinking tokens,” representing internal reasoning, decision-making, and multi-step planning processes. As AI models grow in complexity, computing demands are expanding rapidly. Executives noted that model sizes are already moving from hundreds of billions of parameters into the multi-trillion range, driving exponential growth in infrastructure requirements.
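To make the mechanics concrete, here is a minimal sketch of such an agentic loop, in which the model’s intermediate reasoning (the “thinking tokens”) drives tool use before a final answer. The call_llm placeholder and the tool registry are hypothetical stand-ins for illustration, not any vendor’s actual API.

```python
# Minimal sketch of an agentic loop; `call_llm` and the tool registry are
# hypothetical stand-ins, not any vendor's actual API.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for a large language model call."""
    raise NotImplementedError

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",  # stub tool
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        # Each step emits "thinking tokens": a reasoning trace ending in
        # either an action request or a final answer.
        step = call_llm(context + "\nThink step by step, then output "
                        "ACTION:<tool>:<input> or FINAL:<answer>.")
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        if step.startswith("ACTION:"):
            _, tool, arg = step.split(":", 2)
            observation = TOOLS[tool](arg)  # act, then feed the result back
            context += f"\n{step}\nObservation: {observation}"
    return "step budget exhausted"
```

Every pass through the loop consumes additional compute, which is why executives tie agentic AI directly to the explosion in infrastructure demand.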
Hybrid AI: Intelligence Across Devices and Cloud
Lenovo outlined a hybrid AI architecture that distributes intelligence across multiple layers: personal devices, edge systems, private cloud environments, and large-scale AI factories.
In this model, smaller AI systems operate locally on devices, while larger models run in cloud infrastructure. Tasks are dynamically routed based on latency, performance, cost, and security requirements. The goal is to make AI more accessible while reducing complexity for both users and organizations.
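As a rough illustration of how such routing might work, here is a minimal sketch that sends each task to the cheapest tier satisfying its latency, size, and data-sensitivity constraints. The tiers and numbers are invented for illustration and do not describe Lenovo’s actual system.

```python
# Illustrative sketch of hybrid-AI task routing (not Lenovo's actual logic):
# route each request to the cheapest tier that satisfies its constraints.
from dataclasses import dataclass

@dataclass
class Task:
    tokens: int            # rough size of the request
    max_latency_ms: int    # user-facing latency budget
    sensitive: bool        # must the data stay on-device?

@dataclass
class Tier:
    name: str
    latency_ms: int        # typical round-trip latency
    max_tokens: int        # largest request the tier handles well
    cost_per_1k: float     # relative cost per 1k tokens
    local: bool            # runs on the device itself?

TIERS = [  # ordered cheapest-first
    Tier("on-device",   20,   2_000, 0.0, local=True),
    Tier("edge",        80,   8_000, 0.2, local=False),
    Tier("ai-factory", 300, 128_000, 1.0, local=False),
]

def route(task: Task) -> Tier:
    for tier in TIERS:
        if task.sensitive and not tier.local:
            continue  # sensitive data never leaves the device
        if tier.latency_ms <= task.max_latency_ms and task.tokens <= tier.max_tokens:
            return tier
    raise RuntimeError("no tier satisfies the constraints")

print(route(Task(tokens=500, max_latency_ms=100, sensitive=False)).name)  # on-device
```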
Partnerships Powering Global Deployment
Strategic partnerships were presented as critical to scaling AI infrastructure worldwide.
Lenovo and NVIDIA announced an expanded collaboration to deploy AI factories globally, combining NVIDIA’s accelerated computing platforms with Lenovo’s manufacturing scale, cooling systems, and global services infrastructure. The objective: reduce deployment complexity and dramatically shorten the time from installation to operational AI use.
Intel and Lenovo also reaffirmed their long-standing partnership, focusing on AI-powered PCs, enterprise systems, and data center platforms designed to support AI workloads across devices and cloud environments.
From Intelligence to Action
Speakers emphasized that AI is moving beyond analytics and insight generation toward real-time action and automation.
Next-generation AI systems are increasingly being designed to execute workflows, manage operations, coordinate tasks, and operate autonomously across enterprise systems. This shift is expected to reshape industries including manufacturing, logistics, healthcare, entertainment, and even global sporting events.
A New Computing Era
Leaders at CES 2026 framed the current transition as one of the largest technological shifts in modern history—comparable to the rise of the internet or mobile computing.
In this new paradigm, AI systems replace traditional application frameworks, and the result is the emergence of an AI-native digital ecosystem.
Looking Ahead
As AI infrastructure, models, and systems continue to evolve, the boundaries between software, hardware, and intelligence are rapidly dissolving.
What is emerging is not simply a new generation of technology, but a new foundation for digital society—one in which intelligence itself becomes infrastructure.
At CES 2026, that future no longer appeared theoretical.
It was already being built.
The evening concluded with a live performance by Gwen Stefani, showcasing the Sphere’s immersive audio-visual capabilities and underscoring the fusion of technology, entertainment, and AI-driven experiences.
At the press conference, Bosch executives Tanja Rückert, member of the board of management, and Paul Thomas, president of Bosch in North America, unveiled a sweeping vision of how artificial intelligence and software are reshaping everyday life—from verifying product authenticity to redefining cooking, mobility, and industrial productivity. Drawing on its deep expertise across both the physical and digital worlds, Bosch demonstrated how intelligent technology can reduce complexity, eliminate stress, and empower people across skill levels.
Fighting Counterfeiting With AI
Bosch opened by introducing a revolutionary AI-powered authenticity verification technology. Using live video analysis, the system can quickly and reliably determine whether an item—such as sneakers, car parts, or artwork—is genuine. By bridging the physical and digital divide, this innovation has the potential to significantly disrupt the global counterfeiting industry and strengthen consumer trust.
AI That Serves People at Home
At the core of Bosch’s philosophy is a simple principle: technology should serve people, not the other way around. Nowhere is this more evident than in the kitchen. Bosch showcased how generative AI combined with advanced sensors can elevate cooking for everyone—from home cooks to professional chefs.
Celebrity chef Marcel Vigneron demonstrated Bosch Cook AI, a next-generation cooking assistant that builds on the brand’s AutoChef induction technology. By analyzing ingredients through images, understanding user preferences, and monitoring food in real time with Bluetooth temperature probes, the system precisely controls heat and cooking time to deliver consistent, high-quality results. The goal is not to replace human creativity, but to remove uncertainty and stress from cooking.
Appliances That Improve Over Time
Bosch emphasized that modern appliances no longer need to be replaced to gain new features. Through connectivity and over-the-air software updates, existing Bosch products can evolve long after purchase. Recent updates have added new cooking functions, such as air fry and advanced heating modes, to connected ovens at no additional cost—demonstrating Bosch’s commitment to long-term value.
Software-Defined Mobility
Beyond the home, Bosch highlighted how its software and hardware integration is transforming mobility. Its vehicle motion management technology allows cars to gain new driving modes and personalization features even after leaving the dealership. By intelligently coordinating braking, steering, powertrain, and suspension systems, Bosch’s software can improve comfort, safety, and driving dynamics.
One major advancement is six-degrees-of-freedom vehicle control, which helps significantly reduce motion sickness—an issue affecting a large portion of adults and a major barrier to autonomous driving adoption.
Building the Future of Software-Defined Vehicles
Bosch is playing a key role in the shift toward software-defined vehicles, a market projected to exceed one trillion dollars by the end of the decade. The company is developing open, high-performance middleware platforms that act as the “nervous system” of modern vehicles, simplifying development, lowering costs, improving security, and accelerating innovation for automakers worldwide.
Bosch also announced progress in key technologies such as steer-by-wire and brake-by-wire systems, essential components for future automated and autonomous vehicles. These systems are already moving into large-scale production with major global automakers.
AI That Sees, Hears, and Understands
Artificial intelligence is central to Bosch’s automotive roadmap. The company showcased AI-powered cognitive vehicles capable of both seeing and hearing through advanced language models. These systems enable natural conversations with vehicles, intelligent perception of surroundings, automated parking, and even real-time meeting assistance while driving or riding as a passenger.
Bosch expects AI-based solutions for assisted and automated driving to become a multi-billion-euro business by 2035.
AI for Industry and Productivity
Bosch also demonstrated how its AI expertise extends into industrial applications. By partnering with technology leaders such as Microsoft, Bosch aims to apply generative AI to manufacturing environments, boosting productivity, efficiency, and flexibility in increasingly complex industrial operations.
A Unified Vision
Across all domains—consumer goods, mobility, and industry—Bosch’s message was consistent: its strength lies in combining hardware, software, and AI into complete systems and ecosystems. Rather than offering isolated components, Bosch is building intelligent platforms that deliver real-world benefits, improve quality of life, and prepare society for a more connected, automated future.
The LEGO Group introduced a major new initiative at CES with the launch of LEGO Smart Play, a platform designed to merge physical LEGO bricks with embedded digital intelligence—without screens, apps, or traditional digital interfaces.
The company described the announcement as its most significant innovation in decades, marking a new chapter in LEGO’s 70-year history of physical play. Executives positioned Smart Play as a foundational platform rather than a single product, signaling a long-term strategy to integrate technology into hands-on creativity while preserving LEGO’s core identity.
A Modern Evolution of a Classic System
LEGO leaders opened the presentation by highlighting the enduring success of the LEGO system of play, which has remained structurally compatible since the interlocking brick was introduced in 1958. With more than 20,000 different elements that fit together, LEGO has maintained a consistent design philosophy centered on creativity, imagination, and open-ended building.
At the same time, the company acknowledged a changing cultural landscape. Today’s children grow up immersed in digital environments, prompting LEGO to explore how interactive technology could be integrated into physical play without replacing it.
Executives said the challenge was to introduce digital intelligence in a way that enhances creativity rather than shifting play toward screens or software-driven experiences.
Technology Designed to Stay Invisible
LEGO Smart Play is built around three core principles: seamless integration of technology into physical play, openness across the LEGO ecosystem, and simplicity in user experience.
Rather than relying on apps or digital interfaces, the system is designed to operate through natural physical interaction. There are no screens, no visible controls, and no learning curve for children. The technology remains embedded in the bricks, responding to how they are used rather than directing how they should be played with.
Inside the Smart Brick
At the center of the platform is the LEGO Smart Brick, a standard-looking brick that contains sensors, processing capabilities, and wireless communication technology. Despite its embedded intelligence, the brick has no screen or power button and functions automatically as part of a build.
According to LEGO, the Smart Brick can detect movement, sound, light, color, distance, orientation, and direction. It can recognize specially designed Smart Tags that define behaviors, identify Smart Minifigures within a model, and communicate wirelessly with other Smart Bricks.
This allows LEGO creations to respond dynamically to physical interaction. Vehicles can detect drivers and track movement, structures can respond to proximity, and entire play environments can interact in real time through decentralized networks of connected bricks.
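LEGO has not published a developer API, so the behavior described above can only be pictured in outline. The sketch below is purely hypothetical, with invented class and method names, illustrating how a tag might reconfigure a brick whose sensors dispatch events locally, with no app or central server involved.

```python
# Hypothetical model of tag-driven Smart Brick behavior (invented names;
# LEGO has not published an API for Smart Play).
from collections import defaultdict
from typing import Callable

class SmartBrick:
    def __init__(self, brick_id: str):
        self.brick_id = brick_id
        self.handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)
        self.role = "generic"  # redefined by whichever Smart Tag is attached

    def attach_tag(self, role: str):
        """A Smart Tag reconfigures what the same physical brick does."""
        self.role = role

    def on(self, event: str, handler: Callable[[dict], None]):
        self.handlers[event].append(handler)

    def sense(self, event: str, data: dict):
        """Sensor input (motion, sound, proximity...) dispatches locally --
        no screen, app, or cloud round-trip."""
        for handler in self.handlers[event]:
            handler(data)

engine = SmartBrick("brick-1")
engine.attach_tag("racer-engine")
engine.on("motion", lambda d: print(f"{engine.role}: vroom at speed {d['speed']}"))
engine.sense("motion", {"speed": 3})  # -> racer-engine: vroom at speed 3
```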
Modular Intelligence Across Builds
A single Smart Brick can be reused across multiple builds and play scenarios. By changing tags and minifigures, the same brick can take on different roles, functioning as an engine, character, creature, or interactive object depending on how it is configured.
During demonstrations, LEGO showed how one Smart Brick could animate cars, aircraft, animals, and games, generating light, sound, and behavioral responses based solely on physical movement and positioning.
Spatially Aware Play
One of the platform’s distinguishing features is spatial awareness. Without relying on cameras or external tracking systems, Smart Bricks are able to detect distance, orientation, and directional relationships between builds.
This enables interactive play scenarios such as tracking race outcomes, triggering responses based on proximity, and creating game mechanics that operate in three-dimensional physical space.
A Long-Term Platform for Storytelling
LEGO executives emphasized that Smart Play is designed as a scalable platform rather than a single product line. The system is intended to support long-term storytelling, re-playability, collaboration, and evolving narratives across different LEGO themes.
To demonstrate the platform’s potential, LEGO announced its first Smart Play partnership with LEGO Star Wars.
LEGO Star Wars Becomes Interactive
In collaboration with Disney and Lucasfilm, LEGO revealed that Smart Play will be integrated into upcoming LEGO Star Wars sets. After more than 25 years of partnership, the franchise will move beyond static builds to interactive physical play environments.
Characters, vehicles, and locations will respond to movement, proximity, and interaction, allowing children to create dynamic Star Wars experiences without screens or digital displays.
The first LEGO Star Wars Smart Play sets are scheduled to launch in March.
Redefining Physical Play
LEGO Smart Play represents a shift in how physical toys can incorporate advanced technology. Rather than directing behavior through software, the system is designed to respond to play itself, keeping creative control in the hands of children.
Company leaders described the platform as a foundation for future development, positioning Smart Play as the beginning of a broader transformation in physical play design.
At CES, LEGO framed the initiative not as a move toward digital toys, but as a reimagining of physical creativity—one where technology remains invisible, and imagination remains central.
Hyundai Motor Group, Boston Dynamics, and Google DeepMind used this year’s Consumer Electronics Show to present a long-term vision for artificial intelligence and robotics focused on collaboration between humans and machines, rather than automation for its own sake.
The companies framed their message around what they described as human-centered AI robotics—systems designed to support human work, improve safety, and expand productivity, rather than replace human labor. Executives emphasized that robotics is moving beyond spectacle and demonstration toward real-world deployment and practical impact.
From humanoid robots in factories to AI systems that learn through experience, speakers stressed that the next phase of robotics development is about purpose-driven technology.
From Demonstration to Deployment
Boston Dynamics, long known for its high-profile demonstrations of robots that run, jump, and perform complex movements, positioned its work as increasingly focused on industrial and commercial applications.
Aya Durbin, humanoid application product lead, and Zachary Jackowski, vice president and general manager of Atlas, highlighted the shift toward robots designed for hazardous environments, repetitive labor, and physically demanding tasks. The stated goal is to reduce workplace injuries, increase safety, and improve operational efficiency across industries.
Rather than replacing human workers, executives said the company’s strategy is centered on extending human capability and removing people from dangerous or exhausting roles.
Atlas: A General-Purpose Humanoid Platform
The centerpiece of the presentation was the public unveiling of Atlas, Boston Dynamics’ next-generation humanoid robot.
Unlike traditional industrial robots built for single-task automation, Atlas is being developed as a general-purpose humanoid platform. According to company representatives, the robot is designed to navigate complex environments, manipulate objects with human-like dexterity, and adapt to different tasks as operational needs change.
Atlas is engineered for industrial settings and includes capabilities such as heavy lifting, extended reach, autonomous operation, operation in extreme temperatures, and self-managed battery systems. It is also designed to share learned tasks across multiple units through cloud-based intelligence systems, creating what the company described as a networked learning model.
Commercial Robotics in Operation
Boston Dynamics pointed to its existing commercial robots as evidence that this approach is already being deployed at scale.
The quadruped robot Spot is currently used in thousands of facilities across more than 40 countries, where it performs industrial inspection, data collection, and safety monitoring tasks. The warehouse robot Stretch has been deployed in logistics environments to automate truck unloading and material handling, with more than 20 million boxes reportedly processed through customer operations.
Company officials emphasized that these systems are already in commercial use and producing measurable operational outcomes.
Hyundai’s Global Robotics Strategy
Hyundai Motor Group outlined plans to build large-scale infrastructure to support global robotics deployment. The company is developing manufacturing facilities capable of producing tens of thousands of humanoid robots annually, alongside data-driven production systems and AI-enabled factory environments.
Executives described a long-term strategy that extends beyond manufacturing into logistics, construction, energy, infrastructure, and smart city development, with eventual plans for integration into consumer and home environments.
Hyundai’s approach includes service-based robotics models, integrated deployment networks, and AI-powered industrial ecosystems designed to scale robotics adoption across multiple sectors.
Partnership with Google DeepMind
A major announcement at the event was the partnership between Boston Dynamics and Google DeepMind Robotics, bringing together advanced physical robotics and large-scale AI foundation models.
The collaboration aims to develop general-purpose humanoid intelligence systems that combine physical capability with advanced reasoning, language understanding, and adaptive learning.
Rather than relying on pre-programmed task execution, the companies said future robots will be able to learn through observation and experience, generalize skills across environments, and continuously improve performance over time.
Redefining Human–Robot Collaboration
Speakers emphasized that the vision presented at CES is based on collaboration rather than replacement.
Under this model, humans remain responsible for supervision, decision-making, judgment, and ethics, while robots take on physically demanding, repetitive, and hazardous tasks. The goal, according to company leaders, is to improve workplace safety, increase productivity, and allow people to focus on higher-value activities such as problem-solving, leadership, and creative work.
A Broader Shift in Robotics
The companies framed the developments as part of a broader transformation in how robotics is designed and deployed. Rather than isolated automation systems, they described the emergence of integrated human–robot ecosystems built around shared intelligence, learning systems, and scalable infrastructure.
Executives summarized the vision as a model of technological development centered on partnership between humans and machines, rather than competition between them.
Looking Ahead
As AI systems, robotics platforms, and industrial infrastructure continue to converge, industry leaders said the line between digital intelligence and physical systems will continue to blur.
What is emerging, they argued, is a new model of robotics—one where machines are designed not simply to operate autonomously, but to function as collaborators within human systems.
At CES 2026, that future was presented not as a distant concept, but as a roadmap already moving into real-world deployment.
Imec closed out 2025 with significant announcements that underscore its growing influence in semiconductor R&D and AI systems design. Between a high-profile debut at Supercomputing 2025 and a strong showing at the International Electron Devices Meeting (IEDM), the Belgium-based research center marked the end of the year with momentum across multiple technology fronts.
In November at Supercomputing 2025, one of the world’s largest gatherings for high-performance computing, imec unveiled imec.kelis, an analytical performance modeling tool aimed at reshaping how AI datacenters are planned and optimized.
Imec positioned the platform as a response to pressures facing datacenter designers, as AI workloads expand into the trillions of parameters and energy demands climb. According to the organization, imec.kelis offers a faster and more transparent alternative to conventional simulation tools, which are often slow or limited in scope.
Early adopters have begun exploring the platform, a first sign of commercial interest.
“Imec.kelis is more than a simulator—it’s a strategic enabler for the next generation of AI infrastructure,” said Axel Nackaerts, system scaling lead.
Imec.kelis provides an end-to-end analytical framework that evaluates performance across compute, communication, and memory subsystems. The tool is optimized for large language model (LLM) training and inference, offering predictions validated on widely used systems such as Nvidia’s A100 and H100 GPUs.
The platform draws heavily on imec’s longstanding expertise in hardware-software co-design, system-level modeling, and semiconductor technology roadmapping. Imec said the goal is to give system architects the ability to make better-informed decisions at datacenter scale, where design choices directly impact cost, efficiency, and sustainability.
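To give a flavor of what analytical (rather than simulation-based) performance modeling means, here is a toy roofline-style estimate of whether a transformer workload is compute- or memory-bound. This illustrates the general technique only; it is not imec.kelis itself, and the hardware numbers are rough orders of magnitude.

```python
# Toy analytical performance model in the spirit of such tools (NOT the
# actual imec.kelis model): a roofline estimate for one layer's main GEMM.
def layer_time_s(flops: float, bytes_moved: float,
                 peak_flops: float, mem_bw: float) -> tuple[float, str]:
    t_compute = flops / peak_flops          # time if limited by math units
    t_memory = bytes_moved / mem_bw         # time if limited by memory traffic
    bound = "compute-bound" if t_compute >= t_memory else "memory-bound"
    return max(t_compute, t_memory), bound

# Illustrative numbers: a 4096x4096x4096 fp16 matmul on an H100-class GPU.
flops = 2 * 4096 * 4096 * 4096              # 2*M*N*K multiply-adds
bytes_moved = 3 * 4096 * 4096 * 2           # three fp16 matrices touched once
t, bound = layer_time_s(flops, bytes_moved, peak_flops=1e15, mem_bw=3e12)
print(f"{t * 1e6:.1f} us, {bound}")         # ~137.4 us, compute-bound
```

An analytical model like this evaluates in microseconds, which is why it can sweep thousands of datacenter design points where cycle-accurate simulation cannot.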
At the beginning of December 2025, imec continued to demonstrate research leadership at the 71st International Electron Devices Meeting (IEDM), presenting 21 papers spanning advanced logic, memory, quantum computing, imaging, and bioelectronics.
With a high-visibility launch in high-performance computing and a deep bench of contributions at IEDM, imec concludes 2025 with strengthened leadership in both advanced semiconductor research and the rapidly expanding AI datacenter ecosystem. The organization is positioning itself as a critical contributor to the technologies that will define next-generation computing infrastructure.
Guillaume de Fondaumiere is the co-CEO of Quantic Dream, the France-based studio that developed games such as “Fahrenheit” (2005), “Heavy Rain” (2010), and, in collaboration with Sony Computer Entertainment, the PS3-exclusive title “Beyond: Two Souls” (2013), starring Elliot Page (credited at release as Ellen Page) and Willem Dafoe.
Today, on the 15th anniversary of “Heavy Rain,” he finds himself reminiscing: “In the mid-2000s, when we started production on ‘Heavy Rain,’ I was executive producer on the project. I was also responsible for managing relationships with actors, composers, and so on. In the weeks leading up to the launch, we decided with Sony to send the game to several editorial teams. I remember very clearly sending out those codes, one after another. A few weeks later we started receiving the first reviews. It was a huge relief to realize that the reviewers had understood what we were trying to do. When we hit one million copies after six weeks, I couldn’t help but shed a tear, telling myself, ‘Phew!’ What I’d tell myself fifteen years ago in the tough moments, because there are always some during a game’s development, is this: ‘Don’t worry. It’s going to be okay.’”
Guillaume de Fondaumiere was also appointed chairman of the European Games Developer Federation. During his tenure, the French government and the European Union agreed to introduce a 20% tax credit for video game studios. He not only fought for many years to support the gaming industry but also lobbied for games to be recognized as an art form.
“To me, all games are a form of cultural expression,” he says. “I see no reason why games should be treated differently than any type of literature or any type of movie. I think that more and more video games are becoming artful, and are becoming a form of art that should be recognized next to the others.” In his opinion, games should be placed among institutional forms of art such as architecture, sculpture, the visual arts, music, literature, theater, cinema, and media arts (television, radio, and photography).
The media and the internet often sell a different narrative: games trigger violence, players get addicted to them, and in the end they are merely entertainment for young and immature minds. This stereotype associates games with either shooters or lighthearted children’s entertainment. Such thinking originates in the early years of the gaming industry, which really did target children. The first games were very simple; they stimulated the most basic instinctive behavior and the release of adrenaline, the hormone behind the “3F” response: fear, fight, flight.
But since then, almost everything has changed: the games, the hardware, and the players themselves. The old game enthusiasts have grown up, and they still want to play, but they expect deeper, more artistic and intellectual entertainment. Under the pressure of such demanding customers, games have developed dynamically in many directions, and their palette of emotions has greatly expanded. Today, players can inhabit any character, make their own choices, and duel hundreds of players from all over the world. Production studios strive for authenticity and pay meticulous attention to detail.
Every little part matters, bringing players closer to reality. 3D technology has turned flat images into three-dimensional worlds. Today’s games have stories, rich visual design, and new, advanced forms of interaction with the player. There is also a growing cohort of auteurs in the games industry, artists expressing themselves creatively and individually. The impact of games on mass culture is unquestionable, and their value is growing at a dynamic pace.
So, is it art or not?
A precise, unambiguous, and commonly held definition of art does not exist. However, it is known that art acts through aesthetic, ethical or cultural functions. It affects its audience through watching, listening, creating and reflecting. Without a doubt, the video game industry, which is the fastest growing sector of the modern entertainment industry, is a part of modern culture.
P.S. Roger Ebert, the legendary Pulitzer Prize-winning film critic who shaped the tastes of American film audiences for 46 years, remarked, “as long as there is a great movie unseen or a great book unread, I will continue to be unable to find the time to play video games.” He repeated this statement for eight years, and at one point put it more bluntly: “video games can never be art.” He died in 2013, never revising his assessment.
Medicine—once reactive, treating disease only after symptoms appear—is rapidly evolving into something new. In 2014, the American biologist and biotech pioneer Dr. Leroy Hood offered the clearest vision of tomorrow’s healthcare, describing the future as 4P Medicine: Predictive, Preventive, Personalized, Participatory. This vision is no longer on the horizon—it has arrived.
This shift, driven by biotechnology and digital innovation, marks one of the greatest transformations in the history of healthcare.
For generations, people have relied on forecasts to guide their daily decisions. When we want to know what the weather will be tomorrow, we open an app on our phone, turn on the radio, or watch the evening news. These predictions help us choose the right clothing, plan a trip, or prepare for a storm. Though convenient, these forecasts are not essential to survival. If we don’t know the weather, life goes on.
But the question “What will my health be like tomorrow?” is very different. Unlike the weather, the answer can determine the course of our life. Will we wake up feeling strong and healthy? Will cold symptoms appear overnight? Or will tomorrow bring a diagnosis that changes everything—a chronic condition, a genetic disorder, or a life-threatening illness? Knowing the future of our health is profoundly important, yet for most of human history, this knowledge has been out of reach.
Traditional medicine waits. It waits for pain, for symptoms, for problems that must be solved after they occur. For centuries this was the only option, because doctors lacked tools to understand what was happening inside the human body before illness appeared.
But advances in biotechnology, genetics, and data analytics are rewriting the rules. Modern medicine is beginning to resemble weather forecasting: predictive models built from enormous streams of data can now indicate our health risks long before we feel anything.
The science behind this new capability builds on several breakthroughs:
Genomics, which maps our genetic predispositions
Wearable sensors, which collect real-time data about physiology
Artificial intelligence, which identifies patterns invisible to humans
Behavioral tracking, which captures environmental and lifestyle influences
Together, these tools allow physicians to anticipate illness rather than simply react to it.
Millions of people now wear devices that continuously track heart rate, oxygen levels, activity and movement, sleep stages, blood pressure, blood glucose, and stress signals. These sensors turn our bodies into sources of data, providing information that once required clinical visits. When combined, these data streams create a high-resolution portrait of our health.
The smartphone has quietly become the central device in digital medicine. It stores our medical data, tracks behavior, connects to wearable devices, and hosts apps that analyze symptoms, drug interactions, and lifestyle patterns. For the first time in history, billions of people carry clinical-quality sensors in their pockets.
The human body produces enormous amounts of information each second. Until recently, we lacked the tools to interpret it. AI changes everything. Machine learning models can detect early signs of heart disease before symptoms occur, cancer signatures in bloodwork, and anomalies in breathing and sleep. AI operates like a constant medical companion, analyzing data streams and alerting us to risks long before a crisis emerges.
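As a simple illustration of the principle (not a clinical algorithm), the sketch below flags heart-rate samples that drift far from a rolling baseline, the most basic form of the anomaly screening described above:

```python
# Minimal anomaly screening on a heart-rate stream: flag samples more than
# z_max standard deviations from a rolling baseline. Illustrative only;
# real clinical models are far more sophisticated.
from statistics import mean, stdev

def flag_anomalies(samples: list[float], window: int = 20, z_max: float = 3.0):
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_max:
            anomalies.append((i, samples[i]))
    return anomalies

resting_hr = [62, 64, 63, 61, 65] * 5 + [110]   # sudden spike at the end
print(flag_anomalies(resting_hr))               # -> [(25, 110)]
```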
The discovery of DNA’s structure in the 1950s was one of the most significant moments in science. But only today—thanks to advances in sequencing—are we fully unlocking its potential. This revolution means individuals can now: understand their genetic predisposition to hundreds of conditions, tailor diet, exercise, and lifestyle to their genetic profile, detect carriers of hereditary diseases within families. Genomics is no longer a laboratory dream—it is becoming part of everyday healthcare.
Artificial intelligence is impacting the music industry at a rapid pace, offering tools for impromptu creation, production, and even performance. Along with text, images, and videos, generative AI can also produce music, assist with songwriting and production, and even replicate voices in a matter of seconds. Deep learning models learn the underlying patterns and structures of their training data and use them to produce new output in response to input, which often comes in the form of natural-language prompts.
There are numerous websites built on this technique, such as Suno, AIVA, and Udio, to name just a few, that can generate a complete song with music and lyrics in a chosen style, mood, instrumentation, genre, and vocal style from a brief description and a click of a button. These prompts can be entered directly or created using external tools like ChatGPT, which can generate the lyrics.
For example, the user can enter into the Suno song description field “a song in the style of classical music about a ballet dancer struggling to find success in her career”. In seconds, with a click of the “create” button, the user will hear a complete song that sounds like it was written by a semi-professional songwriter, with fairly decent lyrics.
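Under the hood, these services all expose the same basic pattern: a text prompt goes in, an audio file comes out. The sketch below shows that workflow with an entirely invented endpoint and request fields; Suno, Udio, and AIVA each have their own real interfaces, which this does not describe.

```python
# Hypothetical prompt-to-song client. The URL, fields, and response format
# are invented for illustration; no real service's API is shown here.
import json
import urllib.request

def generate_song(description: str, style: str, api_url: str, api_key: str) -> bytes:
    payload = json.dumps({"prompt": description, "style": style}).encode()
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # e.g. an MP3 byte stream

# Usage with a placeholder endpoint:
# audio = generate_song(
#     "a classical-style song about a ballet dancer struggling to find success",
#     style="classical",
#     api_url="https://example.invalid/v1/generate",
#     api_key="YOUR_KEY",
# )
```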
Now, anyone can create music…well at least generate it.
Will the music-listening public start listening to music produced entirely by A.I. instead of the real music we know and love?
The answer is that many of them already are, without really knowing it.
A.I.’s “The Velvet Sundown”
For example, last June, “The Velvet Sundown” (named after “The Velvet Underground”) came out of nowhere and released its first albums on Amazon Music, Apple Music, Spotify, and other music streaming services: “Floating on Echoes” on June 5, “Dust and Silence” on June 20, and a third on July 14 called “Paper Sun Rebellion”. At its peak, the band had well over 900,000 monthly listeners on Spotify, with its opening track “Dust on the Wind” (not to be confused with the iconic Kansas song) played over 2.7 million times.
However, allegations soon appeared on the band’s social media pages that the band was A.I.-generated. There was no evidence that the band had ever existed: no tours, no interviews, no group website, nor any other clues online. Many listeners commented that The Velvet Sundown’s music was “soulless” and missing the “human element”.
The “band” denied all allegations on its X account, claiming it was “absolutely crazy that so-called ‘journalists’ keep pushing the lazy, baseless theory that the Velvet Sundown is ‘AI-generated’ with zero evidence.… This is not a joke. This is our music, written in long, sweaty nights in a cramped bungalow in California with real instruments, real minds and real soul.”
Just a week later, the apparent hoaxer, using the name Andrew Frelon, admitted that he had impersonated the band on X and falsely claimed to be its spokesperson in interactions with the media, including a phone interview with Rolling Stone magazine. Frelon ultimately admitted that the band was 100% A.I.-generated, with the Suno platform used to create all of the “band’s” music.
“It’s marketing. It’s trolling. People before, they didn’t care about what we did, and now suddenly, we’re talking to Rolling Stone, so it’s like, ‘Is that wrong?’” Frelon questioned.
As for the Spotify numbers since the news about the “band” broke: over 500,000 listeners have removed it from their playlists, a drop of 55% from its peak, and the count continues to fall daily.
The band’s Spotify bio was eventually changed to the following description:
“All characters, stories, music, voices and lyrics are original creations generated with the assistance of artificial intelligence tools employed as creative instruments. Any resemblance to actual places, events or persons – living or deceased – is purely coincidental and unintentional. Not quite human. Not quite machine. The Velvet Sundown lives somewhere in between.”
The real issue is the suddenness of the band’s popularity and the growing concern about the future of art, culture, and authenticity in the era of advanced generative artificial intelligence. It is both astounding and appalling that A.I.-generated music can amass, and defraud, so many listeners in such a short amount of time.
“Personally, I’m interested in art hoaxes,” Frelon continues. “The Leeds 13, a group of art students in the U.K., made, like, fake photos of themselves spending scholarship money at a beach or something like that, and it became a huge scandal. I think that stuff’s really interesting.… We live in a world now where things that are fake have sometimes even more impact than things that are real. And that’s messed up, but that’s the reality that we face now. So it’s like, ‘Should we ignore that reality? Should we ignore these things that kind of exist on a continuum of real versus fake or kind of a blend between the two? Or should we dive into it and just let it be the emerging native language of the internet?’”
In a similar hoax “project” from decades ago, one can’t forget the infamous story of the pop group Milli Vanilli and its producer Frank Farian, who may have pulled off the biggest hoax in popular music history: selling over 7 million albums and 30 million singles, and winning a Grammy for Best New Artist, by deceiving the public with a pair of lip-synching performance artists who did not sing one note on their records.
Even with the regret and humiliation that Farian went through, at least the end product was “real music” that used professional musicians and was produced in a recording studio. That takes real talent.
In a notable moment for the music industry, an A.I.-assisted Beatles song, “Now and Then,” won the Grammy for Best Rock Performance in 2025. It was the first time an A.I.-assisted song had received one.
You can credit director Peter Jackson and his production team, who worked on the 2021 documentary “The Beatles: Get Back”. They developed an A.I. tool (MAL) for the film and discovered they could use it to extract John Lennon’s voice from a demo cassette recorded in 1974, on which Lennon’s piano and vocal had been captured together. Once the parts were isolated from that two-track master, they were combined with the guitar tracks George Harrison had recorded during the band’s 1995 “Now and Then” session, along with new parts that McCartney and Starr re-recorded in 2023, for a true, authentic Beatles recording.
Currently, unless you have access to Peter Jackson’s MAL tool, it appears the only way to tell whether music is A.I.-generated is with software such as Apple’s Logic Pro stem splitter, listening for “artifacts” in the separated tracks, as music producer Rick Beato calls them.
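One crude, do-it-yourself version of this artifact hunting is spectral inspection: many generated or heavily processed tracks show an unnaturally hard high-frequency cutoff. The sketch below, using the open-source librosa library, estimates that cutoff. It is a red-flag heuristic only, not a reliable A.I. detector.

```python
# Rough heuristic: estimate where a track's spectrum dies out. A cutoff far
# below sr/2 on a supposed studio recording is one red flag worth a closer
# listen -- not proof of anything.
import librosa
import numpy as np

def estimate_cutoff_hz(path: str) -> float:
    y, sr = librosa.load(path, sr=None, mono=True)
    spectrum = np.abs(librosa.stft(y)).mean(axis=1)  # average magnitude per bin
    freqs = librosa.fft_frequencies(sr=sr)
    threshold = spectrum.max() * 1e-4                # ~80 dB below the peak
    active = freqs[spectrum > threshold]
    return float(active.max()) if active.size else 0.0

# Usage (hypothetical file path):
# print(estimate_cutoff_hz("suspect_track.wav"))
```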
The music streaming service Deezer also uses its own tool to identify A.I.-generated content, and it declared that 100% of The Velvet Sundown’s tracks were created using A.I. Deezer labels such content on its site, ensuring that A.I.-generated music does not appear in its recommended playlists and that royalties are maximized for human artists.
Unlike generative A.I., there is nothing fake about the Fab Four’s latest song, “Now and Then”… it’s just real music by real musicians, with a little help from A.I. and their friends: producers Peter Jackson, Giles Martin, and George Martin, with all of the original Beatles back together again.
“Imagine” that.
Originally published on https://mlsentertainment.com/2025/08/31/milli-vanilli-or-the-velvet-sundown-discerning-real-music-in-the-a-i-era/
Over the past decade, data centers have served as the digital backbone of modern life—warehouses of servers designed to store information, host applications, and deliver content across the internet. But the rise of large-scale artificial intelligence has fundamentally changed what these facilities need to do. Traditional data centers are evolving into AI factories: highly specialized, compute-intensive environments designed to train and run AI models at unprecedented scale. This transformation is reshaping architecture, operations, energy consumption, and economics across the tech ecosystem.
NVIDIA held its main GTC (GPU Technology Conference) in San Jose, California, from March 17-21, 2025, focusing heavily on transforming data centers into AI factories with the Blackwell Ultra and Rubin architectures, plus AI-powered robotics. Yes, data centers are no longer in fashion; “AI factories” is the phrase now describing the transformation under way in the technology world.
GTC 2025 solidified NVIDIA’s vision for an AI-driven future, emphasizing massive AI factories, a reinvented computing stack, and the practical application of AI across all industries—from healthcare and the life sciences to manufacturing, robotics, autonomous vehicles, computer graphics, and even video games. Jensen Huang found himself reminiscing about the video games that were NVIDIA’s first application when the company started in 1993, and about the journey to where NVIDIA is now.
Key NVIDIA GTC 2025 Themes & Announcements:
AI Factories & Infrastructure: Shift to full-stack accelerated computing, with Blackwell Ultra boosting reasoning workloads and the Rubin architecture offering massive performance gains (900x scale-up FLOPS).
Software & Platforms: Introduction of Nvidia Dynamo, an OS for AI factories, and platforms for connecting millions of GPUs.
Physical AI & Robotics: Reality of AI in robotics, logistics, and manufacturing, with demos of self-driving cars and digital humans.
Industry Focus: Deep dives into healthcare (drug discovery), telecommunications (AI-RAN), and public sector AI.
Geopolitics & Sovereign AI: Initiatives for nations to control their own AI infrastructure.
“It’s becoming a giant industry and it’s crushing it and it’s growing exponentially. After A.I., it’s the fastest growing tech sector…” – Ori Inbar, CEO and Co-Founder of AWE
The Augmented World Expo (AWE) USA 2024 celebrated its 15th anniversary this month, sharing the latest in AR, XR, and spatial computing innovations. It is the longest-running and largest event in the world focused on augmented and virtual reality (XR). The annual three-day event outgrew the Santa Clara Convention Center, its home for the first fourteen years, relocating to the Long Beach Convention Center and attracting more than 6,000 attendees, 300 exhibitors, and 575 speakers.
For the first time in AWE opening-keynote history, CEO and co-founder Ori Inbar entered through the convention floor onto the stage wearing an XR headset (Apple’s Vision Pro), with the mixed-reality views and face filters from his headset shown directly on the screen, entertaining his AWE audience.
LEARNING FROM XR’S PAST TO CREATE THE FUTURE
During his keynote, he summarized the state of XR and gave a brief history lesson, beginning in 1963 with a photo of Hugo Gernsback, the father of science fiction, wearing a headworn device that looks like a transistor radio with two antennas on it, then moving five years later to 1968 and a photo of the first working demo of a head-mounted display, created by Ivan Sutherland along with the first keyboard, mouse, and 2D screen.
Hugo Gernsback, the father of science fiction, wearing a type of headworn device in 1963. Courtesy of AWE.
“If we want spatial computing to one day replace 2D computing we all have to become history buffs”, Inbar stresses to the crowd.
For those unfamiliar with what “spatial computing” means, Wikipedia defines it as follows:
Spatial computing is any of various human–computer interaction techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of constrained to and perceptually behind computer screens.
“For me the essence of spatial computing is very simple, and I quote Ivan [Sutherland]: ‘The image of an object changes in the same way the real object changes with similar motions of the head.’” Inbar continues, “Computing imitating real life… that’s a concept I would bet my career on… because humans are biologically spatial, and so should computing be.”
In conjunction with its annual Auggie Awards, AWE held the inauguration and induction ceremony for the first 101 members of the XR Hall of Fame—a new platform dedicated to honoring the pioneers whose monumental contributions have shaped and propelled the XR industry forward, including Palmer Luckey, the founder of Oculus VR and designer of the Oculus Rift. It also featured an XR museum showcasing over 80 vintage AR and VR devices, including the Gernsback headworn device.
THE FUTURE OF XR INDUSTRY: THE TIME IS NOW
According to ARtillery Intelligence, a research and analyst firm covering the business of spatial computing, the XR market is a $35 billion market this year. By 2027, it is expected to double to $70 billion. “It’s becoming a giant industry and it’s crushing it and it’s growing exponentially. After AI, it’s the fastest growing tech sector,” says Inbar.
“Big Tech are all in a tight race to lead the market and it’s rearranging. Each player is retrenching to its strengths: software, hardware, operating system, infrastructure… doubling down or opening up and partnering, partnering, partnering. This is good for the market and it’s great for customers.”
According to Inbar, AR penetration has been stagnant at around 30%, but active usage is on the rise, and VR adoption is growing where it really matters, with the new generation: one in four teenagers is playing in VR.
Almost every single Fortune 1000 company has adopted XR, and enterprise revenue is now over 70% of the XR market. Fortune 1000 companies made their presence felt at AWE this week, all participating in AWE’s new enterprise program, ironically called “Focus,” with custom expo tours, roundtables, networking, and, according to Inbar, “really getting business done.”
“Investments are picking up. Andreessen Horowitz, probably the most influential VC in the world, is bullish about XR,” Inbar says. A partner at the firm recently posted: “We believe AR/VR is among the most underrated markets today,” and “Quest has a similar sales trajectory to the iPhone… the time is now.”
“On the Quest store, more than 40 developers have earned over $10 million each,” Inbar continued, “and what’s attractive about XR is that the most popular experiences on the Quest store did it with small teams and no funding.” He cites “Gorilla Tag”, “Penguin Paradise”, and “NoClip” as titles built by one or two people with no funding. In addition, games are no longer developed exclusively for XR: many iOS and Android developers are shifting to spatial, with an estimated 2 million-plus XR developers in the world.
As an attendee who has been going to AWE conferences since the very beginning, I never had any doubt that VR, AR, MR, and XR had a future. I’m really looking forward to the next fifteen years of what XR has to offer.
It’s been one incredible year for Bay Area first-time feature filmmaker, writer-director, Sean Wang.
Last January, his first feature-length film, Dìdi (meaning “Younger Brother”), had its world premiere at the 2024 Sundance Film Festival, where it won the U.S. Dramatic Audience Award and a Special Jury Prize for Best Ensemble Cast. In the same week, his documentary short, Nǎi Nai & Wài Pó, which had premiered at South by Southwest 2023 and won both the Grand Jury Award and the Audience Award there, was nominated for Best Documentary Short Film at the 96th Academy Awards.
Dìdi was also selected as the Opening Night film at the San Francisco International Film Festival last April.
“It’s been a crazy few months. We had our hometown premiere of our movie… such a love letter to the Bay Area,” Wang exclaimed the following day at the SFFILM Lounge to a small crowd after the Dìdi premiere. “I’m still kind of floating a little bit on cloud nine, but it made me think about just the seeds of all of this.”
Attending the Opening Night screening of Dìdi, Wang wore a sporty black blazer over a white t-shirt proudly displaying Joan Chen’s name. The actress plays the loving immigrant mother of Chris, a thirteen-year-old boy who makes his way through a series of firsts that his family can’t teach him (how to skate, how to flirt, and eventually how to love your mom) in the summer before his freshman year of high school.
Having just turned 30, Wang rose to success as a filmmaker quite rapidly. He was raised in Fremont, California, where his film was shot, a city that seems to be becoming a popular location for indie filmmakers these days: Dìdi is the second movie released in the last two years to feature the East Bay’s fourth-largest city, after last year’s Fremont.
“It was all on location, all in places that felt so hyper familiar to me,” Wang says. It fed not only into the personal story he wanted to tell, but also into his hopes to cement Fremont into the burgeoning contemporary canon of Bay Area films, from San Francisco’s Medicine for Melancholy and The Last Black Man in San Francisco to Oakland’s Sorry to Bother You and Blindspotting.
“They capture their cities, and the locations are so vivid and colorful and vibrant, and I thought, there is a story to be told in Fremont,” he says. “This story is maybe not as loud, but it’s just as emotional. I wanted to do something for my corner of the Bay Area.”
It’s also a love letter to all the coming-of-age films, and the directors behind them, that inspired him over the years, such as “The 400 Blows”, “Fruitvale Station”, “Stand by Me”, “Short Term 12”, and “Lady Bird”.
Dìdi is a semi-autobiographical film, and it opens with a scene that all mischievous teenage boys can relate to: having the time of their lives igniting neighborhood mailboxes and fleeing before getting caught. This sets the tone for Wang’s personal film.
Dìdi is set in the summer of 2008 in Fremont, where Taiwanese-American Chris Wang (Izaac Wang) lives in an all-female, fatherless household, since dad is working abroad. All except his caring mother are somewhat dysfunctional, with language and generational barriers that prevent them from fully understanding each other, especially with grandma at the dinner table.
With a cast of professional and first-time actors, the casting director deserves special recognition, as the ensemble was near perfect. Izaac Wang (Raya and the Last Dragon), who plays the lead as Chris Wang, seemed so natural and convincing that you would never guess he was acting, especially when working alongside veteran global icon Joan Chen as his mother, one of the most respected actresses of Asian cinema, who is usually cast in flamboyant and dramatic roles, unlike the one she plays here.
“It is a character that resonates very deeply with me,” Chen says. “I am an immigrant mother who brought up two American children who had extremely tumultuous teen years… adolescence. I haven’t played a character like Chungsing (Chris’s mother) before; so gentle… warm…”
However, it was Chang Li Hua, Wang’s real-life 86-year-old grandmother (of Nǎi Nai & Wài Pó), who may have stolen the show, in the scene where Chris and his older sister (Shirley Chen) get into an intense shouting match at the family dinner table, triggering a side argument in which the judgmental, Chinese-speaking grandmother condescends to the mother over how to properly raise kids. It was probably the film’s most hilarious moment.
“My grandma, who had never been in a narrative film before, could act next to Joan and have it feel like the same movie,” says Wang. “They share their most intense scenes together, and for a lot of actors of her caliber, it could be like, What is this movie with a bunch of first-time actors who have never acted before? This is beneath me. It was the total opposite. It was such a joy, such a dream. She would stay on set and do origami with my family.”
Director Sean Wang and Actress Joan Chen at a Q&A screening of Dìdi in San Francisco, July 29th, 2024. Photo by Marcus Siu
Wang’s path to becoming a filmmaker was atypical. As a teenager, he would shoot footage of his friends jumping off trees, then edit it, add music, and post it on YouTube. Wang confessed, “I didn’t know that was filmmaking until years and years and years later, and it all traced back to skating for me. I fell in love with skating. It was something I truly have such a pure love for. It never left. I think that skating just gave me an ethos and introduced me to cameras and photography through making skate videos.”
During those early years, Wang connected with the skating videos directed by Spike Jonze.
“He had made a skate video that was really emotional,” Wang reflects. “This is weird… why am I crying watching a skate video?” he remembers asking himself, before answering: “Because it’s Spike. And that was the seed of everything! I don’t know what this is, but I could do this for 24 hours a day.” From that moment on, Wang knew he wanted to be a filmmaker.
He started making wedding videos and random commercials for local companies, earning decent money while attending community college. When he went to USC, he realized that the school’s curriculum forced students to choose a specific field within film school, but Wang wanted to be knowledgeable in every facet of filmmaking and already knew he wanted to write and direct his own films.
He also realized early on that he didn’t want to waste his time writing scripts that would require huge budgets and would probably never be made. Wang recalls doing a Google search for “movies made for under $1 million” and simply watching movies that were small in budget, such as Barry Jenkins’ “Medicine for Melancholy” and David Gordon Green’s “George Washington”.
“Oh! What are these feature films that are small independent films that look like they weren’t $200 million blockbusters? Maybe there’s a path through this side of things. They made these for so little – but they’re so amazing and just shoe box in production but not in emotion – and maybe I can go this route.”
Both of his films, Nǎi Nai & Wài Pó and Dìdi have something in common. They are packed with heart, honesty and emotion but on a shoe-string budget.
“I think those ideas, both of these films are so small and contained and somehow they ended up with a worldwide theatrical distribution plan”, Wang says. “The short got nominated for an Oscar. That must have been the cheapest nominated short of all time. We shot it with a crew of three people, so the fact those two were indies; so small and personal, all of a sudden having this giant platform and having it within months of one another, it just sort of feels like so unexpected. That’s not why we made it but I’m certainly thrilled that it happened.”
After graduating from USC film school, Wang took a one-year residency at Google’s Creative Lab in 2016. His objective was to figure out his next step: how to make a feature-length movie without the obstacles that would normally burden a production. Wang was thinking at the time, “I do want to make a feature one day, and I heard all these stories saying that it takes seven to eight years to get overnight success, and I was like, man, if I’m gonna make a feature I should start now.”
From the looks of it with his first full-length feature, Dìdi, Sean Wang’s meteoric rise came right on schedule.
The film opens in limited release Friday and goes worldwide after that.