Artificial intelligence is no longer just another layer of software. It is rapidly becoming the foundation of modern computing itself.
That message was unmistakable on Wednesday night as Lenovo hosted its annual Tech World event at the Sphere, Las Vegas’s landmark venue opened in September 2023. The scale of the setting matched the ambition of the message: the Sphere stands 366 feet high and 516 feet wide, spans 875,000 square feet, seats 18,600 people, and features a sound system of 167,000 individually amplified loudspeakers. Built over five years at a cost of $2.3 billion, it has quickly become one of the most technologically advanced entertainment venues in the world.
Against this backdrop, Lenovo Chairman and CEO Yuanqing Yang welcomed an extraordinary lineup of global technology leaders and partners to the stage, including Jensen Huang, founder and CEO of NVIDIA; Lip-Bu Tan, CEO of Intel; Cristiano Amon, president and CEO of Qualcomm; Dr. Lisa Su, chair and CEO of AMD; Yusuf Mehdi, executive vice president and consumer CMO of Microsoft; Angelina Gomez, Motorola’s head of software marketing; Jennifer Koester, president and CEO of Sphere; and Gianni Infantino, president of FIFA.
The event featured multiple product announcements and partnership reveals, capped by Lenovo’s unveiling of its new personal AI platform, Qira—a unified AI system designed to deliver seamless, real-time intelligence across devices and platforms. But beyond individual products, the message from global tech leaders was clear and consistent: AI is no longer an application layer—it is becoming the operating system of the future.
From Classical Computing to AI-Native Systems
For decades, enterprise computing was built around applications written for CPUs and deployed in conventional data centers. That model is now rapidly being replaced by AI-native systems, where software is designed around large language models and executed on GPU-based, accelerated infrastructure.
Industry leaders described this transition as nothing less than a reinvention of the global IT landscape. Trillions of dollars in legacy systems, they said, will need to be modernized to support AI-driven workloads.
“This is not just a platform change,” one executive noted. “It’s a reinvention of the entire computing stack.”
AI Factories and Accelerated Infrastructure
At the core of this transformation is the emergence of what companies now call AI factories—purpose-built data centers designed specifically for AI training, inference, and deployment.
Unlike traditional data centers, these facilities are optimized for massive parallel processing, high-throughput workloads, and large-scale model operations. NVIDIA outlined multiple generations of accelerated computing platforms driving this shift, including Hopper, Blackwell, and the newly announced Vera Rubin architecture.
Each generation delivers exponential performance gains while lowering the cost of AI computation, making enterprise-scale AI deployment increasingly economically viable.
Agentic AI and the Explosion of Compute Demand
Another dominant theme was the rise of agentic AI—systems designed not merely to generate responses, but to reason, plan, reflect, and act autonomously.
These systems generate so-called “thinking tokens,” representing internal reasoning, decision-making, and multi-step planning processes. As AI models grow in complexity, computing demands are expanding rapidly. Executives noted that model sizes are already moving from hundreds of billions of parameters into the multi-trillion range, driving exponential growth in infrastructure requirements.
Hybrid AI: Intelligence Across Devices and Cloud
Lenovo outlined a hybrid AI architecture that distributes intelligence across multiple layers: personal devices, edge systems, private cloud environments, and large-scale AI factories.
In this model, smaller AI systems operate locally on devices, while larger models run in cloud infrastructure. Tasks are dynamically routed based on latency, performance, cost, and security requirements. The goal is to make AI more accessible while reducing complexity for both users and organizations.
Partnerships Powering Global Deployment
Strategic partnerships were presented as critical to scaling AI infrastructure worldwide.
Lenovo and NVIDIA announced an expanded collaboration to deploy AI factories globally, combining NVIDIA’s accelerated computing platforms with Lenovo’s manufacturing scale, cooling systems, and global services infrastructure. The objective: reduce deployment complexity and dramatically shorten the time from installation to operational AI use.
Intel and Lenovo also reaffirmed their long-standing partnership, focusing on AI-powered PCs, enterprise systems, and data center platforms designed to support AI workloads across devices and cloud environments.
From Intelligence to Action
Speakers emphasized that AI is moving beyond analytics and insight generation toward real-time action and automation.
Next-generation AI systems are increasingly being designed to execute workflows, manage operations, coordinate tasks, and operate autonomously across enterprise systems. This shift is expected to reshape industries including manufacturing, logistics, healthcare, entertainment, and even global sporting events.
A New Computing Era
Leaders at CES 2026 framed the current transition as one of the largest technological shifts in modern history—comparable to the rise of the internet or mobile computing.
In this new paradigm, AI systems replace traditional application frameworks, and the result is the emergence of an AI-native digital ecosystem.
Looking Ahead
As AI infrastructure, models, and systems continue to evolve, the boundaries between software, hardware, and intelligence are rapidly dissolving.
What is emerging is not simply a new generation of technology, but a new foundation for digital society—one in which intelligence itself becomes infrastructure.
At CES 2026, that future no longer appeared theoretical.
It was already being built.
The evening concluded with a live performance by Gwen Stefani, showcasing the Sphere’s immersive audio-visual capabilities and underscoring the fusion of technology, entertainment, and AI-driven experiences.
At its press conference, Bosch executives Tanja Rückert, a member of the board of management, and Paul Thomas, president of Bosch in North America, unveiled a sweeping vision of how artificial intelligence and software are reshaping everyday life—from verifying product authenticity to redefining cooking, mobility, and industrial productivity. Drawing on its deep expertise across both the physical and digital worlds, Bosch demonstrated how intelligent technology can reduce complexity, eliminate stress, and empower people across skill levels.
Fighting Counterfeiting With AI
Bosch opened by introducing a revolutionary AI-powered authenticity verification technology. Using live video analysis, the system can quickly and reliably determine whether an item—such as sneakers, car parts, or artwork—is genuine. By bridging the physical and digital divide, this innovation has the potential to significantly disrupt the global counterfeiting industry and strengthen consumer trust.
AI That Serves People at Home
At the core of Bosch’s philosophy is a simple principle: technology should serve people, not the other way around. Nowhere is this more evident than in the kitchen. Bosch showcased how generative AI combined with advanced sensors can elevate cooking for everyone—from home cooks to professional chefs.
Celebrity chef Marcel Vigneron demonstrated Bosch Cook AI, a next-generation cooking assistant that builds on the brand’s AutoChef induction technology. By analyzing ingredients through images, understanding user preferences, and monitoring food in real time with Bluetooth temperature probes, the system precisely controls heat and cooking time to deliver consistent, high-quality results. The goal is not to replace human creativity, but to remove uncertainty and stress from cooking.
Appliances That Improve Over Time
Bosch emphasized that modern appliances no longer need to be replaced to gain new features. Through connectivity and over-the-air software updates, existing Bosch products can evolve long after purchase. Recent updates have added new cooking functions, such as air fry and advanced heating modes, to connected ovens at no additional cost—demonstrating Bosch’s commitment to long-term value.
Software-Defined Mobility
Beyond the home, Bosch highlighted how its software and hardware integration is transforming mobility. Its vehicle motion management technology allows cars to gain new driving modes and personalization features even after leaving the dealership. By intelligently coordinating braking, steering, powertrain, and suspension systems, Bosch’s software can improve comfort, safety, and driving dynamics.
One major advancement is six-degrees-of-freedom vehicle control, which helps significantly reduce motion sickness—an issue affecting a large portion of adults and a major barrier to autonomous driving adoption.
Building the Future of Software-Defined Vehicles
Bosch is playing a key role in the shift toward software-defined vehicles, a market projected to exceed one trillion dollars by the end of the decade. The company is developing open, high-performance middleware platforms that act as the “nervous system” of modern vehicles, simplifying development, lowering costs, improving security, and accelerating innovation for automakers worldwide.
Bosch also announced progress in key technologies such as steer-by-wire and brake-by-wire systems, essential components for future automated and autonomous vehicles. These systems are already moving into large-scale production with major global automakers.
AI That Sees, Hears, and Understands
Artificial intelligence is central to Bosch’s automotive roadmap. The company showcased AI-powered cognitive vehicles capable of both seeing and hearing through advanced language models. These systems enable natural conversations with vehicles, intelligent perception of surroundings, automated parking, and even real-time meeting assistance while driving or riding as a passenger.
Bosch expects AI-based solutions for assisted and automated driving to become a multi-billion-euro business by 2035.
AI for Industry and Productivity
Bosch also demonstrated how its AI expertise extends into industrial applications. By partnering with technology leaders such as Microsoft, Bosch aims to apply generative AI to manufacturing environments, boosting productivity, efficiency, and flexibility in increasingly complex industrial operations.
A Unified Vision
Across all domains—consumer goods, mobility, and industry—Bosch’s message was consistent: its strength lies in combining hardware, software, and AI into complete systems and ecosystems. Rather than offering isolated components, Bosch is building intelligent platforms that deliver real-world benefits, improve quality of life, and prepare society for a more connected, automated future.
The LEGO Group introduced a major new initiative at CES with the launch of LEGO Smart Play, a platform designed to merge physical LEGO bricks with embedded digital intelligence—without screens, apps, or traditional digital interfaces.
The company described the announcement as its most significant innovation in decades, marking a new chapter in LEGO’s 70-year history of physical play. Executives positioned Smart Play as a foundational platform rather than a single product, signaling a long-term strategy to integrate technology into hands-on creativity while preserving LEGO’s core identity.
A Modern Evolution of a Classic System
LEGO leaders opened the presentation by highlighting the enduring success of the LEGO system of play, which has remained structurally compatible since the interlocking brick was introduced in 1958. With more than 20,000 different elements that fit together, LEGO has maintained a consistent design philosophy centered on creativity, imagination, and open-ended building.
At the same time, the company acknowledged a changing cultural landscape. Today’s children grow up immersed in digital environments, prompting LEGO to explore how interactive technology could be integrated into physical play without replacing it.
Executives said the challenge was to introduce digital intelligence in a way that enhances creativity rather than shifting play toward screens or software-driven experiences.
Technology Designed to Stay Invisible
LEGO Smart Play is built around three core principles: seamless integration of technology into physical play, openness across the LEGO ecosystem, and simplicity in user experience.
Rather than relying on apps or digital interfaces, the system is designed to operate through natural physical interaction. There are no screens, no visible controls, and no learning curve for children. The technology remains embedded in the bricks, responding to how they are used rather than directing how they should be played with.
Inside the Smart Brick
At the center of the platform is the LEGO Smart Brick, a standard-looking brick that contains sensors, processing capabilities, and wireless communication technology. Despite its embedded intelligence, the brick has no screen or power button and functions automatically as part of a build.
According to LEGO, the Smart Brick can detect movement, sound, light, color, distance, orientation, and direction. It can recognize specially designed Smart Tags that define behaviors, identify Smart Minifigures within a model, and communicate wirelessly with other Smart Bricks.
This allows LEGO creations to respond dynamically to physical interaction. Vehicles can detect drivers and track movement, structures can respond to proximity, and entire play environments can interact in real time through decentralized networks of connected bricks.
Modular Intelligence Across Builds
A single Smart Brick can be reused across multiple builds and play scenarios. By changing tags and minifigures, the same brick can take on different roles, functioning as an engine, character, creature, or interactive object depending on how it is configured.
During demonstrations, LEGO showed how one Smart Brick could animate cars, aircraft, animals, and games, generating light, sound, and behavioral responses based solely on physical movement and positioning.
Spatially Aware Play
One of the platform’s distinguishing features is spatial awareness. Without relying on cameras or external tracking systems, Smart Bricks are able to detect distance, orientation, and directional relationships between builds.
This enables interactive play scenarios such as tracking race outcomes, triggering responses based on proximity, and creating game mechanics that operate in three-dimensional physical space.
A Long-Term Platform for Storytelling
LEGO executives emphasized that Smart Play is designed as a scalable platform rather than a single product line. The system is intended to support long-term storytelling, re-playability, collaboration, and evolving narratives across different LEGO themes.
To demonstrate the platform’s potential, LEGO announced its first Smart Play partnership with LEGO Star Wars.
LEGO Star Wars Becomes Interactive
In collaboration with Disney and Lucasfilm, LEGO revealed that Smart Play will be integrated into upcoming LEGO Star Wars sets. After more than 25 years of partnership, the franchise will move beyond static builds to interactive physical play environments.
Characters, vehicles, and locations will respond to movement, proximity, and interaction, allowing children to create dynamic Star Wars experiences without screens or digital displays.
The first LEGO Star Wars Smart Play sets are scheduled to launch in March.
Redefining Physical Play
LEGO Smart Play represents a shift in how physical toys can incorporate advanced technology. Rather than directing behavior through software, the system is designed to respond to play itself, keeping creative control in the hands of children.
Company leaders described the platform as a foundation for future development, positioning Smart Play as the beginning of a broader transformation in physical play design.
At CES, LEGO framed the initiative not as a move toward digital toys, but as a reimagining of physical creativity—one where technology remains invisible, and imagination remains central.
Imec closed out 2025 with significant announcements that underscore its growing influence in semiconductor R&D and AI systems design. Between a high-profile debut at Super Computing 2025 and a strong showing at the International Electron Devices Meeting (IEDM), the Belgium-based research center marked the end of the year with momentum across multiple technology fronts.
In November at Super Computing 2025, one of the world’s largest gatherings for high-performance computing, imec unveiled imec.kelis, an analytical performance modeling tool aimed at reshaping how AI datacenters are planned and optimized.
Imec positioned the platform as a response to pressures facing datacenter designers, as AI workloads expand into the trillions of parameters and energy demands climb. According to the organization, imec.kelis offers a faster and more transparent alternative to conventional simulation tools, which are often slow or limited in scope.
Early adopters have begun exploring the platform, a first sign of commercial interest.
“Imec.kelis is more than a simulator—it’s a strategic enabler for the next generation of AI infrastructure,” said Axel Nackaerts, system scaling lead.
Imec.kelis provides an end-to-end analytical framework that evaluates performance across compute, communication, and memory subsystems. The tool is optimized for large language model (LLM) training and inference, offering predictions validated on widely used systems such as NVIDIA’s A100 and H100 GPUs.
The platform draws heavily on imec’s longstanding expertise in hardware-software co-design, system-level modeling, and semiconductor technology road mapping. Imec said the goal is to give system architects the ability to make better-informed decisions at datacenter scale, where design choices directly impact cost, efficiency, and sustainability.
At the beginning of December 2025, imec continued to demonstrate research leadership at the 71st International Electron Devices Meeting (IEDM), presenting 21 papers spanning advanced logic, memory, quantum computing, imaging, and bioelectronics.
With a high-visibility launch in HPC computing and a deep bench of contributions at IEDM, imec concludes 2025 with strengthened leadership in both advanced semiconductor research and the rapidly expanding AI datacenter ecosystem. The organization is positioning itself as a critical contributor to the technologies that will define next-generation computing infrastructure.
Guillaume de Fondaumiere is the co-CEO of Quantic Dream, a studio based in France that developed games such as “Fahrenheit” (2005), “Heavy Rain” (2010), and, in collaboration with Sony Computer Entertainment, the PS3-exclusive title “Beyond: Two Souls” (2013), starring actors Ellen Page and Willem Dafoe.
Today, on the 15th anniversary of “Heavy Rain,” he finds himself reminiscing: “In the mid-2000s, when we started production on ‘Heavy Rain,’ I was executive producer on the project. I was also responsible for managing relationships with actors, composers, and so on. In the weeks leading up to the launch, we decided with Sony to send the game to several editorial teams. I remember very clearly sending out those codes, one after another. And a few weeks later we started receiving the first reviews. It was a huge relief to realize that the reviewers had understood what we were trying to do. When we hit one million copies after six weeks, I couldn’t help but shed a tear, telling myself, ‘Phew!’ What I’d tell myself fifteen years ago in the tough moments, because there are always some during a game’s development, is this: ‘Don’t worry. It’s going to be okay.’”
Guillaume de Fondaumiere also served as chairman of the European Games Developer Federation. During his tenure, the French government and the European Union agreed to introduce a 20% tax credit for video game studios. He not only fought for many years to support the gaming industry, but also lobbied for games to be recognized as an art form.
“To me, all games are a form of cultural expression,” he says. “I see no reason why games should be treated differently than any type of literature or any type of movie. I think that more and more video games are becoming artful, and are becoming a form of art that should be recognized next to the others.” In his opinion, games should be placed among institutional forms of art such as architecture, sculpture, visual arts, music, literature, theater, cinema, and media arts (television, radio, and photography).
The media and the internet often sell a different story: games trigger violence, players get addicted to them, and in the end they are merely entertainment for young, immature minds. The stereotype associates games with either shooting or lighthearted entertainment for children. Such thinking originates in the early years of the gaming industry, which did in fact target children. The first games were very simple; they played to the most basic instinctive behaviors and the release of adrenaline, the hormone of the so-called 3F response: fear, fight, flight.
But since then, almost everything has changed: the games, the hardware, and the players themselves. Today the old game enthusiasts have grown up; they still want to play, but they expect deeper, more artistic, and more intellectual entertainment. These demanding customers have driven a dynamic, multidirectional evolution of games, and the palette of emotions they offer has greatly expanded. Today, players can inhabit any character, make their own choices, and duel hundreds of players from all over the world. Production studios strive for authenticity and pay meticulous attention to detail.
Every small element matters and brings players closer to reality. 3D technology has turned flat images into three-dimensional worlds. Today’s games have stories, rich visual graphics, and new, advanced forms of interaction with the player. Authorship is also proliferating in the games industry, with artists expressing themselves creatively and individually. The impact of games on mass culture is unquestionable, and their value is growing at a dynamic pace.
So, is it art or not?
A precise, unambiguous, and commonly held definition of art does not exist. However, it is known that art acts through aesthetic, ethical or cultural functions. It affects its audience through watching, listening, creating and reflecting. Without a doubt, the video game industry, which is the fastest growing sector of the modern entertainment industry, is a part of modern culture.
P.S. Roger Ebert, the legendary Pulitzer Prize-winning film critic who shaped the tastes of American film audiences for 46 years, remarked, “as long as there is a great movie unseen or a great book unread, I will continue to be unable to find the time to play video games.” He repeated this position for eight years, and at one point put it even more bluntly: “video games can never be art.” He died in 2013, never having revised his assessment.
Artificial intelligence is impacting the music industry at a rapid pace, offering tools for impromptu creation, production, and even performance. Along with text, images, and videos, generative AI can also produce music, assist with songwriting and production, and even replicate voices in a matter of seconds. Deep learning models learn the underlying patterns and structures of their training data, then use them to produce new output from an input, which often comes in the form of a natural-language prompt.
There are numerous websites that use this technique, such as Suno, AIVA, and Udio, to name just a few, which can generate a complete song with music and lyrics in a chosen style, mood, instrumentation, genre, and vocal style from just a brief description and a click of a button. These prompts can be entered directly or created using external tools like ChatGPT, which can generate the lyrics.
For example, a user can enter into the Suno song description field “a song in the style of classical music about a ballet dancer struggling to find success in her career.” Seconds after clicking the “create” button, the user will hear a complete song that sounds as if it were written by a semi-professional songwriter, with fairly decent lyrics.
Now, anyone can create music…well at least generate it.
Will the music-listening public start listening to music produced entirely by A.I. instead of the real music we know and love?
The answer is many of them already are without really knowing it.
A.I.’s “The Velvet Sundown”
For example, last June, “The Velvet Sundown” (named after “The Velvet Underground”) came out of nowhere and released its first albums on Amazon Music, Apple Music, Spotify, and other music streaming services: “Floating on Echoes” on June 5, “Dust and Silence” on June 20, and a third, “Paper Sun Rebellion,” on July 14. At their peak, they had well over 900,000 monthly listeners on Spotify, with their opening track “Dust on the Wind” (not to be confused with the iconic Kansas song) played over 2.7 million times.
However, allegations soon appeared on the band’s social media pages that the band was A.I. generated. There was no evidence that the band had ever existed: no tours, no interviews, no group website, nor any other clues online. Many listeners even commented that The Velvet Sundown’s music was “soulless” and missing the “human element.”
The “band” denied all allegations on its X account, claiming it was “absolutely crazy that so-called ‘journalists’ keep pushing the lazy, baseless theory that the Velvet Sundown is ‘AI-generated’ with zero evidence.… This is not a joke. This is our music, written in long, sweaty nights in a cramped bungalow in California with real instruments, real minds and real soul.”
Just a week later, the apparent hoaxer, using the name Andrew Frelon, admitted that he had impersonated the band on X and falsely claimed to be its spokesperson in interactions with the media, including a phone interview with Rolling Stone magazine. Frelon finally admitted that the band was 100% A.I. generated, using the Suno platform for all of the “band’s” music.
“It’s marketing. It’s trolling. People before, they didn’t care about what we did, and now suddenly, we’re talking to Rolling Stone, so it’s like, ‘Is that wrong?’” Frelon questioned.
As for the Spotify numbers: since the news about the “band” broke, over 500,000 listeners have removed it from their playlists, a drop of 55% from its peak, and the count continues to fall daily.
The band’s Spotify bio eventually changed its description:
“All characters, stories, music, voices and lyrics are original creations generated with the assistance of artificial intelligence tools employed as creative instruments. Any resemblance to actual places, events or persons – living or deceased – is purely coincidental and unintentional. Not quite human. Not quite machine. The Velvet Sundown lives somewhere in between.”
The real issue is the suddenness of its rise in popularity and the growing concern about the future of art, culture, and authenticity in the era of advanced generative artificial intelligence. It is both astounding and appalling that A.I.-made music can amass, and defraud, so many listeners in such a short amount of time.
“Personally, I’m interested in art hoaxes,” Frelon continues. “The Leeds 13, a group of art students in the U.K., made, like, fake photos of themselves spending scholarship money at a beach or something like that, and it became a huge scandal. I think that stuff’s really interesting.… We live in a world now where things that are fake have sometimes even more impact than things that are real. And that’s messed up, but that’s the reality that we face now. So it’s like, ‘Should we ignore that reality? Should we ignore these things that kind of exist on a continuum of real versus fake or kind of a blend between the two? Or should we dive into it and just let it be the emerging native language of the internet?’”
In a similar hoax “project” from decades ago, one can’t forget the infamous story of the pop group Milli Vanilli and its producer Frank Farian, who may have pulled off the biggest hoax in popular music history: selling over 7 million albums and 30 million singles, and winning a Grammy for Best New Artist, by deceiving the public with a pair of lip-synching performance artists who did not sing one note on their records.
Yet even with the regret and humiliation that followed, at least Milli Vanilli’s end product was “real music,” performed by professional musicians and produced in a recording studio. That takes real talent.
In a notable moment for the music industry, an A.I.-assisted Beatles song, “Now and Then,” won the Grammy for Best Rock Performance in 2025. It was the first time an A.I.-assisted song received one.
You can credit director Peter Jackson and his production team, who worked on the 2021 documentary “The Beatles: Get Back.” They developed an A.I. tool (MAL) for the film and discovered they could use it to extract John Lennon’s voice from a demo cassette, recorded in 1974, on which Lennon’s piano and vocal were mixed together on a two-track master. Once the vocal was isolated, it was combined with the guitar tracks George Harrison had recorded with McCartney and Starr during the band’s 1995 “Now and Then” sessions; McCartney and Starr then re-recorded their own parts in 2023, resulting in a truly authentic Beatles recording.
Currently, unless you have access to Peter Jackson’s MAL tool, it appears the only way to tell whether music is A.I. generated is with software such as Apple’s Logic Pro stem splitter, listening for “artifacts” in the separated tracks, as music producer Rick Beato calls them.
The music streaming service Deezer also uses its own tool to identify AI-generated content, and it declared that 100% of The Velvet Sundown’s tracks were created using A.I. Deezer labels such content on its site, keeps AI-generated music off its recommended playlists, and ensures that royalties are maximized for human artists.
Unlike generative A.I., there is nothing fake about the Fab Four’s latest song, “Now and Then.” It’s just real music from real musicians, with a little help from A.I. and their friends: producers Peter Jackson, Giles Martin, and George Martin, with all of the original Beatles back together again.
“Imagine” that.
Originally published on https://mlsentertainment.com/2025/08/31/milli-vanilli-or-the-velvet-sundown-discerning-real-music-in-the-a-i-era/