At the CES 2026 keynote held at the Fontainebleau in Las Vegas, Jensen Huang, Chief Executive Officer of Nvidia Corporation (NASDAQ: NVDA), laid out a detailed outlook on the next phase of artificial intelligence (AI) computing. Framing the moment as a significant evolutionary phase for the industry, Huang announced that the company's newest AI supercomputing infrastructure, Vera Rubin, is now in full production. The announcement marks a notable milestone for Nvidia as it positions itself to lead the next cycle of AI advancements.
Huang opened the presentation by framing the current computing landscape as undergoing a rare, once-in-a-decade shift, one he called unique for its dual nature: applications are increasingly centered directly on AI capabilities, while the process of creating software is itself being fundamentally reimagined. This combination, according to Huang, sets the stage for an unprecedented transformation in how technology platforms will operate.
Huang took the stage in his characteristic leather jacket, this time with a noticeably glossier finish, and greeted the audience with New Year's wishes before turning to Nvidia's role in scaling AI. He portrayed the company's contribution as pivotal to pushing AI into more advanced, agentic forms: systems capable of planning, reasoning, and acting independently over extended periods. He spoke of teaching these systems not only human language but also the underlying laws governing the universe, reflecting a broader aspiration for AI's integration with real-world physics and structure.
Tracing the historical evolution of AI, Huang referenced early neural networks through to contemporary transformer architectures and large language models (LLMs), while emphasizing that the next stage will expand far beyond text-based AI. He introduced the concept of "physical AI," which involves training AI systems to understand the laws of physics so they can interact with the physical world, an approach Nvidia is pioneering.
A significant theme in Huang's address was Nvidia's strategic emphasis on open AI ecosystems. He stated that while proprietary frontier AI models have historically led the field, open models have been catching up rapidly and are currently only about six months behind. Huang noted that roughly 80% of AI startups are building solutions based on open models. Moreover, a large portion of AI activity on developer platforms relies on these open-source systems. To support this ecosystem, Nvidia is releasing not just AI models but also the associated datasets and comprehensive lifecycle tooling needed to train, evaluate, and deploy these models effectively.
The keynote also featured Nvidia's advancements in AI-powered physical modeling, particularly through its Cosmos world foundation model, which generates realistic simulations and synthetic data for training robots and autonomous vehicles. Huang mentioned that Nvidia uses Cosmos internally to develop its own self-driving technology, underscoring the platform's practical application. Additionally, Nvidia introduced Alpamayo, an open-source AI model for reasoning and decision-making in autonomous driving; it enables vehicles to operate competently with limited exposure to real-world data, adapting to novel and unpredictable scenarios with greater reliability.
At one point during the presentation, Star Wars BDX droids, autonomous robots powered by Nvidia's Cosmos platform, made an onstage appearance, and Huang interacted with them to illustrate the tangible capabilities physical AI now offers. Further expanding the industrial scope of AI, Nvidia announced a collaboration with Siemens to leverage synthetic data generated from digital twins of factories. The partnership aims to train and enhance the next generation of manufacturing robotics, highlighting AI's growing role in transforming industrial automation.
A centerpiece of the event was Huang's confirmation that the Vera Rubin platform, Nvidia's next-generation AI supercomputing system and successor to the record-setting Blackwell architecture, has entered full production. Built around what Huang described as extreme co-design, the six-chip platform delivers performance improvements of up to five times over Blackwell, while also improving power efficiency, memory bandwidth, and interconnect speeds to address critical bottlenecks in AI workloads.
Technically, Vera Rubin integrates advanced graphics processing units (GPUs), custom central processing units (CPUs), high-speed networking components, and full-stack encryption for enhanced security. The system features Nvidia's ConnectX-9 Spectrum-X SuperNIC for rapid data movement, paired with an NVLink 6 Switch that enables simultaneous, high-bandwidth communication between GPUs. Notably, the platform is designed for rapid assembly and can be put together in roughly five minutes, down from the two-hour assembly time of previous systems. Cooling is handled by a liquid system that uses warm water at around 45°C, an approach Huang highlighted as energy-efficient despite sounding counterintuitive.
The name Vera Rubin honors the famed astronomer whose research into galaxy rotation rates contributed to the discovery of dark matter. Huang referenced her work at the outset of the Vera Rubin presentation, underscoring the symbolic connection between Nvidia’s AI platform and transformative scientific observation.
On the market front, Nvidia's shares slipped slightly on the trading day following the keynote, falling 0.39% in the regular session and a further 0.069% in after-hours trading. Despite this, according to Benzinga's Edge Stock Rankings, Nvidia maintains strong growth and quality metrics, ranking in the 94th percentile for Growth and the 97th percentile for Quality among peer companies.