Nvidia AI Chips Outpacing Moore's Law

At CES 2025, Nvidia CEO Jensen Huang made a bold claim: Nvidia's AI chips are improving at a rate that outpaces Moore's Law. The claim not only challenges the framework that has guided semiconductor development for decades, but also hints at an unprecedented paradigm shift in computing and artificial intelligence.

The Evolution and Challenges of Moore's Law

Since Intel co-founder Gordon Moore proposed Moore's Law in 1965, the semiconductor industry has experienced decades of rapid growth. Moore's Law predicts that the number of transistors on a chip doubles roughly every 18 months, driving exponential improvements in computing power and performance. In recent years, however, that pace has slowed as the industry runs into technical bottlenecks and physical limits, and many have begun to question whether the "golden rule" still holds.

But Huang clearly disagrees. In his speech, he stated plainly that the performance of Nvidia's AI chips has grown faster than Moore's Law predicts, significantly accelerating gains in computing power.

NVIDIA's Innovation Path: Comprehensive Architecture Development

Huang attributes Nvidia's ability to outpace Moore's Law to its full-stack development strategy. Unlike traditional chip R&D, Nvidia innovates not only at the hardware level but across architecture design, system development, software libraries, and algorithm optimization. "If you can develop architectures, chips, systems, libraries, and algorithms at the same time, you can move faster than Moore's Law," he stressed. This strategy of collaborative innovation lets Nvidia push past traditional performance limits at multiple levels at once.

Through this comprehensive innovation, Nvidia's AI chips have advanced by leaps and bounds over the past decade. According to Huang, Nvidia's AI chips today are a full 1,000 times more capable than the company's products from a decade ago, far exceeding the growth rate Moore's Law would predict.
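As a rough sanity check of that comparison (my arithmetic, not Nvidia's): doubling every 18 months, as the article's framing of Moore's Law assumes, compounds to only about a 100x gain over ten years, well short of the claimed 1,000x.

```python
# Compare the claimed 1,000x decade-over-decade gain with what
# Moore's Law-style doubling would predict. The figures follow the
# article's numbers; this is an illustrative calculation only.

def moores_law_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Performance multiple after `years`, assuming capability doubles
    every `doubling_period_years` (18 months, per the article)."""
    return 2 ** (years / doubling_period_years)

expected = moores_law_factor(10)  # ~102x over a decade
claimed = 1000                    # Huang's stated gain

print(f"Moore's Law over 10 years: ~{expected:.0f}x")
print(f"Claimed AI-chip gain:      {claimed}x")
print(f"Ratio (claimed/expected):  {claimed / expected:.1f}x")
```

On these assumptions, the claimed gain is roughly an order of magnitude beyond the Moore's Law trendline, which is the substance of Huang's "faster than Moore's Law" argument.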

A New Generation of AI Chips: A Leap Forward in Performance

During Huang's presentation, Nvidia's latest GB200 NVL72 system took center stage. According to Huang, it runs AI inference workloads 30 to 40 times faster than the previous-generation H100, a breakthrough that should greatly promote the adoption of AI inference models, especially compute-intensive reasoning models like OpenAI's o3. Huang noted that this leap will not only reduce the cost of AI inference, but also improve the responsiveness of AI applications, accelerating the adoption of AI across industries.

Pictured: Nvidia's AI chips are surpassing Moore's Law

As demand for AI inference computing grows dramatically, Huang also highlighted three scaling laws for AI: pre-training, post-training, and test-time compute. Test-time compute, which applies at the inference stage, gives a model more computation to "think" with on each query, improving the accuracy of its results.
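One common form of test-time compute is best-of-N sampling: draw several candidate answers and keep the one a scoring function rates highest, so extra inference-time computation buys a better result. The sketch below is a toy illustration of that idea only; `generate_candidate` and `score` are hypothetical stand-ins for a model's sampling and self-evaluation steps, not any real model API.

```python
import random

def generate_candidate(rng: random.Random) -> float:
    # Stand-in for sampling one candidate answer from a model.
    return rng.gauss(mu=0.5, sigma=0.2)

def score(candidate: float) -> float:
    # Stand-in for a verifier/reward model; higher is better.
    # Here "better" means closer to a notional target of 1.0.
    return -abs(candidate - 1.0)

def best_of_n(n: int, seed: int = 0) -> float:
    """Best-of-N sampling: more samples (more test-time compute)
    yield a better-scoring answer on average."""
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n)]
    return max(candidates, key=score)

print("N=1: ", best_of_n(1))
print("N=64:", best_of_n(64))
```

With the same seed, the N=64 pool contains the N=1 draw, so its best candidate can only score as well or better, which is the trade test-time scaling makes: more inference computation for higher answer quality.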

The core force that drives the development of the AI industry

Nvidia's chip advances come at a critical juncture for the AI industry. More and more AI companies (Google, OpenAI, Anthropic, and others) rely on Nvidia's chips for their computing workloads. As AI technology matures, the industry's focus is shifting from training to inference, and whether Nvidia's high-end AI chips can maintain their market dominance has become a key question. Huang's announcement signals that Nvidia intends not only to lead in inference performance, but also to keep driving down costs and improving price/performance.

As Huang said, running AI inference models was once expensive, but with computing breakthroughs from hardware companies like Nvidia, the cost of AI technology will continue to fall. More businesses and institutions will then be able to afford powerful, complex AI models, driving wider adoption and innovation.

Future Outlook: The Infinite Potential of AI Chips

NVIDIA's breakthrough marks a major revolution in computing. With the rapid development of AI technology, we will see more efficient and low-cost AI applications across all industries in the next few years. From autonomous driving to smart healthcare, from financial services to energy management, the potential of AI is everywhere, and Nvidia's AI chips will undoubtedly be the core driving force for the adoption of these technologies.
