Nvidia Wednesday became the first company to hit a $4 trillion market capitalization.
The AI infrastructure giant achieved the milestone after its share price grew 2.4 percent to $164, according to Reuters, making it the world’s most valuable company once again. Its stock price has risen approximately 18 percent since the beginning of the year.
At the time Nvidia reached the sky-high market cap Wednesday morning, Microsoft had the second-largest valuation at roughly $3.7 trillion and Apple had the third-largest at $3.1 trillion.
Santa Clara, Calif.-based Nvidia exceeded $3 trillion in market cap for the first time last June and weeks later surpassed Microsoft to become the world’s most valuable company for the first time. It hit the No. 1 spot at least two more times last year.
Nvidia hit the milestone as it continues to report high demand for its GPUs and associated products powering generative and agentic AI workloads in data centers.
In its first quarter, Nvidia’s revenue grew nearly 70 percent year over year to $44.1 billion despite a multibillion-dollar write-off caused by new U.S. export controls on shipments of the company’s H20 GPUs into China.
The company’s data center segment delivered nearly 90 percent of its first-quarter revenue, with the category growing 10 percent sequentially and 73 percent year over year to $39 billion.
Some 87 percent of the company’s data center revenue came from compute products, with Blackwell-based products such as the Grace Blackwell Superchip that goes inside the GB200 NVL72 rack-scale platform helping drive an increase of 5 percent sequentially and 76 percent year over year for the subcategory.
The rest of Nvidia’s data center revenue came from networking products, which grew in sales by 64 percent sequentially and 56 percent year over year. The company attributed this sequential increase to the growth of the NVLink compute fabric underpinning its GB200 systems as well as Ethernet for AI offerings.
The company has so far been able to defy concerns that the rise of efficient AI models like DeepSeek-R1 will undercut demand for its GPUs and associated componentry, with Nvidia CEO Jensen Huang claiming that such models will fuel significantly greater demand.
“The amount of computation we need at this point as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year,” Huang said during his keynote at the company’s GTC 2025 event.
Huang made the assertion after late-January claims that DeepSeek, the Chinese company behind the eponymous R1 model, had spent significantly less than Western rivals such as OpenAI and Anthropic to develop its model. Those claims fueled concerns that AI model developers would require fewer GPUs to train and run models.
In explaining his position, Huang said reasoning models like DeepSeek-R1 and the agentic AI workloads they power will create a need for more powerful GPUs in greater quantities. That’s because reasoning models significantly increase the number of tokens, the words and other chunks of characters that make up queries and answers, compared with traditional large language models.
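The arithmetic behind that argument can be sketched in a few lines. The figures below are purely illustrative assumptions, not numbers from Nvidia or DeepSeek: a common rule of thumb is that decoding costs roughly 2N floating-point operations per generated token for an N-parameter model, so compute per query grows roughly linearly with the tokens a model emits, and a long chain-of-thought trace multiplies the bill accordingly.

```python
# Hypothetical illustration of the tokens-to-compute argument.
# All token counts and model sizes are assumptions for illustration only.

def generation_compute(tokens_generated: int, flops_per_token: float) -> float:
    """Rough decoding cost: compute scales ~linearly with tokens generated."""
    return tokens_generated * flops_per_token

# Rule-of-thumb cost: ~2 * N FLOPs per generated token for an N-parameter model.
FLOPS_PER_TOKEN = 2 * 70e9  # hypothetical 70-billion-parameter model

standard_tokens = 300              # direct answer, no visible reasoning trace
reasoning_tokens = 300 + 30_000    # same answer plus a long chain-of-thought

standard = generation_compute(standard_tokens, FLOPS_PER_TOKEN)
reasoning = generation_compute(reasoning_tokens, FLOPS_PER_TOKEN)

print(f"Reasoning query uses ~{reasoning / standard:.0f}x the compute")  # ~101x
```

Under these assumed numbers, a single reasoning query costs about 100 times the compute of a direct answer, which is the order of magnitude Huang cited; the real ratio depends entirely on how long a given model's reasoning traces run.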
To that end, Huang pointed to Nvidia’s GB300 NVL72 rack-scale platform powered by its new Blackwell Ultra GPU as well as more powerful computing platforms coming out over the next two years as necessary for keeping up with the computational demands of reasoning models.