Dissecting Nvidia's FY2026 Performance: Three Major Changes Are Taking Place!
As the global leader in artificial intelligence (AI) chips, NVIDIA has once again delivered an "above expectations" earnings report: Q4 FY2026 results and full-year FY2026 data center revenue both reached new highs, and guidance for the next quarter likewise surpassed market expectations. Three major changes emerge from a closer look at the results.
Business Model Upgrade
Although NVIDIA founder and CEO Jensen Huang has repeatedly emphasized that NVIDIA is not just selling chips but building AI factories, the public's perception of the company as a chip vendor remains deeply rooted. The latest results, however, confirm that the upgrade of NVIDIA's business model has succeeded.
In FY2026, NVIDIA achieved full-year revenue of $215.9 billion, up 65% year-over-year. GAAP net profit was $120.07 billion, also up 65%, and earnings per share were $4.90, up 67% year-over-year.
As NVIDIA’s core business, data centers showed strong growth, with Q4 revenue reaching a record $62.3 billion, up 75% year-over-year. Full-year revenue was $193.7 billion, accounting for nearly 90% of the company’s total.
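The reported figures are internally consistent, as a back-of-the-envelope check shows (the prior-year revenue and share count below are implied by the article's numbers, not separately reported):

```python
# Back-of-the-envelope checks on the reported FY2026 figures (dollar amounts in $B).
total_revenue = 215.9                  # FY2026 total revenue
net_income = 120.07                    # GAAP net profit
eps = 4.90                             # GAAP earnings per share
dc_revenue = 193.7                     # full-year data center revenue

implied_fy25_revenue = total_revenue / 1.65   # implied by the 65% YoY growth rate
implied_shares = net_income / eps             # implied share count, in billions
dc_share = dc_revenue / total_revenue         # data center share of total revenue

print(f"Implied FY2025 revenue: ${implied_fy25_revenue:.1f}B")  # ~$130.8B
print(f"Implied shares outstanding: ~{implied_shares:.1f}B")    # ~24.5B
print(f"Data center share of revenue: {dc_share:.1%}")          # ~89.7%, i.e. "nearly 90%"
```

The data center share works out to about 89.7%, matching the article's "nearly 90%."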
Gu Zhengshu, chief analyst at DeepCore Alliance, pointed out that NVIDIA's data center business has shifted from selling individual components to delivering full-stack integrated systems. By guiding customers from purchasing individual B200 GPUs to deploying GB200 NVL72 rack-scale systems, NVIDIA has raised its average selling price from about $40,000 to $2-3 million, a significant upgrade of the business model.
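The cited price jump is consistent with simple unit math: a GB200 NVL72 rack integrates 72 GPUs, so at roughly $40,000 per GPU the GPU content alone lands inside the quoted rack price range (a rough illustration; actual rack pricing also covers Grace CPUs, NVLink networking, and software):

```python
# Rough consistency check on the ~$40k-per-GPU vs. $2-3M-per-rack figures.
gpu_price = 40_000        # approximate per-GPU price cited in the article
gpus_per_rack = 72        # a GB200 NVL72 rack integrates 72 Blackwell GPUs
gpu_value_per_rack = gpu_price * gpus_per_rack
print(f"GPU content per rack: ${gpu_value_per_rack:,}")  # $2,880,000, within the $2-3M range
```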
“This rack-level integration not only raises revenue ceilings but also deeply binds the ‘Big Four’ cloud service providers—Amazon, Google, Meta, and Microsoft—within the NVIDIA ecosystem through NVLink interconnect technology, Grace CPUs, and full-stack software, significantly offsetting the long-term threat posed by large-scale enterprise self-developed ASIC chips,” Gu said.
Furthermore, the networking business, the foundation of NVIDIA's data centers, is experiencing an unprecedented boom.
Driven by strong adoption of NVLink, Spectrum-X Ethernet, and InfiniBand, NVIDIA's networking business has grown remarkably, with both scale-up and scale-out technologies reaching new highs. Sequential growth has been in double digits, and FY2026 network revenue surpassed $31 billion, more than ten times its level around 2020, when NVIDIA completed its acquisition of Mellanox to strengthen the business.
An electronics industry analyst told reporters that the networking business is regarded as another "killer app" beyond the CUDA ecosystem. It is penetrating rival camps with increasing flexibility, and as more new rack architectures ship, networking's share of revenue will continue to grow.
Huang positioned the network as a natural extension of the platform: "We open up all components so customers can freely combine and manage them at different scales, integrating them into customized data centers." In Q4, NVIDIA announced NVLink support on Amazon AWS, enabling integration with AWS's self-developed chips.
Full Focus on the Inference Era
NVIDIA's Blackwell architecture platform is ramping smoothly, and the next-generation Rubin platform is scheduled for mass production in the second half of 2026. Compared with Blackwell, training MoE (Mixture of Experts) models on Rubin requires 75% fewer GPUs, and inference token costs fall by up to ten times, squarely targeting the low-cost demands of the inference era.
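Taken at face value, the two claims compound: a workload that needed a given number of Blackwell GPUs would need a quarter as many on Rubin, and each generated token would cost up to a tenth as much (a hypothetical illustration of the article's numbers, not a benchmark):

```python
# Illustration of the cited Rubin-vs-Blackwell efficiency claims on a hypothetical workload.
blackwell_gpus = 1000                          # assumed GPU count for a MoE training job (illustrative)
rubin_gpus = blackwell_gpus * (1 - 0.75)       # "75% fewer GPUs" -> 250
blackwell_cost_per_token = 1.0                 # normalized inference cost per token on Blackwell
rubin_cost_per_token = blackwell_cost_per_token / 10   # "up to ten times" cheaper -> 0.1
print(rubin_gpus, rubin_cost_per_token)        # 250.0 0.1
```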
Huang repeatedly emphasized that agentic AI has reached a turning point in just the past two to three months; industry insiders saw it coming roughly six months earlier, but the wider world is only now waking up to it. Computing power demand is growing exponentially: "computing power equals revenue."
As the “king” of AI training, NVIDIA is heavily investing in AI inference. In December last year, the company spent $20 billion in cash to acquire Groq’s low-latency inference technology and related engineering teams. This non-exclusive technology licensing and talent transfer deal is NVIDIA’s largest transaction in history.
“This acquisition has high strategic value,” said Yu Yiran, managing director at CIC Zhaoshi Consulting. “Groq’s LPU (Language Processing Unit) offers extremely low latency and high efficiency in specific inference scenarios, such as large language model text generation. The acquisition helps NVIDIA integrate different technological approaches, providing more comprehensive inference solutions and consolidating its position as a ‘one-stop AI computing shop.’”
Yu added that this move is necessary to address differentiated competition. In the inference market, customers are more sensitive to cost, energy efficiency, and scenario-specific optimization. With Groq and other technologies, NVIDIA can respond more flexibly to competition from Google TPU, Amazon’s self-developed chips, and various ASICs.
Huang's latest teaser indicates that Groq's technology will be integrated into NVIDIA's new architecture, further improving the performance and cost-effectiveness of its AI infrastructure. More details are expected at the GTC conference in March.
Industry experts have long believed that NVIDIA’s real challenge lies not in order volume but in the capacity of key supply links like TSMC. Gu Zhengshu analyzed that TSMC’s CoWoS-4L advanced packaging capacity is expected to be in severe shortage until mid-2026. Meanwhile, HBM4 memory supply has become a strategic high ground, with SK Hynix (holding 70% market share) and Samsung’s delivery pace directly affecting the initial ramp-up of the Rubin platform. Recently, shortages of glass fabric have also severely impacted AI server PCB production.
On the earnings call, NVIDIA EVP and CFO Colette Kress clarified that the company has secured inventory and supply commitments covering shipments through 2027. Revenue is expected to keep growing sequentially and to exceed the company's previous guidance of $500 billion in cumulative Blackwell and Rubin platform revenue.
TSMC has also increased capital expenditure, projecting a record high of $52-56 billion in 2026. About 70-80% of this will be invested in advanced process expansion, with 10-20% allocated to advanced packaging to ensure AI chip capacity.
AI Spending Confidence Remains “Unshaken”
Despite repeated concerns about an AI bubble in the capital markets, global cloud service providers continue to invest heavily in AI infrastructure, providing ongoing support for NVIDIA’s performance. TrendForce estimates that in 2026, the eight major cloud providers’ capital expenditures will exceed $710 billion, with an annual growth rate of about 61%. This includes continued procurement of NVIDIA and AMD GPUs, as well as expanding ASIC infrastructure to ensure AI application performance and data center cost efficiency.
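The 61% growth rate implies a 2025 capex base of roughly $441 billion for the same eight providers (implied from the article's figures, not separately reported):

```python
# Implied 2025 capex base from TrendForce's 2026 projection (amounts in $B).
capex_2026 = 710          # projected 2026 capex of the eight major cloud providers
yoy_growth = 0.61         # ~61% annual growth rate
implied_2025 = capex_2026 / (1 + yoy_growth)
print(f"Implied 2025 capex: ~${implied_2025:.0f}B")  # ~$441B
```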
NVIDIA is further deepening its ecosystem collaborations, nearing a partnership agreement with OpenAI and advancing cooperation with Anthropic, Meta, and others.
Beyond supercloud clients and cutting-edge AI companies, NVIDIA’s customers also include sovereign nations. Colette Kress stated that in FY2026, the company’s sovereign AI revenue more than doubled to over $30 billion, with key clients from Canada, France, the Netherlands, Singapore, and the UK. Future sovereign AI spending is expected to grow in tandem with the AI infrastructure market, proportional to each country’s GDP.
Looking ahead to Q1 FY2027, NVIDIA forecasts revenue of about $78 billion, up 2% year-over-year, significantly above the market expectation of $72.78 billion; analysts described it as "blowout" guidance.
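The size of the guidance beat is straightforward to quantify from the article's figures:

```python
# Guidance beat relative to consensus (amounts in $B).
guidance = 78.0           # NVIDIA's Q1 FY2027 revenue forecast
consensus = 72.78         # market expectation cited in the article
beat = (guidance - consensus) / consensus
print(f"Guidance above consensus by {beat:.1%}")  # ~7.2%
```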
"Strong results but a muted stock reaction reflect a full valuation and the market's anticipation of a near-term inflection point," Yu said. In the inference market, the competitive landscape is opening up: Google's TPU keeps growing rapidly on cost-effectiveness and compatibility; cloud providers are diversifying their chip procurement strategies; and domestic GPU makers are making progress in certain scenarios. All of this could erode NVIDIA's long-term pricing power and market share.
NVIDIA’s customers are also adopting more diversified supply strategies. On February 17, just one week after placing an order with NVIDIA, Meta announced a purchase of up to 6 gigawatts of computing power from AMD.