Dissecting Nvidia's FY2026 Performance: Three Major Changes Are Taking Place!
As the global leader in artificial intelligence (AI) chips, NVIDIA has once again delivered an earnings report that beat expectations. Both its Q4 FY2026 results and its full-year FY2026 data center revenue reached new highs, and guidance for the next quarter also surpassed market expectations. A close look at NVIDIA’s performance reveals three major changes taking place.
Business Model Upgrade
Although NVIDIA founder and CEO Jensen Huang has repeatedly emphasized that NVIDIA is not just selling chips but building AI factories, the public perception of NVIDIA as a chip seller has remained deep-rooted. The latest results, however, confirm the successful upgrade of NVIDIA’s business model.
In FY2026, NVIDIA achieved full-year revenue of $215.9 billion, a 65% increase year-over-year. Under GAAP, net profit was $120.1 billion, also up 65%. Earnings per share were $4.90, a 67% increase year-over-year.
As NVIDIA’s core business, data centers showed strong growth, with Q4 revenue reaching a record $62.3 billion, up 75% year-over-year. Full-year revenue from data centers was $193.7 billion, accounting for nearly 90% of the company’s total revenue.
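As a quick arithmetic check on the “nearly 90%” figure, using only the revenue numbers quoted above (a sketch based on the article’s own figures, not independently verified):

```python
# Figures quoted in the article, in billions of USD.
full_year_revenue = 215.9     # FY2026 total revenue
data_center_revenue = 193.7   # FY2026 data center revenue

share = data_center_revenue / full_year_revenue
print(f"Data center share of FY2026 revenue: {share:.1%}")
# prints "Data center share of FY2026 revenue: 89.7%"
```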
Chief analyst Gu Zhengshu of Shenxin Alliance pointed out that NVIDIA’s data center business has shifted from selling single components to selling full-stack integrated systems. By guiding customers from purchasing individual B200 GPUs to deploying GB200 NVL72 rack-level systems, NVIDIA has raised its average product price from about $40,000 to $2–3 million, a step change in the business model.
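The jump the analyst describes can be made concrete with the two price points quoted above (illustrative arithmetic only; actual pricing varies by configuration, and the article gives only these rough figures):

```python
gpu_price = 40_000                          # approx. price of one B200 GPU (per the article)
rack_low, rack_high = 2_000_000, 3_000_000  # GB200 NVL72 rack-level system price range

print(f"Average selling price multiple: {rack_low / gpu_price:.0f}x to {rack_high / gpu_price:.0f}x")
# prints "Average selling price multiple: 50x to 75x"
```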
“This rack-level integration not only raises revenue ceilings but also deeply binds the ‘Big Four’ cloud service providers—Amazon, Google, Meta, and Microsoft—within the NVIDIA ecosystem through NVLink interconnect technology, Grace CPUs, and full-stack software, significantly offsetting the long-term potential threat posed by large-scale enterprise self-developed ASIC chips,” Gu said.
Furthermore, as the foundation of NVIDIA’s data center, the network business is experiencing an unprecedented boom.
Driven by strong adoption of NVLink, Spectrum-X Ethernet, and InfiniBand, NVIDIA’s networking business has grown strongly, with both scale-up and scale-out technologies reaching historic highs. Sequential growth has been in double digits, and FY2026 networking revenue surpassed $31 billion, more than ten times its level when NVIDIA acquired Mellanox in 2020 to build out the business.
Industry analysts told reporters that networking is regarded as another “killer” advantage beyond the CUDA ecosystem, one that is penetrating competitors’ camps through more flexible approaches. As more of the new rack architectures ship, networking’s share of revenue will grow as well.
Huang positioned networking as a natural extension of the platform: “We open all components for customers to freely combine and manage at different scales, integrating into customized data centers.” In Q4, NVIDIA announced it would provide NVLink support for Amazon AWS, enabling integration with AWS’s self-developed chips.
Full Focus on the Inference Era
NVIDIA’s Blackwell architecture platform is progressing smoothly, and the next-generation Rubin platform is scheduled for mass production in the second half of 2026. Compared to Blackwell, Rubin requires 75% fewer GPUs to train mixture-of-experts (MoE) models, and inference token costs are reduced by up to ten times, precisely meeting the low-cost demands of the inference era.
Huang repeatedly emphasized that AI agents have reached a turning point within the past two or three months. Industry insiders have seen this coming for about six months, but the wider world is only now waking up to it. Computing power demand is growing exponentially: “computing power equals revenue.”
As the “king” of AI training, NVIDIA is now investing heavily in AI inference. In December last year, the company spent $20 billion in cash to acquire Groq’s low-latency inference technology and related engineering teams. The deal, a non-exclusive technology license plus talent transfer, marks NVIDIA’s largest in history.
“Such an acquisition has immense strategic value,” said Yu Yiran, managing director of CIC Zhuoshi Consulting. Groq’s LPU (Language Processing Unit) offers extremely low latency and high efficiency in specific inference scenarios, such as large language model text generation. The acquisition helps NVIDIA integrate different technological approaches, providing more comprehensive inference solutions and consolidating its position as a “one-stop AI computing store.”
Yu added that this move is also necessary to address differentiated competition. In the inference market, customers are more sensitive to cost, energy efficiency, and scenario-specific optimization. With Groq and other technologies, NVIDIA can respond more flexibly to competition from Google TPU, Amazon’s self-developed chips, and various ASICs.
Huang’s latest “spoiler” indicates that Groq’s technology will be integrated into NVIDIA’s new architecture, further enhancing AI infrastructure performance and cost-effectiveness. More updates are expected at the GTC conference in March.
Long-standing industry opinion holds that NVIDIA’s real challenge is not order volume but the capacity of key suppliers like TSMC to meet demand. Gu Zhengshu analyzed that TSMC’s CoWoS-4L advanced packaging capacity is expected to be in severe shortage until mid-2026. Meanwhile, HBM4 memory supply has become a strategic high ground, with SK Hynix (holding 70% market share) and Samsung’s delivery pace directly affecting the initial volume ramp-up of the Rubin platform. Recently, shortages of glass fabric have also severely impacted AI server PCB production.
On the earnings call, NVIDIA EVP and CFO Colette Kress clarified that the company has arranged inventory and supply commitments to cover shipments through 2027. Revenue is expected to keep growing sequentially, exceeding the company’s previous guidance of $500 billion from the Blackwell and Rubin platforms.
TSMC has also increased capital expenditure, projecting a record high of $52–56 billion in 2026. About 70–80% of this will go to advanced process expansion, with 10–20% allocated to advanced packaging to secure AI chip capacity.
AI Spending Confidence Has Not “Collapsed”
Despite repeated concerns about an AI bubble in capital markets, global cloud service providers continue to invest heavily in AI infrastructure, providing ongoing support for NVIDIA’s performance. TrendForce estimates that in 2026, the combined capital expenditure of the eight major cloud providers will exceed $710 billion, with an annual growth rate of about 61%. This includes ongoing purchases of NVIDIA and AMD GPUs, as well as expanding ASIC infrastructure, to ensure AI application service readiness and data center cost efficiency.
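The TrendForce projection implies a rough 2025 baseline, which can be backed out from the two figures above (a sketch: the $710 billion and 61% numbers are the article’s; the implied base is my inference, not a figure TrendForce is quoted as giving):

```python
capex_2026 = 710   # projected 2026 combined capex of the eight major cloud providers, $B
growth = 0.61      # projected annual growth rate

implied_2025_base = capex_2026 / (1 + growth)
print(f"Implied 2025 combined capex: ~${implied_2025_base:.0f}B")
# prints "Implied 2025 combined capex: ~$441B"
```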
NVIDIA is further deepening ecosystem collaborations, nearing a partnership agreement with OpenAI and advancing cooperation with Anthropic, Meta, and others.
Beyond super-cloud clients and cutting-edge AI companies, NVIDIA’s customers also include sovereign nations. Colette Kress noted that in FY2026, the company’s sovereign AI revenue more than doubled year-over-year, surpassing $30 billion, with key clients from Canada, France, the Netherlands, Singapore, and the UK. Future sovereign AI spending is expected to grow in tandem with the AI infrastructure market, proportional to each country’s GDP.
Looking ahead to Q1 FY2027, NVIDIA forecasts revenue of approximately $78 billion, a 2% increase year-over-year, significantly above the market expectation of $72.78 billion; analysts described it as “blowout” guidance.
“The strong results paired with a muted stock reaction reflect expectations that were already fully priced in, as well as an approaching inflection point in capital flows,” Yu said. In the inference market, the competitive landscape is opening up: Google’s TPU continues to grow rapidly on cost-effectiveness and compatibility; cloud providers are developing their own chips and diversifying procurement; and domestic GPU makers are making progress in certain scenarios. All of this could erode NVIDIA’s long-term pricing power and market share.
NVIDIA’s customers are also adopting more diversified supply strategies. On February 17, just one week after placing an order with NVIDIA, Meta announced a purchase of up to 6 gigawatts of computing power from AMD.