Target Price: $180
Rating: Buy
Executive Summary
NVIDIA (NASDAQ: NVDA) delivered another blockbuster quarter in Q1 FY2026 (Feb–Apr 2025), with revenue surging 69% year-on-year to $44.1 billion. In light of the earnings beat, new product rollouts, and updated guidance, we are raising our price target to $180 and reaffirming our bullish investment thesis. NVIDIA’s fundamental position at the center of the AI boom remains unchallenged, and although export restrictions and other headwinds warrant caution, the stock’s pullback from last year’s highs provides an attractive entry point. We maintain a Buy rating with an updated target price reflecting NVIDIA’s strengthened outlook and our conviction that not owning a leader in the AI revolution would be a major opportunity cost for growth investors.
Investment Thesis
NVIDIA is the foundational enabler of the AI era. The company’s GPUs and accelerated computing platforms have become indispensable infrastructure for artificial intelligence, giving NVIDIA an estimated 90% share of the data center GPU market. Our updated thesis remains that NVIDIA will continue to dominate the AI silicon ecosystem and benefit disproportionately from the explosive growth in AI applications across industries. The Q1 FY2026 results reinforced this view – demand for NVIDIA’s AI hardware and software shows no sign of slowing, even in the face of macro challenges. We see AI becoming core infrastructure of the next industrial revolution globally, analogous to electricity or the internet. In this context, NVIDIA’s end-to-end platform (silicon, software, and services) positions it as a quasi-monopoly supplier of “digital picks and shovels” for the AI gold rush.
We incorporate several key updates post-earnings: AI demand is even stronger, supported by new partnerships and guidance, prompting us to raise estimates. The rollout of Blackwell GPUs and Grace AI CPUs is boosting NVIDIA’s tech lead and margins. While China export restrictions pose a challenge, they’re already reflected in guidance, and growth outside China remains strong. Valuation has normalized, with the stock trading at ~55× earnings—reasonable given ~69% revenue growth. In short, NVIDIA’s dominant AI platform and solid execution support continued high growth. We see any near-term volatility as a buying opportunity for long-term investors.
Earnings Call Highlights
Q1 FY2026 Results (Feb–Apr 2025): NVIDIA reported record quarterly revenue of $44.1 billion (up 12% QoQ and +69% YoY), significantly exceeding consensus estimates. Data Center remained the growth engine, with sales of $39.1 billion (+73% YoY) fueled by voracious demand for AI GPUs from cloud providers and enterprises. Gaming revenue also surprised to the upside at $3.8 billion (+42% YoY), as Blackwell-based GeForce RTX 50-series GPUs drove a PC upgrade cycle. On the call, CEO Jensen Huang emphasized that AI demand is broad-based and “incredibly strong” worldwide. He noted that AI model “inference token generation has surged 10× in just one year”, underscoring the exponential growth in deployed AI services (which translates directly into more GPU hours sold).

Importantly, the results revealed resilience despite new export headwinds. On April 9, the U.S. government tightened export controls, requiring a license for exports of NVIDIA’s H20 chips to China, effective immediately. As a result, NVIDIA recorded a $4.5 billion charge for excess H20 inventory and missed out on ~$2.5 billion in China sales in Q1. Even so, adjusted gross margin was ~71.3% (excluding the one-time charge), and adjusted EPS came in at $0.96 (vs. $0.81 GAAP), beating the Street’s ~$0.93 estimate. CFO Colette Kress acknowledged the hit from the export ban but affirmed that underlying demand far outstrips supply in the rest of the world, and that NVIDIA is reallocating would-be China units to other markets where possible. Huang was notably candid about the China issue, stating that “the $50 billion China AI market is effectively closed to U.S. industry” due to U.S. policy. He argued that this not only harms NVIDIA in the near term, but could “drive half of the world’s AI talent to rivals” if American platforms are absent in China.
Despite these challenges, Huang struck an upbeat tone regarding global AI momentum: “Every nation now sees AI as essential infrastructure… and NVIDIA stands at the center of this transformation,” he remarked.
Updated Guidance
Q2 FY2026 Outlook: NVIDIA’s guidance for the upcoming quarter was bullish and well above consensus. Revenue is projected at $45.0 billion ±2%, roughly 63% higher than the prior-year quarter. Importantly, this outlook already factors in an estimated ~$8 billion headwind from lost China sales due to the new export controls. In other words, absent the export ban, Q2 revenue might have been ~$53 billion – a staggering figure that speaks to the underlying AI demand. Even with the ban, the guided sequential growth (+2% QoQ at the midpoint) indicates NVIDIA expects other customers (U.S. cloud firms, the EU, the Middle East, etc.) to more than absorb the capacity that can no longer go to China. Management also guided for Q2 GAAP gross margin of ~71.8% (72.0% non-GAAP), ±50 bps. This represents a sharp improvement from the 60.5% GAAP gross margin in Q1, as the one-time inventory charge is behind the company and higher-margin Blackwell products ramp up. NVIDIA indicated it is on track to reach mid-70s gross margins by late FY2026, suggesting strong pricing power and more software-rich revenue streams (e.g., AI cloud services or software licensing) contributing over time.
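The guidance arithmetic above is easy to verify; the following back-of-the-envelope check uses only the figures cited in this section (all amounts in $ billions):

```python
# Sanity check of the Q2 FY2026 guidance arithmetic ($ billions).
q1_revenue = 44.1          # reported Q1 FY2026 revenue
q2_guide_mid = 45.0        # Q2 guidance midpoint
china_headwind = 8.0       # estimated lost China (H20) revenue in Q2

# Guided sequential growth at the midpoint: ~+2% QoQ
qoq_growth = q2_guide_mid / q1_revenue - 1
print(f"Guided QoQ growth: {qoq_growth:+.1%}")        # ~+2.0%

# Implied revenue absent the export ban: ~$53B
ex_ban_revenue = q2_guide_mid + china_headwind
print(f"Ex-ban Q2 revenue: ${ex_ban_revenue:.0f}B")   # $53B

# Implied prior-year quarter from the ~63% YoY growth claim
prior_year_q2 = q2_guide_mid / 1.63
print(f"Implied Q2 FY2025 base: ${prior_year_q2:.1f}B")
```

These checks confirm the internal consistency of the guide: a ~2% sequential increase despite absorbing the full China headwind.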
Operating expenses are expected to be ~$4.0 billion (non-GAAP) in Q2, with full-year FY2026 opex growth of ~35% YoY – reflecting heavy investments in R&D and capacity (hiring, new facilities) to support the huge revenue growth. Importantly, NVIDIA’s CAPEX plans remain elevated: the company is building out domestic assembly and test capacity (it announced new “AI supercomputer factories” in the U.S.), likely to mitigate geopolitical risk and shorten supply chains for U.S. customers. On the earnings call, management did not provide full-year revenue guidance (consistent with past practice), but Jensen Huang did comment that visibility is strong and demand continues to “far exceed our supply, so we are ramping capacity aggressively.” They also mentioned working closely with foundry and supply partners to procure substantially more wafer capacity and components in the second half of 2025. We interpret the Q2 guide and commentary as implying FY2026 (calendar 2025) revenue well above $180 billion, an upward revision from the ~$160 billion we modeled previously. At the guided margins, this would translate to unprecedented profitability – likely $75–85 billion in net income for FY2026 (up from ~$70 billion in FY2025).
Beyond Q2, NVIDIA sees sustained momentum into the second half of 2025. Management highlighted a wave of new product shipments: e.g. Blackwell Ultra GPUs and “GB300” Grace-Blackwell systems will hit mass production, and hyperscale customers (cloud giants) are lining up deployments for these. They also cited a growing pipeline in enterprise AI adoption (with many Fortune 500 firms in pilot phases for AI factory infrastructure that could scale in late 2025). While acknowledging that quarterly growth may eventually moderate from the recent torrid pace, NVIDIA’s leadership conveyed a high degree of confidence in multi-year secular growth. One telling comment was Huang’s prediction that global data center spending on AI could triple over the next 3 years, reaching ~$1 trillion by 2028. Overall, the updated guidance and management outlook reinforce our view that NVIDIA is in the early innings of an AI investment super-cycle and will continue to post strong growth barring any major external shocks.
Growth Drivers
NVIDIA’s growth prospects are underpinned by powerful secular drivers and company-specific catalysts.
- Surging AI GPU Demand: The broad-based hunger for AI compute is the primary engine of NVIDIA’s growth. “Global demand for NVIDIA’s AI infrastructure is incredibly strong,” says CEO Jensen Huang, as evidenced by data center revenues soaring +73% YoY. The proliferation of large-scale AI models (e.g. generative transformers powering chatbots, search, and analytics) is driving an exponential rise in inference and training workloads. NVIDIA noted that AI inference token generation has increased ~10× in just one year – a direct proxy for how much more compute is being consumed to serve AI queries. Every major cloud provider (AWS, Azure, Google Cloud, Oracle) and countless enterprises are racing to add GPU capacity to keep up with this demand. NVIDIA’s flagship H100 GPUs remain essentially supply-constrained in the near term (even as lead times have improved to ~2–3 months from 11 months late last year), highlighting that demand is outstripping what the industry can currently produce. This structural tailwind – AI as a must-have capability – ensures a multi-year growth runway for NVIDIA’s core hardware sales.
- Next-Gen Product Ramps (Blackwell GPUs & Grace CPUs): NVIDIA is in the midst of a major product cycle upswing. Its new Blackwell GPU architecture (successor to “Hopper”) began shipping in late 2024 and is now scaling rapidly. Blackwell brings significant performance and efficiency gains, especially for AI inference and “long reasoning” tasks. In industry benchmarks, Blackwell-based systems have outperformed all prior competitors; as one analyst noted, “The only thing faster than Hopper is Blackwell.” Early Blackwell models (e.g. B200 GPUs) feature ~36% more memory and new low-precision compute modes that dramatically boost AI throughput. Jensen Huang said NVIDIA has “massively ramped” Blackwell production, achieving billions in sales in its first quarter of availability. This ramp will continue into 2025–26, refreshing the product lineup for both training and inference use-cases. Alongside GPUs, NVIDIA’s foray into CPUs with the Grace processor is a strategic growth vector. Grace is an ARM-based AI-centric CPU that pairs with NVIDIA GPUs for optimal performance. As it enters full production, Grace allows NVIDIA to capture more content per server (CPU+GPU together) and tap into high-end CPU sockets (competing with Intel’s Xeon). Grace-Blackwell “superchips” (combining a Grace CPU with Blackwell GPUs via high-speed interconnect) are slated for deployment in cutting-edge systems – for example, the Stargate UAE cluster will utilize NVIDIA’s Grace Blackwell GB300 systems. The successful adoption of Blackwell GPUs and Grace/GB systems will not only drive unit growth but also support higher ASPs and margins (as these are premium products). We view this new product cycle as a core driver sustaining NVIDIA’s momentum in the coming years.
- Global AI Buildout & Inference Demand: NVIDIA is rapidly expanding its AI footprint through a two-pronged strategy: building global AI infrastructure in partnership with governments and tech leaders, and scaling AI inference across enterprise and cloud markets. A prime example is the Stargate UAE project, a 1-gigawatt AI supercomputing cluster announced in May 2025. In collaboration with G42, OpenAI, Oracle, and SoftBank, NVIDIA will supply its most advanced AI systems, with the first 200 MW phase set to launch in 2026. Similar national-scale “AI factories” are being developed with HUMAIN in Saudi Arabia and with Foxconn and Taiwan’s government. These large-scale deployments involve selling extensive hardware, software, and services – positioning NVIDIA at the heart of the global AI buildout. Simultaneously, AI inference demand is accelerating. While 2023–2024 focused on training large AI models, 2025–2026 is seeing explosive growth in inference – running those models in real time to power applications like chatbots, search, and analytics. This shift expands NVIDIA’s customer base from a handful of training centers to nearly every cloud provider and enterprise. Major clouds (AWS, Google, Microsoft, Oracle) are deploying inference fleets using NVIDIA GPUs, while enterprises are adopting turnkey solutions like NVIDIA RTX PRO Servers and NVLink Fusion to power private AI workloads. By lowering the barrier to AI adoption through certified systems and software stacks, NVIDIA is tapping into a much broader market. Unlike training – which tends to involve large one-time GPU purchases – inference requires continuous, scalable infrastructure, creating an ongoing revenue stream. With AI features becoming embedded in everything from office software to industrial systems, inference demand is expected to multiply, driving long-term growth in NVIDIA’s data center business. Together, these initiatives reinforce NVIDIA’s central role in AI and extend its market reach far beyond tech giants to sovereign projects and traditional enterprises alike.
- Software & Ecosystem Lock-In: A subtle but crucial driver of NVIDIA’s success is its software ecosystem. Its CUDA platform and core AI libraries like cuDNN and TensorRT are widely used by developers, making NVIDIA the default choice for AI model development. This software foundation has created strong developer loyalty and broad compatibility with NVIDIA hardware. In 2025, the company is pushing deeper into full-stack solutions with offerings like NVIDIA AI Foundations—a suite of pre-trained models—and new tools for robotics and simulation. Its Omniverse platform, used for 3D simulation and industrial metaverse applications, is gaining traction with major companies such as Accenture, Databricks, SAP, and Siemens. As more enterprise workflows rely on NVIDIA software, customers are increasingly tied to its ecosystem. NVIDIA is also turning this advantage into a high-margin revenue stream through products like NVIDIA AI Enterprise and partnerships to embed its AI stack into cloud platforms. This growing software layer not only boosts revenue but also strengthens customer retention.
Challenges & Constraints
Despite NVIDIA’s strengths, we must also acknowledge the macroeconomic and industry challenges that could restrain its growth or add volatility.
- U.S.–China Export Restrictions: Geopolitical tensions have directly impacted NVIDIA’s business, and this remains the most significant external risk. The U.S. government’s escalating restrictions on advanced chip sales to China already cost NVIDIA billions in revenue in 2024–25. The latest blow came in April 2025, when the export of NVIDIA’s H20 data center GPUs to China was abruptly banned (with no grace period) by the U.S. administration. NVIDIA was forced to take a $4.5 billion inventory write-down for unsellable H20 chips and lost an estimated $2.5 billion in Q1 sales. Jensen Huang has been outspoken that “the $50 billion Chinese market is effectively closed to U.S. industry” under current policies. This means NVIDIA is largely locked out of China’s AI build-out – a market that otherwise has half the world’s AI researchers and enormous demand. Crucially, NVIDIA has no easy workaround: as the company told investors, with the ban on H20 “we are not able to change the Hopper (H100) design to sell to China”. Future architectures like Blackwell are likely to face similar restrictions if U.S.–China relations don’t improve. The risk is not only lost sales, but also that Chinese customers will turn to domestic alternatives, strengthening Chinese competitors in the long run. Huang noted that shielding Chinese firms from U.S. GPUs “only strengthens them abroad and weakens America’s position,” as Chinese chipmakers are now free to leverage the entire local market without NVIDIA’s competition. In summary, export controls create a dual challenge: an immediate revenue shortfall in China, and the potential fostering of a self-sufficient Chinese AI chip ecosystem that could eventually compete globally. This will remain an overhang on NVIDIA’s growth potential and is largely a policy issue outside the company’s direct control.
- Global Supply Chain & Geopolitical Dependence: NVIDIA relies on a complex global supply chain, which makes it vulnerable to various risks. Its most advanced GPUs are manufactured by TSMC in Taiwan—a region facing constant geopolitical tension due to China’s territorial claims. Any disruption there, whether from conflict or natural disaster, could severely impact NVIDIA’s chip supply. Additionally, key processes like packaging and testing for HBM memory and CoWoS technology are concentrated among a few suppliers in Asia. The surge in AI demand over the past year pushed this supply chain to its limits, with wait times for H100 GPUs stretching up to a year at one point. While lead times have shortened, supply remains tight, and NVIDIA cannot meet all demand immediately, which could create openings for competitors. The company is working to reduce risk by diversifying production, including new assembly sites in the U.S., and securing more capacity at TSMC. Still, geopolitical tensions, trade policies, and potential export restrictions—especially from China—remain significant threats that could disrupt operations and weigh on financial results.
- Emerging Competition (AMD, Intel, Specialized Chips): NVIDIA’s leadership in AI faces increasing competition, which could gradually erode its market share or pricing power. AMD has made significant strides with its data center GPUs, particularly the Instinct MI300 series. Its latest MI325X accelerator has shown comparable performance to NVIDIA’s H200 on some AI tasks and is being promoted as a better-value option in select use cases. Though AMD’s ecosystem is still much smaller, it has secured key wins like powering the El Capitan supercomputer and may serve as a second-source supplier for cloud providers. Intel is also in the race, though it lags behind: its Gaudi chips have seen limited adoption, but its roadmap includes promising products like Falcon Shores in 2025–2026. A growing concern is the rise of in-house chips from cloud giants like Google (TPUs), Amazon (Trainium/Inferentia), and Microsoft (Maia), which could reduce their long-term reliance on NVIDIA. Meanwhile, in China, domestic players such as Huawei and startups like Biren are investing heavily in AI chips to replace NVIDIA under export restrictions. Though still behind, they are improving fast within the local market. NVIDIA’s strongest advantage remains its robust software ecosystem, but rising competition could lead to pricing and margin pressure. Over time, even a modest decline in market share could impact its growth trajectory.
Valuation & Target Price Reassessment
We employ multiple valuation methods to triangulate a fair value for NVIDIA and update our 12-month price target. Given the company’s rapid growth and evolving risk profile, we’ve refreshed our DCF and relative valuation analyses with inputs reflecting the latest earnings and guidance.
- Discounted Cash Flow (DCF) Analysis: Our updated 10-year DCF model reflects NVIDIA’s stronger-than-expected revenue and margin performance following Q1. In our base case, we forecast a 5-year revenue CAGR of approximately 25% (FY2025–2030), gradually slowing to 10% by year 10, with a terminal growth rate of 2.5%. We assume long-term gross margins in the mid-70% range and operating margins rising to around 50%, in line with management guidance. These projections are based on ongoing global AI infrastructure investment, tempered by potential headwinds from China and rising competition. Using a 12% WACC to reflect elevated equity risk – particularly geopolitical – we estimate a base-case intrinsic value of approximately $200 per share. Our scenario analysis shows a bull-case DCF reaching $210+ (with faster AI adoption and no major export restrictions) and a bear case in the $120–$130 range (with slower growth due to China competition or macro weakness). Even in the downside case, the valuation remains relatively close to the current market level. Our base-case valuation of around $200 supports a Buy rating, implying further upside.
- Free Cash Flow to Equity (FCFE): NVIDIA’s business model produces robust free cash flow thanks to high margins and relatively modest capital expenditure needs (as a fabless company). We cross-check our DCF with an FCFE approach, capitalizing near-term growth. For FY2026, we project free cash flow (post-capex) of approximately $60.7 billion. Assuming FCF grows at ~25% for 5 years and then moderates to ~4% long term, the implied equity value by FCFE yield (discounting at a ~13% cost of equity) is about $205 per share, consistent with our DCF result.
- Relative Valuation (Multiples): We compare NVIDIA’s valuation multiples to peers and to its own growth to ensure our target price is grounded in market reality. At ~$135, NVDA trades at around 55× trailing EPS (GAAP) and roughly 38× forward 12-month EPS (our estimate, which factors in rapid earnings growth over the next year). The forward P/E is elevated, but not extreme given ~50–60% expected EPS growth in FY2026 – the PEG ratio is ~0.7–0.8, indicating the valuation is actually reasonable on a growth-adjusted basis. For a peer perspective, large-cap tech peers in cloud/AI (e.g., Microsoft, Google) trade at PEGs of around 1–2, so NVIDIA’s high growth arguably isn’t fully priced in. On an EV/EBITDA basis, NVDA trades at ~30× forward EBITDA. This is a premium to the broader semiconductor industry (which might be ~15–20×), but again NVIDIA’s growth and margins (72% gross, ~45% EBITDA margin) justify a higher multiple. Applying a target multiple of ~45–50× forward earnings (in line with the stock’s post-earnings trading range and high-end analyst comps) to our FY2026E EPS of ~$3.60 yields a valuation of $160–$180 per share. Similarly, 30× our FY2026 EBITDA estimate ($105 billion) plus net cash yields an equity value in the mid-$160s per share. These simple approaches corroborate our DCF/FCF analysis.
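The multiple-based cross-checks above reduce to simple arithmetic; the short script below makes the PEG and implied-value math explicit, using the price, EPS, and growth estimates cited in this section:

```python
# Growth-adjusted multiple checks using the estimates cited above.
price = 135.0       # approximate current share price
fwd_eps = 3.55      # forward 12-month EPS estimate
eps_growth = 50     # ~50% expected FY2026 EPS growth (low end, in %)

# PEG = forward P/E divided by expected EPS growth (in percent)
fwd_pe = price / fwd_eps
peg = fwd_pe / eps_growth
print(f"Forward P/E ~{fwd_pe:.0f}x, PEG ~{peg:.2f}")

# P/E-implied target range: 45-50x on FY2026E EPS of ~$3.60
fy26_eps = 3.60
low, high = 45 * fy26_eps, 50 * fy26_eps
print(f"Implied value range: ${low:.0f}-${high:.0f}")
```

At the low end of the growth range (~50%), the PEG works out to ~0.76, and the 45–50× band on ~$3.60 of EPS brackets $162–$180, consistent with the ranges quoted above.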
We summarize our valuation outcomes below:
| Valuation Method | Key Assumptions | Implied Value (per share) |
| --- | --- | --- |
| DCF (10-year) | 25% 5-yr rev CAGR; 2.5% terminal growth; 12% WACC | ~$200 (Base case) |
| DCF (Bull scenario) | 25–30% 5-yr CAGR; stronger margins | ~$210+ (Upside case) |
| DCF (Bear scenario) | ~10–15% 5-yr CAGR; export/comp setbacks | ~$120–130 (Downside) |
| P/E Multiple | 50× FY2026e EPS (~$3.5–3.6) | ~$175 |
| EV/EBITDA Multiple | 30× FY2026e EBITDA; ~$10B net cash | ~$165 |
| FCFE Yield | ~4% implied forward FCF yield | ~$205 |
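For transparency, the two-stage DCF mechanics behind the table’s scenario rows can be sketched in a few lines. This is a schematic with simplified, illustrative inputs (a single free-cash-flow base, a linear growth fade, no per-share conversion), not a reproduction of our full model:

```python
def dcf_value(fcf0, g_start, g_end, years, g_term, wacc):
    """Two-stage DCF: growth fades linearly from g_start to g_end
    over the explicit period, then a Gordon terminal value at g_term."""
    pv, fcf = 0.0, fcf0
    for t in range(1, years + 1):
        # Linear fade of the growth rate across the explicit period
        g = g_start + (g_end - g_start) * (t - 1) / (years - 1)
        fcf *= 1 + g
        pv += fcf / (1 + wacc) ** t
    terminal = fcf * (1 + g_term) / (wacc - g_term)
    return pv + terminal / (1 + wacc) ** years

# Base-case-style inputs: growth fading from 25% to 10% over ten
# years, 2.5% terminal growth, 12% WACC (the $60B FCF base here
# is a placeholder, not our modeled figure).
base_value = dcf_value(fcf0=60.0, g_start=0.25, g_end=0.10,
                       years=10, g_term=0.025, wacc=0.12)
```

Dividing the resulting value by diluted shares outstanding gives the per-share figures; the bull and bear rows correspond to shifting the growth fade and the margin-driven FCF base up or down.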
Based on the above, we are revising our 12-month price target to $180, reflecting our confidence in NVIDIA’s execution and the strong secular trends. A $180 target equates to ~50× our FY2026 EPS (and sits modestly above our 30× EV/EBITDA cross-check) and implies ~33% upside from the current stock price – which we view as justified given the ~69% YoY revenue growth and NVIDIA’s unparalleled position in AI. We acknowledge this valuation is not cheap in absolute terms, but for a company of NVIDIA’s caliber a premium is warranted. It is also worth noting that at $180, NVIDIA’s market cap would be about $4.3 trillion (on post-split shares) – a figure that may sound lofty but would be supported by the extraordinary earnings power NVIDIA is building (>$80 billion in annual profit potential). We will continue to monitor our assumptions, especially around China and competition, but as of now our valuation work reaffirms that NVIDIA offers attractive upside for long-term investors.
Recommendation
After careful consideration of NVIDIA’s post-earnings fundamentals, valuation, and risk factors, we reiterate our BUY recommendation on NVDA. At ~$135–140, NVIDIA is trading at a growth-adjusted valuation that we consider reasonable (if not modest), especially relative to the company’s dominant role in a burgeoning AI industry. The key pressure points – U.S.–China trade friction, supply limitations, and rising competition – are real, but they do not derail the core growth story. NVIDIA has demonstrated an ability to navigate these challenges (e.g., reallocating supply, lobbying for sensible policies, and doubling down on innovation to stay ahead of rivals). Meanwhile, the secular tailwinds of AI are so strong that it would be imprudent for any growth investor to remain on the sidelines. To put it bluntly, not investing in the AI revolution – and by extension not owning NVIDIA – would carry a major opportunity cost, given the transformative impact AI is expected to have across every sector of the economy. NVIDIA is the closest thing to a pure “picks-and-shovels” play on AI infrastructure, and it has built a nearly unassailable moat across hardware and software.
We expect NVIDIA’s stock to resume an upward trajectory over the next 12–18 months, fueled by continued earnings beats and robust growth in AI expenditures globally. While the stock may not repeat its extreme 2024 gains in such a short span, we see a strong likelihood of it outperforming the broader market and delivering solid absolute returns from here. Our $180 price target reflects high conviction that the current challenges are adequately priced in and that, as new catalysts emerge, NVIDIA’s share price will rally to reflect its fundamental earnings power. The company’s execution remains superb, its growth thesis is intact, and the current valuation does not look excessive by institutional standards. Balancing near-term volatility against long-term opportunity, we view NVIDIA as a compelling buy-on-dips candidate. We maintain a Buy rating and would use any interim weakness as an opportunity to add, keeping in mind that NVIDIA’s leadership in AI makes it a foundational holding for a future-oriented portfolio.