【AI+NVDA】Guo Mingchi: Nvidia's Investment in Groq Drives Growth - LPU Shipments Expected to Reach 5 Million Units Over the Next Two Years, a More Than Tenfold Increase

Guo Mingchi of Tianfeng International stated that, according to his latest industry survey, shipment plans for Groq's LPU (Language Processing Unit) have been significantly revised upward following Nvidia's investment in the company. Total LPU shipments from 2026 to 2027 are estimated at approximately 4 to 5 million units, more than ten times prior levels.

Guo Mingchi explained that the rapid growth in LPU demand is driven mainly by two factors. First, tight integration with Nvidia's ecosystem (such as CUDA) greatly lowers the barriers to application development and deployment. Second, demand for ultra-low-latency inference is rising rapidly, driven by AI agents (such as coding agents) and emerging applications in real-time, consumer-facing, and physical AI.

He said that to maintain the ultra-low-latency advantage during inference decode, and to meet fast-growing KV cache requirements driven by long-context reasoning, Groq plans to increase the number of LPUs per rack from the current 64 to 256, expanding memory capacity while preserving ultra-low-latency performance. Racks based on the new architecture are expected to enter mass production between the fourth quarter of this year and the first quarter of next year. Rack shipments are projected at approximately 3,000 to 5,000 units in 2026 and 15,000 to 20,000 units in 2027.
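As a rough sanity check of these projections, the rack figures can be multiplied out against the new 256-LPU-per-rack configuration. The sketch below assumes roughly 3,000 to 5,000 racks in 2026 and 15,000 to 20,000 in 2027 (figures as reported above); the implied unit totals land in the same ballpark as the stated 4 to 5 million LPUs, with any overage plausibly reflecting early shipments still on the 64-LPU configuration.

```python
# Back-of-the-envelope check of the implied LPU volumes.
# All figures below are assumptions taken from the article, not official data.
LPUS_PER_RACK_NEW = 256  # up from 64 in the current architecture

# Projected rack shipments per year as (low, high) ranges.
rack_shipments = {
    2026: (3_000, 5_000),
    2027: (15_000, 20_000),
}

# Sum the low and high ends of the range across both years.
lo = sum(low * LPUS_PER_RACK_NEW for low, _ in rack_shipments.values())
hi = sum(high * LPUS_PER_RACK_NEW for _, high in rack_shipments.values())

print(f"Implied 2026-2027 LPU total: {lo / 1e6:.1f}M to {hi / 1e6:.1f}M units")
# -> Implied 2026-2027 LPU total: 4.6M to 6.4M units
```

The low end of this range (about 4.6 million units) is consistent with the 4-to-5-million-unit estimate cited in the report.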
