In an announcement on Friday, AI chip startup Groq disclosed a non-exclusive agreement to license its inference technology to Nvidia. As part of the arrangement, key Groq leaders are joining Nvidia to enhance and scale the licensed technology: founder and CEO Jonathan Ross, president Sunny Madra, and other integral team members have moved to Nvidia, underscoring the comprehensive nature of the collaboration.
Strategically, the arrangement closely resembles an "acqui-hire," pairing Nvidia's hire of Groq's core talent with a license to its technology. Groq will continue as an independent company, promoting its CFO to CEO and operating GroqCloud under its own banner, but the departure of its founder signals that further innovation on the technology will proceed under Nvidia's stewardship.
Neither company has disclosed the deal's financial terms, but reports put its value near $20 billion, which would eclipse Nvidia's largest prior acquisition, the $6.9 billion purchase of Mellanox Technologies completed in 2020. That deal proved highly fruitful, fueling significant growth in Nvidia's networking business. The reported figure represents a considerable premium over Groq's last known valuation of roughly $6.9 billion, set in a $750 million funding round in September.
For Nvidia, structuring the collaboration as a license plus a team hire, rather than an outright acquisition, may help mitigate regulatory pressure. Given Nvidia's dominance of the AI chip market, any overt acquisition that further increased its market share would likely trigger rigorous antitrust scrutiny.
Groq specializes in language processing units, or LPUs, chips designed for the AI inference phase, the step that follows the training of AI models. Training feeds a model extensive datasets so it learns patterns; inference applies the trained model to new inputs to deliver outputs such as text answers or images in real time.
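For readers who want the distinction in concrete terms, the short Python sketch below loads a model that has already been trained and runs a single inference call; the library, model name, and prompt are illustrative assumptions and have nothing to do with Groq's or Nvidia's hardware.

```python
# Minimal sketch of the inference step, assuming the Hugging Face
# "transformers" library and an already-trained model ("gpt2" is an
# illustrative choice, not related to the companies in this article).
from transformers import pipeline

# Training happened elsewhere; here we only load the finished model.
generator = pipeline("text-generation", model="gpt2")

# Inference: apply the trained model to a new prompt and get text back.
output = generator("Explain AI inference in one sentence:", max_new_tokens=40)
print(output[0]["generated_text"])
```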
Historically, Nvidia's graphics processing units have led the AI training market and also handle a large share of inference workloads. Competition in AI inference is intensifying, however, with rivals including Advanced Micro Devices' (AMD) data center GPUs and custom application-specific integrated circuits that Broadcom and Marvell Technology design for major technology companies.
Separately, reports surfaced that Meta Platforms is evaluating the use of Google's custom tensor processing units (TPUs) for inference workloads in its data centers. This reflects a broader trend among large technology companies to diversify their inference hardware beyond GPUs, both to curb costs and to reduce dependence on a single supplier.
Groq has aimed to carve out a leading role in AI inference hardware by offering LPUs that it says outperform alternative chips on certain inference workloads, priced competitively against Nvidia GPUs and other market offerings.
Nvidia's decision to license Groq's technology and bring over its leadership and engineering teams signals recognition of Groq's competitive potential and the value of its innovations. Notably, Jonathan Ross, Groq's founder and CEO, is widely credited with spearheading the development of Google's TPU, attesting to the caliber of expertise now joining Nvidia.
Nvidia previously faced regulatory hurdles in its attempted acquisition of chip designer Arm Holdings, announced in 2020 and abandoned in 2022 after intense antitrust scrutiny. That history likely explains the circumspect, partnership-style structure of the current arrangement with Groq.