Nvidia absorbs Groq's IP, hires top execs in $20 billion move to dominate inference
- Marijan Hassan - Tech Journalist
Deal structured as a non-exclusive IP license and 'acqui-hire' to secure Groq's ultra-fast LPU architecture and talent.

Nvidia, the world's most valuable chip company, has entered into a landmark, non-exclusive licensing agreement with AI chip startup Groq, a deal reportedly valued at approximately $20 billion. The move is a strategic pivot designed to integrate Groq’s high-speed, low-latency AI inference technology and neutralize a key emerging competitor.
The deal, which avoids a full corporate acquisition, is focused on bringing Groq's core technology and technical talent into Nvidia’s ecosystem to solidify its dominance in the rapidly growing field of AI inference, the process of running trained models for real-time applications like chatbots.
The strategic target: Inference speed
Groq is renowned for its proprietary Language Processing Unit (LPU), a chip designed specifically for ultra-fast, predictable AI inference. Groq's architecture, built around massive on-chip SRAM rather than external memory, delivers significantly lower latency and higher token throughput for running Large Language Models (LLMs) than general-purpose GPUs.
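To see why memory placement matters, the sketch below uses a rough roofline-style estimate: decode throughput is approximated as memory bandwidth divided by the bytes streamed per generated token. All figures are illustrative placeholders, not published specifications for any Nvidia GPU or Groq LPU.

```python
# Rough roofline-style estimate of LLM decode throughput.
# Autoregressive decoding is typically memory-bandwidth bound:
# each generated token requires streaming the model weights once.
# All numbers below are illustrative assumptions, not vendor specs.

def tokens_per_second(bandwidth_gb_s: float, params_billions: float,
                      bytes_per_param: float = 2.0) -> float:
    """Upper-bound tokens/sec for a single memory-bound decode stream."""
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical 70B-parameter model stored in FP16 (2 bytes per parameter).
MODEL_B = 70

# Placeholder bandwidth figures, for comparison only.
hbm_device = tokens_per_second(bandwidth_gb_s=3_000, params_billions=MODEL_B)    # off-chip HBM
sram_device = tokens_per_second(bandwidth_gb_s=80_000, params_billions=MODEL_B)  # aggregated on-chip SRAM

print(f"HBM-class device:  ~{hbm_device:,.0f} tokens/s per stream")
print(f"SRAM-class device: ~{sram_device:,.0f} tokens/s per stream")
```

Because the weights must be streamed on every decoding step, raising effective memory bandwidth, not raw compute, is what moves this number; that is the design bet behind SRAM-heavy inference chips like the LPU.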
While Nvidia dominates the market for AI training (building the models), Groq posed a threat in the inference market, where speed and efficiency are paramount. The licensing deal allows Nvidia to offer a full-spectrum solution—powerful GPUs for training and Groq's specialized LPU technology for lightning-fast inference.
As a key component of the agreement, Groq founder and CEO Jonathan Ross, who helped invent Google's Tensor Processing Unit (TPU), is joining Nvidia along with other senior executives. The move is widely viewed as a high-value "acqui-hire" that gives Nvidia immediate access to top-tier, specialized engineering talent.
Groq remains independent (formally): Groq will continue to operate as an independent company under new leadership, with its GroqCloud service remaining operational. The non-exclusive licensing structure minimizes the regulatory and antitrust hurdles that a full acquisition would entail.
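For developers, "GroqCloud remaining operational" means existing integrations should keep working. Below is a minimal sketch of a chat-completion call using the groq Python SDK's OpenAI-style interface; the model name is a placeholder, and the exact SDK surface and model catalog may change after the deal.

```python
# Minimal GroqCloud chat-completion call, assuming the groq Python SDK's
# OpenAI-style interface. The model id below is a placeholder; check
# GroqCloud's current model list before running.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # placeholder model id
    messages=[
        {"role": "user", "content": "Summarize the Nvidia-Groq deal in one sentence."}
    ],
)

print(response.choices[0].message.content)
```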
Implications for the AI hardware race
The transaction, which values Groq at a significant premium over its last funding round, signals a dramatic shift in the AI hardware arms race:
Inference is the next battlefield
Nvidia's willingness to spend a reported $20 billion highlights the strategic importance of winning the inference market as AI moves from the research lab to real-time consumer and enterprise applications.
The deal confirms that general-purpose GPUs are not the final answer for every AI workload, validating the need for specialized architectures like Groq's LPU.
New M&A playbook
The use of an IP licensing deal combined with a major personnel transfer allows the dominant market player to absorb critical technology and talent without triggering lengthy antitrust reviews, setting a new precedent for "soft acquisitions."
Nvidia CEO Jensen Huang stated that the company plans to integrate Groq's low-latency processors into the NVIDIA AI factory architecture, extending the platform to serve an even broader range of AI workloads.