Google’s new AI chip is shaking up Nvidia’s dominance: What you need to know

Last week, The Information reported that Meta was in talks to buy billions of dollars' worth of Google's artificial intelligence chips starting in 2027. The report sent Nvidia shares lower as investors worried that the company's decade-long dominance of artificial intelligence computing hardware now faces serious challenges.
Google officially launched Ironwood TPU in early November. A TPU (or tensor processing unit) is an application-specific integrated circuit (ASIC) optimized for the matrix and tensor math that underpins deep learning models. Unlike CPUs, which handle day-to-day computing tasks, or GPUs, which process graphics and power much of machine learning, TPUs are designed specifically to run artificial intelligence systems efficiently.
Ironwood’s debut reflects a broader industry shift: Workloads are moving from large-scale, capital-intensive training runs to cost-sensitive, high-volume inference tasks that underpin everything from chatbots to agent systems. This shift is reshaping the economics of artificial intelligence in favor of hardware like Ironwood that is designed for responsiveness and efficiency rather than brute force training.
Although the TPU ecosystem is gaining momentum, real-world adoption remains limited. South Korean semiconductor giants Samsung and SK Hynix are reportedly expanding their roles as component manufacturing and packaging partners for Google's chips. In October, Anthropic announced plans to access up to 1 million TPUs from Google Cloud in 2026 (not buying them, but effectively renting them) to train and run future generations of Claude models. The company will deploy them as part of its diversified computing strategy, alongside Amazon's Trainium custom ASICs and Nvidia GPUs.
Analysts are describing this moment as Google’s “artificial intelligence comeback.” “With Nvidia unable to meet AI demand, alternatives from hyperscalers like Google and semiconductor companies like AMD are viable in terms of cloud services or on-premises AI infrastructure. It’s just customers looking for ways to realize their AI ambitions and avoid vendor lock-in,” Alvin Nguyen, a senior analyst at Forrester who specializes in semiconductor research, told the Observer.
The shifts point to a broader push by big tech companies to reduce their reliance on Nvidia, whose GPU prices and limited availability have put pressure on cloud providers and artificial intelligence labs. Nvidia still supplies Google with Blackwell Ultra GPUs like the GB300 for its cloud and data center workloads, but Ironwood now offers one of the first reliable paths to greater independence.
Google began developing TPUs in 2013 to handle growing artificial intelligence workloads in its data centers more efficiently than GPUs could. The first chips went live internally in 2015 for inference tasks; TPU v2 expanded the line to training in 2017.
Ironwood now powers Google's Gemini 3 model, which tops benchmark charts for multimodal inference, text generation, and image editing. On X, Salesforce CEO Marc Benioff called Gemini 3's leap "crazy," while OpenAI CEO Sam Altman said it "looks like a great model." Nvidia also praised Google's progress, noting that it is "pleased with Google's success" and will continue to supply chips to the company, though it added that its own GPUs still offer "better performance, versatility and fungibility than ASICs" like those made by Google.
Nvidia’s dominance comes under pressure
Nvidia still controls more than 90% of the AI chip market, but the pressure is growing. Nguyen said Nvidia may lead the next phase of competition in the short term, but long-term leadership may be more fragmented.
“Nvidia has ‘golden handcuffs’: They are the face of artificial intelligence, but they are forced to keep pushing the state of the art in terms of performance,” he said. “Semiconductor processes need to continue to improve, software needs to continue to advance, etc. This allows them to offer high-margin products, and they will be forced to abandon less profitable products/markets. This will give competitors the ability to expand share in abandoned areas.”
Meanwhile, AMD continues to make progress. The company is already well-positioned for inference workloads, updating its hardware at the same annual cadence as Nvidia and delivering performance that is on par with or slightly better than Nvidia's equivalents. Google's latest AI chips also claim performance and scale advantages over Nvidia's current hardware, although Google's slower release cycles could shift the balance over time.
Google may not be replacing Nvidia anytime soon, but it’s forcing the industry to imagine a more diverse future where vertically integrated TPU-Gemini stacks compete head-to-head with the GPU-driven ecosystem that defined the past decade.