Marvell stock pops on report it will help Google with custom AI chips

Marvell Technology Group Ltd. headquarters in Santa Clara, California, on Sept. 6, 2024.

David Paul Morris | Bloomberg | Getty Images

Shares of Marvell Technology gained nearly 6% on Monday amid reports that Google will use the chip design firm for two new chips to power artificial intelligence workloads.

Until now, Google has relied on Marvell rival Broadcom for the design of its in-house Tensor Processing Units, or TPUs. Broadcom shares fell nearly 2% Monday following the report by The Information.

The potential deal between Google and Marvell could include a TPU as well as a memory processing unit, The Information reported on Sunday. Google and Marvell did not immediately reply to requests for comment.

Both Marvell and Broadcom help their customers translate chip designs into silicon, providing back-end support before the processors are sent off to be manufactured at huge fabrication plants by companies like Taiwan Semiconductor Manufacturing Company.

It’s a role that’s fueled the growth of both Marvell and Broadcom as more tech giants design in-house accelerators for AI.

Amid that rush to produce enough silicon to power AI, it’s no surprise to see Google diversify its chip deals beyond Broadcom. The Google-Broadcom partnership remains alive and well, having just been extended through 2031 in an expanded deal announced earlier this month.

Meta last week also made a big deal with Broadcom, committing to deploy 1 gigawatt of its own custom MTIA chips using Broadcom technology.

Marvell stock gained more than 20% in March as the company posted strong fourth-quarter earnings and guidance amid surging demand for AI. Shares have continued to soar in April, up nearly 50% so far.

Nvidia also announced a $2 billion investment in Marvell in March. The deal makes it easier for Nvidia customers to access the application-specific integrated circuits, or ASICs, being made by hyperscalers like Google.

Google was the first hyperscaler to begin developing its own custom ASIC to accelerate AI workloads, releasing its initial TPU in 2015. Giants like Amazon, Meta, Microsoft and OpenAI all followed suit, as Big Tech scrambles for enough compute and lower-cost alternatives to Nvidia’s AI chips.

Google released its latest 7th generation “Ironwood” TPU in November, and may release its next chips at its annual AI conference, Google Cloud Next, later this week.

Originally built for internal workloads, Google’s custom chip has been available to cloud customers since 2018. Meta, Anthropic and Apple all now use TPUs, as Google increasingly encroaches on a market cornered by Nvidia’s graphics processing units.

Memory has been one of several bottlenecks facing AI chipmakers in recent months, with a shortage of supply from memory makers like Micron, SK Hynix and Samsung.

CNBC’s Kristina Partsinevelos contributed to this report.
