SAN FRANCISCO, April 9 (Reuters) - Alphabet's Google
on Wednesday unveiled its seventh-generation artificial
intelligence chip named Ironwood, which the company said is
designed to speed the performance of AI applications.
The Ironwood processor is geared toward the type of data
crunching needed when users query software such as OpenAI's
ChatGPT. In this work, known in the tech industry as
"inference" computing, chips perform rapid calculations to
render answers in a chatbot or generate other types of responses.
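As a rough illustration of what that workload looks like, the sketch below runs a jit-compiled forward pass through a toy network with JAX, Google's own numerical library. The model, layer sizes, and numbers are hypothetical stand-ins for illustration, not anything Google has published.

```python
import jax
import jax.numpy as jnp

def forward(params, x):
    # One hidden layer with a ReLU, then a linear readout.
    h = jax.nn.relu(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "w1": jax.random.normal(k1, (16, 32)) * 0.1,
    "b1": jnp.zeros(32),
    "w2": jax.random.normal(k2, (32, 8)) * 0.1,
    "b2": jnp.zeros(8),
}

# jax.jit compiles the forward pass for whatever accelerator is
# attached (CPU here, a TPU in Google's datacenters). Inference is
# this pass repeated per user query: no gradients, no weight updates.
serve = jax.jit(forward)
print(serve(params, jnp.ones((1, 16))).shape)  # (1, 8)
```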
The search giant's multi-billion-dollar, roughly decade-long
chip effort has produced one of the few viable alternatives to
Nvidia's (NVDA) powerful AI processors.
Google's tensor processing units (TPUs) can only be used by
the company's own engineers or through its cloud service and
have given its internal AI effort an edge over some rivals.
For at least one generation, Google has split its TPU family
into two lines: one tuned for building large AI models from
scratch, and a second that strips out some of those
model-building features to cut the cost of running AI
applications.
Ironwood is the line designed for running AI applications, or
inference, and is built to work in groups of as many as 9,216
chips, said Amin Vahdat, a Google vice president.
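How a job spreads across such a group can be pictured with JAX's device-mesh API. This is a minimal sketch that shards a batch over whatever chips are locally visible; the 9,216 figure comes from the article, and any mapping to real Ironwood hardware is an assumption.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

devices = jax.devices()  # every chip visible to this process
mesh = Mesh(np.array(devices), axis_names=("data",))
sharding = NamedSharding(mesh, PartitionSpec("data"))

# Split a batch of requests across the group: each chip holds and
# serves only its slice, which is how one workload spans many chips.
batch = jnp.zeros((len(devices) * 8, 128))
batch = jax.device_put(batch, sharding)
print(batch.sharding)
```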
The new chip, unveiled at a cloud conference, brings together
functions from the earlier split designs and increases the
available memory, making it better suited for serving AI
applications.
"It's just that the relative importance of inference is
going up significantly," Vahdat said.
The Ironwood chips deliver double the performance per unit of
energy compared with the Trillium chip Google announced last
year, Vahdat said. The company builds and deploys its Gemini AI
models on its own chips.
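Taken at face value, that is a claim about performance per watt. The short calculation below spells out what the 2x ratio implies; only the ratio comes from the article, and the baseline is a made-up normalization.

```python
# Only the 2x ratio is from the article; the baseline of 1.0 is an
# arbitrary normalization for illustration.
trillium_perf_per_watt = 1.0
ironwood_perf_per_watt = 2.0 * trillium_perf_per_watt

# Double the perf/watt means either twice the throughput at the same
# power draw, or the same throughput at half the energy per query.
energy_per_query_ratio = trillium_perf_per_watt / ironwood_perf_per_watt
print(f"Energy per query vs Trillium: {energy_per_query_ratio:.0%}")  # 50%
```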
The company did not disclose which chip manufacturer is
producing the Google design.