Jan 13 (Reuters) - The U.S. government said on Monday it
would issue a new regulation designed to control access to
U.S.-designed artificial intelligence chips and technology by
other countries around the world.
The rule regulates the flow of American AI chips and
technology needed for the most sophisticated AI applications.
Here are more details on the U.S. action:
WHICH CHIPS ARE RESTRICTED?
The rule restricts the export of chips known as graphics
processing units (GPUs), specialized processors originally
created to accelerate graphics rendering.
Although known for their role in gaming, the ability of GPUs
such as those made by U.S.-based industry leader Nvidia (NVDA)
to process different pieces of data simultaneously has made them
valuable for training and running AI models.
OpenAI's ChatGPT, for example, is trained and improved on
tens of thousands of GPUs.
The number of GPUs needed for an AI model depends on how
advanced the GPU is, how much data is being used to train the
model, the size of the model itself and the time the developer
wants to spend training it.
WHAT IS THE U.S. DOING?
To control global access to AI, the U.S. is expanding
restrictions on advanced GPUs needed to build the clusters used
to train advanced AI models.
The limits on GPUs for most countries in the new rule are
set by compute power, to account for differences in individual
chips.
Total processing performance (TPP) is a metric used to
measure the computational power of a chip. Under the regulation,
countries with caps on compute power are restricted to a total
of 790 million TPP through 2027.
The cap translates into the equivalent of nearly 50,000 H100
Nvidia GPUs, according to Divyansh Kaushik, an AI expert at
Beacon Global Strategies, a Washington-based advisory firm.
"Fifty thousand H100s is an enormous amount of power -
enough to fuel cutting-edge research, run entire AI companies or
support the most demanding AI applications on the planet," he
said.
Those could include running a global-scale chatbot
service or managing advanced real-time systems like fraud
detection or personalized recommendations for massive companies
like Amazon (AMZN) or Netflix (NFLX), Kaushik added.
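As a rough check, the two reported figures imply a per-chip rating. The constants below simply restate the 790 million TPP cap and Kaushik's 50,000-H100 equivalence; the division is a back-of-envelope estimate derived from those figures, not an official Nvidia specification:

```python
# Figures as reported: a 790 million TPP country cap through 2027,
# which Kaushik equates to nearly 50,000 Nvidia H100 GPUs.
TPP_CAP = 790_000_000
H100_EQUIVALENT_COUNT = 50_000

# Implied TPP per H100 chip (an approximation from the two
# reported numbers, not a published chip rating):
tpp_per_h100 = TPP_CAP / H100_EQUIVALENT_COUNT
print(f"Implied TPP per H100: {tpp_per_h100:,.0f}")  # 15,800
```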
But the caps do not reflect the true limit on the number of
H100 chips in a country. Companies like Amazon Web Services or
Microsoft's (MSFT) Azure cloud unit that meet the requirements
for special authorizations - also known as "Universal Verified
End User" status - are exempt from the caps.
National authorizations also are available to companies
headquartered in any destination that is not a "country of
concern." Those with national Verified End User status have caps
of roughly 320,000 advanced GPUs over the next two years.
"The country caps are specifically designed to encourage
companies to secure Verified End User status," Kaushik said,
because that status gives U.S. authorities greater visibility
into who is using the chips and helps prevent GPUs from being
smuggled into China.
ARE THERE OTHER EXCEPTIONS TO THE LICENSING?
Yes. Orders of small quantities of GPUs - up to the equivalent
of roughly 1,700 H100 chips - do not count toward the caps and
require only a government notification, not a license.
Most chip orders fall below the limit, especially those
placed by universities, medical institutions, and research
organizations, the U.S. said. This exception is designed to
accelerate low-risk shipments of U.S. chips globally.
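The tiers described above can be sketched in a few lines; this is a simplified illustration keyed to H100-equivalent counts, and the function name and return strings are ours, not language from the rule:

```python
# Simplified illustration of the licensing tiers as reported
# (thresholds in H100-equivalents; not the rule's legal text).
SMALL_ORDER_LIMIT = 1_700  # up to ~1,700 H100-equivalents

def order_treatment(h100_equivalents: int) -> str:
    """Return how an order is treated under the rule, as reported."""
    if h100_equivalents <= SMALL_ORDER_LIMIT:
        return "notification only; does not count toward country cap"
    return "license required; counts toward country cap"

print(order_treatment(500))     # a typical university-sized order
print(order_treatment(10_000))  # a large cluster order
```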
There also are exceptions for GPUs for gaming.
WHICH PLACES CAN GET UNLIMITED AI CHIPS?
Eighteen destinations are exempt from country caps on
advanced GPUs, according to a senior administration official.
Those are Australia, Belgium, Britain, Canada, Denmark,
Finland, France, Germany, Ireland, Italy, Japan, the
Netherlands, New Zealand, Norway, South Korea, Spain, Sweden and
Taiwan plus the United States.
WHAT IS BEING DONE WITH 'MODEL WEIGHTS'?
Another item being controlled by the U.S. is known as "model
weights." AI models are trained to produce meaningful material
by being fed large quantities of data. At the same time,
algorithms evaluate the outputs to improve the model's
performance.
The algorithms adjust numerical parameters that weigh the
results of certain operations more than others to better
complete tasks. Those parameters are model weights. The rule
sets security standards to protect the weights of advanced
"closed-weight," or non-public, models.
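To make the idea concrete, here is a minimal, purely illustrative sketch of a single "weight" being adjusted during training; the toy data and learning rate are assumptions for the example, and real models tune billions of such parameters:

```python
# Toy example: one model weight, nudged by gradient descent to fit
# the relationship y = 2x. The weight is the numerical parameter
# the training algorithm adjusts to improve the model's outputs.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
weight = 0.0          # the model's single parameter ("weight")
learning_rate = 0.05

for _ in range(200):                         # training passes
    for x, y in data:
        error = weight * x - y               # how wrong the output is
        weight -= learning_rate * error * x  # adjust the weight

print(round(weight, 2))  # approaches 2.0
```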
Overall, Kaushik said, the restrictions are aimed at
ensuring the most advanced AI is developed and deployed in
trusted and secure environments.