With Its Latest Rule, the U.S. Tries to Govern AI’s Global Spread

On Monday, President Joe Biden’s administration released one of its most ambitious acts of economic and technological policymaking. In an “interim final rule” whose wonky title—a “Framework for Artificial Intelligence Diffusion”—belies its importance, the Biden administration has sought to reshape the international AI landscape. The rule seeks to set the export and security terms for the AI market that will produce the world’s most powerful technological systems in the coming years.

The rule tightens control over sales of AI chips and turns them into a diplomatic tool. It seeks to enshrine and formalize the use of U.S. AI exports as leverage to extract geopolitical and technological concessions. And it is the Biden administration’s latest attempt to limit Chinese access to the high-end chips that are critical to training advanced AI models.

With a new administration taking office in a week, the rule’s ultimate impact is uncertain. President-elect Donald Trump and his staff will no doubt take a fresh look at how—or whether—to regulate the export of advanced U.S. AI technology. But as they do so, they too will have to reckon with the underlying national security pressures and economic incentives that drove the Biden administration’s development of this policy.

What’s in the Rule?

The rule creates a global licensing regime for the export of advanced AI chips and the parameters that encode a frontier AI system’s core intelligence, known as its model weights. It seeks to encourage AI development in friendly nations and incentivize businesses around the world to adopt U.S. standards. To do so, it creates three tiers of semiconductor and model weight restrictions to govern the sale of AI chips used in data centers.

In tier one, a small group of eighteen allies will maintain essentially unrestricted access to U.S. chips. That group includes the other four countries in the Five Eyes intelligence partnership (Australia, Canada, New Zealand, and the United Kingdom), major partners with key roles in the AI value chain (such as Japan, the Netherlands, South Korea, and Taiwan), and close NATO allies. The vast majority of the world will fall in a middle tier and will face limits on the total computing power they can import, unless that computing power is hosted in trusted and secure environments. In tier three, a group of adversaries will be effectively blocked from importing chips, in essentially no change from the status quo.

Companies can deploy as much computing power as they want in tier one countries. If they are headquartered in tier one countries and wish to expand elsewhere, they can apply for a so-called Universal Validated End User designation. That status gives those companies—think of the cloud-service hyperscalers such as Amazon, Google, and Microsoft—blanket permission to deploy chips to data centers in most other countries, so long as they follow relatively straightforward security requirements. In return, they must keep at least half of their total computing power on U.S. soil, deploy no more than a quarter of it outside tier one countries, and place no more than 7 percent of it in any single tier two country. These restrictions allow U.S. tech companies to continue to make large AI infrastructure investments in most of the world, without requiring case-by-case licensing authorizations, while ensuring that the world’s computing power and its most critical data centers largely remain within the United States and its closest partners.

The vast majority of countries fall into the second tier. This group faces caps on the levels of computing power that can go to any one state: roughly 50,000 advanced AI chips through 2027, although that cap can double if the state reaches an agreement with the United States. Individual companies headquartered in tier two countries—for example, Emirati tech giant G42—can access significantly higher limits if they apply for their own National Validated End User status. That process involves making verifiable security commitments, both physical and cyber, and assurances that they will not use those chips in ways that violate human rights (for example, by deploying them for large-scale surveillance purposes). If a company obtains this status, its chip imports won’t count toward the country’s overall cap—a move designed to create incentives for foreign firms to adopt U.S. AI standards.

The new rule also limits the export and overseas training of proprietary AI model weights above a certain threshold, which no existing model meets. After a year to adjust, companies will have to abide by security standards to host the model weights of powerful AI systems in tier one or tier two countries. But open weight models—models whose weights are published for anyone to download—are not affected by these restrictions, and the thresholds for controlled models automatically adjust upward as open weight models advance. Overall, the requirements for model weights are less burdensome than leaked versions of the regulation suggested they might be.

The rule is a complex and ambitious piece of economic policymaking. But elements of the framework may appear more dramatic at first glance than they truly are. After all, the majority of global AI compute capacity is already concentrated in the United States, and U.S. compute providers already dominate the global market for cloud infrastructure. Nonetheless, the framework will undoubtedly affect future infrastructure decisions, restricting industry’s ability to plan large-scale clusters of computing power in tier two countries and incentivizing the construction of additional data centers within the United States.

How Did We Get Here?

The rule is the Biden administration’s attempt to answer a question that will soon become central to U.S. foreign policy and economic strategy: How widely should the United States share its AI technologies? It is an attempt to thread the needle between two competing priorities.

On the one hand, the rapidly growing global appetite for U.S. AI technology can generate vast revenues for U.S. tech companies and help lure states that have been drifting toward a Chinese economic ecosystem back into a U.S. technological sphere of influence. This consideration creates powerful incentives for U.S. companies to export products and governance standards overseas as rapidly as possible—to “flood the zone,” as the software giant Oracle has put it. And it encourages U.S. officials to greenlight those exports to places like the Gulf states, which are increasingly interested in acquiring advanced U.S. AI technology, or to Southeast Asia, where governments are making major investments in data centers filled with Nvidia chips.

But on the other hand, U.S. policymakers are worried about the proliferation of powerful AI systems that might have critical national security implications. If AI becomes the central strategic technology of the coming years—a technology that might unlock profound economic and military breakthroughs—then U.S. officials want to ensure that the United States and its closest allies retain physical control of the ability to develop and deploy the most capable systems. This imperative cuts in favor of preventing the broad diffusion of cutting-edge AI technology to anyone that isn’t a trusted U.S. ally or partner.

The rule tries to find a compromise between these two considerations. It has its origins in export controls the Biden administration introduced in 2023 that expanded restrictions on the sale of top-end AI chips beyond China to several other countries. These countries included some in the Middle East, such as the UAE and Saudi Arabia, that are hungry for access to U.S. computing power to fuel their AI ambitions but also have close ties with China and other U.S. competitors. U.S. officials were worried that Chinese institutions might be accessing AI chips remotely, circumventing U.S. controls by using cloud computing services overseas or building data centers under shell companies in countries that could still import chips. To mitigate those risks, Washington required countries like the UAE to obtain a license before they could purchase chips. It ultimately granted some of those licenses, but only after months of negotiations that culminated in G42 divesting from Chinese firms, stripping out its Huawei technology, and partnering with Microsoft in exchange for access to Nvidia chips.
