AI Doesn’t Need One-Size-Fits-All Regulation

Of course, errors such as racial discrimination can be extremely costly, especially if made by chatbots that interact with millions of people. Recognizing this risk, one regulatory approach allows new products to be tested only in tightly controlled settings. Innovators can experiment with a limited group of users, and always under the regulator’s watchful eye. This “sandbox” approach helps keep any harms from spilling over to the broader public—Europe’s main concern.

But sandboxes might also limit what can go right. Trials with small, restricted groups cannot capture the benefits of network effects, whereby products become more valuable as more people use them. Nor can they reveal unexpected breakthroughs that come when the “wrong” people adopt a product. (For example, online pornography drove early innovations in web technology.) In short, sandbox trials may keep disasters at bay, but they also risk stifling discovery. They are better than outright bans, but they may still cause innovators to bury too many promising ideas before they can scale.

What, then, are the costs of the laissez-faire American approach? Most obviously, the system can blow up because of rogue products, as happened with subprime mortgage-backed securities before the 2008–09 global financial crisis. Today, one hears similar fears about generative AI and the crypto industry (with FTX’s implosion cited as an early warning signal).

Historically, the US, with its deep fiscal pockets, may have been more willing to take such risks, while the fragmented EU may have been more cautious. But with fiscal space shrinking in the US, a rethink may be in the offing.

Even if the US wants to regulate more, though, can the authorities really pull it off? The American way is to wait until an industry is large enough to matter. But by that point, the industry will have grown powerful enough to shape any rules meant to rein it in. Consider crypto: Flush with cash, armed with lobbyists, and laser-focused on its interests, it has proven adept at swaying politicians—and public opinion—in its favor. The result is invariably underregulation, even when the risks to the public are glaring.

Risk-averse Europe, by contrast, steps in early, when an innovative sector is still small and its voice barely audible. At this stage, it is the incumbents—the banks threatened by crypto, for example—that dominate the debate. Their influence tilts the debate toward excessive caution and heavy-handed rules. The US tends to regulate too little, too late, whereas Europe does too much, too soon. Neither gets the balance quite right.

Even though there is a case for each side moving toward the other, it is worth emphasizing that regulation does not stop at national borders. In fact, the world may benefit from having somewhat different approaches. US chatbots can thrive in a relatively unregulated environment, experimenting and scaling quickly. But once they seek a global presence, they will run into Europe’s stricter standards. With sufficient resources and strong incentives, they will find creative, low-cost ways to comply, and those risk-reduction strategies may eventually flow back into the US, leaving the world with more and safer innovation.

That is the ideal scenario, anyway. Reality is likely to be messier. American companies could cause global harm before European regulators catch up. Europe may continue discouraging innovation before it starts, leaving the world with too little. But perhaps the greatest danger is that regulators on either side of the Atlantic will export their own rule book, forcing the other to fall in line. The world may be best served if US and European regulators continue to see regulation differently.

Raghuram G. Rajan is the Katherine Dusak Miller Distinguished Service Professor of Finance at Chicago Booth. Copyright 2025 by Project Syndicate.
