Will the EU AI Act Shape Global Regulation?

Abstract. The European Union’s AI Act is one of the first attempts to regulate artificial intelligence technologies. Given that previous European digital regulations, such as the GDPR, have influenced laws all over the world, can we expect to see the AI Act as a global standard? This article argues that there are good reasons to believe otherwise.

1. Introduction

Last year, the European Union (EU) adopted its first piece of legislation on artificial intelligence: Regulation (EU) 2024/1689, better known as the EU AI Act.[1] This piece of legislation was widely discussed not just in Europe but around the world, as other jurisdictions are also trying to come to terms with the opportunities and challenges that AI technologies pose across the most varied dimensions of social life. Because the AI Act is one of the first pieces of AI-specific legislation[2] to be adopted anywhere, commentators have wondered whether and how the decisions made by the EU legislators will influence regulatory efforts elsewhere. In this article, I outline some of the mechanisms that might create such an influence, arguing that their effect is somewhat limited in the current circumstances. Consequently, the EU AI Act is likely to be influential, but not to the same extent as the GDPR, as it faces stiffer competition as an international regulatory template.

Before moving to that discussion, we must consider what is distinctive about the EU approach to AI regulation. In a previous article,[3] Nicolas Petit and I discussed some key features of the EU AI Act’s approach to regulation. It is, at the same time, a technical regulation and a rights-based instrument, as it uses provisions drawn from the EU’s approach to product safety to address risks to health, safety, and public values such as the protection of fundamental rights. Furthermore, the EU AI Act relies on a knowledge-intensive approach to regulation, which directs its commands towards actors related to two kinds of technical objects—the AI system and the general-purpose AI model. It also adopts a top-down framing of risks that sorts those technologies into pre-determined regulatory frameworks instead of allowing a case-by-case assessment of risks,[4] setting a light-handed approach for most AI technologies and tight requirements for narrowly defined classes of technologies perceived as particularly risky. Those design choices, and their implications for the 113 articles and 13 annexes of the regulation, make the EU AI Act a unique piece of legislation.

Some initial signs suggest that this uniqueness is influencing lawmakers elsewhere. The state of Colorado, in the United States, has adopted an AI Act described by US lawyers as influenced by the EU’s regulatory approach,[5] while the AI bill currently under consideration by Brazilian legislators adopts a framework inspired by the EU AI Act and seeks to reinforce its fundamental rights angle.[6] Yet, those purported influences only tell part of the story. China, a major player in the global landscape of AI research and application, has largely followed its own regulatory path.[7] The United States federal government’s policy on AI has shifted quite a bit in the past few years, but a recent presidential action by Donald Trump signals an interest in pushing back against EU regulatory pathways perceived as “unfair”.[8] Against this background, some recent developments in AI regulation, such as South Korea’s recently adopted law[9] or Japan’s new bill,[10] have focused on innovation-related requirements and avoided the stricter requirements seen in parts of the EU AI Act. Even so, those pieces of legislation are sometimes described as influenced by the EU approach to regulation. Is that the case?

This article cannot offer a definitive answer to the question, which will demand empirical evidence once the facts are more consolidated. What I propose, instead, is that previous experiences with the diffusion of EU digital regulation point to some mechanisms that might contribute to the spread of key traits of the EU’s AI Act. After outlining those mechanisms, I will present the theoretical and empirical arguments that hint at a limited degree of influence for the EU in the AI regulatory space. Finally, I conclude the article with some brief remarks on what that limited influence entails for AI governance in the EU and abroad.

2. Potential diffusion mechanisms for the AI Act

When talking about the global influence of EU digital regulation, one often hears mentions of the so-called Brussels Effect. In its original sense, this term refers to a theory proposed by legal scholar Anu Bradford to explain why businesses often comply with EU legal requirements even when selling their products in jurisdictions that adopt more permissive regulation.[11] The theory identifies five relevant elements:

  1. The EU market size must be such that entry into that market is desirable for the business;
  2. The EU must have regulatory capacity to design and impose its requirements in the EU single market;
  3. The EU’s requirements must be particularly stringent, such that a business complying with EU requirements will likely meet the standards of other jurisdictions;
  4. The target for EU regulation must be inelastic, so that the degree of regulation will not significantly alter demand for the product; and, finally,
  5. The regulated object must be non-divisible for some reason (economic or otherwise), as otherwise a business could simply create separate product lines for the EU and elsewhere.

If those factors are present, the Brussels Effect theory claims that one is likely to see businesses voluntarily complying with the EU’s requirements even beyond its borders. Moreover, the laws of other jurisdictions are likely to change towards convergence with EU standards, in no small part due to the lobbying activities of those businesses that operate both in the EU and in non-EU jurisdictions. The result of these tendencies would be a de facto and de jure diffusion of regulatory standards beyond the EU’s borders.

Scholars and other commentators have pointed to numerous examples of successful diffusion of EU standards when those elements are present. The EU’s laws on data protection[12] and pharmaceuticals,[13] for instance, have been identified as templates for laws around the world. As a result, the term Brussels Effect is nowadays used not just to refer to the effect outlined above, but as a shorthand for all kinds of influences that the EU can exercise over lawmaking activities elsewhere.

Beyond the narrow form of the Brussels Effect described above, the literature identifies various other mechanisms through which laws can spread from one jurisdiction to another. Some of them refer to the direct application of a law beyond the geographic limits of its jurisdiction of origin: for example, Article 2(1)(c) of the EU AI Act states that it applies to providers and deployers of AI systems whose output is used in the EU, regardless of where those market actors are based. Other mechanisms require that the EU attempt to influence the behaviour of other international actors, as it arguably did during the negotiations of the Council of Europe’s Framework Convention on AI.[14] Additionally, the EU can attract others towards its own model through more positive mechanisms, for example by training regulators from non-EU authorities or supporting the development of local institutions. Those and other mechanisms are at play in the context of AI regulation, and it remains to be seen whether and how they supplement—or undermine—the narrow sense of the Brussels Effect. One way or the other, the story of the diffusion of the EU approach to AI regulation is unlikely to be reducible to a single form of influence.

3. Will regulatory convergence happen in practice?

Still, the question that animates this article is not one of disentangling the various mechanisms defined above. Instead, we are dealing with the question of whether the aggregate of those forms of influence is likely to significantly affect the path of AI regulation around the world. The current state of the art in policy diffusion research allows us to venture some hypotheses. Testing them in practice will require the passage of time, as more legal instruments emerge around the world. Nonetheless, I posit that identifying those potential sources of influence will help us assess the EU’s influence—or lack thereof—as it unfolds in the global regulatory landscape.

When we look at the contexts in which AI technologies are being developed and used, some factors might help the EU with its declared ambition of being a “global leader” in AI regulation.[15] Researchers such as Charlotte Siegmann and Markus Anderljung have suggested that, at least in some cases, the requirements for the Brussels Effect outlined above are present in the markets for AI.[16] Furthermore, the EU has acted to address some of the perceived gaps, for example by recruiting legal and technical experts into its AI Office to reinforce its regulatory capacity. This same body has been tasked with developing international cooperation,[17] thus contributing to the bilateral and multilateral mechanisms that form the basis of the broader sense of the Brussels Effect identified above. Those factors notwithstanding, there are other aspects of the current landscape that mitigate the EU’s potential influence at a global level.

We can distinguish between two kinds of influence-reducing factors. The first is friction: existing features of the regulatory landscape reduce the effectiveness of some of the proposed mechanisms. For example, Anca Radu and I have proposed that the EU’s limited powers in the regulation of fundamental rights mean that the EU AI Act is not necessarily more stringent than the regulations other jurisdictions might adopt.[18] If that is indeed the case, compliance with the EU AI Act would not be sufficient to meet the legal requirements imposed elsewhere. Another source of friction is that many countries might lack the technical and legal expertise needed to implement the AI Act’s provisions on high-risk AI systems and general-purpose AI models, preventing replication even where it would otherwise be desirable.

In addition to friction, the diffusion of EU standards for AI regulation might be undermined by active resistance. That resistance might come from various sources. The same businesses that, under certain conditions, can promote the Brussels Effect through lobbying might end up wielding their influence against the EU’s requirements if those are perceived as too cumbersome or as avoidable through product differentiation. With the recent turn towards digital sovereignty in policy discourses in Europe and countries in the majority world,[19] legislators in jurisdictions that the EU is seeking to influence might deliberately avoid following the EU’s lead, preferring instead to develop local solutions or follow the lead of other allies. Each of those factors means that convergence towards the EU AI Act is not inevitable, and legislators around the world retain various possibilities for divergence. It remains to be seen whether the factors in favour of or against convergence will prevail in practice.

4. Concluding remarks

The question of whether the EU AI Act will become a global standard for regulation has consequences all around the world. On the one hand, convergence in AI law might contribute to international dialogue on cross-border regulatory challenges, especially at a moment when tensions between the US, the EU, and China make international treaties particularly unlikely. On the other hand, these same tensions seem to strengthen the forces pushing towards divergence amongst AI regulations around the world. As such, I would hazard that we are likely to see more approaches to AI regulation emerging in the next few years.

Nonetheless, non-adoption of the EU regulatory standard does not necessarily mean that AI regulation will evolve in incompatible directions. Not only will regulators around the world face many shared challenges, but they also rely on shared conceptual tools. For example, Margot Kaminski has mapped out the shared “policy baggage” that is associated with the risk regulation approaches used in different AI regulatory frameworks.[20] There is also some alignment in the technical framing of issues, which can be seen both in the development of international technical standards and in the OECD definition of an AI system, which found its way into the AI Act’s text. While these shared conceptual underpinnings do not preclude jurisdictions from adopting radically different requirements for AI regulation, they at least ensure some degree of mutual comprehensibility between regulatory approaches.

This shared background is not necessarily an unmitigated good. At its best, it can feed regulatory interoperability and allow legal systems to learn from the good and bad experiences of others. At its worst, it might lead to the establishment of regulatory monocultures, in which multiple forms of regulation rely on the same concepts and might therefore be collectively vulnerable to future social and technological change. Neither of these futures is a given, but understanding the factors that lead to convergence or divergence in AI regulation can contribute to mapping the potential implications of different paths to regulation.

**

Marco Almada

Citation: Marco Almada, Will the EU AI Act Shape Global Regulation? Network Law Review. Winter 2025.

References

About the author

Marco Almada is a postdoctoral researcher in cyber policy at the Department of Law, University of Luxembourg. His research focuses on the regulatory architectures of artificial intelligence in the European Union, with special attention to issues of cybersecurity and the global influence of European legal instruments. He holds a PhD in Law from the European University Institute, as well as undergraduate and master’s degrees in both law and computer science, and has professional experience in both disciplines. In his spare time, he edits AI, Law, and Otter Things, a newsletter on law, technology, and semi-aquatic mustelids.