Perspectives of Global AI Governance after the EU AI Act

The EU AI Act: next steps

Just a few days ago, on December 9th, EU institutions reached a provisional agreement on the EU AI Act. The legislative process started in 2021 with the Commission’s proposal. Since then, the European Parliament and the Council have expressed their views on the regulation, and several trilogue sessions have taken place to decide the precise form and obligations the new EU regulation will impose[1]. As the next step, the AI Act awaits formal approval from the European Parliament and the Council, after which it will be published in the Official Journal of the EU (entering into force 20 days later). The AI Act becomes enforceable after two years, with certain provisions (such as the prohibitions) taking effect after 6 months, and the rules on General Purpose AI after 12 months. The Commission also proposed an AI Pact, encouraging developers to voluntarily commit to meeting key AI Act obligations ahead of the Act’s legal deadlines[2].

Still a Brussels Effect?

The hopes for the Act’s impact in the EU and the world are not modest. In a recent press release, the EU Council refers to the AI Act as “the first legislative proposal of its kind in the world” and claims that “it can set a global standard for AI regulation in other jurisdictions” (the bold font was used in the original document). The President of the European Commission commented on the provisional agreement by saying that “[t]he AI Act transposes European values to a new era”[3]. During the negotiations, this optimism was at times eclipsed, as two opposing regulatory principles clashed: the economic, seeking to avoid overregulation; and the social/political, seeking to protect EU values and citizens from AI’s dangers. For example, among other possible reasons, the fear of repelling European and international AI investment may explain the reluctance of the larger EU countries, namely Germany, France, and Italy, in the last phase of the trilogue negotiations[4]. Unlike personal data protection, AI will require stronger coordination between international actors. A GDPR-type impact is unlikely with the Act (something we analysed elsewhere[5]), and conflicting regulatory forces may be driving other blocs’ regulatory actions as well; even so, the EU may have succeeded in providing an example of international alignment to all the other blocs.

The OECD approach and the political example provided by the EU

A Brussels effect stricto sensu may well not apply. However, the AI Act’s impact on global AI regulation should not be dismissed. It is still too early to judge, but international organizations, such as the OECD, and other blocs have made important moves, especially in recent months, signalling the political will to promote an international approach to AI regulation.

In 2019, the OECD published its Recommendation of the Council on Artificial Intelligence[6], in which the international organization defined crucial concepts for AI regulation such as “AI system” and “AI system lifecycle”. This Recommendation was signed by EU countries, the UK, and the US (among others). In early 2023, the European Parliament delivered its version of the AI Act, heavily inspired by the OECD’s 2019 definitions (reflecting the importance the EU attaches to alignment with the international taxonomy), but also anticipating what a future OECD revision of the concept would include[7] (since 2019, AI has developed and assumed entirely new capabilities). In November 2023, in the middle of stalled trilogue negotiations, the OECD amended its Recommendation, enlarging what should be considered AI (now including generative AI in its scope)[8] and matching many points of the AI Act’s definition of AI. Brussels effect or not, the point here is that while the EU (via its AI Act) may not have supplied the principles or objectives of the international approach to AI regulation, it was the first bloc to enter a dialogue, adapt its approach, and seemingly earn the support of an important international organization, the OECD, for its regulatory approach. The EU may not have dictated the definition of AI, but it set an example for other countries, especially the US and the UK, to align with international standards such as those of the OECD, to which they adhered. On October 30th, US President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence[9]. Although the Order aligns with the 2019 OECD principles (i.e., the old version), it reinforces the OECD approach. The EU may have started what can be called an EU-US effect.

Although a positive development, this uncoordinated convergence around OECD principles seems hardly enough to regulate AI. First, let us not forget an important actor: China, a country that issued specific AI legislation before the EU and the US, does not belong to the OECD, is not a signatory of the OECD Recommendation, and therefore is untouched by the “EU example”. Fundamental rules of the international regulation of AI will thus not necessarily be shaped by the EU’s political principles or by the OECD’s. This must not be forgotten, since AI’s potential dangers require cooperation. Second, there is more to agree upon than the definition of AI alone, and most of these issues lie beyond the scope of any OECD guidance. There is little doubt that AI will require more dialogue and common alignment in preparation for future regulation. In recent years, AI has shown its potential not only for wealth creation (which justifies each bloc’s anticipation of, and fight for influence in, the international regulatory scene) but also for social and political damage (especially concerning what the “Bletchley Declaration” of November 2023 named “frontier AI”[10], namely its “potential intentional misuse or unintended issues of control relating to alignment with human intent”[11]). The latter is crucial: from the first perspective, non-coordination and competition between jurisdictions may be justifiable; from the second, they are not.

Beyond dialogue and political goodwill

The blocs’ failure to agree on regulatory trade-offs ended up confounding the fear of overregulating with the fear of being left behind in the discussion of AI, its safety, and its dangers. This has led to the uncoordinated approach of international jurisdictions we see today. Although adherence to OECD principles, together with the outputs of the G7 Hiroshima AI Process[12] (October 30th, 2023) and the Bletchley Park AI Safety Summit[13] (November 1st and 2nd, 2023), seems a promising beginning, there is still much room to improve mutual coordination and political compromise on AI, alongside the likely need to create institutional support to coordinate or govern AI regulations internationally.

Just as with other relevant topics, such as nuclear weapons, civil aviation, or climate change, intergovernmental coordination on AI will require a new institutional architecture beyond non-enforceable commitments and goodwill. This architecture could take many forms. First, it may be guided by an institution similar to the Intergovernmental Panel on Climate Change, i.e., a global government-backed panel of experts on AI. Such an organization could be responsible, for example, for establishing a scientific position on important AI policy issues such as its opportunities, challenges, and risks (Ho et al., 2023). This form of institutional architecture has already been supported by European Commission President Ursula von der Leyen[14] and UK Prime Minister Rishi Sunak[15]. Second, the new institutional architecture for the global governance of AI could take the form of an intergovernmental and multi-stakeholder organization, as with civil aviation and the International Civil Aviation Organization. Such an entity would be responsible for designing norms and standards, assisting in their implementation around the world, monitoring compliance, and certifying jurisdictions (rather than companies[16]). Other forms of international governance have been proposed as well, such as a Global Observatory for AI[17] or an International Agency for AI[18], each with a different set of powers, structures, and responsibilities. The next years will show whether any such institutional setup is feasible and effective.

Ho, L., Barnhart, J., Trager, R., Bengio, Y., Brundage, M., Carnegie, A., . . . Snidal, D. (2023, July 11). International Institutions for Advanced AI. arXiv.

[1] There were many topics the EU institutions discussed that justified the long negotiation phase. For more on these topics and the structure of the EU AI Act, please check the latest PromethEUs publication on Artificial Intelligence: Opportunities, Risks and Regulation.



[4]






[10] Which, according to the Bletchley Declaration, should be “understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models”.


[12] Which resulted in the publication of the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Advanced AI Systems.

[13] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.




[17]

