AI: Regulatory Framework and Public Debate

The Role of AI and the EU’s first steps towards regulation

Artificial Intelligence is expected to affect our society in many ways, as its uses are not limited to industrial applications but also extend to the public sector and people’s daily lives. For example, AI can be an extremely useful tool in the fields of education, professional training and information, and can help tackle major health challenges or security and cybersecurity threats.

Despite the many benefits that the increasing use of AI is expected to produce, this type of technology also raises some concerns, especially with regard to human rights and possible threats to people’s safety. One of the major causes of these concerns can be traced back to the lack of a well-defined regulatory framework setting clear principles for the development and use of AI. To start addressing this problem, in 2018 the European Commission presented the Communication “AI for Europe”, which can be considered the starting point of the EU’s proactive approach towards AI. The new approach was based on three main pillars: placing the EU at the cutting edge of technological developments; preparing the EU for the socio-economic changes brought about by AI; and ensuring an appropriate ethical and legal framework.

After the inauguration of this new European approach towards AI, several initiatives followed. These included the Coordinated Plan on AI of 7 December 2018 and, before that, the appointment of experts to the High-Level Expert Group on Artificial Intelligence (AI HLEG). In April 2019, the AI HLEG presented the “Ethics Guidelines for Trustworthy AI”, according to which trustworthy AI should be: (a) lawful, complying with all applicable laws and regulations; (b) ethical, ensuring adherence to ethical principles and values; and (c) robust, from both a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Subsequently, in February 2020, von der Leyen’s Commission presented its first proposals in this field. These included two Communications (Shaping Europe’s Digital Future and A European Strategy for Data), a white paper (Artificial Intelligence: a European Approach to Excellence and Trust) and two reports (the B2G Expert Group Report: Towards a European Strategy on business-to-government data sharing for the public interest, and the Commission Report on Safety and Liability Implications of AI, the Internet of Things and Robotics).

The AI Package and the AI Act Proposal (2021): a global first

More recently, in April 2021, the European Commission presented the “AI Package”, made up of three documents: a Communication on Fostering a European Approach to Artificial Intelligence, the 2021 update to the Coordinated Plan with Member States, and a proposal for an AI Regulation laying down harmonized rules for the EU (the Artificial Intelligence Act). The core element of this package is of course the proposed Artificial Intelligence Act, which has been described as “the first ever legal framework on AI”. The AI Act aims to create an environment of trust among European citizens with regard to AI by imposing specific obligations on the relevant actors. More specifically, following a risk-based approach, the AI Act establishes a list of prohibited AI practices and differentiates between AI that creates (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. The AI Act also specifies that providers of high-risk AI systems are directly responsible for ensuring their systems’ compliance with the Regulation and, at the same time, also imposes obligations on users of high-risk AI, such as using the systems in accordance with the instructions for use, making sure input data is relevant, and monitoring the operation of high-risk AI systems on the basis of those instructions. As regards the governance of the AI industry and technology, the AI Act proposal establishes a European Artificial Intelligence Board (the ‘Board’), made up of representatives of the Member States and the Commission.

The public debate: finding the right balance

However, the AI Act proposal is interesting not only as the first attempt to regulate the economically crucial field of AI, but also as the source of an increasingly intense debate among stakeholders at both the European and the international level. On the one hand, stakeholders and experts have praised the Commission for its efforts to lay down a harmonized framework for AI; on the other, it has been pointed out that the EU will not be able to truly establish itself as a global leader in AI regulation unless it first achieves a leading role in the development of AI technologies. The proposal is expected to answer numerous complex questions at EU level, such as how to protect people from potentially harmful uses of AI; how to define AI and risky AI without being too broad or too narrow; how to choose between a self-assessment and a third-party assessment model; and how to protect consumers and clearly define their rights when harm is caused by AI technologies. A widespread opinion that has emerged in the public debate is that, while it is important that the institutions keep thoroughly analyzing the AI Act proposal to ensure fundamental rights are effectively safeguarded, the EU should also work consistently towards better coordination in the field of AI regulation. A key element of this coordination would be closer collaboration between the private and the public sector. In addition, an enhanced dialogue and exchange between the EU institutions and national authorities should be promoted before provisions at different levels end up pursuing different goals or even contradicting each other. Consequently, the possibility of a multilateral governance approach has emerged from the public debate, and it seems to be a viable option that should be further analyzed.

Given these premises, the main questions remain the same: what is the best way to address these rising concerns? And how can we balance what seem to be contrasting but equally important needs concerning the regulation of AI?
