Perspectives on the European Artificial Intelligence Act

Artificial Intelligence (AI) has become an increasingly relevant topic, given its implications for society and the risks it poses.

In a quote attributed to Elon Musk, co-founder of OpenAI: “The pace of progress in artificial intelligence […] is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”[1]

According to a survey organized by consumer protection organizations in nine EU countries, 41% of respondents reported a bad experience with the information provided for loan offers based on automated decisions[2].

The AI Act

The European Commission was the first major regulator to propose a law on Artificial Intelligence, on 21 April 2021[3]. The Act follows the release of a White Paper on AI in 2020, developed after a public consultation on the matter. The proposal will become law once the Council and the European Parliament agree on a common version of the document. The latest revised version of the AI Act – put forward by the Czech Presidency of the Council – was proposed on 19 October[4].

The Artificial Intelligence Act is a proposed regulation by the European Commission that sets a common regulatory and legal framework for AI, integrating a diverse range of sectors and forms of technology. The regulation applies to both providers and users of AI systems in a professional capacity.

The document states the need for “addressing the opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behavior of certain AI systems, to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules.”[5] Its objective is to ensure that AI systems placed on the market and in use are “safe and respect existing law on fundamental rights and Union values”; to “ensure legal certainty to facilitate investment and innovation in AI”; to enhance the “effective enforcement of existing law on fundamental rights”; and to “facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation”.

With these goals in mind, the regulation follows a risk-based approach, differentiating AI uses according to the risk they can pose “to the health and safety or fundamental rights of natural persons” into (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk.

While low-risk applications are not regulated at all – with Member States precluded, via maximum harmonization, from regulating them further – medium- and high-risk systems would require a compulsory conformity self-assessment before being placed on the market.

This proposal gives particular focus to biometric identification and social scoring, proposing their prohibition.

The Act proposes the creation of a European Artificial Intelligence Board to support the Commission with expertise on the topic.

The Act includes specific attention to measures in support of innovation according to which “national competent authorities may establish AI regulatory sandboxes for the development, training, testing and validation of innovative AI systems […] cooperating with other relevant authorities”.

The Ongoing Discussion

While the Act aims to help set global standards – reducing risks and promoting potential benefits – other countries across the globe, namely Brazil and Canada, are following with proposed regulations of their own.

Since it was first published, the Act has received more than 300 suggestions from different industries and sectors, including automotive, pharmaceutical, and technology. More than 30 revisions have been made to date, contributing to the ongoing discussion.

The Commission has been advised by both the Future of Life Institute[6] and institutions at the University of Cambridge[7] to account for general-purpose systems; to consider impacts on society at large; to empower the European Artificial Intelligence Board; to allow dynamic legislative development; and to maximize participation from developers and industry.

The Act has also been criticized for not always accurately recognizing the wrongs and harms associated with different kinds of AI systems, nor adequately allocating responsibility for them. In a paper by Nathalie A. Smuha, Ahmed-Rengers, et al.[8], the authors argue that the proposal falls short of what legally trustworthy AI would require. The Vodafone Group[9] also raised concerns about the “lack of focus given to incentives to develop Trustworthy AI”.

Institutions such as the European Digital SME Alliance and the University of Copenhagen have criticized the Act for risking overburdening SMEs.

The Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE)[10] also warned that the original regulation is “too broad and vague to effectively boost the impactful development and use of ‘AI made in Europe’, and to strengthen, rather than weaken, the global competitiveness of the EU economy.” The confederation also observed that the Act “encompasses a vast array of instruments and mechanisms, each of which is rather modestly funded in relation to the ambitious goals of the plan. The plan lacks coordination between these mechanisms. It also lacks large-scale signature initiatives that can address key challenges within the fragmented European AI ecosystem.”

The cost burden on European industry is among the most prominent arguments raised. The Center for Data Innovation[11] published a report claiming the AI Act would cost €31 billion over the next five years and reduce AI investment by almost 20%. CEPS[12] and others have estimated a much smaller impact.

Institutions such as the Center for Data Innovation have also warned the Commission that regulators should not apply precautionary principles to AI, as it is too early “to know how AI will develop, how organizations will use it, how consumers will respond, what problems might occur, and how different stakeholders will respond to those problems”[13].

In feedback on the original AI Act, OpenAI[14], the company behind the AI language model GPT-3, explains how the same AI application can be both high risk and low risk, depending on how it is used.

Google[15] identified the need to “hold providers and deployers to feasible standards”, claiming that some requirements would be “difficult or impossible to comply with in real-world situations”, including for “providers of general-purpose AI systems to meet under most circumstances”.

IBM[16] argued that “existing bodies are best placed to cover high-risk AI in their sectors, having the necessary sectoral expertise, operational relationships and track-record with relevant stakeholders”.

Facebook[17] also warned that the Commission’s “approach can threaten to limit AI innovation across Europe, and in some cases, may even prove to be counterproductive [as] it could be interpreted to reach technologies that are either not high-risk AI systems, or are not even AI systems at all”.

Regulating AI directly on risk-related metrics can lead to neglect of other aspects that may imply further risk in the medium run and that those standards alone do not capture. In addition, overregulating applications that are less prone to error can unreasonably slow Europe’s rate of innovation. The right balance is needed.

Looking beyond risk levels alone, Huawei[18] proposed a multi-actor governance framework, arguing that a “one-size-fit-all” approach “cannot meet the requirements for all of the participants”. This segmentation would allow an agent-based level of regulation, accounting for the perspectives of the different agents involved.

Source: Huawei Feedback to the European Commission

Overall, AI regulation has advanced significantly over the past few years with the European Commission’s AI Act. Collaboration between different fields of expertise allows all parties involved to combine their perspectives toward the policy outcomes most beneficial to society.

“AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.” – AI Act
