The European AI Act still leaves many questions unanswered

The Commission’s proposal was eagerly awaited and is the first attempt to bring order to a complex and sensitive issue. But there are many knots to unravel.

by Vincenzo Tiani

On 21 April, the European Commission published its proposal for a regulation on artificial intelligence (AI). The text was long awaited: AI featured prominently in President Ursula von der Leyen’s programme from the outset, and the proposal is the fruit of years of preparatory work. Following the publication of a White Paper on AI in February last year, which opened a consultation period, the Commission received 1,215 external contributions, of which 352 came from companies, 152 from universities and academics, and 160 from civil society. Even earlier, the Commission had mandated 52 international experts to prepare a list of ethical and policy recommendations on how to regulate artificial intelligence. According to the expert group, “trustworthy AI should be: legal (respecting all applicable laws and regulations), ethical (respecting ethical principles and values), robust (both from a technical perspective while taking into account its social environment)”.

The proposal for a regulation (henceforth the AI Act), which consists of 85 articles plus annexes, is enormous if not revolutionary in scope. It is the first legislative text that aims to regulate a subject as vast as AI. Regardless of the merits of its provisions, the Commission’s effort is commendable and will undoubtedly facilitate an international dialogue on the issue, with the United States and China in particular.

The need for a common framework of rules

Contrary to critics who think that the EU is adding an extra, unnecessary burden on companies rushing to innovate in this field, the Commission wants to provide a shared set of rules that, on the one hand, make it easier to launch new products and services on the European market and, on the other, increase users’ confidence in the technology. It is precisely this lack of confidence that could damage businesses once these products and services become mainstream. For start-ups and SMEs, support is provided both from an organisational and an economic point of view.

Similar in approach to the GDPR, the General Data Protection Regulation, the new set of rules is not prescriptive in the strict sense. The underlying principle is one of accountability and self-assessment. Except in rare cases, companies do not have to undergo prior external scrutiny, but they must be able to demonstrate that the way they have developed a certain type of AI does not infringe fundamental rights or pose a risk to individuals.

Four levels of risk

[Image: AI risk levels – European Commission]

The AI Act sets out four levels of risk. Unacceptable risk (Article 5) covers cases where AI is considered a threat to the security and fundamental rights of individuals. Examples include biometric recognition by the police (with the many exceptions we shall see), social scoring (like the system used in China), and systems that subliminally modify a person’s behaviour to such an extent as to cause them physical or psychological harm.

High risk covers cases where AI is used in critical infrastructures such as transport; in access to education or professional training; in product safety components (e.g. the application of AI in robot-assisted surgery); in employment and worker management (such as CV-screening software for recruitment); in essential public and private services; in uses that interfere with fundamental rights; at borders; and in the legal sphere.

Before such systems can be placed on the market, providers will have to demonstrate that they have implemented “adequate risk assessment and mitigation systems; high quality of the data sets feeding the system to minimise risk and discriminatory outcomes; logging of activities to ensure traceability of results; detailed documentation providing all necessary information about the system and its purpose so that authorities can assess compliance; clear and adequate user information; appropriate human supervision measures to minimise risk; high level of robustness, security and accuracy”.

Limited risk only requires specific transparency obligations for the AI provider (Article 52). For instance, users of a chatbot should be informed that they are not interacting with a human being, just as they should know when they are viewing a video generated with deepfakes. Finally, minimal risk covers those cases where AI can be used freely, such as AI-enabled video games or spam filters.

Biometric recognition: too many exceptions

The adoption of real-time remote biometric identification systems (RTRBIS) in public spaces by law enforcement is prohibited, unless strictly necessary to search for victims of crime or missing children; to prevent a specific threat, such as a terrorist attack; or to detect, locate, identify or prosecute a criminal or suspect for certain offences punishable by a sentence of at least three years (those listed in Article 2 of Framework Decision 2002/584/JHA, including terrorism, trafficking in human beings and child pornography, but also fraud, forgery and bribery).
It is true that RTRBIS may be used only after a careful assessment of the seriousness of the situation, of the greater risk to safety if it is not used, and of the risk to fundamental rights. Furthermore, it should only be used “with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations”. It must also receive prior authorisation from a judge, except in cases of emergency, where authorisation may be granted afterwards.

It will be up to individual Member States to decide whether to authorise these forms of biometric identification, how, and for which crimes. This room for manoeuvre, although normal when it comes to national security, leaves room for some concerns. The first is that not all EU countries guarantee the same level of democracy and independence of the judiciary. The second is that police forces and municipalities have already been using this technology without even complying with the rules of the GDPR, which have been in place for almost three years. Lastly, in order to use these systems the cameras must already be installed, and this could lead to a green light justified by the promise that they will only be activated when strictly necessary. Changes in government and high cybersecurity risks make it hard to believe that nothing will go wrong.

It is no coincidence that the European Data Protection Supervisor, in announcing a more in-depth review of the AI Act, immediately expressed regret at the absence of a ban on the use of biometric recognition in public spaces, as called for by his predecessor, Giovanni Buttarelli.

Some safeguards in place

This new system will be overseen by a new European Artificial Intelligence Board, composed of one representative of each national authority, the European Data Protection Supervisor and the Commission. The board will facilitate cooperation and the exchange of ideas and help the Commission draft opinions and guidelines. The national authorities will have inspection powers and, in specific cases, will also be able to access the source code of the AI system. Each year, the Commission will verify that the national authorities in charge have the human and financial resources necessary for the difficult task assigned to them. All high-risk AI applications will then be collected in a publicly accessible database kept by the Commission, including the details of the company providing each AI system.

Sanctions

Following the model of the GDPR, penalties are graduated according to the type of infringement, and their amount may be a fixed sum or a percentage of global turnover, whichever is higher. The maximum penalty provided for in Article 71 is €30 million or 6% of global turnover, in cases where the provisions on prohibited uses of AI (Article 5) have not been complied with or where, for high-risk AI, the provisions on how data are used for training, validation and testing (Article 10) are not met.
For other infringements, the penalty is capped at €20 million or 4% of turnover, or €10 million or 2% of turnover if the offender failed to provide the required information to the competent authorities. In setting the penalty, the authorities will have to take into account the nature, gravity and duration of the infringement, the size and market share of the company, and whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement.
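
To make the arithmetic concrete, here is a minimal sketch of how these ceilings combine, assuming, as under the GDPR, that the higher of the fixed sum and the turnover percentage applies; the function and tier names are illustrative shorthand, not taken from the regulation:

    # Illustrative sketch of the fine ceilings in Article 71 of the draft AI Act.
    # Assumes the GDPR-style rule that the higher of the fixed amount and the
    # turnover percentage applies. Tier names are our own shorthand.
    TIERS = {
        "prohibited_use_or_data": (30_000_000, 0.06),  # Articles 5 and 10
        "other_infringement": (20_000_000, 0.04),
        "incorrect_information": (10_000_000, 0.02),   # misleading the authorities
    }

    def max_fine(tier: str, global_turnover_eur: float) -> float:
        fixed_cap, pct_cap = TIERS[tier]
        return max(fixed_cap, pct_cap * global_turnover_eur)

    # Example: a company with €2 billion in global turnover that breaches
    # Article 5 faces a ceiling of max(€30m, 6% of €2bn) = €120 million.
    print(max_fine("prohibited_use_or_data", 2_000_000_000))  # 120000000.0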

There is one point that deserves attention. Article 71(7) states that it will be up to each State to decide whether, and how high, penalties will be for public bodies and independent authorities in the event of an infringement. Without due control, this risks giving carte blanche to the adoption of artificial intelligence systems by public administrations. Even if the data protection authorities do not lose oversight of these entities, this choice by the European legislator is questionable. By contrast, Article 72 provides that the European Data Protection Supervisor is to sanction the European institutions in the event of a breach, with fines of up to €500,000 in the most serious cases.

This proposal does not deal with the issue of civil liability, which will be the subject of a separate text to be presented by the Commission early next year. That will be another fundamental step on the path towards trustworthy AI.

Scope

The AI Act will apply to anyone placing artificial intelligence systems on the EU market, even when a system is produced outside the EU but its output is used within it. It will not apply to exclusively military uses, or to international organisations and public authorities where an international agreement for judicial cooperation is in place (Article 2). Finally, it will apply to high-risk AI systems already on the market only if, after the entry into force of the law, those systems undergo a significant change in their design or purpose.

There are obligations not only for the provider but also for the importer and the distributor. This means they will have to be able to fully understand the documentation that the provider must supply before the AI system can be placed on the market, which will obviously require adequate human and economic resources.

The next steps

The text will be at the heart of the European debate for a long time, probably at least two years, and once the final text has been approved it will take another two years to become operational. Given the huge interests at stake, which affect all industrial and economic sectors and not just big tech, it will not be an easy challenge. The fact that this is a rule regulating something that is, by its very nature, in continuous evolution makes the challenge even more difficult.

Article originally published on Wired Italia.
