by Vincenzo Tiani and Gianclaudio Malgieri
On 21 April 2021, the European Commission published its proposal for a regulation on artificial intelligence, the AI Act. Seven months later, the Council of the European Union, under the Slovenian presidency, presented a compromise text introducing some noteworthy changes and anticipating further discussion points. In sum, the proposed changes have lights and shadows, but also some gaps.
First of all, the definition of AI systems (which was criticised as too broad in the European Commission proposal) has been narrowed. The Council proposes to exclude from the definition all the “more traditional software systems” which cannot “achieve a given set of human defined objectives by learning, reasoning or modelling”. In addition, the Council version excludes all “general purpose AI systems” from the scope of the AIA. In other words, if a general AI system (such as many developed by big tech companies) has merely the potential to perform risky practices, but is not (yet) deployed in those risky contexts, it should not be per se subject to the AIA.
On the one hand, these amendments could bring more clarity and fewer burdens to AI developers; on the other hand, they could prove to be a slippery slope towards protection gaps and fewer design duties.
A risk-based approach
Since AI can be applied in every field, it cannot be regulated on a sector-specific basis. After the stakeholder consultations following the first AI White Paper, published in February 2020, the Commission therefore opted for a risk-based approach. The AI Act provides for four degrees of risk: unacceptable risk, high risk, limited risk, and minimal risk.