by Vincenzo Tiani
On 9 December 2021, the European Commission published a proposal for a directive aimed at improving the working conditions of people working through digital labour platforms. To that end, it promotes transparency, fairness and accountability in the algorithmic management underpinning the operation of platforms, with particular attention to the processing of personal data.
The proposal's key innovation is a legal presumption: a worker whose work is controlled by the platform is to be considered an employee, regardless of what is contractually agreed between the parties, whenever at least two of the five criteria of Article 4 are met:
(a) effectively determining, or setting upper limits for the level of remuneration;
(b) requiring the person performing platform work to respect specific binding rules with regard to appearance, conduct towards the recipient of the service or performance of the work;
(c) supervising the performance of work or verifying the quality of the results of the work including by electronic means;
(d) effectively restricting the freedom, including through sanctions, to organise one’s work, in particular the discretion to choose one’s working hours or periods of absence, to accept or to refuse tasks or to use subcontractors or substitutes;
(e) effectively restricting the possibility to build a client base or to perform work for any third party.
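Purely as an illustration, the presumption in Article 4 works like a simple threshold test over the five criteria above: meet any two, and the presumption of employment applies. A minimal sketch (the criterion labels are hypothetical shorthand for points (a)–(e), not terms from the directive):

```python
# Hypothetical labels for the five Article 4 criteria, (a) through (e).
ARTICLE_4_CRITERIA = frozenset({
    "determines_remuneration",      # (a) sets or caps pay
    "imposes_binding_rules",        # (b) rules on appearance/conduct/performance
    "supervises_performance",       # (c) supervision, incl. electronic means
    "restricts_work_organisation",  # (d) hours, absences, task refusal, substitutes
    "restricts_client_base",        # (e) building a client base / third-party work
})

def presumption_applies(criteria_met: set[str], threshold: int = 2) -> bool:
    """The legal presumption applies when at least `threshold` of the
    five Article 4 criteria are fulfilled."""
    return len(criteria_met & ARTICLE_4_CRITERIA) >= threshold

# A platform that caps pay and sanctions the refusal of tasks meets (a) and (d):
presumption_applies({"determines_remuneration", "restricts_work_organisation"})  # True
# Supervision alone, point (c), is not enough:
presumption_applies({"supervises_performance"})  # False
```

The presumption is rebuttable in the proposal, so a `True` result here only shifts the burden of proof to the platform; it does not settle the worker's status by itself.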
A risk-based approach
Because AI can be applied in every field, it cannot be regulated on a sector-specific basis. After the stakeholder consultations that followed the White Paper on AI published in February 2020, the Commission therefore opted for a risk-based approach. The AI Act accordingly provides for four levels of risk: unacceptable, high, limited, and minimal.