by Vincenzo Tiani
On 25 May, fittingly the anniversary of the GDPR, the General Data Protection Regulation, the Italian Court of Cassation published an important judgment on the subject. The ruling clarified a fundamental principle: where an algorithm automatically profiles us, with the possible consequence of limiting our rights, the consent we give is valid only if we have been told how that algorithm works. This follows directly from the fact that consent, in order to be valid, must be ‘freely and specifically expressed with reference to a clearly identified processing operation’.
The judgment concerns events that took place in 2016, before the entry into force of the GDPR, when the old text of the Privacy Code (Legislative Decree 196 of 2003) was in force. In this case, an association used a platform capable of “processing reputational profiles concerning natural and legal persons in order to counteract phenomena based on the creation of artificial or untrue profiles and to calculate, instead, in an impartial manner, the so-called ‘reputational rating’ of the subjects surveyed, so as to allow any third parties to verify the real credibility”. This data processing was deemed unlawful by the Italian Data Protection Authority (the Garante), which ordered it to be blocked. The association appealed against this decision before the Court of Rome, which partially overturned the Garante’s decision.
In the Court of Rome’s view, it was legitimate for the association to be able to offer this rating service, given also the express consent of those concerned to its use. Since, according to the Court, there was no specific regulatory framework for “reputational rating”, similar to that existing for the “company rating” provided for in the public contracts code, the system could not be considered unlawful.
The Garante took an entirely different view, stating that ‘the unknowability of the algorithm used to assign the rating score, with the consequent lack of the necessary requirement of transparency of the system’ did not allow the person concerned to give informed consent. The data subject cannot give valid consent when he or she does not have sufficient information to establish which data processing he or she is accepting.
The Italian Supreme Court’s decision
The Court of Cassation ruled in favour of the Garante: to be valid, consent must relate to a ‘clearly identified’ data processing operation, and for that to be the case the association should have adequately explained how the algorithm worked and which data it would use to produce its result. Interestingly, the Court of Rome, in its judgment, did not deny that the algorithm was opaque but resolved the problem by simply relying on the market to determine its reliability. According to the Court of Rome, then, if an algorithm is badly constructed and makes an incorrect assessment of a person’s reputation, with the consequence that he or she does not get a job or a mortgage, the interested party should not turn to a judge but hope that the market will make the algorithm obsolete and favour better ones. The Supreme Court rejected this interpretation: the question was not one of competition in the market between different systems but whether the consent given was valid.
More transparency in algorithm decisions
The GDPR recognises a general right of the individual not to be subject to a decision taken in an automated way, for instance by an algorithm or an artificial intelligence system, that has a legal effect on or otherwise significantly affects his or her life. This could be the case of an algorithm that selects CVs automatically on the basis of keywords alone, or a system for assessing the creditworthiness of a bank loan applicant. Article 22 GDPR provides that one can consent to the use of such systems, but only where the company safeguards the rights and legitimate interests of the individual, including his or her right to obtain human intervention, to express his or her point of view and to contest the decision.
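To make the CV example concrete, here is a minimal, purely hypothetical sketch (not drawn from the case) of the kind of automated screening Article 22 is aimed at: a filter that rejects any CV lacking certain keywords, with no human ever reviewing the outcome. The keyword list and function names are illustrative assumptions, not any real employer’s system.

```python
# Hypothetical illustration of fully automated CV screening.
# An opaque rule like this, applied with no human intervention,
# is exactly the kind of decision Article 22 GDPR regulates.

REQUIRED_KEYWORDS = {"python", "sql"}  # assumed hiring criteria, for illustration only


def screen_cv(cv_text: str) -> bool:
    """Return True (pass) only if the CV mentions every required keyword."""
    words = set(cv_text.lower().split())
    return REQUIRED_KEYWORDS.issubset(words)


print(screen_cv("Experienced in python and sql analytics"))  # True
print(screen_cv("Experienced in java backend work"))         # False
```

A candidate rejected by such a filter may never learn which keywords decided the outcome, which is precisely why the GDPR requires transparency, human intervention and a right to contest the decision.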
At the same time, transparency of algorithms has become indispensable in the new legislative proposals under discussion in Brussels. Under the Digital Services Act, platforms must be able to justify their decisions and in some cases must allow external experts and researchers to scrutinise their algorithms. The same applies in the proposal for a European regulation on artificial intelligence. Only with greater transparency will it be possible to govern the potential negative effects of the increasingly widespread automation of decisions affecting citizens.
The case will now return to the Court of Rome, in a different composition from the one that heard it, for a new examination.
Originally published on Wired Italia
License Creative Commons Attribution, Non Commercial, Non Derivs 3.0