EU – Cyber liability, automated decisions and other festive updates from the CJEU

The CJEU issued six important judgments in December spanning a range of issues, from liability for cyber attacks and the scope of the rules on automated decision-making, to the circumstances in which a fine can be imposed under the GDPR.

We consider the role of the CJEU in shaping the EU’s data protection laws and the practical implications of these judgments below.

The CJEU decisions in context

The GDPR has a very broad scope. Its impact is felt by almost every business active in the EU – from sole traders to technology behemoths – and it also regulates public authorities and private citizens.

The breadth of the law means that the GDPR does not, and cannot, set out prescriptive rules. Instead, it uses a principle-based framework that allows the broad and flexible application of data protection laws but raises real questions about what the law means in any given situation. It is therefore unsurprising to see a large number of cases ascend from the national courts to the CJEU.

In line with the rights to privacy and data protection in the EU Charter, the CJEU places great weight on protecting the rights of individuals and has not shied away from decisions that are controversial or create significant practical difficulties for businesses. The CJEU has delivered nearly 30 judgments on the GDPR this year alone and still has around 50 pending cases before it. These decisions address a wide range of issues, some of which are so fundamental that, taken together, they risk creating a “GDPR 2.0”.

We consider how the CJEU’s December decisions contribute to this further development of the EU data protection framework.

Liability for cyber attacks – Fear alone can justify compensation

The first case is NAZP (C-340/21) which arose after a Bulgarian public authority suffered a cyber attack resulting in the disclosure of personal data of about 6 million individuals.

Several hundred of them brought claims for compensation, including the appellant in this case who sought compensation for non-material damage. In particular, she claimed compensation for “fear that her personal data, having been published without her consent, might be misused in the future, or that she herself might be blackmailed, assaulted or even kidnapped”.

The CJEU made the following findings:

  • Fear of misuse can justify compensation: The CJEU concluded that there is no minimum threshold for non-material damage, a conclusion also reached in VX (C-456/22). In this context, “fear experienced by a data subject with regard to a possible misuse of his or her personal data by third parties as a result of an infringement of that regulation is capable, in itself, of constituting ‘non-material damage’”. This risks opening the floodgates to compensation claims following a cyber attack, although the CJEU cautions that any such fear must be well founded and demonstrated.
  • “Loss of control” as a form of loss: In this respect, the CJEU notes that the concept of damage, for which compensation is available under the GDPR, includes “the mere ‘loss of control’ over their own data, as a result of an infringement of that regulation, even if there had been no misuse of the data in question to the detriment of those data subjects” (recital 85).
  • No strict liability for cyber attacks: On a more positive note for companies that have fallen victim to a cyber attack, the CJEU confirms that a controller or processor will only be liable if they failed to implement appropriate technical and organisational measures – the mere fact that an attack succeeded does not mean the GDPR was infringed or that the measures employed were inappropriate. Instead, the correct test is first to assess the risks a breach poses to personal data and second to assess whether the measures taken by the controller or processor are appropriate to those risks (sketched in code below). The controller or processor has “some discretion” in assessing what measures should be implemented.
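To make that two-step test concrete, the following minimal Python sketch walks through it. Everything in it – the Processing record, the measure names and the risk tiers – is our own illustrative shorthand rather than anything drawn from the judgment; in reality, what counts as “appropriate” is a contextual judgment, not a lookup table.

```python
from dataclasses import dataclass, field

@dataclass
class Processing:
    """Hypothetical model of a processing operation and its safeguards."""
    risk_level: int  # assessed risk to data subjects, 1 (low) to 5 (high)
    measures: set[str] = field(default_factory=set)

# Illustrative only: which measures are "appropriate" to a given risk level
# is a contextual judgment, not a fixed lookup table like this one.
MEASURES_EXPECTED_AT_RISK = {
    1: {"access_controls"},
    3: {"access_controls", "encryption_at_rest"},
    5: {"access_controls", "encryption_at_rest", "intrusion_detection"},
}

def liable_for_breach(p: Processing, breach_occurred: bool) -> bool:
    """C-340/21 in outline: a successful attack does not itself establish an
    infringement. Step 1: assess the risk posed by the processing.
    Step 2: ask whether the measures taken were appropriate to that risk."""
    if not breach_occurred:
        return False
    tier = min(k for k in MEASURES_EXPECTED_AT_RISK if k >= p.risk_level)
    expected = MEASURES_EXPECTED_AT_RISK[tier]
    # Liability arises only where safeguards fall short of what the risk demanded.
    return not expected.issubset(p.measures)

# A controller with risk-appropriate safeguards is not liable merely because
# the attackers succeeded anyway.
hardened = Processing(risk_level=5,
                      measures={"access_controls", "encryption_at_rest",
                                "intrusion_detection"})
assert liable_for_breach(hardened, breach_occurred=True) is False
```

The design point mirrors the CJEU’s reasoning: the breach flag alone never produces liability; only a shortfall in safeguards measured against the assessed risk does.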
Automated decision-making – Broad control over robo-decisions

The next decision is Schufa (C-634/21) which relates to a credit reference agency that provides credit scores to its commercial partners. The judgment relates to an individual who was refused a loan based on their credit score. Importantly, the reference was made in a case where the referring court had already established that a poor credit score would “in almost all cases” lead to a refusal to provide a loan (which is not always the way in which these scores are used in practice).

The following points arise from the CJEU’s decision:

  • Broad scope of automated decision-making: Even though Schufa does not itself decide if a loan will be provided, the CJEU decided that its provision of credit scores constitutes automated decision-making for the purposes of Article 22 of the GDPR. It still involves a “decision” that has legal effect and/or significantly affects a data subject. In coming to this conclusion, the CJEU considered that “decision” should be interpreted broadly and that in this particular case the credit score “plays a determining role” in the ultimate decision of the bank.
  • Need to consider downstream use: In practice, this means that controllers need to consider the “downstream” use of the information they provide in order to determine whether they are carrying out automated decision-making. For example, if in fact the credit scores are not determinative of whether a loan is granted (but are simply one factor in a larger assessment), the provision of those scores may fall outside Article 22 (see the sketch after this list).
  • Prohibition in principle of automated decision-making, subject to exemptions: The fact that the provision of a credit score is an automated decision means it is only lawful if one of the exceptions in Article 22(2) applies – i.e. (1) explicit consent, (2) performance of or entry into a contract or (3) authorisation under EU or Member State law. In relation to this:
    • The CJEU also indicated that any national law allowing automated decision-making must meet strict criteria, such as mandating appropriate mathematical or statistical procedures, measures to minimise the risk of errors and safeguards against discriminatory effects. There must also be a right to human intervention.
    • The CJEU did not consider the other two options in any detail. However, to the extent the credit score is generated dynamically on request, it seems arguable this could be necessary for entering into a contract – i.e. the “decision” (the generation of the credit score) is only made where necessary for a specific loan application (or similar) by the individual. Similarly, it might be possible to obtain explicit consent from the individual for the specific credit score to be generated by Schufa and provided to the lender. However, this may not extend to pre-calculated scores and/or scores relying on service providers’ databases of personal data collected prior to any scoring request.
  • “Right to an explanation”: The CJEU’s rationale is to avoid a lacuna in the law whereby the individual would be unable to effectively understand and challenge the credit score allocated to them. Accordingly, the CJEU places great emphasis on the individual’s right to information about automated decision-making (including the logic underpinning that decision-making) under Article 15(1)(h). This is a topic the CJEU will return to in Dun & Bradstreet Austria (C-203/22) where the claimant has asked for detailed and extensive information on the logic and algorithms used for credit scoring.
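The interaction between the scope question and the exemptions can be expressed as a short decision procedure. The Python sketch below is a hypothetical rendering of the CJEU’s reasoning; the function names, flags and Exemption labels are our own shorthand, not anything taken from the judgment or the text of the GDPR.

```python
from enum import Enum, auto

class Exemption(Enum):
    """The Article 22(2) gateways, labelled with our own shorthand."""
    CONTRACT = auto()          # Art. 22(2)(a): entering into / performing a contract
    MEMBER_STATE_LAW = auto()  # Art. 22(2)(b): authorised by EU or Member State law
    EXPLICIT_CONSENT = auto()  # Art. 22(2)(c): the data subject's explicit consent

def is_article_22_decision(solely_automated: bool,
                           determining_role_downstream: bool,
                           legal_or_significant_effect: bool) -> bool:
    """C-634/21 in outline: generating a score can itself be a 'decision'
    where the processing is automated and the score plays a determining
    role in an outcome with legal or similarly significant effect."""
    return (solely_automated
            and determining_role_downstream
            and legal_or_significant_effect)

def is_permitted(exemptions_available: set[Exemption]) -> bool:
    """If Article 22 is engaged, the decision-making is prohibited in
    principle and lawful only where an exemption applies."""
    return bool(exemptions_available)

# Schufa-style facts: the score dictates the lending outcome "in almost all
# cases", so Article 22 is engaged ...
assert is_article_22_decision(True, True, True)
# ... whereas a score that is merely one factor in a wider assessment may
# fall outside Article 22 altogether.
assert not is_article_22_decision(True, False, True)
```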
Storage of personal data

A related, but equally important, decision also involves Schufa (see joined cases Schufa C-26/22 & C-64/22). Information in the public German insolvency register is removed after six months. However, Schufa, in its role as credit reference provider, keeps that information for three years. Importantly, that three-year period is set out in a code of conduct approved by the German supervisory authorities.

Schufa retained information from the insolvency register for three years on the basis of the ‘legitimate interests’ test (Article 6(1)(f)). That involves a three-stage test: (a) Is there a legitimate interest (noting this encompasses a wide range of interests)? (b) Is the processing necessary to achieve that interest or can it reasonably be achieved by other less intrusive means? (c) Do the interests of the individual take precedence over that interest?
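For readers who find a worked example helpful, the cascade can be written out as a short Python sketch. The parameter names are our own shorthand, and the real balancing exercise is of course qualitative rather than boolean.

```python
def legitimate_interests_available(interest_is_legitimate: bool,
                                   processing_is_necessary: bool,
                                   data_subject_interests_prevail: bool) -> bool:
    """The Article 6(1)(f) cascade: each stage must be passed in turn, and
    a failure at any stage is fatal to reliance on legitimate interests."""
    return (interest_is_legitimate                    # (a) a legitimate interest exists
            and processing_is_necessary               # (b) no less intrusive means would do
            and not data_subject_interests_prevail)   # (c) balancing favours the controller

# On the CJEU's analysis, three-year retention fails at stage (c): the
# "serious interference" with Charter rights tips the balance against Schufa.
assert not legitimate_interests_available(True, True, True)
```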

The CJEU made the following findings:

  • The legitimate interests test does not justify keeping information for three years: This involved the balancing of different interests:
    • On the one hand, there are good reasons to provide accurate credit scores, which is reflected in parts of EU law, such as the obligations in Directives 2008/48 and 2014/17 to make a thorough assessment of an applicant’s creditworthiness in relation to credit and mortgage applications. This protects not only the applicant but also the financial system as a whole. Added to that, the retention of this information was in accordance with a code of conduct approved by the German supervisory authorities.
    • However, the three-year retention period was longer than the statutory six-month period applied to the public insolvency register, and retaining the information for that longer period could compromise individuals’ ability to re-enter economic life after a period of insolvency. Moreover, whereas the discharge from remaining debts recorded in that register is intended to benefit the individual, Schufa drew a negative inference from this information when assessing solvency. The CJEU therefore concluded that keeping this information for three years would be a “serious interference” with data subjects’ rights under Articles 7 and 8 of the EU Charter and so was unlawful.

Reconciling these interests involves a highly contextual balancing exercise with strong justifications on both sides. It is therefore interesting to see the CJEU deploy the EU Charter as a deus ex machina to conclude that the retention was unlawful.

  • Parallel storage of information for six months could also be an infringement: Schufa collects information from the insolvency register in advance of any credit applications being made. It does this so it can provide a credit score promptly on request. If it had to specifically source that information from the insolvency register each time a credit score was needed, that would slow this process down and/or risk no answer being provided if the registers were unavailable. However, the CJEU considers that even Schufa’s parallel storage of public information for six months is an interference with Articles 7 and 8 of the EU Charter. In this case, the CJEU made no decision as to the lawfulness of the storage, leaving it up to the national court to decide.
IT development and broad scope of controllership and responsibility for processors

The decision in NVSC (C-683/21) considers the question of when an entity is a controller. Following the Covid outbreak, a public body in Lithuania (NVSC) commissioned a third-party IT company (ITSS) to create a mobile application for the registration and monitoring of individuals who were exposed to Covid.

That arrangement appears to have been on fairly loose terms. An individual from NVSC emailed ITSS to ask it to develop the App, and an NDA was signed identifying each party as a controller. ITSS went ahead, developing and launching the App (e.g. via the Google Play Store), seemingly without obtaining NVSC’s consent. The App was launched in April 2020. However, in May, NVSC asked ITSS to remove any reference to it from the App and, in June, it terminated the project.

The CJEU made the following findings:

  • NVSC was controller of data collected via the ITSS App: The CJEU concluded that NVSC was controller in respect of data collected via the App as it had exerted influence, for its own purposes, over the determination of the purpose and means of the processing in question. This was despite the fact that NVSC didn’t develop the App, didn’t perform any of the processing and didn’t consent to the App being launched. This is yet another example of controllership arising where there is only a relatively tenuous link with the underlying processing.
  • Use of synthetic data is not subject to the GDPR: The CJEU also confirmed that the use of synthetic data for testing does not fall under the GDPR where that synthetic data is not personal data. This conclusion is not surprising.
  • Controllers are generally responsible for processors: Finally, the CJEU concludes that a controller is generally responsible for the actions of a processor acting on its behalf. (This was relevant because ITSS claimed it was a processor and so was not liable on that basis.) However, a controller will not be liable where the processor acts in a manner that is incompatible with its instructions or where it “cannot reasonably be considered that [the] controllers consented to such processing”. In practical terms, this suggests it is sensible to tightly scope the instructions given to any processor (see the sketch below).
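A minimal sketch of that practical point, assuming a hypothetical Instruction record (nothing here comes from the judgment itself): a controller’s exposure tracks the scope of its documented instructions, so narrower instructions mean narrower responsibility for what the processor does.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instruction:
    """Hypothetical record of a documented controller instruction."""
    purpose: str
    permitted_operations: frozenset[str]

def controller_answers_for(act: str, purpose: str,
                           instructions: list[Instruction]) -> bool:
    """C-683/21 in outline: the controller is generally responsible for its
    processor, but not where the processor acts incompatibly with, or
    clearly outside, the instructions it was given."""
    return any(purpose == i.purpose and act in i.permitted_operations
               for i in instructions)

# Tightly scoped: the processor may only test the App with synthetic data.
scoped = [Instruction(purpose="app_testing",
                      permitted_operations=frozenset({"process_synthetic_data"}))]
# Launching the App and collecting real personal data falls outside that scope,
# so on this model the controller would not answer for it.
assert not controller_answers_for("collect_personal_data", "app_launch", scoped)
```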
Limitations on administrative fines

The final festive decision is Deutsche Wohnen (C-807/21). In this case, the controller was fined €14.5m for retaining personal data in breach of the storage limitation principle.

The CJEU made the following findings:

  • Only intentional or negligent conduct justifies a fine: The CJEU concluded that an administrative fine may only be imposed where it is established that the controller intentionally or negligently committed an infringement. This is a very significant decision that will require supervisory authorities to consider closely whether the “controller could not be unaware of the infringing nature of its conduct, whether or not it is aware that it is infringing the provisions of the GDPR”. The CJEU reached the same conclusion in NVSC (C-683/21) (discussed above).
  • Infringement need not be by directors: The CJEU also concluded that controllers are not just liable for infringements committed by their directors or managers but also for those committed by any other person acting in the course of the business of those legal persons and on their behalf.
  • “Undertaking” concept limited to fine calculation: Finally, the CJEU suggests that the use of the “undertaking” concept in the GDPR should be limited to the calculation of the size of any administrative fine and is not, for example, relevant in whether an infringement has been committed in the first place.
UK post-Brexit

None of these decisions is binding in the UK post-Brexit. Having said that, the UK courts can still “take account” of CJEU judgments. The UK GDPR remains very similar to the EU GDPR, so new CJEU judgments continue to be highly persuasive in the UK. For example, in R (Delo) v Information Commissioner [2023] EWCA Civ 1141, the Court of Appeal relied on the CJEU’s post-Brexit judgment in BE (C-132/21) to conclude that the Information Commissioner is not obliged to reach a definitive decision on the merits of every complaint made to it.

Looking at the decisions by the CJEU set out above, it seems likely that many would be relevant to the interpretation of the UK GDPR, save that:

  • Cyber compensation: The UK courts have suggested that there is a “threshold of seriousness” that any compensation claim must cross (Lloyd v Google [2021] UKSC 50). It is therefore unlikely that mere loss of control or generalised fear would trigger compensation under the UK GDPR.
  • AI & automated decision-making: The UK is in the process of passing the Data Protection and Digital Information Bill. This is likely to reduce the scope of Article 22 of the UK GDPR, and thus the restrictions on automated decision-making, significantly.
  • Margin of discretion: Similarly, it is likely the UK courts would afford controllers a greater margin of discretion (particularly given that the EU Charter no longer applies in the UK). For example, it is not clear they would consider the storage of credit information for three years to be excessive without a more concrete assessment of the wider social and economic impact.