The EU General Data Protection Regulation (GDPR), which came into force on 25 May 2018, gives individuals rights to information about the use of their own data in fully automated decision-making systems (ADM) that operate without human intervention. However, the GDPR's individual rights to information cannot uncover systematic deficiencies or the discrimination of entire groups of people. This is shown by an analysis drawn up by the legal scholars Wolfgang Schulz and Stephan Dreyer from the Hans-Bredow-Institut / Alexander von Humboldt Institute for Internet and Society (HIIG), commissioned by the Bertelsmann Stiftung.
Bank lending, the pre-selection of job applications, police work – algorithms evaluate people and make decisions about them, but so far almost without social control. It is not known which algorithmic decision-making systems are used for which purposes and to what effect. One example of fully automated ADM systems with no human participation in decision-making is the pre-selection of job applications. In some companies, software programmes screen CVs and sort out applicants without an employee ever having looked at their documents. The GDPR ensures that an unsuccessful applicant can find out which of their data was decisive for the negative decision. However, for most systems in which people are involved in the decision-making process, the ADM-specific obligations of the GDPR to provide information and explanations do not apply.
The GDPR Is an Important but Not Sufficient Step towards Regulating ADM
Despite this major shortcoming, the analysis shows that the GDPR represents an improvement for individuals with respect to algorithmic decision-making. The comprehensibility of automated decisions is significantly strengthened by basic information obligations and information rights. In addition, the strict documentation requirements have led to a higher awareness of data protection issues among data-processing actors.
However, the GDPR primarily regulates the protection of individuals. “The new regulation is blind to the incorrect evaluation or systematic discrimination of entire groups through automated decisions,” explains Wolfgang Schulz. Individual rights to information and rights of defence, as anchored in data protection law, are not sufficient to expose and stop discrimination against certain groups. Take the example of job applications: it is good if individuals can understand how the decision to reject them came about. However, it remains unclear whether certain characteristics such as gender or place of residence unfairly reduce the chances of entire groups.
More about the Study
Software programmes increasingly evaluate people and decide about them. From a credit application to a job application to a personality profile for targeted advertisements, citizens are the object of automated decision-making (ADM) systems. However, the algorithms underlying these systems and the training data required for the decisions entail potential risks for the individual, for entire social subgroups and even for society as a whole. In addition to the general requirements for the processing of personal data, the General Data Protection Regulation, which took effect on 25 May 2018, contains special provisions for these ADM systems.
In the report „Was bringt die Datenschutzgrundverordnung für automatisierte Entscheidungssysteme? Potenziale und Grenzen der Absicherung individueller, gruppenbezogener und gesellschaftlicher Interessen [What Are the Benefits of the General Data Protection Regulation for Automated Decision-Making Systems? Potentials and Limits of the Protection of Individual, Group and Social Interests]“, the two legal experts, commissioned by the Bertelsmann Stiftung, examine whether and to what extent the GDPR and the new German Federal Data Protection Act (BDSG), which entered into force at the same time, are in a position to reduce these risks.
The analysis, carried out as part of a cooperation between the Alexander von Humboldt Institute for Internet and Society (HIIG) and the Hans-Bredow-Institut for Media Research, makes one thing clear: the general prohibition of purely automated ADM systems has a very limited scope and is subject to broad statutory exceptions. Above all through the consent of the person concerned, automated decisions will continue to prevail in practice. For all “exceptionally” authorised ADM systems, the GDPR provides legal rules that can partially safeguard the individual interests of users. Those responsible for the data processing of ADM systems must comply with information and transparency obligations towards users regarding the use of an ADM system as well as the basic mechanisms of data processing and of the decision.
However, the scope and depth of these information obligations are limited. Following an automated decision, those affected are entitled to information about the use of an ADM system and the basic mechanisms of data processing and of the decision. They also have the right to call in a human decision-maker. These rights help to review and, if necessary, correct an automated decision. However, they do not entitle the parties concerned or independent third parties to access the system itself.
System- and process-related requirements of the GDPR for the design and use of ADM systems are suitable for enabling the responsible parties to identify risks for individuals (and in some cases indirectly also for groups of people) at an early stage and to secure minimum quality standards. These include in particular privacy by design, compulsory data protection impact assessments and internal data protection management, as well as the appointment of a data protection officer. These regulatory instruments have the potential to ensure a high level of reflection on data protection issues among those responsible and thus to safeguard at least individual rights and freedoms.
Thus, the GDPR can create transparency, verifiability and a high level of reflection and help to secure individual rights to some extent. However, the GDPR offers hardly any starting points for aims related to groups or society, such as non-discrimination or participation.
Against the backdrop of these results, the report proposes supplementary approaches to oversight within and outside the scope of the GDPR. To this end, certain starting points within the regulation could be strengthened, which would above all have a preventive effect:
- Data protection authorities could also establish data protection impact assessments for ADM systems that are exempt from the GDPR's prohibition.
- In addition, within the framework of the GDPR, the role of the regulatory authorities in practice could be steered more strongly towards informing the public and raising awareness of social injustices.
- Further control instruments beyond the GDPR should also be discussed, with a view to how supra-individual and societal aims can actually be safeguarded.
- The verifiability of ADM systems could be improved by approaches that increase the explainability of the systems, or that guarantee verification by external third parties through extended transparency requirements (e.g. in the form of in camera procedures).
- For the correction of ADM systems already in use, control elements from competition and consumer protection law would be conceivable, which would enable faster legal enforcement.
- The diversity of ADM systems could, where necessary, be supported by adopting instruments from antitrust regulation; media regulation can also contribute to pluralism among ADM systems insofar as they control the flow of information and influence the formation of opinion.
Such alternative approaches to oversight could protect the supra-individual interests that are not covered by the GDPR, which focuses primarily on safeguarding individual interests.