Treatment of Algorithmic Communication in German Constitutional Law
Every app, every website, every search query uses algorithms. In his PhD project, Felix Krupar analyses the legal classification of algorithmic communication.
Every app, every program and every website uses algorithmic operations, which determine its behaviour, appearance and functions. Despite this omnipresence, however, the legal treatment of the results of algorithmic calculations is characterized by uncertainty. Although an algorithm is no more than a sequence of arithmetic operations, so that its result should be comparable to a statement of fact, some algorithmic results seem to come close to a statement of opinion. How relevant and complex this problem is can be seen in two cases: the judgment of the Federal Court of Justice (BGH) on Google's autocomplete function and the judgments on the rating portal Yelp. In both cases, the parties argued about the output of algorithmic calculations, which the plaintiffs claimed violated their personality rights. The judgments, however, diverge in their reasoning as well as in their results. As the case law lacks a clear line and legal scholarship has paid little attention to the topic, it is all the more important to take a closer look at this issue. This dissertation approaches the problem from a constitutional perspective.
The broad aim of this work is to develop a consistent handling of the various forms of algorithmic communication from a legal perspective.
An algorithm is described as “[…] a clear set of rules of action for solving a problem or a number of problems.” An algorithm therefore has to produce the same result over and over again as long as the starting point and the input are the same. Thus, it is logical that every output of an algorithm is a fact. There is (yet) no algorithm that is able to think like a human being and thus form a “personal opinion”. Every result of an algorithm can be reconstructed like the solution to an arithmetic problem, e.g. 1 + 1 = 2. The best example is an algorithm for identifying prime numbers: its only task is to extract all prime numbers from a set of numbers. The result, and therefore the output of the algorithm, is always a fact. A prime number is a prime number and as such not subject to evaluation by others.
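To make the reproducibility argument concrete, the following minimal sketch (an illustration, not taken from the dissertation) implements the prime-number example in Python: for the same input it always yields the same output, and that output can be verified as a fact.

```python
# Illustrative sketch: a deterministic algorithm whose output is a verifiable
# fact. For the same input it always produces the same result, just like
# 1 + 1 = 2.

def is_prime(n: int) -> bool:
    """Check whether n is a prime number."""
    if n < 2:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True

def extract_primes(numbers: list[int]) -> list[int]:
    """Extract all prime numbers from a set of numbers."""
    return [n for n in numbers if is_prime(n)]

# Whether 7 is prime is not a matter of opinion; the output can be checked.
print(extract_primes(list(range(1, 20))))  # [2, 3, 5, 7, 11, 13, 17, 19]
```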
The more complicated an algorithm becomes, the more variables it has to consider and the more constants and parameters are added by human hand, the less manageable it gets: eventually, not even the programmer can foresee the result, and in the end the output can look like an opinion.
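By way of a hypothetical illustration (the weights and sample reviews below are invented, not taken from any real rating portal such as Yelp), a simple rating aggregation shows this shift: the calculation remains deterministic, yet the human-chosen parameters shape a result that reads like an evaluation.

```python
# Hypothetical sketch: a deterministic rating aggregation whose human-chosen
# weights make the output resemble a value judgement. All data are invented.

WEIGHTS = {"food": 0.5, "service": 0.3, "recency": 0.2}  # chosen by a human

def overall_rating(reviews: list[dict]) -> float:
    """Aggregate individual review scores into a single overall score."""
    total = sum(
        sum(review[key] * weight for key, weight in WEIGHTS.items())
        for review in reviews
    )
    return round(total / len(reviews), 1)

reviews = [
    {"food": 5, "service": 4, "recency": 1.0},
    {"food": 2, "service": 1, "recency": 0.3},
]
# The result (here 2.6) is computed mechanically, yet it reads like a verdict
# on the business being rated; changing the weights changes the "verdict".
print(overall_rating(reviews))
```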
The legal relevance of this classification results from the peculiarities of the German right to free speech. It is already unclear whether a statement of legal relevance exists at all if the remark was made not by a human, or only indirectly so. In addition, German law treats statements of fact and opinions differently; this distinction runs through the whole legal system and leads to serious differences in outcome. Unlike opinions, facts can only be right or wrong: what matters is the relationship between the statement and reality, and facts can be proven, e.g. by taking evidence. Opinions, on the other hand, cannot be right or wrong, nor true or false; they can be considerate or inconsiderate. Opinions are, in short, statements that contain a value judgement.
Surprisingly, only a few lawsuits have dealt with the classification of algorithms and the problems outlined above. Google's autocomplete function is one example the public is aware of, due to the rumours about Bettina Wulff, the wife of the former German Federal President. This case will serve as an example because it is particularly suited to showing the problems and the relevance of a classification.
In the case of autocomplete, even the suggestions for (word) completion already contain opinion-forming tendencies, statements and much more than the initial information people searched for. This effect becomes obvious when one searches for people: until the court's judgment, Google completed the search for Bettina Wulff with suggestions pointing to the red-light milieu, and the search for Tom Cruise is automatically completed with “Scientology”. In another case, which ended up in court and finally before the BGH (Federal Court of Justice), the name of the plaintiff was connected to “fraud” and “bankruptcy”. In the final instance, Google was ordered to refrain from displaying these completions. Notably, the BGH did not refer to the fact that the words mentioned above had been searched for within a context, but to the subjective information that was given to all people searching for information. A deeper analysis is needed in order to see whether and which information is connected to a search, what its legal scope is and who is responsible for it.
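Google's actual ranking criteria are not public; the following deliberately simplified, hypothetical sketch only shows that a completion can be derived mechanically from the frequency of earlier queries and still convey an association to the person searching.

```python
# Hypothetical, heavily simplified autocomplete: suggestions are derived from
# the frequency of prior queries (invented sample data, neutral search terms).
from collections import Counter

past_queries = [
    "berlin weather", "berlin wall", "berlin wall", "berlin airport",
    "berlin wall history", "berlin weather tomorrow",
]

def suggest(prefix: str, history: list[str], k: int = 3) -> list[str]:
    """Return the k most frequent past queries starting with the prefix."""
    matches = Counter(q for q in history if q.startswith(prefix))
    return [query for query, _ in matches.most_common(k)]

# The completion merely mirrors what other users typed, but to the person
# searching it appears as information attached to the search term itself.
print(suggest("berlin w", past_queries))  # ['berlin wall', 'berlin weather', ...]
```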
Project Information
Overview
Duration: 2015-2017
Research programme: RP2 - Regulatory Structures and the Emergence of Rules in Digital Communication