
MEDIA RESEARCH BLOG

When unconsent is no option

29.06.2022

Digital identity systems must not be mandatory until they are human rights-compliant 
by Rosanna Fanni

As states ramp up their digital policy strategies to curb the power of dominant technology platforms, they also seize greater control over digital infrastructures. While the past decade favoured global connectivity and export-oriented digital innovation by private actors, today states assume a new role as digital gatekeepers – on social media platforms, in “smart cities”, and perhaps soon also in the metaverse. For many of these services, it is paramount for governments and companies to identify citizens online. This is why digital identity (ID) schemes are gaining popularity worldwide, along with the rapid uptake of personal data and biometrics for identity verification. Yet human rights safeguards for digital ID technology and data governance have not kept pace with this uptake. Mandatory digital ID schemes, which have already been implemented in several countries, therefore warrant acute attention.

 
Identifying and authenticating one’s identity has become a de-facto requirement for many citizens to access the most basic online services and the platforms they use to communicate or work. Digital ID systems are run by governments, sometimes by private companies, or by a combination of both. The recent international trend – or rather, push – to implement digital ID projects is seemingly rooted in digitising public administration and digital development. Digital ID systems are said to provide legal identity for citizens without an ID card – a universal right enshrined in Article 6 of the UN Universal Declaration of Human Rights (right to recognition as a person before the law) and in Sustainable Development Goal 16.9 (right to legal identity). This is why states around the world are increasingly mandating citizens to register in national digital ID systems. It becomes problematic when governments at the same time attribute to themselves greater power to determine the rules for digital ID systems, just as identification online becomes a prerequisite of citizens’ daily lives.
 
According to World Bank data, of 168 countries with a national identification scheme, 94% are classed as having a ‘digitised ID system’, and approximately 65% of those schemes use biometric information such as fingerprints or iris scans. While biometric data promises greater accuracy and more seamless identification checks, the use and processing of such data carries risks and thus cannot be implemented without proper scrutiny. Refugee camps are often testing sites for biometric technologies: as biometric registration becomes increasingly mandatory for refugees to receive welfare provisions, power inequalities between refugees and humanitarian agencies are reinforced. The implementation of new biometric technology is also costly and a market opportunity for technology companies.
 
In light of this rapid and often unscrutinised adoption, it is paramount to first understand the impact of digital ID schemes on human rights. As states take an increasingly ambiguous role in controlling the implementation and governance of digital identification, civil rights organisations point to a number of potential harms across a range of use cases. Documented disproportionate impacts on vulnerable and marginalised populations make it imperative to better understand how the design and features of digital ID systems affect human rights.

How digital ID systems challenge the right to equality, privacy and good administration

The right to equality is at risk because digital ID schemes structurally exclude individuals and groups who are not recognised by the system, producing so-called false negatives and false positives. Digital ID systems can also discriminate through the categories that determine whether an individual can register at all, as well as through other technical or logistical barriers. Individuals can be excluded from enrolment and verification processes because of gender categories or because of bruised fingers from physical work, a distinct challenge for farming communities.

The right to privacy is a second challenge around the implementation of digital ID systems. Personal data can be exploited by government agencies, whether intentionally or by accident. For example, the linking of different databases for identity verification can reveal sensitive information, leaving personal data exposed to unauthorised entities. Connecting datasets beyond the initial legitimate purpose is not only illegal but also creates risks of abuse by foreign governments or criminal actors. A third, related risk stemming from the implementation of digital ID systems is surveillance: tracing and tracking individuals both nationally and internationally leads to disproportionate and unnecessary interference with privacy and other human rights, and to a broader chilling effect caused by such surveillance practices.

The right to good administration is put at risk when governments self-assert the right to centralise datasets and run automated analyses; in turn, the potential for misuse or unwarranted inferences about people’s lives increases.

Mandatory digital ID systems declared invalid in Tunisia and Jamaica

In Tunisia, a biometric ID draft law introduced in 2016 would have required Tunisians to link their mandatory national ID to biometric data for its digital version. After significant pushback from civil society and digital rights organisations, the proposal was withdrawn in 2018 by the Tunisian Ministry of the Interior due to insufficient data protection and privacy safeguards. However, a new biometric ID bill was introduced in 2019 without additional human rights safeguards, in the wake of rising protests during which the Tunisian police also rolled out remote identification technology to prosecute opponents. Independent scrutiny and human rights safeguards are necessary to prevent public surveillance through abuse of the biometric database.
In Jamaica, the Supreme Court declared the country’s digital ID scheme unconstitutional in 2019. According to the ruling, the mandatory digital ID system fundamentally challenged citizens’ freedoms and put privacy rights at risk. The obligatory nature of the system, the breadth of its application and the lack of any option to opt out were, as the Supreme Court stated, “not justified or justifiable in a free and democratic society”. As in Tunisia, a new national ID bill was introduced in 2022, including a pilot for issuing biometric ID cards. Details about human rights protections and independent scrutiny are yet to be announced.
 
To summarise, the dual function of states as both providers of digital IDs and makers of the rules governing them is, in its current form, a conflict that requires in-depth scrutiny. Digital ID systems challenge human rights – in particular the rights to equality, privacy and good administration – and no such system should be made mandatory until it is proven to be fully compliant with the international human rights framework.


Cover photo: George Prentzas / unsplash

 
