Governance Framework

Research and Writing by Yesha Tshering Paul

As governments implement new foundational digital ID systems or modernize existing ID programs, there is an urgent need for more research and discussion about appropriate uses of digital ID. These systems have raised concerns about privacy, surveillance and exclusion harms caused by state-issued digital IDs in several parts of the world. Given the sweeping range of considerations required to evaluate digital ID projects, it is necessary to formulate a framework of evaluation that can be used for this purpose.

This framework provides tests that can help evaluate the governance of digital ID across jurisdictions, as well as determine whether a particular use of digital ID is legitimate. Through three kinds of checks, namely rule of law tests, rights-based tests and risk-based tests, this scheme serves as a ready guide for the evaluation of digital ID.

Rule of Law Tests

Legislative Mandate

Is the project backed by a validly enacted law? Does the law amount to excessive delegation?


Does the law have a ‘legitimate aim’? Are all purposes flowing from the ‘legitimate aim’ identified in the valid law?


Does the law clearly specify the actors and the purposes that would flow from the legitimate aim?


Does the law provide for adequate redressal mechanisms against actors who use the Digital ID and govern its use?


Are there adequate systems for accountability of governing bodies, users of Digital ID and other actors?


Is there a legislative and judicial oversight mechanism to deal with cases of mission creep in the use of Digital ID?

Rule of Law Tests

The use of digital ID by state and private actors requires a rule of law framework to prevent its misuse. Digital ID systems must aim to meet basic rule of law parameters, and any potential infringement of an individual’s rights must be sanctioned by a statutory law passed by the appropriate legislative body, not merely an executive instruction. This law must be accessible to all persons who may be impacted, and precise enough to limit discretion and prevent executive abuse.

The law must have a legitimate aim, to which all the purposes of the digital ID must correspond. All actors and purposes that arise from this legitimate aim must be clearly identified, as must the manner in which the law applies to state and private actors. Potential mission creep should be mitigated by clearly expressed purpose limitations backed by law, to ensure that the executive cannot use the digital ID for unspecified purposes without a proper legislative or judicial examination of additional uses, or fresh consent from users. The law must also provide ex-ante and ex-post accountability measures.

Rights Based Tests


Are the privacy violations arising from the use of Digital ID necessary and proportionate to achieve the legitimate aim?


Are there clear limitations on what data may be collected, how it may be processed, and how long it may be retained during the use of Digital ID?


Are there protections in place to limit access to the digital trail of personally identifiable information created through the use of Digital ID by both state and private actors?


Are there adequate mechanisms to ensure that the adoption of Digital ID does not lead to exclusion or restriction of access to entitlements or services?


In case enrolment and use of Digital ID are made mandatory, are there any valid legal grounds for doing so?

Rights Based Tests

Any digital ID will inherently infringe on certain fundamental rights. At every stage of implementation, the identity framework must be examined against the rights it may violate, and whether these violations are necessary and proportionate to any potential benefits. Such an examination is critical because the failure or absence of identification can lead to exclusion from basic entitlements.

Principles of data minimisation must clearly dictate the amount and nature of data that may be collected and stored. Access control mechanisms that regulate access to data by different actors must be laid out in the surrounding legal framework and enforced through strict civil and criminal penalties for any violations.

Exclusion arises not only from poor implementation, but also from design flaws in the system. If the intended use of the ID can lead to denial of services, mechanisms must be employed to ensure that individuals are not deprived. Most importantly, digital ID must not be mandatory for access to benefits, and multiple alternative identification mechanisms should be provided. Users must also have an opt-out option that does not restrict access to the service and that requires erasure of the information collected.

Risk Based Tests


Are decisions regarding the legitimacy of uses, benefits of using Digital ID, and their impact on individual rights informed by risk assessment?


Do the laws and regulations envisage a differentiated approach to governing uses of Digital ID, based on the risks it entails?


Does the law on Digital ID envisage governance that is proportional to the likelihood and severity of the possible risks of its use?


In cases of demonstrable high risk from uses of Digital ID, are there mechanisms in place to prohibit or restrict such use?

Risk Based Tests

A digital ID system must account for any potential harms. This approach to privacy requires that the system be examined against tangible risks to individuals, allowing the administrator to prioritise risks in order of severity and respond accordingly. These risks can be classified into privacy harms, exclusion harms and discrimination harms.

A differentiated approach to governance would involve categorising various uses of digital ID as per se harmful (which can be prohibited outright), per se not harmful (which can avoid regulation), and sensitive (where regulation is based on various factors). The risk level arising from a digital ID is measured in terms of severity and likelihood, and these harms must then be proportionately addressed by law. Threats to the ID system can be analysed based on its uses, with a greater number of uses resulting in a higher level of risk.

If the risks arising from the system are demonstrably high, mechanisms to restrict use must be employed until mitigating factors are introduced. Mitigating strategies include notifications in case of breach, a tested business continuity plan, and increased capacity building. The choice of strategies depends on the design of the ID system and its reliance on private entities for different functions.