A project of the Centre for Internet and Society, India
Supported by Omidyar Network
This post presents our comments to the ID4D Practitioners’ Guide: Draft For Consultation (“Practitioners’ Guide”) released by ID4D in June 2019. CIS has conducted research on issues related to digital identity since 2012. This submission is divided into three main parts. The first part (General Comments) contains high-level comments on the Practitioners’ Guide, while the second part (Specific Comments) addresses individual sections in the Guide. The third and final part (Additional Comments) does not relate to particulars in the Practitioners’ Guide but to other documents that it relies upon. We submitted these comments to ID4D on August 5, 2019.
In its explanation of the basic roles of an ID system, the Practitioners’ Guide defines authorisation as “checking specific attributes necessary to determine whether a person is authorised or eligible for something”, with its primary scope being limited to the verification of an attribute for a specific purpose. The Draft points out that this is usually the purpose of a functional ID system and therefore outside the purview of a foundational ID system. This definition views authorisation as an eligibility test. We submit that it fails to take into account that the eligibility of the ID holder is already determined during the authentication stage.
Our understanding of authorisation aligns with the definition of the term by Dave Birch et al as “the process of determining what actions may be performed or services accessed on the basis of the asserted and authenticated identity”.1 This understanding of authorisation rightly centers the process around the individual by empowering them (as a principal) to authorise an entity (as an agent) to act on their behalf or access their data (i.e. controlling access to their data and resources), rather than reversing the process and authorising the individual to act or access a service.
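The distinction between the two readings can be illustrated with a minimal sketch. All names and data below are hypothetical, not drawn from the Guide or any real ID system: authentication establishes that the person is who they claim to be, while authorisation, in Birch et al’s sense, is the ID holder (as principal) granting an agent scoped access to their data.

```python
# Illustrative sketch: authentication vs. individual-centred authorisation.
# All identifiers, PINs and attributes are hypothetical.

# A registry mapping ID numbers to credentials and attributes.
REGISTRY = {
    "ID-1001": {"pin": "4321", "attributes": {"age_over_18": True}},
}

# Grants recorded by the ID holder: (principal, agent, allowed attribute).
GRANTS = set()

def authenticate(id_number: str, pin: str) -> bool:
    """Authentication: is the person who they claim to be?"""
    record = REGISTRY.get(id_number)
    return record is not None and record["pin"] == pin

def grant_access(principal: str, agent: str, attribute: str) -> None:
    """Authorisation as control: the holder (principal) permits an agent
    to see one specific attribute, and nothing more."""
    GRANTS.add((principal, agent, attribute))

def agent_read(agent: str, principal: str, attribute: str):
    """The agent can read only what the principal has granted."""
    if (principal, agent, attribute) not in GRANTS:
        raise PermissionError("no grant from the ID holder")
    return REGISTRY[principal]["attributes"][attribute]

# The holder authenticates, then authorises a service to check one attribute.
assert authenticate("ID-1001", "4321")
grant_access("ID-1001", "welfare-portal", "age_over_18")
print(agent_read("welfare-portal", "ID-1001", "age_over_18"))  # True
```

In this framing the grant originates with the individual; inverting it, as the Guide’s definition does, would make `grant_access` a decision taken about the individual rather than by them.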
An enabling legal framework should clearly state the scope and purpose of the proposed digital ID system. A central question to articulate is the policy goals and core requirements sought to be addressed by the ID system, as this will ideally dictate the design of the ID system. After identifying the purposes of the ID and what it will be used for, it must be established that these purposes, in addition to having their basis in a validly enacted law, must also correspond to a legitimate aim in that law. The law should identify all the entities who may use the digital ID, and the databases to which this ID system will be linked along with their role and responsibilities.2
An additional safeguard calls for a harms-based analysis of the ID framework to regulate the risks and challenges raised by various existing and potential use cases of digital ID. This analysis would include investigating whether critical decisions regarding digital ID are based on any form of risk assessment and the likelihood of harm, whether there are sufficient mechanisms in place to control the use of digital ID in cases of high risk, whether the framework allows for a differentiated approach to regulating the uses of digital ID depending on the likelihood and severity of harm. To this end, it is critical to carry out a Human Rights Impact Assessment that employs a human rights-based approach and is consistent with the United Nations Guiding Principles on Business and Human Rights.
The centralised storage and retention of personal information is an inherently insecure means of storing personal data as it creates a single point of failure, posing serious risks to individual privacy rights. These centralised systems, particularly if coupled with mandatory use and biometric linking, also run the risk of turning digital ID into a pervasive tool that can be misused to identify, track and control individuals. These centralised “honeypots” are subject to attacks and hacks by malicious actors, as well as misuse by the authorities that control them for surveillance and profiling. There is a need to explore decentralised and federated identity systems which minimise the risks from security breaches, as well as have adequately designed access control mechanisms to ensure that access only on a ‘need to know’ basis, informed by the legal principles of necessity and proportionality.
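A ‘need to know’ access-control mechanism can be sketched as follows. This is a simplified illustration with hypothetical data, not a design prescription: the relying party receives only a yes/no answer about a single attribute, never the underlying record, so a breach on the relying party’s side exposes nothing.

```python
# Illustrative sketch of data minimisation under a 'need to know' rule:
# the relying party learns a single boolean, not the record itself.
# All names, dates and thresholds are hypothetical.

from datetime import date

RECORD = {
    "name": "A. Citizen",
    "date_of_birth": date(1990, 5, 1),
    "address": "...",  # never disclosed below
}

def assert_age_over(threshold: int, today: date) -> bool:
    """Answer only the eligibility question; disclose no raw data."""
    dob = RECORD["date_of_birth"]
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= threshold

# The relying party learns one boolean, not the name, DOB or address.
print(assert_age_over(18, date(2019, 8, 5)))  # True
```

Federated and decentralised designs generalise this idea: attribute assertions travel, the records stay put, and each disclosure can be tested against necessity and proportionality.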
On Page 5 of the Practitioners’ Guide, the risks of digital identity systems are listed. We would like to highlight the following additional risks which are not covered in the guide:
On Page 7 of the Practitioners’ Guide, the challenges specific to low and middle income countries have been highlighted. We would recommend the addition of these challenges:
On Page 11, the Practitioners’ Guide defines ‘authorisation’, one of the key components of an identity system, as “checking specific attributes necessary to determine whether a person is authorised or eligible for something”, with its primary scope being limited to the verification of an attribute for a specific purpose. The Draft points out that this is usually the purpose of a functional ID system and therefore outside the purview of a foundational ID system. This definition views authorisation as an eligibility test. We submit that it fails to take into account that the eligibility of the ID holder is already determined during the authentication stage.
As mentioned above, our understanding of authorisation aligns with the definition of the term by Dave Birch et al as “the process of determining what actions may be performed or services accessed on the basis of the asserted and authenticated identity”. This understanding of authorisation rightly centers the process around the individual by empowering them (as a principal) to authorise an entity (as an agent) to act on their behalf or access their data (i.e. controlling access to their data and resources), rather than reversing the process and authorising the individual to act or access a service.
In Section II of the Practitioners’ Guide, the principles for governance are discussed. Principle 8 deals with safeguarding data privacy, security, and user rights through a comprehensive legal and regulatory framework, and discusses a comprehensive legal framework to govern ID systems. We would strongly recommend that, in addition to setting out the provisions of the law, the Guide also specify that the law itself must be valid, that it must clearly specify the purposes for which the ID shall be used, and that these purposes must flow from a “legitimate aim”.3 (Identifying the different actors in the ecosystem, as well as redressal mechanisms, have already been covered in the principles.)
In the section on the Planning Roadmap, on Page 32, the Practitioners’ Guide looks at the question of taking stock of identity ecosystems and stakeholders. We would recommend that when building on an existing legacy system, one must account for the specific market conditions that the existing ID system may have been created to address (which may no longer apply), and a proper privacy impact assessment may be required to address new privacy needs. Addressing these issues is potentially difficult due to the costs of implementing changes and their potential impact, particularly if the ID system is already integrated into many different uses. Further, on Page 35, there is a discussion of assessing the trustworthiness of the system. The extent of availability and functionality of a digital ID system will also add to its trustworthiness. If the system is not widely available or does not serve many functions in the economy or state, people are unlikely to accept it as legitimate.
Under the section on Vision, the development goals of a digital identity system are discussed. On Page 38, Table 8 discusses the Design implications of potential cross-cutting goals for ID system. We would make the following recommendations:
On Page 41, there are factors enumerated on the basis of which a country’s digital infrastructure can be assessed. We would make the following additional recommendations:
Page 42 of the Practitioners’ Guide discusses the design implications for digital infrastructure. In the case of low electrification or frequent blackouts in a country, the issues faced by physical data centres can be addressed by adopting a cloud-based approach or using an alternative certified data centre in another country. However, this could raise regulatory and jurisdictional issues, as well as cost and latency issues.
We would make the following additional recommendations with regard to how geography and population size could impact design decisions:
While it is important to account for socioeconomic factors that may make it harder for certain groups of people to obtain ID, the growing trend to make use of such ID mandatory to access benefits and services will further exclude these people.
On Page 51, the Practitioners’ Guide discusses the risks to privacy and data protection. Additional ways in which internal or external actors can intentionally or unintentionally compromise privacy in any ID system include:
Table 16 enumerates the threats to privacy and data protection throughout the identity lifecycle on Page 54. We would recommend the following additions:
Some additional economic and social barriers faced by women and girls include ID systems that require facial capture, which may exclude women who do not wish to expose their face for religious reasons. Women being forced to remove their headscarves for Aadhaar registration, even in the absence of a rule mandating this, is one such example.8 Technological barriers faced by linguistic minorities could include inaccurate translations of names or other important details by translation software, which can result in unintended exclusion from benefits/services because of non-matching of records.9 Procedural barriers in areas with high rates of illiteracy could arise from less reliable personal data, because those enrolling themselves may have trouble corroborating whether their personal information is correct.10
Under Safeguards listed on Page 66, we would recommend that accountability is also an important safeguard. The data controller should be accountable to the data subject for complying with the legal framework.
On Page 73, there is a discussion on cross-border data transfers. The issue of cross-border transfers of data arises for digital ID schemes which may rely on private sector entities that store and process this data in other jurisdictions. Cross-border transfers of data, particularly sensitive data from ID systems, should not be carried out in the absence of an adequate and equivalent level of protection. User consent and control must also include the right to obtain their personal data from the data controller in a reasonable manner and form, and at a reasonable charge.
In the section on Non-exclusion and non-discrimination on Page 76, we would make the following recommendations. Amongst the policies that have the potential to impact the inclusivity of the ID system, documentation required to prove citizenship/residency should also be included as an important factor. Undocumented adults are also potential targets of exclusion and discrimination, and should be explicitly mentioned alongside undocumented children.
While mandating the use of a single form of digital ID offers the benefit of boosting adoption rates, governments must refrain from making this an essential requirement for accessing public services and benefits. This should apply to all state action that attempts to make such ID mandatory - whether explicitly legislated or effected through coercion, by making access to services dependent on digital ID and by pressuring private companies to make use of this ID mandatory. National ID systems should work towards enhancing user agency and choice, and mandatory enrolment and use vitiates this principle. Making the use of digital ID compulsory threatens privacy and anonymity - a growing trend that is particularly worrying in light of governments increasingly attempting to undermine individuals’ anonymity online. It could also further exclude those who cannot access this form of ID for various social, political and structural reasons, and who are often the ostensible targets for the benefits that digital ID claims to offer.
On Page 79, the Guide discusses the issue of public engagement. We recommend that at all stages of the public consultation, the government must ensure the participation of civil society groups, and keep publicly available records of all meetings with both civil society and industry groups and lobbies. Transparency in dealings with industry groups must be maintained, and due weightage given to civil society voices, given their relatively lesser influence in comparison to industry groups.
The Practitioners’ Guide points out that dismissing citizen concerns instead of directly responding to them is only likely to increase mistrust in the ID system. Authorities in countries such as India have gone a step further and actively threatened prosecution of individuals who have pointed out security flaws and vulnerabilities in the Aadhaar ecosystem.11 This approach will inevitably backfire on trust-building attempts, while also disincentivising any party from highlighting critical flaws that may need to be immediately addressed.
Page 82 discusses the question of grievance redress. We would add that unauthorised access to or misuse of personal data, as well as data breaches, are also potential grievances. For this purpose, data controllers should maintain detailed, publicly available logs of when, and for what purpose, data has been accessed.
An independent ID Authority should also have the power to conduct investigations into any violations, impose penalties, and order compensation. It should also maintain records of issues raised by data subjects and of the action taken to address these issues. There is also the need for an appellate adjudicatory authority that is likewise independent and has sufficient expertise. It is also necessary that the regulatory body charged with grievance redressal, complaints, investigations and judicial powers is separate from the ID Authority, in order to avoid conflicts of interest.
On Page 108, there is a discussion on business models in the Practitioners’ Guide. It is important to note that different business models also involve different privacy risks. This is an important consideration in the case of public-private partnerships, where there may be a possibility of a commercial interest in compromising privacy. For instance, a business model where the government pays to provide digital identity does not incentivise inappropriate sharing of user data, but can lead to questions around whether there is a “customer” or who this customer is, possibly leading to poor operation practices that could pose a passive threat to privacy. Models where the user pays to obtain a digital identity do not face this issue as there is a clear customer in this scenario. However, few would be willing to pay for an identity (especially a first copy/ initial issue) as they may consider this the responsibility of the state. A business model in which a service provider pays an identity provider, could encourage the identity provider to process and analyse user data in order to widen the base of service providers that pay to access it.
The Draft suggests that biometrics are needed for countries without a strong civil registry. While biometrics are often touted as an efficient means of identification and authentication, they are expensive, can be inaccurate, and pose significant risks to individual privacy and freedom. ID systems that require biometrics are based on the flawed assumption that they are foolproof means of preventing identity fraud. Unlike other forms of data, however, biometric data is intrinsic and unique, and once compromised cannot be reissued. The privacy harms that arise out of such a data breach are irreparable.
Centralised storage of biometric data poses additional risks. Any database of aggregated personal information is susceptible to exploitation, and the potential harms are magnified when they are irreversible. Apart from theft of data, data sharing and access in the absence of a strong data protection framework raises concerns of third-party misuse of data, and surveillance and profiling by the state and private entities. This is amplified when multiple data points about an individual are connected to a single identifier. Certain countries may allow access to biometric databases without a warrant or notice, and the use of biometric data poses the threat of mission creep when this data is used for purposes secondary to the original intended purpose.
Making the delivery of services dependent on forcing individuals to provide their biometric data is violative of the fundamental right to privacy, and does not align with the necessity and proportionality standards that determine the exceptions to that right. In the Supreme Court hearings on the validity of Aadhaar in India,12 additional arguments centered around the collection - particularly mandatory collection - of biometrics being violative of one’s dignity and bodily integrity. Written submissions discuss how the inviolability of the human body remains vital even in instances that “appear innocuous, or at least, do not seem to present a tangible or expressible harm. The core issue then, is not whether an identifiable physical harm to the body can be pointed out, but whether the individual’s decision about how to use her body is taken over by another entity (in this case the State), who decides for her instead.”13
The dissenting opinion of Chandrachud J. in the Aadhaar judgment was echoed by the Jamaican Supreme Court when it struck down the Jamaican National Identification and Registration Act, its national biometric identification system, as unconstitutional.14 The Chief Justice recognised the issues of privacy and freedom raised by biometric systems, pointing out the dangers of large-scale collection of biometrics outside of a criminal law context as “likely to result in violations of fundamental rights unless there are very strict and rigorous safeguards because once there is a breach of the database the information taken is unlikely to be recovered in full.” The Chief Justice and Chandrachud J. both recognised the dangers posed by combining biometric and biographic information and automating the process, merging silos of information to create entirely new sets of information and facilitate profiling.
A privacy-enhancing digital ID system should have clear legal procedures and evidentiary standards that uphold human rights and due process. Providing biometrics should be voluntary and opt-in, and the system should limit the use of these biometrics to the purpose of deduplication alone. In such a system, any other use of this data would be considered an illegitimate use case. This data should be stored in a decentralised database to avoid a single point of failure. Minimising the collection and transfer of biometric information will reduce risk in case of compromise.
In the absence of reliable supporting documents, the Draft suggests using a reliable person (also known as an introducer) to vouch for an individual’s identity. This system is useful in contexts where civil registry systems are weak and a person may not have any identification documents, and is used in India for Aadhaar registration. The introducer system is a welcome move to make ID systems more inclusive. However, it also raises certain concerns. In its report on the National Identification Authority of India Bill, the Standing Committee on Finance pointed out the possibility of illegal residents obtaining Aadhaar numbers by exploiting the introducer system. The introducer system can also prove to be cumbersome, particularly since those who require introducers generally lack educational, financial or technological resources. In order to ensure inclusivity, introducers must be easy to access for all. Inclusivity can be furthered by allowing for different levels of identity assurance - certain services can allow for self-assertion of identity, others would require an introducer, and so forth. A similar system was followed by the Indian Constituent Assembly Secretariat to create an electoral roll soon after India’s independence. Given the absence of any system to document the influx of refugees, they were allowed to self-assert their identities by filing a declaration to reside permanently in that constituency.
The following comments do not directly refer to the ‘ID4D Practitioner's Guide’ but are important to note since the findings and estimations in the Guide draw heavily from case studies and reports from specific countries.
Unit cost for each country is calculated based on a suitable time horizon for reaching full-scale enrolment. However, the time horizon can vary once the enrolment process begins, based on country-specific factors. The average unit cost per person estimated by each country is based on major assumptions taken from other developing nations, and may vary greatly. These average costs vary with the choice of biometric, the cost of labor, and the existence or need for construction of technological infrastructure. For example, countries with large populations and poor connectivity will most likely have longer time horizons for full enrolment as well as greater delays in accruing benefits or revenue.
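The sensitivity of the unit cost to the assumed time horizon can be shown with a back-of-the-envelope sketch. All figures below are hypothetical, not taken from any country diagnostic:

```python
# Back-of-the-envelope sketch of how the assumed time horizon drives the
# per-person unit cost. All figures are hypothetical.

def unit_cost(fixed_setup: float, annual_operating: float,
              annual_enrolments: float, horizon_years: int) -> float:
    """Average cost per enrolled person over the chosen horizon."""
    total_cost = fixed_setup + annual_operating * horizon_years
    total_enrolled = annual_enrolments * horizon_years
    return total_cost / total_enrolled

# Same hypothetical programme under two assumed horizons: a slower roll-out
# reaches the same population but accrues more operating cost on the way.
fast = unit_cost(fixed_setup=50e6, annual_operating=10e6,
                 annual_enrolments=20e6, horizon_years=3)
slow = unit_cost(fixed_setup=50e6, annual_operating=10e6,
                 annual_enrolments=10e6, horizon_years=6)
print(round(fast, 2), round(slow, 2))  # 1.33 1.83
```

Even with identical fixed costs and the same final enrolled population, stretching the horizon from three to six years raises the unit cost by over a third in this toy example, which is why estimates borrowed from other countries’ horizons can mislead.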
The calculation of unit costs is also based on an estimation of average enrolments by service providers; however, these providers will not necessarily continue enrolling people into the system over time. In both India’s Aadhaar and the UK’s Verify, many service providers shut down and were no longer able to enroll people.
While calculating the unit cost of setting up a digital identity system, countries sometimes take into consideration revenues that will offset the unit cost. Reliance on revenue during the set-up stage can be economically unfeasible, because there will be no pay-off until the enrolment process has been completed and all systems and subsidiary processes that rely on the digital identity system have been put into place.
Costs may include factors such as enrolment incentives given to individuals or service providers. These costs may be based on the amount of incentive given in other countries and can therefore be misleading. Additionally, an incentive of a lower value (one within the permissible budgetary limits of the digital ID project) may not serve as an encouragement to increase the number of enrolments.15
There may be additional costs associated with updating institutional frameworks so that there can be collaboration between different government agencies. For example, in Burkina Faso the existing framework does not allow for collaboration between the agency in charge of civil registration and the one in charge of identification.16
Additional campaigns need to be organized across media platforms to keep citizens updated on the progress of the project, so that they may keep up with and complete the various procedural stages. “Operation statistics should be published, so that targeted campaigns may be carried out to correct any distortions or failure to make declarations”.17 The cost of the campaigns and surveys that need to be undertaken after the initial enrolment process should be taken into account.
Countries that rely on health centers to report birth notifications for registration may not account for citizens who choose alternatives to health centers that are not affiliated with a health institution.
Some registration processes may not require the child to be present for the registration. This may be problematic because authenticating the caregiver or guardian gives no assurance that the child is alive or if the child will receive benefits. People may also register “ghost” or non-existent identities to take advantage of the benefits offered by digital identity systems; a problem that was encountered in India’s Aadhaar project.18
There can be great discrepancies between the number of children registered and those that exist within the population. Such a problem was found in Burkina Faso, where “it was estimated for the year 2015 that 91 percent of the children between 12 and 23 months had received a diphtheria, pertussis, and tetanus (DPT) vaccination, which is well above the 77 percent birth registration rate”.19
Exclusionary factors in digital ID systems, such as fees for enrolment or travel costs to reach enrolment centres, can negatively impact economies as a whole if the digital ID systems are strongly enforced. For example, in Pakistan the CNIC (Computerised National Identity Card) became mandatory for conducting transactions such as purchasing vehicles and land, obtaining a driver’s licence, purchasing a plane or train ticket, obtaining a mobile phone SIM card, obtaining electricity, gas, and water connections, securing admission to college and other post-graduate institutes, and conducting major financial transactions, thus excluding those without identification from contributing to the economy through their purchases.
Estonia reported that e-ID saves it 2% of GDP a year; it is important to weigh this against the costs of potential privacy risks and data leaks.20 Under the UK’s Verify project, reported costs were lower than actual spending, because department costs for reconfiguring their systems had not been included. There has also been no cost reduction as a function of increased registration volume, contrary to the government’s assumption: “GDS expected that prices paid per sign-up would fall over time as user numbers increased. In practice, user volumes did not increase as expected and average prices paid to providers remained above £20 for new verifications. High prices meant that GDS has continued to subsidise departments for using Verify. Moreover, most departments have not paid the Cabinet Office and GDS even for subsidised”.
The Verify project disclosed that entering into a PPP or employing private service providers causes government departments to pay the entire usage cost under a market-based system. However, it may be possible to mitigate these costs by having multiple private providers manage the system, such that they offer competitive pricing. The low success rates of online verification increase costs in the form of increased operational costs to fix the problem, as well as simultaneous manual verification costs. “The 75% reduction in benefits between 2016 and 2019 estimates is largely due to the lower than expected take-up of Verify.”21
1. Birch, Dave et al. “Digital Identity: Issue Analysis.” June 8, 2016. Accessed August 2, 2019. Available at https://www.chyp.com/wp-content/uploads/2016/07/PRJ.1578-Digital-Identity-Issue-Analysis-Report-v1_6-1.pdf
2. Centre for Internet and Society. “Towards a Framework for Evaluation of Digital ID.” Digital Identities: Design and Uses. Accessed August 5, 2019. Available at https://digitalid.design/evaluation-framework-01.html
4. Supra Note 1.
6. Sharma, Devansh. “49,000 Aadhaar Centres Blacklisted for Fleecing: Here's How to Avoid Getting Conned.” The Economic Times. February 6, 2018. Accessed August 5, 2019. Available at https://economictimes.indiatimes.com/wealth/personal-finance-news/49000-aadhaar-centres-blacklisted-for-fleecing-heres-how-to-avoid-getting-conned/articleshow/60707836.cms?from=mdr
7. Ramakrishnan, Deepa H., and Sofia Juliet R. “Fading Fingerprints Disconnect Aadhaar Link for the Elderly.” The Hindu. December 4, 2017. Accessed August 5, 2019. Available at https://www.thehindu.com/news/cities/chennai/fading-fingerprints-disconnect-aadhaar-link-for-the-elderly/article21255867.ece
8. TNN. “Women Decry Decree to Remove Headscarves for Aadhaar Photo.” The Times of India. August 3, 2015. Accessed August 5, 2019. Available at https://timesofindia.indiatimes.com/city/hyderabad/Women-decry-decree-to-remove-headscarves-for-Aadhaar-photo/articleshow/48322706.cms
9. NH Web Desk. “J Indu Remains J Hindu on New Aadhaar Card, Denied Scholarship.” National Herald. January 17, 2018. Accessed August 5, 2019. Available at https://www.nationalheraldindia.com/national/agonies-of-aadhaar-students-denied-scholarships-over-misspelled-names
10. Ghatge, Shraddha. “No Mechanism for Welfare: Aadhaar Turns into a Burden Instead of Benefit for 3,000 Tribal Women in Maharashtra.” Firstpost. September 3, 2018. Accessed August 5, 2019. Available at https://www.firstpost.com/india/no-mechanism-for-welfare-aadhaar-turns-into-a-burden-instead-of-benefit-for-3000-tribal-women-in-maharashtra-5080071.html
11. Bhatia, Rahul. “Critics of Aadhaar Say They Are Under Surveillance, Allege Government Harassment.” The Wire. February 14, 2018. Accessed August 5, 2019. Available at https://thewire.in/economy/critics-aadhaar-say-surveillance-allege-government-harassment
12. Justice Puttaswamy (Retd.) and Anr. v Union of India and Ors, 2018 SCC OnLine SC 1642.
13. Nachiket Udupa and Anr. v Union of India and Aruna Roy and Anr. v Union of India, “Written Submissions on Behalf of Mr. K.V. Viswanathan” (March 13, 2018). Available at https://drive.google.com/file/d/13nU4Qzy_KfJEPwD6GLuvxpgHyX7bGHmL/view
14. Julian Robinson v Attorney General of Jamaica [2019] JMFC Full 04.
15. World Bank. “ID4D Country Diagnostic: Nigeria.” 2016. Accessed August 5, 2019. Available at http://documents.worldbank.org/curated/en/136541489666581589/pdf/113567-REPL-Nigeria-ID4D-Diagnostics-Web.pdf
16. World Bank. “ID4D Country Diagnostic: Burkina Faso.” 2017. Accessed August 5, 2019. Available at http://pubdocs.worldbank.org/en/653431522763079651/Burkina-Faso-ID4D-WebFinal040318.pdf
17. World Bank. “ID4D Country Diagnostic: Guinea.” 2016. Accessed August 5, 2019. Available at http://documents.worldbank.org/curated/en/630611472120072748/pdf/Guinea-ID4D-Diagnostics-WebV42018.pdf
18. World Bank. “ID4D Country Diagnostic: Kenya.” 2016. Accessed August 5, 2019. Available at http://documents.worldbank.org/curated/en/575001469771718036/pdf/Kenya-ID4D-Diagnostic-WebV42018.pdf
19. Supra Note 16.
20. E-estonia. “Estonia - We Have Built a Digital Society and so Can You.” Accessed August 2, 2019. Available at https://e-estonia.com/
21. National Audit Office. “Investigation into Verify.” 2019. Accessed August 2, 2019. Available at https://www.nao.org.uk/wp-content/uploads/2019/03/Investigation-into-verify.pdf/