Digital Privacy

Break privacy to make privacy? Digital ID checks aren’t the answer

The Information Commissioner’s Office (ICO) recently published an open letter to online platform providers, calling on them “to strengthen age assurance measures to ensure young children are not accessing services that are not designed for them”.

The letter comes after a £14m fine issued against Reddit for its reliance on self-declaratory age checks. According to the ICO, children under the age of 13 would have been able to provide false information when self-declaring their age — thus, Reddit would have processed their data unlawfully. 

This decision is concerning on several fronts. The ICO are actively encouraging platforms to adopt more invasive verification technologies, at a time when privacy violations and malpractice are starting to emerge within the industry. Further, the ICO’s characterisation of age assurance technology as “advanced” and “readily available” comes as hundreds of computer scientists call for a moratorium on the roll-out of this technology and warn about its technical limitations and the infeasibility of this approach.

In the meanwhile, Parliament is getting ready to approve proposals to further restrict children’s access to the Internet, informed by unrealistic expectations over the effectiveness of age-gating. As dysfunctional legislative and regulatory policies converge, they risk fuelling a never-ending demand for more layers of Internet control.

The Online Safety Act requires online service providers to introduce age checks, and to prevent children from accessing certain kinds of content (a practice known as age-gating). Unlike the age checks we are familiar with in the offline world, online these are digital identity processes that are often invasive of our privacy. If users are not allowed to self-declare their age, these digital identity checks will be performed by:

  • Scanning your face, or other biometric data, to guess your age;
  • Tracking and profiling your behaviour on a given online platform, and sharing these profiles with commercial data brokers, to guess who you are and how old you are; or
  • Asking you to submit your identity documents.

All these processes attempt to tie someone using a service to a fixed digital identity and representation of them. As online service providers started to implement these invasive digital identity processes in the UK, significant privacy harms started to emerge. Already last year, Open Rights Group sounded the alarm about the incentives the Online Safety Act creates for adopting cheap technology providers, at the expense of users’ privacy and security. We also drew attention to Grindr, Bluesky, and Reddit adopting providers whose general terms and conditions state they can repurpose the data they collect for marketing and advertising purposes. It is often very unclear to users whether the platform’s or the digital identity provider’s terms and conditions apply.

More recently, Discord forced users who had failed to pass its main provider’s process into a different customer service process in which people had to submit proof of identification. This resulted in a data breach affecting 70,000 users’ government-issued IDs. Afterwards, Discord attempted to roll out a ‘teen by default’ setting with global digital identity checks. The resulting consumer backlash, together with controversy around Persona being funded by Peter Thiel, a far-right investor and Palantir’s founder, resulted in Discord backtracking and dropping Persona as a supplier.

If this wasn’t enough, security researchers were able to hack Persona’s age-check systems and gain access to its code on a US government-authorised server. Their analysis revealed that Persona had a software stack that was not only capable of estimating users’ age, but also used face scans to carry out “269 individual verification checks”. These included screening users “against 14 categories of adverse media from terrorism to espionage, and tags reports with codenames from active intelligence programs”. Without access to Persona’s back-end and servers, it’s hard to know when and where this software was deployed.

If Persona had used this software as part of the digital identity checks people on Reddit or Discord had to go through, it would not have been legal under UK data protection law. Age assurance providers are not allowed to use digital identity data obtained in an age assurance process beyond its purpose, be it to sell advertising or to surreptitiously check whether your child is a terrorist. You would expect the ICO to be investigating these providers. Their priorities, however, seem to lie elsewhere.

The ICO issued a £14m fine to Reddit for a number of failures, including “checking the age of users accessing its platform”. Underpinning this fine is a recent open letter to online platforms, as well as the December 2025 Children’s Code Strategy progress update. In short, the ICO are leveraging monetary penalties to force online platforms to make “use of current viable technologies – examples include but are not limited to, facial age estimation, digital ID, or one-time photo matching – when enforcing minimum age requirements”. The rationale of this approach appears problematic on many fronts.

Lacking meaningful regulatory intervention against abuses, the ICO’s request will inevitably favour the adoption of age assurance providers that trample privacy standards to lower costs and increase their own profit. Indeed, providers like Persona (used by Reddit and Discord) and Facetec (used by Grindr) already reuse data they collect through age verification services for advertising purposes. Their deployment has gone unchallenged by the ICO, a posture that benefits the commercial interests of creepy age assurance providers but does little to protect our privacy.

On top of that, the ICO strategy focuses heavily on the implementation of “effective age gates”, thus adopting a reductionist interpretation of its legal mandate. Data protection law is concerned with protecting personal data, not with restricting children’s access to certain content. As a matter of fact, the UK GDPR clarifies that specific protections for children’s data apply, in particular, “to the use of personal data of children for the purposes of marketing or creating personality or user profiles”. Even within the narrow scope of age assurance data processing, the ICO cannot prioritise technological adoption by turning a blind eye to surreptitious and unlawful uses of age-check data.

If you believe that age-gating is important to protect children online, you would want the ICO to make sure the age verification industry does not weaponise age-check mandates to violate our online privacy. By failing to play their part, the ICO is instead contributing to a growing sense of unease and loss of trust in age assurance policies.

Another important misconception underpinning the push against self-declaration is the notion that what the ICO characterise as “robust” age assurance checks would be harder to circumvent, and thus more effective. However, children have been able to bypass face scans by drawing a moustache on their face, or by using their parents’ online accounts. Likewise, digital identity processes can be circumvented by using Virtual Private Networks (VPNs), by buying or borrowing account credentials, or by using deepfakes or AI-generated profiles.

Contrary to the ICO’s assessment, asking online providers to adopt more invasive age checks will not prevent motivated users from bypassing them. Circumvention is trivial not only because of platforms’ choice of age verification tools, but because of the open nature and structure of the Internet itself. Even in China, where the State spares no expense to control and surveil Internet use, children have learnt how to bypass restrictions in order to keep playing video games. As outlined in an open letter signed by 400+ information security researchers and academics, effective age verification would require a gargantuan digital identity infrastructure deployed at scale, and laws to enforce its use globally.

It is, therefore, unrealistic to expect that circumvention can be stopped by technical means. Yet these same expectations are informing ongoing political attempts to introduce more age checks, supposedly to make age-gating harder to bypass. As proposals already being discussed in Parliament show, age-gating could soon apply to VPNs. These measures are, however, bound to fail just like the existing ones, fuelling a vicious cycle in which politicians attempt to fix failing policies by introducing more layers of surveillance and content restrictions, over and over again.

Indeed, by the ICO’s own logic, VPNs would have to scan their users’ faces, an odd practice for services which are meant to protect against online tracking, surveillance and censorship. Once users start bypassing VPN age-gates, pressure will build to extend them to email services, including encrypted ones, to prevent children from using an adult’s email address to register. Or Internet Service Providers may be asked to monitor user behaviour to detect whether someone is trying to use file-sharing services to download age-restricted software or content.

UK data protection law should provide checks and balances against an arms race towards pervasive Internet surveillance, but the ICO’s posture only adds fuel to the fire.

A certain degree of empathy is due for regulators like the ICO, tasked with navigating the inherent tension between online safety legislation, which requires mass surveillance, and UK data protection law, which protects us against it. However, the ICO is actively making things worse by pushing online providers to adopt age verification services quickly while disregarding compliance failures within the industry. It does not need to be this way.

The ICO could enforce against age verification providers, as the Spanish data protection watchdog recently did by fining Yoti, the erstwhile poster child of privacy-friendly age verification tech, €950,000. By stopping age verification providers from exploiting the data they collect for advertising or other spurious purposes, the ICO could protect the public and address widespread concerns about age verification. By keeping data practices in check, the ICO would also make it easier for online platforms to adopt this technology without unduly trampling on the privacy of their users.

Such a posture would reflect the ICO’s statutory obligations under the law, but it would also displease the government and the political bandwagon that supports age verification at all costs. Given the ICO’s past record, such a reckoning appears unlikely.

Indeed, the ICO recently signed a Memorandum of Understanding with the government, which Ministers characterised as a shift “to a relationship of partnership rather than opposition”. Even before the MoU, the ICO have frequently shied away from asserting their own independence and regulatory integrity, as shown by the decision not to investigate the Ministry of Defence for the Afghan data breach, or the initiative to deregulate cookie consent requirements to support the government’s growth agenda.

We need a regulator with the independence and willpower to protect us, and to stand up against the government when they get it wrong. This is why Open Rights Group has been asking the Select Committee for Science, Innovation and Technology to open a formal inquiry into the performance of the ICO. We also wrote to the Department for Science, Innovation and Technology, asking it to regulate companies performing digital identity checks to ascertain people’s age, but months later we have not had any response.

Fix the Online Safety Act