Digital Privacy

Online safety needs structural change, not more layers of control

Concerns about online safety expose a deeper problem about who controls the digital world children grow up in. Parents and young people want safer spaces, yet we as citizens have little power to shape them.

The Online Safety Act isn’t working

Write to your MP

Google, Meta, and Amazon are estimated to generate over 60% of global digital advertising revenue. The Internet is increasingly centralised, with a few large companies wielding influence over digital infrastructure, user access, and online content. Digital advertising revenue is the oil running the machinery of harmful platforms. Big tech relies on targeted advertising and attention capture. The safety features and types of content that citizens say they want are often in direct conflict with the emotional hooks that capture attention and are monetised.

The Government’s response to platforms running amok is to focus on adding more rules to this broken system. New measures announced for consultation include expanding age checks to AI chatbots, restricting features such as ‘infinite scrolling’, and examining how software could prevent suspected CSAM images from being sent or received on a device (which sounds like client-side scanning).

Perhaps this isn’t surprising. As it progressed through Parliament, the Online Safety Act grew and grew. Many safety campaigners rightly highlighted a wide variety of problems with platforms, in the hope that a rule for each could quash them. But like the Hydra of Greek mythology, as one head is slain, two more appear. This happens because harms are an emergent property of big tech’s domination of the internet. To kill the serpent, we need a different approach.

Whilst the Government seeks to expand restrictions on particular features or services, safety campaigners call for safety by design. Safety by design is a useful concept in product engineering, where risks can be anticipated and mitigated through technical and process controls. But speech environments are not closed, predictable systems. They are shaped by economic incentives, social dynamics, and power structures. In these contexts, design interventions alone cannot significantly improve safety if the surrounding market rewards the behaviours and outcomes that generate harm.

We have created a regulatory treadmill, where new rules are constantly added while harms evolve around them. Perhaps the solution is instead to build something better.

Age assurance systems often require sharing sensitive information. This can include biometric face scans or official identity documents. On platforms such as Reddit, users are required to provide their exact age. This data is then used to target them with more personally tailored adverts. The same thing happens on pornography websites, where users are now encouraged to log in and create an account. Tying a specific user to algorithmic profiles enables customised porn feeds designed to keep them on the platform for longer.

We know that once collected, data rarely stays in one place. Discord is one example of a platform that suffered a major data leak as a result of introducing age assurance. As ORG’s work on Ad Tech shows, data tends to flow across the wider advertising and data broker ecosystem.

Providing your biometric information, such as a face scan, to an age assurance provider is like handing your house keys to a stranger who promises to keep them safe. They might mean well, but you cannot see where your keys end up next. The resulting cost to society is the proliferation of more personal data, and a permanent expansion of tracking infrastructure.

Children risk growing up inside identity-linked monitoring systems where proving who you are becomes a routine condition of accessing everyday services. Over time, this normalises large-scale identity and tracking architectures across society. A society that requires identification to access information is a society that has accepted permanent monitoring.

This also reflects a wider trend in which surveillance technologies move between military, intelligence, and commercial markets. For example, figures associated with the growth of modern surveillance technology, such as Peter Thiel, who co-founded Palantir, have also invested in consumer identity and verification companies, including firms operating in the age assurance space.

Restrictions on VPN use create another risk. VPNs are often described as tools for avoiding rules. Yet they are also tools for safety, privacy, and free expression. Young people, especially those involved in controversial political debates, use them to avoid harassment or protect location data. Organisations use them to secure remote log-ins to their networks. Restricting VPNs also gives political cover to oppressive regimes that do the same thing for political control. The point is that each new safety measure comes with a new social cost.

Complex compliance rules create a third cost. Large companies can absorb legal and engineering overheads. Smaller services often cannot. This reduces competition and locks users into dominant platforms. Regulatory and structural barriers to entry are not a new or novel concept in economics.

Fix the Online Safety Act