The Online Safety Act comes for livestreaming


Ofcom has launched a blandly titled consultation on “Additional Safety Measures”, designed to implement the next round of duties under the Online Safety Act (OSA). The proposals target livestreaming and harmful algorithmic recommendations, with the aim of preventing grooming and the sharing of child sexual abuse material (CSAM) and terrorism content.

While the goals of keeping children and vulnerable users safer online are laudable, the proposals represent a new wave of censorship for video platforms. This time, given that many people are already aware that the OSA is deeply flawed, we hope the public will use this opportunity to respond to the proposals before they are agreed and come into force. If you are a YouTuber, Twitch streamer or TikToker, read on. The time to have this debate is now.

The proposals create new powers for government control over online content in crisis situations, and risk censoring legitimate debate. Powers over terrorist content would, for example, require platforms to check for and remove expressions of support for Palestine Action.

A new wave of Age Verification will force anyone who livestreams to prove they are an adult. If they do not, no one will be able to comment on, react to, record or send gifts during their streams. Some of the measures, such as the proposals around livestreaming for young people (ICUF3), also seem unrealistic and unfair for teenagers, who will be prevented from having any audience interaction with their streamed content. It is worth noting that many famous streamers started producing content before they turned 18.

New Age Verification barriers

Several measures, particularly highly effective age assurance (HEAA) and the expanded use of proactive technology (proposals PCU C10), will require online services to gather and process more user data. For example, HEAA could involve facial scans or document checks to verify age.

AV is always likely to compromise user privacy, because it creates new reasons to track individuals and what they access. It should not be considered except in the most extreme circumstances; most livestreaming is innocuous. Making AV a requirement before a streamer can receive gifts, record their stream or have audiences react to it risks creating new and additional barriers to free expression in the UK.

People without passports or driving licences, many of whom will be from lower socio-economic groups, would be disproportionately excluded. Ofcom has suggested privacy-preserving alternatives, but lacks the powers to ensure that privacy is properly considered; it can only point to data protection law, which is general and poorly enforced. Instead, the government should specifically regulate age verification and demand the highest privacy standards.
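To give a sense of what “the highest privacy standards” could mean in practice, here is a minimal, purely illustrative sketch of a token-based approach: a user proves their age once to a trusted provider, which then issues a signed “over-18” attestation that a platform can verify without ever learning who the user is or seeing their documents. The provider, token format and field names below are our assumptions for the example, not anything specified in Ofcom’s consultation.

```python
# Illustrative sketch only: a signed, identity-free "over-18" attestation.
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The age-check provider's keypair. In practice the platform would hold only
# the published public key, never the user's documents or identity.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()


def issue_token() -> tuple[bytes, bytes]:
    """Issued after a one-off check; contains no name, ID number or photo."""
    claim = json.dumps({"over_18": True, "issued_at": int(time.time())}).encode()
    return claim, provider_key.sign(claim)


def platform_accepts(claim: bytes, signature: bytes) -> bool:
    """The platform learns only that this person passed an age check somewhere."""
    try:
        provider_public_key.verify(signature, claim)
    except InvalidSignature:
        return False
    return json.loads(claim).get("over_18", False)


claim, sig = issue_token()
print(platform_accepts(claim, sig))  # True, with no identity data exchanged
```

Whether schemes of this kind are adopted, and how unlinkable they really are, is exactly the sort of question that specific regulation of age verification should answer.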

Proactive technologies, such as scanning private communications to detect CSAM or self-harm content, raise serious concerns. Even where these tools are accurate, they risk intruding on private, lawful conversations, and false positives could result in perfectly legitimate correspondence being flagged or removed. Knowing this, people may start to self-censor. For example, phrases such as ‘unalived’ instead of ‘suicide’ or ‘grape’ instead of ‘rape’ have become common code words in recent years.
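As a rough illustration of the false positive problem, consider the toy scanner below. Deployed systems rely on hash-matching and machine-learning classifiers rather than keyword lists, but the underlying failure mode is the same: lawful messages that merely mention a topic are swept up alongside genuinely harmful ones. The word list and messages are invented for this example.

```python
# Deliberately crude sketch of why proactive scanning flags lawful speech.
FLAGGED_TERMS = {"suicide", "self-harm"}


def would_be_flagged(message: str) -> bool:
    """Flag any message containing a listed term, regardless of context."""
    words = message.lower().split()
    return any(term in words for term in FLAGGED_TERMS)


messages = [
    "My dissertation is on suicide prevention helplines",  # lawful, flagged
    "The inquest recorded a verdict of suicide",           # lawful, flagged
    "See you at the match on Saturday",                    # lawful, not flagged
]

for m in messages:
    print(would_be_flagged(m), "-", m)
```

At the scale of billions of private messages a day, even a scanner that is 99% accurate will flag enormous volumes of lawful content, which is why accuracy figures alone tell us little about proportionality.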

Freedom of expression risks

A recurring risk across the proposals is the potential for over-moderation. Platforms may err on the side of caution to avoid enforcement action, sweeping away lawful but controversial speech. While platforms have a duty to consider free expression, this duty is weak, as it is merely to have “regard” to it. It delivers nothing concrete, despite the claims of current and former ministers; rather, Ofcom must attempt to patch up the gaps as best it can. In doing so, it must take care not to exceed its own duties by demanding safeguards that the OSA does not require.

Livestreaming restrictions would ban users from commenting, reacting or gifting during broadcasts by anyone under the age of 18. This clearly reduces opportunities for adolescents’ and teenagers’ own self-expression, but is judged necessary to prevent grooming. The Act does not distinguish between younger children and adolescents, who need to start learning how they can interact with the wider world.

Given how important livestreaming has been for many young people’s careers, could there be a way to protect younger children while still giving adolescents and older teenagers access to this technology? In any case, there is a risk that young people will simply circumvent such controls if they are prevented from sharing online in this way, which could leave them genuinely less protected.

Algorithms – called “recommender systems” under the OSA – would need to exclude potentially illegal content from algorithmic feeds until it has been reviewed. This aims to slow the spread of hateful and extremist content, but it could also suppress lawful political activism.
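To make concrete what this means for a feed, here is a minimal sketch of a recommender pipeline applying a “withhold until reviewed” rule. The item fields, the classifier flag and the example content are our assumptions for illustration, not Ofcom’s specification; the point is that anything flagged by an automated system simply never reaches the ranked feed until a human decides otherwise, however timely or newsworthy it is.

```python
# Illustrative sketch of "exclude from the feed until reviewed".
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    relevance: float          # whatever score the recommender already produces
    flagged_as_illegal: bool  # output of an automated classifier, not a court


def rank_feed(candidates: list[Item]) -> tuple[list[Item], list[Item]]:
    """Return (feed, held_for_review): flagged items are withheld, not just ranked lower."""
    feed = [c for c in candidates if not c.flagged_as_illegal]
    held = [c for c in candidates if c.flagged_as_illegal]
    feed.sort(key=lambda c: c.relevance, reverse=True)
    return feed, held


candidates = [
    Item("cat-video", 0.91, False),
    Item("protest-livestream", 0.88, True),  # lawful footage, misclassified
    Item("cooking-clip", 0.40, False),
]
feed, held = rank_feed(candidates)
print([c.item_id for c in feed])  # the protest footage never appears
print([c.item_id for c in held])
```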

This is made worse by crisis response protocols, which could result in the rapid removal of lawful but inflammatory content during moments of unrest, chilling political expression and assembly. For example, during the 2019 Hong Kong protests, protesters and journalists frequently livestreamed clashes with police, offering real-time footage that contradicted official narratives. Content such as this, because it could contain violence, would be flagged for review before it could be promoted through algorithms to reach a significant audience.

Livestreaming is a contentious and difficult area: nobody wants to see acts of violence broadcast in order to justify the beliefs of extremists or others. However, creating duties to check that material does not fall into such a category before it reaches a wider public could suppress legitimate material when it is most needed. Given that the vast majority of livestreaming, even when controversial, does not fall into categories that should be restricted, this looks likely to produce disproportionately restrictive responses.

Equality implications

The proposals are justified as aiming to protect those most at risk: children, women and girls targeted by abuse, and minority groups subject to online hate. Measures such as hash-matching for intimate image abuse (IIA) directly address harms that disproportionately affect women. Similarly, limiting the algorithmic spread of extremist or hateful material is intended to benefit people from racialised and marginalised communities.
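For readers unfamiliar with the technique, hash-matching compares a fingerprint of an uploaded image against a database of fingerprints of known abuse imagery, so a known image can be blocked without anyone viewing it again. The sketch below uses an exact cryptographic hash for simplicity; deployed systems typically use perceptual hashes (such as PhotoDNA) so that resized or re-encoded copies still match, and the example values are invented.

```python
# Minimal sketch of hash-matching against a database of known abuse imagery.
import hashlib

# Fingerprints of previously reported images (illustrative values only).
known_abuse_hashes = {
    hashlib.sha256(b"previously reported image bytes").hexdigest(),
}


def should_block(upload_bytes: bytes) -> bool:
    """Block an upload if its fingerprint matches a known-abuse entry."""
    return hashlib.sha256(upload_bytes).hexdigest() in known_abuse_hashes


print(should_block(b"previously reported image bytes"))  # True
print(should_block(b"holiday photo"))                    # False
```

As with any matching system, the real-world questions are what goes into the database and who audits it, not just whether the lookup works.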

However, those most at risk are also likely to be harmed the most by these proposals. There is strong evidence that automated moderation disproportionately affects languages spoken predominantly by Muslim communities, such as Arabic, Pashto, Dari and Persian. Hash databases are likely to focus disproportionately on Islamist material, unfairly targeting Muslims. Age assurance will exclude marginalised teenagers without ID, reinforcing digital divides. And all under-18s will be prevented from having people react to live-streamed events such as a cello performance, an educational presentation, or other low-risk activities where they are not at risk.

Weak safeguards

There is no question that online harms demand decisive action. Victims of grooming, intimate image abuse, and extremist propaganda deserve stronger protections. But in a democratic society, safety must not come at the expense of rights to privacy, freedom of expression, and equality.

The use of proactive technology to detect and remove content raises questions about how we can assess the proportionality of the restrictions on our freedom of expression rights. This is a point picked up by Graham Smith in his blog article.

In addition, back in 2023, ORG sought an opinion from Dan Squires KC on the prior restraint censorship measures within the Act. That opinion concluded that there is “likely to be significant interference with freedom of expression that is unforeseeable and which is thus not prescribed by law”.

Ofcom has tried to recognise these tensions, building in safeguards such as appeals processes, discretion in banning policies, and requirements for bias monitoring in technology. However, these are retrospective and likely to be weak in practice. Appeals are rarely used, as they are a significant burden, and they may not right the wrong of a takedown in any case, because the moment has passed. Biases in technology might be reduced, but the question is what these tools are targeting, not just whether they are accurate. This raises the question of how we can judge the proportionality of these measures and the extent to which they will affect rights such as freedom of expression and respect for private and family life. The OSA is fundamentally a framework for content removal, and leaves little room for strong free expression protections.