Free expression online

Joint Briefing: Petition Debate on Repealing the Online Safety Act

Prepared by UK civil society, digital rights, and open-knowledge organisations.

Over 550,000 people have petitioned Parliament to repeal the Online Safety Act (OSA), making it one of the largest public expressions of concern about a UK digital law in recent history. A petition to reform the Act would likely have attracted even more support. While it may seem unusual for so many people to challenge a law framed around “online safety,” this briefing explains what those concerns actually are.

These concerns have hit a nerve. Parliament needs to ensure the OSA works without unfairly restricting people’s day-to-day activities. The balance needs adjusting: some clear changes could resolve some of the problems and reduce the arguments for a wholesale rollback.

We highlight how the Act affects freedom of expression and access to information, and how its requirements risk undermining the ability of small, non-profit, and public-interest websites to operate. This document focuses specifically on these free-expression impacts, rather than the broader range of issues raised by the Act.

The Online Safety Act imposes several dozen duties that service providers must interpret and apply. These duties are highly complex and written largely with major commercial social media platforms in mind. Yet they also extend to small businesses, community forums, charities, hobby sites, federated platforms, and public-interest resources such as Wikipedia.

While a small number of community-run or not-for-profit services may present higher risks, most are low-risk spaces. These low-risk sites are often run by volunteers who simply do not have the capacity, expertise, or resources to take on the liabilities and operational burdens created by the Act.

Many of those services – such as bulletin boards, collaborative mapmaking or encyclopedia-writing sites – are also not run or designed like the social media and filesharing services that officials had in mind when designing the duties in question.

Some services, like the LFGSS cycling forum, have managed to continue under new management. Others, such as a support project for fathers with young children, have had to abandon their independent sites and migrate onto large social media platforms.

Many small providers also struggle to engage meaningfully with Ofcom’s consultations, which are extensive, technical, and time-consuming – effectively excluding the very communities the Act impacts.

The Wikimedia Foundation – the charity that operates Wikipedia and a dozen other non-profit education projects – and hundreds of allied organisations and specialists have warned that the Act creates major burdens for public-interest projects.

Wikimedia also warned that secondary legislation, passed in February 2025, added to those challenges by exposing the most popular UK public-interest projects, like Wikipedia, to “Category 1” status under the Act.

Category 1 status seems set to require moderation changes incompatible with Wikipedia’s global, open, volunteer-run model, such as platform-level, globally applied identity verification. Such verification would increase costs, conflict with privacy-by-design principles, expose individuals around the world to major risks (such as political persecution), and likely mean that privacy-conscious volunteers lose some of their ability to keep Wikipedia free of harmful or low-quality content.

  • Wikimedia warned that these “Category 1” side-effects might, in theory, only be avoidable by reducing UK participation enough to disqualify Wikipedia from Category 1 status entirely.

Sites without capacity to comply are blocking the UK to avoid prosecution

The Open Rights Group’s (ORG) Blocked project tracks sites geoblocking UK users due to OSA compliance pressures.

These sites are often small, low-risk, and community-driven, with no history of safety issues. Yet there is evidence the Act is forcing them to close, restrict access for UK users, or shift onto larger commercial platforms, which may be less safe. A list of affected websites is included in Appendix 1.

Some people have argued that platforms taking down the wrong type of content is simply a failure to implement the law correctly. However, it is both the Act and Ofcom’s codes and guidance that have created the following drivers for this behaviour:

Strong financial penalties

The Act allows Ofcom to fine noncompliant services up to 10 per cent of qualifying worldwide revenue or block services in the UK for serious noncompliance (Online Safety Act 2023, sch. 13, para. 4).

Broad risk reduction duties

For user-to-user services likely to be accessed by children, the Act requires a suitable children’s risk assessment and ongoing measures to mitigate identified risks (Online Safety Act 2023, Pt 3 Ch 2 ss 11–12).

Vague definitions of harmful content

The Act defers to Ofcom guidance and Codes for definitions of content harmful to children, creating uncertainty as to the precise type of content to be removed [Online Safety Act 2023, ss 60–61 (with Ofcom guidance per s. 53)].

Pressure to demonstrate proactive compliance

Platforms are pressured to demonstrate compliance through design, operational, and mitigation measures, including automated moderation and age-gating.

Ofcom codes recommending preemptive measures

The Protection of Children Code of Practice requires highly effective age assurance where high-risk content is not prohibited for all users [Ofcom, Guidance to Proactive Technology Measures (Draft, June 2025); Online Safety Act 2023, s. 231 (definition of “proactive technology”) and Sch 4 para 13 (constraints on its use for analysing user-generated content)].

Low threshold for removal

Platforms only need to reasonably suspect that content is illegal before removing it. Because the Act does not define illegal content in a way that dictates exactly what must be removed, users cannot know in advance whether their content will be taken down. Removals are therefore driven by platform discretion rather than clear legal rules, making it impossible to assess whether each removal is proportionate from a rights perspective, including freedom of expression.

Practical effects and pressures

  • Platforms may delete or restrict lawful content preemptively to avoid risk.
  • Political, controversial, or minority community speech may be disproportionately suppressed or age-gated.
  • Certain communities may face disproportionate impact if their speech is more likely to be judged risky.
  • Users may adopt euphemistic or indirect language to avoid automated filters.
  • Appeal and redress mechanisms may be limited. Reporting and complaints procedures exist, but there is no independent body to determine whether content is lawful.
  • Content may be placed behind age gates incorrectly or overcautiously if risk is interpreted broadly or age assurance is uncertain.
  • Online content may face stricter age restrictions than traditional media such as films or TV, as platforms must satisfy legal safety duties rather than voluntary industry ratings. For example, depictions of serious violence against imaginary creatures are classified as priority content that is harmful to children (anyone under 18), yet those same children could easily watch mythical creatures being slain in series and films such as The Witcher (rated 15) or War of the Worlds (rated 12).

Evidence of these patterns on major platforms is provided in Appendix 2.

People are rightly angry that their Article 10 freedom of expression rights are being curtailed in the name of safety, especially when the content removed was lawful, was not harmful, and fell within protected categories of speech such as political expression.

In addition to the freedom of expression harms, wrongful censorship and account bans or take-downs can have real economic impacts on content creators, streamers and small online businesses that rely on user-to-user services regulated by the Act for their livelihoods.

Many small providers don’t have the ‘clout’ to get wrongfully removed content or accounts reinstated, and currently there is no third-party appeals or adjudication process that determines if content was harmful or lawful.

Under the Online Safety Act, platforms likely to be accessed by children must prevent them from seeing harmful content. The Act does not clearly define the type of content that should be age-gated, giving Ofcom discretion to shape interpretation and creating ambiguity for platforms. Ofcom’s Protection of Children Codes explicitly require platforms that rely on denying children access in order to remain safe to know, or make a reasonable attempt to know, whether a user is a child through age assurance, which can be age verification or age estimation. Because platforms face heavy compliance costs, reputational risk, and possible penalties for noncompliance, they often apply age-gating more broadly than strictly necessary, including to content that is legally safe for children but carries any perceived risk. As a result, even borderline or lawful content may be placed behind an age gate, creating stricter restrictions online than in other media and turning age-gating into a default safety measure rather than a targeted one.

Age-gating is now applied to a wide range of content, from literature on Substack and sexual health advice on Reddit to social gaming features on Xbox. This restricts the freedom of expression of both young people and adults who cannot pass age-assurance checks.

While some MPs may have read about a surge in VPN use to bypass age-gating, the latest evidence suggests most of this increase comes from adults (who are perhaps worried about the data protection risks) rather than children. On 4 December, Ofcom’s Online Safety Group Director told the Today Programme that VPN use has recently fallen after an initial spike.

Teenagers aged 16 to 18 face online restrictions that are now stricter than the BBFC content classification system for film and other media. There is also evidence that young people are being blocked from accessing political news, including stories on Ukraine or Gaza. This is particularly concerning given the Government’s intention to allow 16-year-olds to vote.

Without legal limits on when age-assurance technology can be used, or regulation of the technology itself, platforms and third-party vendors have economic incentives to collect more data than necessary. Implementing age assurance at the platform level also creates a significant barrier for non-commercial websites and small services due to the associated costs. Platforms may choose cheaper and less secure vendors in countries with weaker data protection standards. Poorly implemented solutions have already caused harm, as demonstrated by a Discord data breach that exposed IDs for up to 70,000 users. The ICO and current data protection regime have proven ineffective at mitigating these risks.

Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.

The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can co-exist.

Fix the Online Safety Act