What’s the Harm in the Online Safety Bill?

Throughout the development of the government’s Online Harms policy, a central concern of ORG and other human rights organisations has been how any legally mandated content moderation policy could be achieved in practice. The algorithmic moderation deployed by most social media companies is notoriously literal, and the human review of content is often performed by people who are unaware of the context in which messages are sent.

These flaws result in false positives (acceptable content being removed) and false negatives (unacceptable content remaining visible).

The draft Online Safety Bill considers two distinct types of content: illegal content, and content that is legal but which has the potential to cause harm. The social media companies will have to abide by OFCOM’s code of practice in relation to both.

The definitions of these two types of content are therefore crucial to the coherence of the new regulatory system. Ambiguous definitions will make it harder for social media platforms to moderate their content. If the new system causes more acceptable content to be taken down, while allowing illegal and/or harmful content to remain on the platforms, then the law will be a failure.

Illegal content

For illegal content, the Bill does not set out a standard of harm that must be caused before the social media companies have to deal with it. An assumption of harm is “baked in” to the designation: Parliament would not have criminalised such content if it had not first agreed that such content is harmful.

That is not to say that all illegal content is held to a consistent standard of harm. While in some cases (child sexual abuse images, or death threats) the harm caused is obvious, in other cases it is less apparent. Hate speech laws are based on assumed harm to society, without requiring evidence of harm to any particular individual. And the existing communication offences incorporate subjective terms such as “grossly offensive” and “anxiety” without requiring proof of harm.

Nevertheless, it is understandable why the draft Online Safety Bill does not second-guess the “harm” standard in the case of content that is already illegal. The definitions exist elsewhere in the statute books and common law.

Of course, this presents a problem for moderation. To ensure that no illegal content appears on a platform, an algorithm (or a low-paid human reviewer) must know all the various laws that govern illegal content, and make a split-second decision on whether the content is grossly offensive, abusive, threatening, harassing, libellous, fraudulent or an invasion of privacy.

Content moderation systems already attempt to do this, with varying degrees of success. But with a new threat of criminal sanctions imposed on platforms that allow illegal content to appear, those platforms will become more cautious… and so will their moderation systems. There will be more ‘false positives’ and more perfectly legal content will be removed. This will be a significant and ongoing violation of everyone’s right to freedom of expression.

Legal but harmful content

The Bill also demands that legal content which is harmful to children or adults must be properly moderated. By definition, such content is not criminalised, and so there is no existing definition on which to draw. The Online Safety Bill must therefore define “harmful content” for itself.

It does this at clauses 45 and 46, which consider content harmful to children and to adults respectively. In both cases, the Bill presents two definitions.

  • First, it says that content is harmful if it has “a significant adverse physical or psychological impact” on the adult or the child.
  • Second, it allows the Secretary of State to simply designate content as harmful, through secondary legislation.

Both of these definitions are problematic.

“Adverse physical or psychological impact”

This is a phrase that itself requires a definition. What is “adverse”? What is “impact”? The Government’s approach appears to be “we know it when we see it”, and several evidence sessions of the joint committee on the draft Bill were given over to witnesses offering specific examples. These will undoubtedly be incorporated into the codes of practice and departmental guidance that the Bill requires to be written. However, it also means that people who suffer new kinds of psychological harm, or who do not have a pressure group working on their behalf, are likely to find that the social media companies are slow to respond to their concerns.

Meanwhile, keywords and phrases associated with accepted psychological harms will be heavily moderated by the platforms, regardless of the context in which they are posted. This will be an unnecessary and disproportionate infringement on freedom of expression. It is a fundamental flaw at the heart of the Bill.

On the Minister’s Say-So

The power of the Secretary of State to make regulations means that “harmful” becomes whatever the Minister says it means. The Minister is required to consult OFCOM before making regulations, but there is no provision for wider consultation.

Nor is there any requirement for an evidence base.

Furthermore, the type of content to which these powers will apply is content that should not be within the scope of the regulator at all. For content to be designated under this provision, it would have to be neither physically nor psychologically harmful. So what other kind of harm could it possibly pose?

Two possible answers are social harm and economic harm, both of which are extremely broad and contested concepts in themselves. Designating any such content as harmful would profoundly extend the scope of the regulation, trampling into areas of life and law that are better handled by other regulators, or by other, specific laws.

The extent of the Secretary of State’s power is deeply worrying. Anything designated “harmful” must be closely moderated by the social media companies. Algorithmic takedowns will be the inevitable result. The link between designation and content removal might be a staged and circuitous process, but it is nevertheless a form of censorship.

Such a potent power should only ever be the subject of primary legislation. It is not an appropriate power to be wielded through statutory instruments.

During her evidence session with the joint committee on the draft Bill, the Secretary of State for Digital, Culture, Media and Sport, Rt Hon Nadine Dorries MP, repeatedly stated that she did not want to expand the scope of the regulation too widely. One way to ensure that there is no “mission creep” or bloating of the regulator’s remit would be to remove the powers of designation in clauses 45 and 46 of the Bill. This would ensure that the focus of the legislation remains on its stated purpose: protecting the well-being of individuals.

New Communications Offences

As if the proposals were not ambiguous enough, the Government are considering adding an extra layer of confusion into the mix. According to media reports, the Government are planning to accept the Law Commission’s proposal that a new communications offence be created.

The current offences in the Malicious Communications Act 1988 and the Communications Act 2003 criminalise abuse, threats and ‘grossly offensive’ messages. In July 2021, the Law Commission recommended that they be replaced with an offence of posting a message that is intended, or likely, to cause “psychological harm amounting to at least serious distress” to the likely audience.

This is similar, but not quite identical, to the “psychological impact” benchmark that the Government proposes to set for legal-but-harmful content. The Online Safety Bill may become a legislative oxymoron, where the definition of legal-but-harmful content includes content that is made illegal in another clause of the Bill.

The conception of “harm” in the draft legislation lacks objectivity and precision. It is instead a heady cocktail of subjectivity and Ministerial fiat that will confuse social media users, algorithms, moderators, regulators, the police and judges. The DCMS must come up with something less vulnerable to whim and sensibility before the Bill is introduced to Parliament.
