ORG Regulation Report II

This report follows our research into current Internet content regulation efforts, which found a lack of accountable, balanced and independent procedures governing content removal, whether carried out formally or informally by the state.

There is a legacy of Internet regulation in the UK that does not comply with due process, fairness and fundamental rights requirements. This includes: bulk suspensions of domains by Nominet at police request without prior authorisation; and the lack of an independent legal authorisation process for Internet Watch Foundation (IWF) blocking implemented by Internet Service Providers (ISPs), for future blocking by the British Board of Film Classification (BBFC), and for Counter-Terrorism Internet Referral Unit (CTIRU) notifications to platforms requesting takedown of illegal content. These practices were detailed in our previous report.

The UK government now proposes new controls on Internet content, claiming that it wants to ensure “the same rules online as offline”. It says it wants “harmful” content removed, while respecting human rights and protecting free expression.

Yet proposals in the DCMS/Home Office White Paper on Online Harms will create incentives for Internet platforms such as Google, Twitter and Facebook to remove content without legal processes. This is not “the same rules online as offline”. It instead implies a privatisation of justice online, with the assumption that corporate policing must replace public justice for reasons of convenience. This runs counter to human rights standards that the government has itself agreed to, and against the advice of UN Special Rapporteurs.

The government has not yet proposed any means of defining the “harms” it seeks to address, nor identified any objective evidence base showing what in fact needs to be addressed. It merely states that various harms exist in society. The harms it lists are often vague and general. The types of content specified may be harmful in certain circumstances, but even assuming that some content is genuinely harmful, there is no attempt to show how any restriction on that content might work in law. Instead, it appears that platforms will be expected to remove swathes of legal-but-unwanted content, with an as-yet-unidentified regulator given a broad duty to decide whether a risk of harm exists. Legal action would follow non-compliance by a platform. The result is the state proposing censorship and sanctions for publishing material that it is legal to publish.