Online abuse: Why management liability isn’t the answer

As part of recent high-profile campaigns against social media abuse, repeated calls have been made to impose criminal sanctions and penalties on the directors and managers of companies where that abuse takes place. Many of these calls demand that such sanctions and penalties be mandated in the upcoming Online Safety Bill, the successor to the Online Harms White Paper, which will be announced in next week’s Queen’s Speech.

We agree that online abuse has no place in society, that accountability must be established, and that online harms must be addressed. But we also know that the Bill, as it is taking shape, will restrict freedom of speech for all of us, impose a presumption of guilt onto all of us, and solve online abuse for none of us. We also know that imposing criminal sanctions and penalties onto individuals, rather than onto the companies which employ them, will be the fastest way to lead us down that slippery slope.

Let’s explore some of the reasons why demands for imposing criminal liability onto senior managers and directors, in the context of the Online Safety Bill, constitute a series of threats to your right to freedom of expression. First, let’s explain how this debate came about.

The subjective “duty of care”

The Online Safety Bill, and the online harms framework which created it, centre around a concept called the “duty of care”. This concept has been borrowed from health and safety legislation. It will hold service providers, from the biggest social media giant to the smallest online business, to a “duty of care” over any possible misuse of their services. Put another way, online services will become legally liable for failing to take steps to prevent bad things happening on them.

Critically, the “duty of care” will be subjective, meaning that it will not just apply to actions (including online abuse) which are already illegal both offline and online. It will also apply to content or actions which are legal and fall within the realm of free speech. Add demands for managerial liability, sanctions, and prosecutions as punishments for violations of the “duty of care” on top of that, and you have set the stage for requiring service providers to take on the role of arbiters of society, behaviour, and truth. But who defines what is true and what is false? Who defines what is acceptable and what is not? Who defines what is right and what is wrong?

That, again, is entirely subjective, up until the point where a tech sector worker will face entirely objective personal consequences for getting it wrong.

The slippery slope is already here

Before the Bill even exists in law, we are already seeing calls for the “duty of care”, and the subjective risks within it, to be dragged down a very slippery slope. For example, a report from the Centre for Social Justice think tank, headed by former Home Secretary Sajid Javid, calls for Ofcom (the upcoming online harms regulator) to “treat high-risk design features like end-to-end encryption as breaches of the Duty of Care and sanctions should be able to apply retroactively.” The report also notes that “It will be insufficient for a platform to argue that introducing such a high-risk design feature will have other benefits in spaces like user privacy and preventing online financial crime.”

So before the Queen even announces the legislation, we are already looking at demands for the “duty of care” to be used as leverage to impose criminal penalties, retroactively, onto developers and services that are currently using end-to-end encryption as a basic security protocol. It beggars belief to think of where this logic might go next.

That think tank report, among many other things, makes a sweeping policy recommendation based on the example of one specific service provided by one specific company. It is a classic example of how the demands for managerial liability, and much of the publicity around them, have clearly been crafted to target a small handful of specific, high-profile tech billionaires, all of whom are American. In our engagement with policymakers, it has been difficult to encourage them to think of the consequences of that legislation beyond that literal handful of people. There is very little comprehension that imposing senior management liability will miss its intended targets – who can more than afford to duck – and will hit everyday site owners, administrators, and content moderators here in the UK instead.

So what will be the practical consequences of that?

Collateral censorship

The first and most immediate impact of the imposition of senior management liability will be a chilling effect on free speech. This is always a consequence of content moderation laws which are overly prescriptive and rigid, or conversely, overly vague and sweeping. 

When everything falls into a legally ambiguous middle ground, but the law says that legally ambiguous content must be dealt with, then service providers find themselves backed into a corner. What they do in response is take down vast swathes of user-generated content, the majority of which is perfectly legal and at most subjectively harmful, rather than run the risk of getting it wrong.

This phenomenon, known as “collateral censorship” – with your content being the collateral – has an immediate effect on the right to freedom of expression.

Now add the risk of management liability to the mix, and the notion that tech sector workers might face personal sanctions and criminal charges for getting it wrong, and you create an environment where collateral censorship, and the systematic takedowns of any content which might cause someone to feel subjectively offended, becomes a tool for personal as well as professional survival.

In response to this chilling effect, anyone who is creating any kind of public-facing content whatsoever – be that a social media update, a video, or a blog post – will feel the need to self-censor their personal opinions, and their legal speech, rather than face the risk of their content being taken down by a senior manager who does not want to get arrested for violating a “duty of care”.

Now it should be said, in our political climate, that not everyone will simply acquiesce to the collateral censorship of their words and opinions. Many will seize upon the opportunity to depict themselves as martyrs to free speech or as victims of “cancel culture”. These will include provocateurs who have ulterior motives regarding their own approach to free speech, and who will not hesitate to rally supporters to their cause. 

All of this will poison the dialogue on online abuse that we should be having instead.

A poor global example

Criminal sanctions and penalties for speech-related offences are never the sign of a healthy democracy. Criminal sanctions and penalties placed onto administrators and site operators, but not necessarily the authors of the content, would place the UK into the company of authoritarian nations such as Turkey, Tunisia, and the Philippines.

“I was convicted of a crime that didn’t exist for a story I didn’t write, edit, or supervise. While I and a former colleague were found guilty, our company was not.” – Maria Ressa 

The world looks up to the UK; indeed, the drafters of the online harms framework have been clear from the start that they intend it to be a “world-leading” example for other nations to follow. But we cannot pick and choose how other nations will follow the example we set. Rules on subjective content may well be followed by democratic nations with good intentions. Rules on senior management liability may well be followed by governments with little respect for human rights or citizen freedoms.

As the Bill takes shape, its drafters must be conscious of the cover it could provide for authoritarian nations, autocratic leaders, and nations with state-restricted internet to impose their own chilling effects onto freedom of expression. 

If the UK wants senior managers to take responsibility for user content, the UK must also be prepared to take responsibility for the consequences its model will inspire abroad. Those consequences, for many of its victims, will be far graver than a suit-and-tie appearance in a British courtroom.

Would you take on this job?

To tackle online abuse, social media companies, as well as service providers of all sizes, will need to call on the talents and expertise of the world’s best thinkers across law, policy, freedom of expression, and content moderation. That dialogue takes place between government and industry, publicly and privately, every day. But the push to impose criminal liability and sanctions onto senior staff, as is likely to be demanded within the Online Safety Bill, will bring that cooperation to an end.

Why is that? To put it simply: what sort of tech sector professional, in their right mind, will want to offer themselves up for the job of tackling these issues, in the very companies where they need to be tackled, with the constant threat of arrests hanging over their head? Who would apply for a role where they are legally mandated to be the company “fall guy” at risk of arrest for something that a user of their service did? 

Indeed, who will put themselves up to be the designated British “fall guy” for their American billionaire boss? Who will want to contribute to a dialogue, in good faith, which has been constructed specifically to facilitate criminal charges against them if they get it wrong? Who will accept employment in a role which places a newspaper target on their back?

Would you?

Aside from that, it cannot be said enough: the Online Safety Bill, and the online harms framework, are not just about regulating the so-called social media “tech giants”. Leaving aside that the regulation is targeted at the speech of every UK citizen, the regulations themselves will apply to any company of any size with any number of UK users, whether or not those companies are based in the UK, including many of the companies which readers of this post will run or work for.

Regulatory burdens on small businesses normally involve paperwork and compliance obligations, not the risk of personal prosecution, yet that is precisely where some advocates want to take the Bill. Who will want to start or grow a tech business of any size or ambition, if the law requires them to designate a staff member to be at constant risk of personal penalties? 

At the end of the day, managerial liability and sanctions would render the UK a no-go zone for the professional experts, and the creative minds, whose talents are needed to tackle online harms when and where they occur. It will also create a disincentive for anyone seeking to start or grow a business with any online component whatsoever, or indeed, devote one more day to supporting the UK market. And perhaps rightly so.

“World-leading”, but at what?

The consequences of imposing personal managerial liability onto sites and services over speech offences – the imposition of collateral censorship, the creation of free speech martyrs, the inspiration it would provide to authoritarian regimes, and the positioning of the UK as a “no go” zone for tech sector talent – are clear. 

What is also clear is that heated public rhetoric is piling pressure on the government to put the cart before the horse, and to design the Online Safety Bill in a way that would specifically enable the arrests of several high-profile individuals, regardless of those wider consequences, as one of the main goals of the Bill itself.

So let’s say it plays out exactly that way. What comes next? What happens after American celebrity billionaires are arrested at Heathrow, politicians have their victories, and newspapers have their campaigning “wins”? 

What would happen is that new fall guys will need to be created, new targets for blame will need to be found, and new newspaper crusades will need to be launched. The “fall guys” on the receiving end of them will not be household names from abroad; they will be middle managers, content moderators, and online news editors, here in the UK, working for the services you use every day. Yet despite that culture of witch-hunts and arrests – or perhaps, because of it – the misuse of services of all sizes as a vector for online abuse will continue unabated, by the people who are actually committing the abuse. And collateral censorship, by necessity, will continue as well.

This is why grievances and vendettas are never a healthy basis for public policy.

Whether policymakers want to hear it or not, online abuse will not be tackled by turning the issue into a “winnable” series of ad hominem campaigns, witch-hunts, arrests, and show trials. Nor would a system of accountability be created by imposing heavy regulatory burdens, and constant threats of criminal sanctions, onto everyday UK site administrators, content moderators, and business owners for the ways that members of the public misuse their tools and services.

Along the way, the arrests and prosecutions of UK tech sector employees, in order to demonstrate the full force of the law, would dissuade any professional in their right mind from taking on the hard work that needs to be done to tackle online abuse.

The Online Safety Bill aims to be “world-leading”. Politicians should think carefully about what sort of world they are about to create. There are ways to create accountability for online abuse, including competition policy, better consistency between online and offline offences, and public education, which will not position the UK as a “world-leader” in vindictive persecutions for speech-related offences. 

There are also ways to create accountability for online abuse which work within internationally recognised frameworks of human rights-based approaches to free speech, rather than working against them. These options are not as easy, and perhaps not as momentarily satisfying, as the desire to play the man rather than the ball. But the alternative is one which should not be under active consideration as a policy choice. We are better than that.

As the Online Safety Bill takes shape over the upcoming months, we will continue to advocate to protect your right to freedom of expression from the unintended consequences of vindictive crusades and grievance politics. We hope you’ll join us.

Hear the latest

Sign up to receive updates about Open Rights Group’s work to protect digital rights.