Trump takedowns need accountability

Late last week, in response to the insurrection at the Capitol, Twitter and Facebook removed Donald Trump’s accounts. This revoked his ability to broadcast his views, spread disinformation about the election, and mobilise his supporters. Likewise, Apple and Google have banned the social media network Parler from their app stores, and Amazon has withdrawn hosting from Parler as well.

In most respects, this outcome will seem entirely reasonable: here are accounts and platforms being used in an unacceptable manner to sow public discord, incite violence resulting in deaths, and undermine the democratic transfer of power in the USA. That cannot be supported. Tech companies were placed in a situation where they were morally obliged to take action. Nevertheless, the blunt exercise of corporate power without accountability should concern us.

Here in the UK, prominent voices including Damian Collins and Matt Hancock have raised questions about the implications; Hancock has asserted that platforms are in effect accepting liability for their content. What the public needs to ask is who is making these decisions, what processes they use, and to whom they are accountable.

This is important from a UK perspective. Under the forthcoming online harms legislation, the Government intends to empower a state regulator, Ofcom, to ensure that platforms and online service providers take active steps to reduce ‘harms’, enforce terms and conditions and content moderation guidelines, and remove content when it breaks the law.

Our position is that platforms’ decisions about content need to be held to independent account. This oversight should come either from the courts or from an independent, non-state regulator.

Ofcom, while technically independent, is accountable to the state: it is funded by Government and operates under laws passed by Parliament. Its high-level leadership appointments are political. It is therefore potentially open to Government pressure, and its remit can easily be adjusted if Parliament does not like the outcomes it delivers.

In a case like that of Donald Trump, this would make for very uncomfortable decisions which could easily begin to undermine the credibility of the regulator. The regulator would, in the first instance, claim that takedowns of individual pieces of content are not its job: which would be true. But if terms and conditions, or free expression considerations, prevented questionable content from being removed, political pressure could easily be applied to ensure that the preferred outcome is delivered.

This is not a small problem. Here in the UK, even outside the online harms framework, a great deal of legal content is removed as a result of terms and conditions that are either too broad or misapplied. Content takedowns are already a serious problem for legitimate political discourse, often affecting marginalised groups and communities.

While the UK proposals do include mechanisms for appeals against takedown decisions, these ultimately rely on the terms and conditions of the platform or online service. If a particular kind of content is disallowed, appeals would need to be made on the basis of those restrictions, rather than the law. The contest could thus easily become about the limits of terms and conditions, and whether these need to be more restrictive or more permissive. The door is open to political pressure on the regulator to rule that terms and conditions must be made more restrictive for reasons of ‘safety’.

Content decisions should be based on open and independent processes. Facebook has a better story to tell here, and we expect its decision to ban Donald Trump to be a test of that. The Facebook Oversight Board, which is administratively independent of Facebook, is very likely to examine this decision and decide for itself whether it should stand, and on what basis. In its work, the Board has to consider freedom of expression as well as the terms and conditions set out by Facebook. It can recommend adjustments to policy, as well as instruct Facebook to reinstate content, unless this would cause Facebook to break the law. If content should stay down, then the Oversight Board makes that decision and explains why.

All of that is much better than the decisions by Apple, Amazon or Twitter to remove Trump’s account or the Parler app with no comeback: exceptional decisions, taken in response to specific events, which set precedents that could be misused in the future. While we have not yet seen how Facebook’s Oversight Board works overall, or how influential it becomes in making content moderation accountable more widely, it has at least advanced far enough to be asked to make a decision on this.

Independent accountability matters a lot more than just this one decision. Conspiratorial thinking and disempowerment have fuelled Trump’s politics, and those sentiments are not absent from our own politics in the UK. Institutions which are independent and effective are critical to re-establishing public trust, setting precedents grounded in the rule of law, and marginalising the politics of bad faith. Indeed, the right decisions made now can help to prevent future populist movements from ever going as far as they have.

While UK politicians demand more control over social media platforms, they should be careful to design systems that achieve this without politicising decisions and further undermining trust. Independent external oversight is key to this, but the online harms proposals do not yet establish it. Yet they could: particularly if they move away from a state regulator and look to a model such as co-regulation, which is less susceptible to political interference.

Let’s hope that UK politicians learn the right lessons from this week’s events.
