June 13, 2017 | Ed Johnson-Williams

UK and France propose automated censorship of online content

Theresa May and Emmanuel Macron's plans to make Internet companies liable for 'extremist' content on their platforms are fraught with challenges. They entail automated censorship, risking the removal of unobjectionable content and harming everyone's right to free expression.


The Government announced this morning that Theresa May and the French President Emmanuel Macron will talk today about making tech companies legally liable if they “fail to remove unacceptable content”. The UK and France would work with tech companies “to develop tools to identify and remove harmful material automatically”.

No one would deny that extremists use mainstream Internet platforms to share content that incites people to hate others and, in some cases, to commit violent acts. Tech companies may well have a role in helping the authorities challenge such propaganda but attempting to close it down is not as straightforward or consequence-free as politicians would like us to believe.

First things first, how would this work? It almost certainly entails the use of algorithms and machine learning to censor content. With this sort of automated takedown process, it is the companies who set the rules and thresholds that the algorithms apply. Given the economic and reputational incentives on the companies to avoid fines, it seems highly likely that they will opt for hair-trigger, error-prone algorithms that end up removing unobjectionable content.
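The incentive problem can be sketched in a few lines of Python. Everything here is hypothetical – the posts, the scores and the thresholds are invented for illustration – but it shows the basic trade-off: a classifier that must never miss extremist content will be tuned to a lower threshold, and lawful content gets swept up with it.

```python
# Hypothetical posts with a made-up classifier score in [0, 1];
# posts scoring at or above a threshold are removed automatically.
POSTS = [
    # (description, classifier_score, actually_extremist)
    ("recruitment video",               0.95, True),
    ("news report quoting the video",   0.70, False),
    ("academic analysis of propaganda", 0.55, False),
    ("satire mocking extremists",       0.45, False),
    ("holiday photos",                  0.05, False),
]

def takedowns(threshold):
    """Return (extremist_removed, lawful_removed) counts at a threshold."""
    removed = [(desc, bad) for desc, score, bad in POSTS if score >= threshold]
    return (sum(1 for _, bad in removed if bad),
            sum(1 for _, bad in removed if not bad))

# A cautious threshold removes only the genuinely extremist post...
print(takedowns(0.9))   # (1, 0)
# ...but a platform fined for anything it misses will drop the
# threshold, and lawful reporting, analysis and satire go with it.
print(takedowns(0.4))   # (1, 3)
```

The numbers are invented, but the shape of the problem is not: fines for under-removal push the threshold down, and nothing in the proposal pushes back against over-removal.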

May and Macron’s proposal is to identify and remove new extremist content. It is unclear whose rules they want Internet companies to enforce. The Facebook Files showed Facebook's own policies are to delete a lot of legal but potentially objectionable content, often in a seemingly arbitrary way. Alternatively, if the companies are to enforce UK and French laws on hate speech and so on, that will probably be a lot less censorious than May and Macron are hoping for.

The history of automated content takedown suggests that removing extremist content without also removing harmless content will be an enormous challenge. The mistakes made by YouTube's ContentID system, which automates takedowns of allegedly copyright-infringing content, are well-documented.

Context is king when it comes to judging content. Will these automated systems really be able to tell the difference between posts that criticise terrorism while using video of terrorists and posts promoting terrorism that use the same video?
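The context problem is structural, not a matter of tuning. Fingerprint matching – the approach behind systems like ContentID – identifies a clip, not the post around it. A minimal sketch (the data is hypothetical, and a plain SHA-256 hash stands in for a real perceptual fingerprint):

```python
import hashlib

def fingerprint(clip_bytes: bytes) -> str:
    """Stand-in for a perceptual hash: identical clips, identical hash."""
    return hashlib.sha256(clip_bytes).hexdigest()

# Two posts embed the exact same clip, with opposite intent.
CLIP = b"<raw bytes of the same terrorist video>"
posts = {
    "journalist criticising the attack": CLIP,
    "account promoting the attack":      CLIP,
}

# The clip's fingerprint is on a blocklist of known extremist material.
blocklist = {fingerprint(CLIP)}

for caption, clip in posts.items():
    removed = fingerprint(clip) in blocklist
    print(caption, "-> removed:", removed)
# Both posts are removed: the hash sees the clip, not the caption.
```

Judging intent would require understanding the surrounding text, the poster and the audience – exactly the contextual judgement that automated matching does not make.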

Some will say this is a small price to pay if it stops the spread of extremist propaganda, but it will create a framework for censorship that can be turned against anything perceived as harmful. It might also simply push extremists onto other platforms to promote their material. Would they actually be any less able to communicate?

Questions abound. What incentives will the companies have to get it right? Will there be any safeguards? If so, how transparent will those safeguards be? Will the companies be fined for censoring legal content as well as failing to censor illegal content?

And what about the global picture? Internet companies like Facebook, Twitter and YouTube have a global reach. Will they be expected to create a system that can be used by any national government – even those with poor human rights records? It’s unclear whether May and Macron have thought through whether they are happy for Internet platforms to become an arm of every state they operate in.

All this, of course, is in the context of Theresa May entering a new Parliament with a very fragile majority. She will be careful only to bring legislation to Parliament that she is confident of getting through. Opposition in Parliament to these plans is far from guaranteed: in April the Labour MP Yvette Cooper recommended fines for tech companies in a report by the Home Affairs Select Committee, which she chairs.

ORG will challenge these proposals both inside and outside Parliament. If you'd like to support our work you can do so by joining ORG. It's £6 a month and we'll send you a copy of our fantastic new book when you join.


Comments (3)

  1. Robert Seddon:
    Jun 13, 2017 at 04:20 PM

    Never lets details like electoral disaster, a subsequent reshuffle, ongoing confidence-and-supply negotiations and impending Brexit negotiations get in the way of her pet obsessions, does she?

    A more far-sighted thinker might now be thinking that if a government that takes inspiration from the Little Red Book is now a real future possibility then building more means for online speech to be restricted might not be the cleverest of moves.

  2. Filipescu Mircea Alexandru:
    Jun 13, 2017 at 05:37 PM

    It's alarming at which rate those people are losing touch with reality. Apparently they're living in some parallel dimension based off Star Trek, where there exist artificial intelligence programs that can identify and understand and act upon specific content! Mankind won't have anything like this for decades to come, what the heck people? We should pass new laws demanding presidents and prime ministers to see a psychiatrist before taking their seats, in my opinion this is much more urgent right now.

  3. Matt:
    Jun 15, 2017 at 11:16 AM

    I get the feeling that Nay's real goal is to shut down blogs like AAV and squarkbox, which point out all the hypocrisy in Tory policy, regularly go viral on Facebook, and probably inspired more people than usual to get out and vote for the opposition.



This thread has been closed from taking new comments.