Facebook censorship complaints could hand government immense control

The leaked Facebook Files, the social media company’s internal policies for content regulation published by the Guardian, show that, like a relationship status on Facebook, content moderation is complicated.

It is complicated because Facebook is a near-monopoly in the social media market, making it both a power player and a target for regulation. It is complicated because there is an uneasy balance to strike between what is law, what is code, and what is community decency.

It is complicated because Facebook finds itself in a media landscape determined to label it as either a publisher or a platform, when neither title is suitable. And ultimately, it is complicated because we are talking about human interaction and regulation of speech at a scale never seen before.

Big player. Big target

Facebook are a monopoly. And that is a big problem. With almost 2 billion users on the site, operating in almost every country in the world, they hoard the data generated by a community of a size never before seen. The leaks show that even they seem unclear how best to police it.

It could be argued that, as a private company, they can create their terms and conditions as they see fit, but their global domination means that their decisions have a global impact on free speech. This impact creates obligations for them to uphold standards of free expression that are not normally expected of a private company.

Operating in so many countries also means that Facebook are an easy target for criticism from many different governments and media, who will blame them for things that go wrong simply because of their sheer scale. Those critics can see an easy way to impose control by targeting Facebook through the media or regulation, as seen most recently in the Home Affairs Committee report, where social media companies were accused of behaving irresponsibly in failing to police their platforms.

World Policing in the Community

Facebook’s business model is premised on users being on their site and sharing as much information as possible, so that personal data can be used to sell highly targeted advertising. Facebook do not want to lose customers who are offended, which means the threshold for removal is offence, a much lower bar than illegality.

Facebook is not unregulated. The company has to comply with court orders when served, but, as the leaked files show, making judgments about content that is probably legal but offensive or graphic is much more difficult.

Being the community police for the world is a deeply complicated position, even more so if your platform is often seen as the Internet itself.

Law versus community standards

Facebook will take down material reported to them that is illegal. However, the material highlighted by the Guardian as inappropriate for publication falls into a category of offensiveness, such as graphic material or sick jokes, rather than illegality.

Where people object to illegal material appearing and not being removed fast enough, we should also bear in mind the actual impact. For instance, how widely has it actually circulated? In social media, longevity and contacts are what tend to produce visibility for content. We suspect a lot of ‘extremist’ postings are not widely seen, as the accounts will be swiftly deleted.

In both cases, there is a serious argument that it is society, not Facebook, generating unwanted material. While Facebook can be pressed to remove it, this won’t stop its existence. At best, it might move off the platform and arrive in a less censored, probably less responsible environment, even one that caters to and encourages bad behaviour. 4Chan is a prime example of this: its uncensored message boards attract abuse, sick jokes and the co-ordination of attacks.

Ultimately, behaviour such as abuse, bullying and harassment needs to be dealt with by law enforcement. Only law enforcement can deliver protection, pursue prosecutions and work with individuals to correct their behaviour and reduce actual offending. Failing to take serious action against real offenders encourages bad behaviour.

Publisher v Platform

What happens when your idea of bringing the world together suddenly puts you in the position of a publisher? When people are no longer just sharing their holiday pictures, but organising protests, running campaigns, even publishing breaking news?

Some areas of the media have long delighted in the awkward positioning of Facebook as a publisher (subject to editorial controls and speech regulation) rather than a platform (a service where users can express their own ideas, which are not representative of the service). It might be worth those media remembering that they too rely on “safe harbour” regulations designed to protect platforms, for all the comments that their readers post below their articles. Placing regulatory burdens that create new legal liabilities for user-generated content would be onerous and likely to limit free expression, which no one should want to see.

Safe harbour arrangements typically allow user content to be published without liability, and place a duty on platforms to take down material when it is shown to be illegal. Such arrangements are only truly fair when courts are involved. Where an individual, or the police, can notify without a court, platforms are forced to become risk averse. Under the DMCA copyright arrangements, for instance, a user can contest their right to have material re-published after a takedown, but in doing so must accept being taken to court. All of this places the burden of risk on the defendant rather than the accuser. Only a few of those accused will opt to take the legal risk, whereas normally accusers would be the ones who have to be careful about whom they take to court over their content.

Money to burn. People to hire

Facebook have enough money that they should be able to go further in hiring humans to do this job better. They appear to be doing that, and should be trying to involve more human judgement in speech regulation, not less.

Considering the other options on the market, more human involvement would seem the most reasonable approach. Facebook have tried, and failed miserably, to moderate content by algorithm.

However, moderating content across so many different cultures and countries, reportedly leaving human moderators only 10 seconds to decide whether to take down a piece of content, is a massive task that will only grow as Facebook expands.

We need to understand that moderation is rules-based, not principle-based. Moderators strictly match content against Facebook’s “rules” rather than working from principles about whether something is reasonable or not. The result is that decisions will often seem arbitrary or just bad. The need for rules rather than principles stems from making judgements at scale, and seems unavoidable.

Algorithms, to be clear, can only make rules-based approaches less likely to be sane, and more likely to miss human, cultural and contextual nuances. Judgement is an exclusively human capability; machine learning only simulates it. When a technologist embeds their own or their employer’s view of what is fair into a technology, any potential for the exercise of discretion is turned from a scale into a step, and humanity is quantised. That quantisation of discretion is always in the interest of the person controlling the technology.

One possible solution to the rigidity of rules-based moderation is to create more formal flexibility, such as appeals mechanisms. However, Facebook are likely to prefer dealing with exceptional cases as they come to light through public attention, rather than imposing costs on themselves.

Manifesto pledges

Any push for more regulation, such as that suggested by the Conservative manifesto, is highly likely to encourage the automation of judgements to reduce costs, and validates the same demand being made by every other government. The Conservative pledges here seem to us to be a route straight to the Computer saying No.

Thus, if you are concerned about the seemingly arbitrary, opaque rules that the company sets for Facebook’s content moderators, then you should be doubly concerned by the Conservatives’ manifesto pledge to bring further state regulation to the Internet.

Governments have been the home of opaque and arbitrary rules for years, and the Conservatives, if elected, would deliver an internet where the state creates incentives to remove anything potentially objectionable (anything that could create adverse publicity, perhaps) and decides what level of security citizens should be able to enjoy from the platforms they use every day. That is not the future we want to see.

So we have a private monopoly whose immense power in deciding what content people view is concerning, but also a concerning proposal for too much state involvement in that decision. It is a situation where you want to see better rules in place, but not rules that turn platforms into publishers, and a problem so vast that simply hiring more people would not solve it alone. Like we said, it’s complicated.

What is simple, however, is that Facebook present a great opportunity for media stories, and for complaints followed by power grabs by governments to police the acceptability of speech that they would never dare make illegal. We may regret it if these political openings translate into legislation.