Blog


August 09, 2017 | Alec Muffett

The British Public & its Freedom to Tinker with Data

This is a guest blog by Alec Muffett, a security researcher and member of ORG’s Board of Directors.

Britain - after a somewhat shaky start - has recognised its proud tradition of leadership in cryptographic research and development.

From Alan Turing's success at breaking the Enigma cipher at Bletchley Park, and Tommy Flowers' "Colossus" (also there) to break the Lorenz cipher, to Clifford Cocks' early and secret research into what later became known as "Public Key Encryption", to GCHQ's vast deployment of technology to enable mass-surveillance of undersea cable communications — whatever one's opinion of the fruits of the work, Britain is recognised as a world leader in the field of cryptography.

And one of the great truths of cryptography is: cryptography only improves when people are trying to break it. From academics to crossword-puzzle fans, cryptography does not evolve unless people are permitted to attack its means, methods and mechanisms.

This brings us to the recently announced "Data Protection Bill", in which you will find a well-intentioned paragraph:

Create a new offence of intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data. Offenders who knowingly handle or process such data will also be guilty of an offence. The maximum penalty would be an unlimited fine.

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/635900/2017-08-07_DP_Bill_-_Statement_of_Intent.pdf (page 10)

This speaks to the matter of "data anonymisation" (and the reverse processes of re-identification or de-anonymisation) where the intention is that some database — for instance a local hospital admissions list — could be stripped of patient names and yet still usefully processed/shared to research the prevalence of disease in a population.

Done improperly, this can go wrong:

…leading to failures where the "anonymity" can be defeated by combining several data sources, or by attacking the data set analytically, in order to return some semblance of the original data.
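To make this concrete, here is a minimal, hypothetical sketch of a classic "linkage attack" (the dataset, column names and values are all invented for illustration): an "anonymised" hospital extract still carries quasi-identifiers such as postcode, date of birth and sex, and joining it against a public register on those fields restores the names that were supposedly stripped out.

```python
# Hypothetical sketch of a linkage attack; every value here is invented.
# The "anonymised" hospital extract has had names removed, but keeps
# quasi-identifiers that also appear in a public register.
import pandas as pd

hospital = pd.DataFrame([
    {"postcode": "SE1 7PB", "dob": "1972-03-14", "sex": "F", "diagnosis": "asthma"},
    {"postcode": "M1 4BT",  "dob": "1988-11-02", "sex": "M", "diagnosis": "diabetes"},
])

public_register = pd.DataFrame([
    {"name": "A. Example", "postcode": "SE1 7PB", "dob": "1972-03-14", "sex": "F"},
    {"name": "B. Example", "postcode": "M1 4BT",  "dob": "1988-11-02", "sex": "M"},
])

# Joining on the quasi-identifiers restores a name for each "anonymous" record.
reidentified = hospital.merge(public_register, on=["postcode", "dob", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Where a combination of quasi-identifiers is unique in the population, as it very often is, stripping names alone provides little real anonymity; whether researchers and hobbyists may lawfully attempt such joins is precisely what the proposed offence would determine.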

Ban it?

So it might sound like a good idea to ban re-identification, yes?

Well, no; the techniques of data anonymisation are mostly a form of "code book" cryptography, and (as above) if it's not legal to prod, poke, and try to break the mechanisms of anonymisation, then anonymisation, like cryptography, will not improve.
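To illustrate the "code book" point, here is a deliberately simplified, hypothetical sketch (the scheme and values are invented): pseudonymisation often amounts to replacing identifiers with tokens via a secret mapping, and if that mapping can be reconstructed, for instance because the tokens are derived predictably from the identifiers, the "anonymisation" unravels.

```python
# Hypothetical sketch: pseudonymisation as a "code book". All values invented.
import hashlib

def pseudonym(name: str) -> str:
    # A naive scheme: an unsalted hash of the name acts as the code-book entry.
    return hashlib.sha256(name.encode()).hexdigest()[:8]

records = {pseudonym(n): d for n, d in [("Alice Example", "asthma"),
                                        ("Bob Example", "diabetes")]}

# An attacker with a list of candidate names can rebuild the code book simply
# by hashing guesses, defeating the "anonymisation" without ever seeing it.
for guess in ["Alice Example", "Bob Example", "Carol Example"]:
    if pseudonym(guess) in records:
        print(guess, "->", records[pseudonym(guess)])
```

Finding and demonstrating weaknesses like this is exactly the kind of probing the Bill risks chilling.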

Therefore: banning re-identification will harm all of our individual security; it should be explicitly legal for anyone — the professionals, the crossword-puzzlers — to "have a go" at re-identification of data. Certainly it should be illegal for anyone to exploit or share the fruits of any successful re-identification — as is currently suggested — but the act of re-identification itself should be neither prevented nor chilled in any way.

To swap metaphors: if you drive a car in the UK then it will have been crash-tested by experts in order to determine how safe it is; but that is not sufficient. We do not rely upon experts to crash them once, declare them safe, and then ban members of the public from crashing their cars. Instead, much of our learning and standards in car safety are from analysing actual, real-world incidents.

Similarly: anonymisation is hard to do correctly, and the failures in how people and organisations have deployed it will only be evident if the many eyes of the general public are permitted to dig into the flaws that may have arisen from one example to the next. It will not be sufficient, as this bill announcement continues, for "…the important role of journalists and whistleblowers […to…] be protected by exemptions."

Everyone has a stake in the collective security of our information, and we — the public — are the code-breakers who should be able to research, and hold to account, any instances of diverse and shoddy anonymisation that may be foisted upon us. Therefore this bill proposal must be amended and the freedom of the public to attempt re-identification must not be abridged.

 — Alec Muffett, security researcher & member of the Board of Directors, ORG

Further reading

https://en.wikipedia.org/wiki/Bombe
https://en.wikipedia.org/wiki/Tommy_Flowers
https://en.wikipedia.org/wiki/Clifford_Cocks
https://en.wikipedia.org/wiki/Tempora
https://en.wikipedia.org/wiki/Cryptanalysis
https://en.wikipedia.org/wiki/De-anonymization
https://en.wikipedia.org/wiki/Data_Re-Identification
https://en.wikipedia.org/wiki/Euro_NCAP
http://www.theregister.co.uk/2017/08/07/data_protection_bill_draft/ 



August 01, 2017 | Ed Johnson-Williams

Sorry Amber Rudd, real people do value their security

It’s not for the home secretary to tell the public they don’t need encryption

Amber Rudd has been out doing the media rounds this morning (£) talking about the issues end-to-end encryption poses to law enforcement. One comment in particular caught our eye:

“Real people often prefer ease of use and a multitude of features to perfect, unbreakable security. Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family?”

This is a little like saying: "Who uses a car because it has airbags and seatbelts, rather than because it’s a convenient way to get around?"

The Home Office strategy here may be to persuade internet companies to take action by telling them that ordinary people don’t care about security. This would be dangerous and misleading.

Clearly, real people (who, then, are Rudd’s ‘not real’ people?) do value security in their communications, just as they value safety in their cars. Security is not – or at least does not have to be – the opposite of usability.

For many people, good security makes a service usable and useful. Some people want privacy from corporations, abusive partners or employers. Others may be worried about confidential information, sensitive medical conversations, or be working in countries with a record of human rights abuses.

Whatever the reasons people want secure communications, it is not for the Home Secretary to tell the public that they don’t have any real need for end-to-end encryption.

While Rudd seems to be saying she does not want encryption to be “removed” or bypassed, there are other things she might be looking for. It is possible that she wants the internet companies to assist the police with “computer network exploitation” – that’s hacking people’s devices.

It could mean providing communications data about users which could include data such as: "This user uses this device, often these IP addresses, this version of their operating system with these known vulnerabilities, talks to these people at these times, is online now, is using this IP address, is likely at this address and has visited these websites this many times."

Alternatively, Rudd might mean pushing out compromised app updates with end-to-end encryption disabled.

However, it is likely to be police rather than security services asking for this help. While targeted hacking does provide an investigative option that avoids blanket communications surveillance, it would be risky for the police to have these powers. Training and oversight, after all, are not as thorough or exacting as in the security services.

What is completely lacking is any serious attempt to tell the public what the Home Office wants internet companies to do to make people’s end-to-end communications accessible.

We should be told what risks the public would be exposed to if the companies were to agree to the Home Office’s private requests. Have these risks been properly weighed up and scrutinised? What safeguards and oversight would there be?

One risk is that users may start to distrust tech companies and the apps, operating systems and devices that they make. When security vulnerabilities are identified, firms push out updates to users. Keeping devices and apps up-to-date is one of the most important ways of keeping them secure. But if people are unsure whether they can trust pending updates, will they keep their devices up-to-date?

It would be incredibly damaging to UK security if large numbers of people were dissuaded from doing so. A prime example is the WannaCry ransomware attack that paralysed parts of the NHS in May. It spread through old Windows computers that hadn’t been updated, forcing doctors to cancel thousands of appointments.

The government must spell out its plans in clear, precise legislation and subject that legislation to full parliamentary scrutiny, and it should bring security and usability experts into a public debate about these questions.

Measures that deeply affect everybody’s privacy, freedom of expression, and access to information must not be decided behind closed doors.



June 21, 2017 | Jim Killock

Queen’s speech 2017—threats to privacy and free speech

First analyses of the Queen’s Speech are focussing on what isn’t included, as a weakened Conservative Government appears to have dropped a number of its manifesto commitments, but there are several worrying things for digital rights. One welcome development could be data protection legislation to implement the options left open in the GDPR.

There are references to a review of Counter-terrorism and a Commission for Countering Extremism, which will include Internet-related policies. Although details are lacking, these may contain threats to privacy and free speech. The government has opted for a “Digital Charter”, which isn’t a Bill, but something else.

Here are the key areas that will affect digital rights:

Digital Charter

This isn’t a Bill, but some kind of policy intervention, backed up by “regulation”. This could be the system of fines for social media companies previously mentioned, but this is not explained.

The Digital Charter appears to address both unwanted and illegal content or activity online, and the protection of vulnerable people. The work of the CTIRU and the IWF is mentioned as an example of efforts to remove illegal or extremist content.

At this point, it is hard to know exactly what harms will emerge, but pushing enforcement into the hands of private companies is problematic. It means that decisions never involve courts and are not fully transparent and legally accountable.

Counterterrorism review

There will be a review of counterterrorism powers. The review includes “working with online companies to reduce and restrict the availability of extremist material online”.

This appears to be a watered-down version of the Conservative manifesto commitment to give companies greater responsibility to take down extremist material from their platforms. Google and Facebook have already issued public statements about how they intend to improve the removal of such material.

Commission for Countering Extremism

A Commission will look at the topic of countering extremism, likely including on the Internet.

This appears to be a measure to generate ideas and thinking, which could be a positive approach if it involves considering different approaches rather than pressing ahead with policies in order to be seen to be doing something. The quality of the Commission will therefore depend on its ability to take a wide range of evidence and assimilate it impartially; it faces a significant challenge in ensuring that fundamental rights are respected in any policies it recommends.

Data Protection Bill

A new Data Protection Bill “will fulfil a manifesto commitment to ensure the UK has a data protection regime that is fit for the 21st century”. This will replace the Data Protection Act 1998, which is in any case being superseded as a result of the new General Data Protection Regulation passed by the European Parliament last year. Regulations apply directly, so the GDPR does not need to be ‘implemented’ in UK law before Brexit.

We welcome that (at least parts of) the GDPR will be implemented in primary legislation with a full debate in Parliament. It is not clear whether the text of the GDPR will be brought into this Bill, or whether the Bill will merely supplement it.

This appears to be a bill to implement at least some of the ‘derogations’ (options) in the GDPR, plus the new rules for law enforcement agencies, which came in with the new law enforcement-related Directive and have to be applied by EU member states.

The bulk of the important rights are in the GDPR, and cannot be tampered with before Brexit. We welcome the chance to debate the choices, and especially to press for the right of privacy groups to bring complaints directly.

Missing: sex and relationships education

There is no mention of the introduction of compulsory sex and relationship education in schools, which was a manifesto commitment for all the main parties: Labour, the Lib Dems and the Conservatives. As there appeared to be a consensus on this issue, it is not clear why it seems to have been dropped.

Encryption is also not mentioned, but that’s because the powers will be brought in through a statutory instrument enabling Technical Capability Notices.

Help us win new rights and fight off censorship

There’s lots to do. Please help us fight proposals for privatised and unaccountable censorship, and to establish rights for privacy groups to complain directly about data protection breaches. Join ORG for £6/month so we can defend your rights.

 



June 13, 2017 | Ed Johnson-Williams

UK and France propose automated censorship of online content

Theresa May and Emmanuel Macron's plans to make Internet companies liable for 'extremist' content on their platforms are fraught with challenges. They entail automated censorship, risking the removal of unobjectionable content and harming everyone's right to free expression.

The Government announced this morning that Theresa May and the French President Emmanuel Macron will talk today about making tech companies legally liable if they “fail to remove unacceptable content”. The UK and France would work with tech companies “to develop tools to identify and remove harmful material automatically”.

No one would deny that extremists use mainstream Internet platforms to share content that incites people to hate others and, in some cases, to commit violent acts. Tech companies may well have a role in helping the authorities challenge such propaganda but attempting to close it down is not as straightforward or consequence-free as politicians would like us to believe.

First things first, how would this work? It almost certainly entails the use of algorithms and machine learning to censor content. With this sort of automated takedown process, the companies instruct the algorithms to behave in certain ways. Given the economic and reputational incentives on the companies to avoid fines, it seems highly likely that the companies will go down the route of using hair-trigger, error-prone algorithms that will end up removing unobjectionable content.

May and Macron’s proposal is to identify and remove new extremist content. It is unclear whose rules they want Internet companies to enforce. The Facebook Files showed Facebook's own policies are to delete a lot of legal but potentially objectionable content, often in a seemingly arbitrary way. Alternatively, if the companies are to enforce UK and French laws on hate speech and so on, that will probably be a lot less censorious than May and Macron are hoping for.

The history of automated content takedown suggests removing extremist content without removing harmless content will be an enormous challenge. The mistakes made by YouTube’s ContentID system, which automates takedowns of allegedly copyright-infringing content, are well documented.

Context is king when it comes to judging content. Will these automated systems really be able to tell the difference between posts that criticise terrorism while using video of terrorists and posts promoting terrorism that use the same video?
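As a purely illustrative sketch (the blocklist, threshold and example posts are all invented, and real systems are more sophisticated), even a toy filter shows the problem: it scores words rather than meaning, so reporting on terrorism and promotion of it look identical.

```python
# Hypothetical sketch of a hair-trigger keyword filter; terms, threshold and
# example posts are invented for illustration only.
BLOCKLIST = {"attack", "bomb", "martyr"}
THRESHOLD = 2  # remove a post if it matches this many blocklisted terms

def should_remove(post: str) -> bool:
    words = (w.strip(".,!?") for w in post.lower().split())
    return sum(w in BLOCKLIST for w in words) >= THRESHOLD

news_report = "Police have confirmed the bomb attack was planned for months."
propaganda = "Join the attack and become a martyr."

print(should_remove(news_report))  # True -> lawful journalism gets removed
print(should_remove(propaganda))   # True -> the filter cannot tell them apart
```

Machine-learned classifiers replace the keyword list with statistics, but the underlying difficulty, scoring text without understanding its context, remains.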

Some will say this is a small price to pay if it stops the spread of extremist propaganda, but it will lead to a framework for censorship that can be used against anything that is perceived as harmful. All of this might result in extremists moving to other platforms to promote their material. But will they actually be less able to communicate?

Questions abound. What incentives will the companies have to get it right? Will there be any safeguards? If so, how transparent will those safeguards be? Will the companies be fined for censoring legal content as well as failing to censor illegal content?

And what about the global picture? Internet companies like Facebook, Twitter and YouTube have a global reach. Will they be expected to create a system that can be used by any national government – even those with poor human rights records? It’s unclear whether May and Macron have thought through whether they are happy for Internet platforms to become an arm of every state that they operate in.

All this of course is in the context of Theresa May entering a new Parliament with a very fragile majority. She will be careful only to bring legislation to Parliament that she is confident of getting through. Opposition in Parliament to these plans is far from guaranteed. In April the Labour MP Yvette Cooper recommended fines for tech companies in a report she headed up on the Home Affairs select committee.

ORG will challenge these proposals both inside and outside Parliament. If you'd like to support our work you can do so by joining ORG. It's £6 a month and we'll send you a copy of our fantastic new book when you join.



June 05, 2017 | Jim Killock

Our response to the London and Manchester Attacks

Some of you will know that ORG for many years had our offices in Borough. Until summer 2015 it was a daily occurrence for us to walk and to eat in the places where Saturday’s appalling events took place.

As Londoners, we are relieved that we do not know anyone who has been directly affected. It is also genuinely shocking, as it was for some of us during the 2005 bombings, to have personal connections with the places involved in brutal terrorist killings. It is a reminder of the personal trauma that is also being felt by our friends and colleagues in Manchester. Many of us feel very exposed in the face of terrorism and violence.

As individuals, it is also natural to ask whether our own views can withstand this kind of onslaught. Is it right to resist or question measures that the government wishes to pursue, which it claims could improve security, or could at least reassure people that everything possible is being done? Is it selfish, or unrealistic, to argue against potential protections when people are seeking to ensure that, as Theresa May put it, “enough is enough”?

However, many people in London and Manchester will not wish these events to be exploited and used to usher in policies that are ill-thought out, illiberal or otherwise seek to exploit the situation. This is not a denial of the vulnerability that we feel, but a desire to ensure that terrorism does not win. These attacks so often occur in cities with very liberal and open outlooks, where there is little or no expectation of political violence, and toleration is a normal way of being.

London and Manchester are both cities with big creative and tech sectors, with many people very aware of what the Internet does, its benefits and also the dangers of attempts to control, censor and surveil. If the government uses these events to pursue policies that are ineffective, meaningless or dangerous, then many of those who feel a personal investment in seeing our communities protected may quickly feel that these events are being exploited rather than dealt with maturely.

Calls for an end to tolerance of extremism are perhaps even more ill-judged. It is hard to imagine that the public sector has been tolerating extremism, except in relatively isolated examples. These statements could easily lead to over-reactions and quite divisive policy. For instance, the controversial Prevent programme, backed up by legislative anti-extremist quasi-policing duties across many parts of the public sector, could ramp up, leading to serious misjudgements.

It seems particularly harsh to accuse Muslim communities of tolerating extremist views without also recognising that there are claims that the Manchester attacker had been reported as potentially dangerous by members of his community, and without articulating that extremists wish to create divisions between us. Whatever changes may be needed, it would also be wise to recognise that the government too may have had its failings.

We will be looking very carefully at Theresa May’s proposals for online censorship and attempts to limit the security of ordinary users of Internet services. To be clear, we are not saying that there are no measures that could ever be taken. There are already, quite rightly, laws about what is illegal and duties on companies to act when they are instructed. Companies also do a great deal well beyond their legal duties, because they do not want any association with any kind of criminality.

However, what we have heard so far from the government does not give us confidence that their proposals will be necessary and proportionate, or ensure legal accountability. This is what the Conservative manifesto has to say on page 79:

We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm. We will make clear the responsibility of platforms to enable the reporting of inappropriate, bullying, harmful or illegal content, with take-down on a comply-or-explain basis.
We will continue to push the internet companies to deliver on their commitments to develop technical tools to identify and remove terrorist propaganda, to help smaller companies build their capabilities and to provide support for civil society organisations to promote alternative and counter-narratives.
… In addition, we do not believe that there should be a safe space for terrorists to be able to communicate online and will work to prevent them from having this capability. (ORG wiki)

We—and we hope you—will want to know: will the proposals work? Will they create new risks or adverse effects? Who will hold the police or companies to account for their decisions, and how? So far, what we have heard does not give us much confidence that we will receive satisfactory answers.

Theresa May’s speech had the feel of electioneering rather than a common-sense, values- and evidence-based approach. That is simply not being sufficiently serious and respectful about what has happened.

 



June 04, 2017 | Jim Killock

The London Attacks

Open Rights Group condemns the appalling attack at London Bridge; this is not only a violent assault on individual lives but an attack against the freedom and security we enjoy in the UK.

It is disappointing that in the aftermath of this attack, the Government’s response appears to focus on the regulation of the Internet and encryption.

This could be a very risky approach. If it succeeds, it could push these vile networks into even darker corners of the web, where they will be even harder to observe.

But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the Internet are not the simple solution that Theresa May is claiming.

Real solutions—as we were forced to state only two weeks ago—will require attempts to address the actual causes of extremism. For instance, both Jeremy Corbyn and Theresa May have drawn attention to the importance of finding solutions to the drivers of terrorism in countries including Syria, Iraq and Libya.

Debating controls on the Internet risks distracting from these very hard and vital questions.

 



May 24, 2017 | Jim Killock

The Manchester attack

Open Rights Group wishes to express its sympathy for the victims of the vile and brutal attack in Manchester. We condemn these violent attacks, which seem even more abhorrent when deliberately targeted at children and young people.

We hope that law enforcement and intelligence agencies will help to bring those involved in these attacks to justice and we support their work combating terrorism. We believe that these agencies need powers of surveillance to do this.

However, we also believe that there must be limits to these powers in order to preserve the democratic values of freedom and liberty - the same values that terrorists want to undermine. This is the central challenge of the moment, in our view.

There are many emotions and reactions that flow from this event: solidarity; the need to comfort as best we can; the value we place on our communities; and the human aid that people have given to those directly affected. But there is also fear, hatred and a desire to do anything that could prevent such an attack from happening again.

The political response to this attack is complicated by the fact that it has taken place in the middle of an election. Campaigning has been put on hold, but politicians cannot help but be aware that their response will affect the outcome of the election - and this could see policies that exploit public fears.

The traditional response in the UK is to first commit to British values, and say that terrorists will never remove these; and then to try to reassert a sense of security and control by showing that security measures will be stepped up.

Often these attempts are highly misleading. Security measures can be helpful, but building a security state will never be enough to stop terrorism. Terrorism needs to be dealt with at source, through changes in politics and society. As long as we have failed states in Libya, Syria and elsewhere, we will not be safe. We do not wish to gloss over the complexity and difficulty of tackling these issues, but changes there are the first step to reducing the threats of terrorism.

Meanwhile, surveillance including mass surveillance appears to be leading to more information than can be effectively processed, with known individuals escaping investigation because they are too numerous for the authorities to pursue them all. In this case, even human resources may face limits, as expansion of staff numbers can lead to bureaucratisation and new bottlenecks. Terrorists can also adapt their behaviour to avoid surveillance technologies, by changing their tech, avoiding it altogether, or simplifying their operations to make them less visible.

This does not mean we should give up, nor does it mean that technology can play no role in surveillance. It does, however, mean that we should not assume that claims for more resources and powers will necessarily result in security.

ORG is concerned that the investigatory powers the Government uses ostensibly to keep us safe can themselves be exploited by criminals and terrorists.

It is worrying to hear that in the wake of these attacks, the Home Office wants to push ahead with proposals to force companies to weaken the security of their products and services through “Technical Capability Notices” (TCNs). These are notices that can be issued to a company to force them to modify their products and services so that the security agencies can use them to access a target’s communications.

The Government already has these powers on the statute book, as they were outlined in the Investigatory Powers Act, passed last December. To make the powers active, they must pass a regulation that gives more detail about how TCNs could be used.

Recently, the Home Office held a ‘targeted’ consultation about the new regulations. The draft was only sent to a few companies for their response, even though these powers could affect the digital security of people in the UK and beyond.

As a result, ORG leaked the proposals so that affected businesses and individuals could raise their concerns with the Home Office. Over 1,400 ORG supporters sent their comments to the Home Office and ORG also submitted a response that we published here.

Our core concern is that using TCNs to force companies to limit or bypass encryption or otherwise weaken the security of their products will put all of us at greater risk. Criminals could exploit the same weaknesses. Changes to technology at companies merely need to be ‘feasible’ rather than ‘safe’ or ‘sensible’ for users or providers.

The recent #WannaCry hack demonstrated how a vulnerability discovered by the National Security Agency (NSA) to access its targets’ communications was then used by criminals. These are powers involving different technologies, but the principle remains the same: governments should be doing all they can to protect our digital security.

Another concern is that TCNs may be served on companies overseas, including WhatsApp, which is owned by Facebook. Facebook has assets in the UK and can easily be targeted for compliance. Others, such as WhisperSystems, who produce Signal, have no UK assets. The UK appears to be deliberately walking into an international dispute, where much of the legal debate will be entirely hidden from view, as the notices are served in secret, and it is not clear what appeal routes to public courts really exist. Other governments, from Turkey to China, will take note.

Powers must be proportionate, and agencies should not be given a blank cheque. Justification for and oversight of the use of TCNs and vulnerabilities is inadequate, so the risks cannot be properly assessed in the current legal frameworks. There is no regime for assessing the use of vulnerabilities including ‘zero days’.

We urge politicians to take a detailed and considered look at TCNs and the use of vulnerabilities, to ensure that the consequences of their use can be properly evaluated and challenged.

These will seem like narrow issues compared with Monday’s events. And that is true. The wider issue, however, is that we as a society do not react to these events by emulating our enemies, by treating all citizens as a threat, and gradually removing British values such as the rule of law, due process and personal privacy.



May 22, 2017 | Jim Killock

Facebook censorship complaints could award government immense control

Facebook censorship complaints run both ways—and we should be careful when politicians press for more controls

The leaked Facebook Files, the social media company’s internal policies for content regulation published by the Guardian, show that, like a relationship status on Facebook, content moderation is complicated.

It is complicated because Facebook is a near-monopoly in the social media market, making them both a power player and a target for regulation. It is complicated because there is an uneasy balance to strike between what is law, what is code, and what is community decency.

It is complicated because Facebook finds itself in a media landscape determined to label it as either a publisher or a platform, when neither is a suitable title. And ultimately, it is complicated because we are talking about human interaction and regulation of speech at a scale never seen before.

Big player. Big target

Facebook are a monopoly, and that is a big problem. With almost 2 billion users on the site, operating in almost every country around the world, they are hoarding the data generated by a community of a size never before seen. The leaks show that even they seem unclear about how best to police it.

It could be argued that as a private company, they can create their terms and conditions as they see fit but their global domination means that their decisions have global impact on free speech. This impact creates obligations for them to uphold standards of free expression that are not normally expected of a private company.

Operating in so many countries also means that Facebook are an easy target for criticism from many different governments and media, who will blame them for things that go wrong because of their sheer scale. These critics see an easy way to impose control by targeting Facebook through the media or through regulation. This was most recently seen in the Home Affairs Committee report, where social media companies were accused of behaving irresponsibly in failing to police their platforms.

World Policing in the Community

Facebook’s business model is premised on users being on their site and sharing as much information as possible, so that personal data can be used to sell highly targeted advertising. Facebook do not want to lose customers who are offended, which means they moderate against a threshold of offence, a much lower bar than illegality.

Facebook is not unregulated. The company has to comply with court orders when served but, as the leaked files show, making judgements about content that is probably legal but offensive or graphic is much more difficult.

Being a community police force for the world is a deeply complicated position, even more so if your platform is often seen as being the Internet itself.

Law versus community standards

Facebook will take down material reported to them that is illegal. However, the material highlighted by the Guardian as inappropriate for publication falls into a category of offensiveness, such as graphic material or sick jokes, rather than illegality.

Where people are objecting to illegal material appearing and not being removed fast enough, we should also bear in mind the actual impact. For instance, how widely has it actually circulated? In social media, longevity and contacts are what tend to produce visibility for your content. We suspect a lot of ‘extremist’ postings are not widely seen, as the accounts will be swiftly deleted.

In both cases, there is a serious argument that it is society, not Facebook, generating unwanted material. While Facebook can be targeted to remove it, this won’t stop its existence. At best, it might move off the platform and arrive in a less censored, probably less responsible environment, even one that caters to and encourages bad behaviour. 4Chan is a great example of this, in that its uncensored message boards attract abuse, sick jokes and the co-ordination of attacks.

Ultimately, behaviour such as abuse, bullying and harassment needs to be dealt with by law enforcement. Only law enforcement can deliver protection, bring prosecutions and work with individuals to correct their behaviour and reduce actual offending. Failing to take serious action against real offenders encourages bad behaviour.

Publisher v Platform

What happens when your idea of bringing the world together suddenly puts you in the position of a publisher? When people are no longer just sharing their holiday pictures, but organising protests, running campaigns, even publishing breaking news?

Some areas of the media have long delighted in the awkward positioning of Facebook as a publisher (subject to editorial controls and speech regulation) rather than a platform (a service where users can express ideas that are not representative of the service). It might be worth those media remembering that they too rely on “safe harbour” regulations designed to protect platforms for all the comments that their readers post below their articles. Placing regulatory burdens that create new legal liabilities for user-generated content would be onerous and likely to limit free expression, which no one should want to see.

Safe harbour arrangements typically allow user content to be published without liability, and place a duty on platforms to take down material when it is shown to be illegal. Such arrangements are only truly fair when courts are involved. Where an individual, or the police, can notify without a court, platforms are forced to become risk averse. Under the DMCA copyright arrangements, for instance, a user can contest a takedown and have their material re-published, but in doing so must accept the risk of being taken to court. All of this places the burden of risk on the defendant rather than the accuser. Only a few of those accused will opt to take that legal risk, whereas normally accusers would be the ones who had to be careful about who they take to court over their content.

Money to burn. People to hire

Facebook have enough money that they should be able to go further in their hiring of humans to do this job better. They appear to be doing that and should be trying to involve more human judgement in speech regulation, not less.   

Considering the other options on the market, more human involvement would seem the most reasonable approach. Facebook have tried and failed miserably  to moderate content by algorithm.  

However, moderating content across so many different cultures and countries, reportedly leaving human moderators only 10 seconds to decide whether to take down a piece of content, is a massive task that will only grow as Facebook expands.

We need to understand that moderation is rules-based, not principle-based. Moderators strictly match content against Facebook’s “rules” rather than working from principles to judge whether something is reasonable. The result is that decisions will often seem arbitrary or just bad. The need for rules rather than principles stems from making judgements at scale, and seems unavoidable.

Algorithms, to be clear, can only make rules-based approaches less likely to be sane, and more likely to miss human, cultural and contextual nuances. Judgement is an exclusively human capability; machine learning only simulates it. When a technologist embodies their or their employer’s view of what’s fair into a technology, any potential for the exercise of discretion is turned from a scale to a step and humanity is quantised. That quantisation of discretion is always in the interest of the person controlling the technology.
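One way to make the “scale to a step” point concrete is the toy sketch below (all names and values are invented): a person exercising discretion can respond proportionately along a continuum, whereas an encoded rule collapses every borderline case onto whichever side of a fixed cut-off its author chose.

```python
# Hypothetical sketch of discretion being quantised into a step; values invented.
def discretionary_response(harm_estimate: float) -> float:
    # A human exercising discretion can respond proportionately along a scale,
    # e.g. how much friction to apply (0 = leave alone, 1 = remove outright).
    return min(max(harm_estimate, 0.0), 1.0)

def encoded_rule(harm_estimate: float, cutoff: float = 0.5) -> float:
    # Once encoded as a rule, the response becomes a step: whoever picks the
    # cut-off decides the outcome for every borderline case.
    return 1.0 if harm_estimate >= cutoff else 0.0

for estimate in (0.2, 0.49, 0.51, 0.9):
    print(estimate, discretionary_response(estimate), encoded_rule(estimate))
```

The 0.49 and 0.51 cases differ only marginally, yet the rule treats them as opposites; that is the quantisation of discretion described above.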

One possible solution to the rigidity of rules-based moderation is to create more formal flexibility, such as appeals mechanisms. However, Facebook are most likely to prefer to deal with exceptional cases as they come to light through public attention, rather than impose costs on themselves.

Manifesto pledges

Any push for more regulation, such as that suggested by the Conservative manifesto, is highly likely to encourage the automation of judgements to reduce costs—and validates this demand being made by every other government. The Conservative pledges here seem to us to be a route straight to the Computer saying No.

Thus, if you are concerned about the seemingly arbitrary, opaque rules that the company sets for Facebook’s content moderators, then you should be doubly concerned by the Conservatives’ manifesto pledge to bring further state regulation to the Internet.

Governments have been the home of opaque and arbitrary rules for years, and the Conservatives, if elected, would deliver an internet where the state creates incentives to remove anything potentially objectionable (anything that could create adverse publicity, perhaps) and decides what level of security citizens should be able to enjoy from the platforms they use every day. That is not the future we want to see.

So we have a private monopoly whose immense power in deciding what content people view is concerning, but also a concerning proposal for too much state involvement in that decision; a situation where we want to see better rules in place, but not rules that turn platforms into publishers; and a problem so vast that just hiring more people would not solve it alone. Like we said, it’s complicated.

What is simple, however, is that Facebook present a great opportunity for media stories, and for complaints followed by power grabs from governments to police the acceptability of speech that they would never dare make illegal. We may regret it if these political openings translate into legislation.

 

 
