Blog


June 21, 2017 | Jim Killock

Queen’s speech 2017—threats to privacy and free speech

First analyses of the Queen’s Speech are focussing on what isn’t included, as a weakened Conservative Government appears to have dropped a number of its manifesto commitments, but there are several worrying things for digital rights. One welcome development could be data protection legislation to implement the options left open by the GDPR.

There are references to a review of counter-terrorism and a Commission for Countering Extremism, both of which will include Internet-related policies. Although details are lacking, these may contain threats to privacy and free speech. The government has also opted for a “Digital Charter”, which isn’t a Bill, but something else.

Here are the key areas that will affect digital rights:

Digital Charter

This isn’t a Bill, but some kind of policy intervention, backed up by “regulation”. This could be the system of fines for social media companies previously mentioned, but this is not explained.

The Digital Charter appears to address both unwanted and illegal content or activity online, and the protection of vulnerable people. The work of the CTIRU and the IWF is mentioned as an example of efforts to remove illegal or extremist content.

At this point, it is hard to know exactly what harms will emerge, but pushing enforcement into the hands of private companies is problematic. It means that decisions never involve courts and are not fully transparent and legally accountable.

Counterterrorism review

There will be a review of counterterrorism powers. The review includes “working with online companies to reduce and restrict the availability of extremist material online”.

This appears to be a watered-down version of the Conservative manifesto commitment to place greater responsibility on companies to take down extremist material from their platforms. Google and Facebook have already issued public statements about how they intend to improve the removal of extremist material from their platforms.

Commission for Countering Extremism

A Commission will look at the topic of countering extremism, likely including extremism on the Internet.

This appears to be a measure to generate ideas and thinking, which could be a positive approach if it involves considering different options rather than pressing ahead with policies in order to be seen to be doing something. The quality of the Commission’s work will therefore depend on its ability to take a wide range of evidence and assimilate it impartially; it faces a significant challenge in ensuring that fundamental rights are respected in any policies it proposes.

Data Protection Bill

A new Data Protection Bill “will fulfil a manifesto commitment to ensure the UK has a data protection regime that is fit for the 21st century”. This will replace the Data Protection Act 1998, which is in any case being superseded as a result of the new General Data Protection Regulation passed by the European Parliament last year. Regulations apply directly, so the GDPR does not need to be ‘implemented’ in UK law before Brexit.

We welcome that (at least parts of) the GDPR will be implemented in primary legislation with a full debate in Parliament. It is not clear whether the text of the GDPR will be brought into this Bill, or whether the Bill will merely supplement it.

This appears to be a Bill to implement at least some of the ‘derogations’ (options) in the GDPR, plus the new rules for law enforcement agencies that came in with the new law enforcement-related Directive and have to be applied by EU member states.

The bulk of the important rights are in the GDPR, and cannot be tampered with before Brexit. We welcome the chance to debate the choices, and especially to press for the right of privacy groups to bring complaints directly.

Missing: sex and relationships education

There is no mention of the introduction of compulsory sex and relationship education in schools, which was a manifesto commitment for all the main parties: Labour, the Liberal Democrats and the Conservatives. As there appeared to be a consensus on this issue, it is not clear why it seems to have been dropped.

Encryption is also not mentioned, but that’s because the powers will be brought in through a statutory instrument enabling Technical Capability Notices.

Help us win new rights and fight off censorship

There’s lots to do. Please help us fight proposals for privatised and unaccountable censorship, and to establish rights for privacy groups to complain directly about data protection breaches. Join ORG for £6/month so we can defend your rights.

 

[Read more] (1 comments)


June 13, 2017 | Ed Johnson-Williams

UK and France propose automated censorship of online content

Theresa May and Emmanuel Macron's plans to make Internet companies liable for 'extremist' content on their platforms are fraught with challenges. They entail automated censorship, risking the removal of unobjectionable content and harming everyone's right to free expression.

The Government announced this morning that Theresa May and the French President Emmanuel Macron will talk today about making tech companies legally liable if they “fail to remove unacceptable content”. The UK and France would work with tech companies “to develop tools to identify and remove harmful material automatically”.

No one would deny that extremists use mainstream Internet platforms to share content that incites people to hate others and, in some cases, to commit violent acts. Tech companies may well have a role in helping the authorities challenge such propaganda but attempting to close it down is not as straightforward or consequence-free as politicians would like us to believe.

First things first, how would this work? It almost certainly entails the use of algorithms and machine learning to censor content. With this sort of automated takedown process, the companies instruct the algorithms to behave in certain ways. Given the economic and reputational incentives on the companies to avoid fines, it seems highly likely that the companies will go down the route of using hair-trigger, error-prone algorithms that will end up removing unobjectionable content.
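To make the incentive problem concrete, here is a toy sketch (our own illustration, not any company’s actual system): if a platform is fined heavily for every piece of extremist content it misses, but pays almost nothing for wrongly removing legal posts, then the cheapest takedown threshold for its classifier is a hair-trigger one.

```python
# Toy illustration (not any real platform's system) of why fines for missed
# content push automated takedown towards over-removal.
# Each post is a pair: (classifier score, whether it is actually extremist).

def expected_cost(threshold, posts, fine_per_miss=1000.0, cost_per_wrongful_removal=1.0):
    """Platform's cost for a given takedown threshold: posts scoring >= threshold are removed."""
    cost = 0.0
    for score, is_extremist in posts:
        removed = score >= threshold
        if is_extremist and not removed:
            cost += fine_per_miss              # the regulator fines the missed post
        elif removed and not is_extremist:
            cost += cost_per_wrongful_removal  # wrongly censoring legal speech costs the platform little
    return cost

posts = [(0.9, True), (0.7, False), (0.55, True), (0.4, False), (0.2, False)]
for threshold in (0.3, 0.6, 0.8):
    print(f"threshold {threshold}: expected cost {expected_cost(threshold, posts)}")

# The lowest threshold is by far the cheapest for the platform, even though it
# removes most of the legal posts along with the extremist ones.
```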

May and Macron’s proposal is to identify and remove new extremist content. It is unclear whose rules they want Internet companies to enforce. The Facebook Files showed Facebook's own policies are to delete a lot of legal but potentially objectionable content, often in a seemingly arbitrary way. Alternatively, if the companies are to enforce UK and French laws on hate speech and so on, that will probably be a lot less censorious than May and Macron are hoping for.

The history of automated content takedown suggests that removing extremist content without removing harmless content will be an enormous challenge. The mistakes made by YouTube’s Content ID system, which automates takedowns of allegedly copyright-infringing content, are well documented.

Context is king when it comes to judging content. Will these automated systems really be able to tell the difference between posts that criticise terrorism while using video of terrorists and posts promoting terrorism that use the same video?

There are some who will say this is a small price to pay if it stops the spread of extremist propaganda, but it will lead to a framework for censorship that can be used against anything that is perceived as harmful. All of this might result in extremists moving to other platforms to promote their material. But will they actually be less able to communicate?

Questions abound. What incentives will the companies have to get it right? Will there be any safeguards? If so, how transparent will those safeguards be? Will the companies be fined for censoring legal content as well as failing to censor illegal content?

And what about the global picture? Internet companies like Facebook, Twitter and YouTube have a global reach. Will they be expected to create a system that can be used by any national government – even those with poor human rights records? It’s unclear whether May and Macron have thought through whether they are happy for Internet platforms to become an arm of every state they operate in.

All this of course is in the context of Theresa May entering a new Parliament with a very fragile majority. She will be careful only to bring legislation to Parliament that she is confident of getting through. Opposition in Parliament to these plans is far from guaranteed. In April, the Labour MP Yvette Cooper recommended fines for tech companies in a Home Affairs Select Committee report produced under her chairmanship.

ORG will challenge these proposals both inside and outside Parliament. If you'd like to support our work you can do so by joining ORG. It's £6 a month and we'll send you a copy of our fantastic new book when you join.

[Read more] (3 comments)


June 05, 2017 | Jim Killock

Our response to the London and Manchester Attacks

Some of you will know that ORG for many years had our offices in Borough. Until summer 2015, we walked and ate daily in the places where Saturday’s appalling events took place.

As Londoners, we are relieved that we do not know anyone who has been directly affected. It is also genuinely shocking, as it was for some of us during the 2005 bombings, to have personal connections with the places involved in brutal terrorist killings. It is a reminder of the personal trauma that is also being felt by our friends and colleagues in Manchester. Many of us feel very exposed in the face of terrorism and violence.

As individuals, it is also natural to ask whether our own views can withstand this kind of onslaught. Is it right to resist or question measures that the government wishes to pursue, which it claims could improve security, or could at least reassure people that everything possible is being done? Is it selfish, or unrealistic, to argue against potential protections when people are seeking to ensure that, as Theresa May put it, “enough is enough”?

However, many people in London and Manchester will not wish these events to be exploited and used to usher in policies that are ill-thought out, illiberal or otherwise seek to exploit the situation. This is not a denial of the vulnerability that we feel, but a desire to ensure that terrorism does not win. These attacks so often occur in cities with very liberal and open outlooks, where there is little or no expectation of political violence, and toleration is a normal way of being.

London and Manchester are both cities with big creative and tech sectors, with many people very aware of what the Internet does, its benefits and also the dangers of attempts to control, censor and surveil. If the government uses these events to pursue policies that are ineffective, meaningless or dangerous, then many of those who feel a personal investment in seeing our communities protected may quickly feel that these events are being exploited rather than dealt with maturely.

Calls for an end to tolerance of extremism are perhaps even more ill-judged. It is hard to imagine that the public sector has been tolerating extremism, except in relatively isolated examples. These statements could easily lead to over-reactions and quite divisive policy. For instance, the controversial Prevent programme, backed up by legislative anti-extremist quasi-policing duties across many parts of the public sector, could ramp up, leading to serious misjudgements.

It seems particularly harsh to accuse Muslim communities of tolerating extremist views without also recognising that there are claims that the Manchester attacker had been reported as potentially dangerous by members of his community, and without articulating that extremists wish to create divisions between us. Whatever changes may be needed, it would also be wise to recognise that the government too may have had its failings.

We will be looking very carefully at Theresa May’s proposals for online censorship and attempts to limit the security of ordinary users of Internet services. To be clear, we are not saying that there are no measures that could ever be taken. There are already, quite rightly, laws about what is illegal and duties on companies to act when they are instructed. Companies also do a great deal well beyond their legal duties, because they do not want any association with any kind of criminality.

However, what we have heard so far from the government does not give us confidence that their proposals will be necessary, proportionate and legally accountable. This is what the Conservative manifesto has to say on page 79:

We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm. We will make clear the responsibility of platforms to enable the reporting of inappropriate, bullying, harmful or illegal content, with take-down on a comply-or-explain basis.
We will continue to push the internet companies to deliver on their commitments to develop technical tools to identify and remove terrorist propaganda, to help smaller companies build their capabilities and to provide support for civil society organisations to promote alternative and counter-narratives.
… In addition, we do not believe that there should be a safe space for terrorists to be able to communicate online and will work to prevent them from having this capability. (ORG wiki)

We—and we hope you—will want to know: will the proposals work? Will they create new risks or adverse effects? Who will hold the police or companies to account for their decisions, and how? So far, what we have heard does not give us much confidence that we will receive satisfactory answers.

Theresa May’s speech had the feel of electioneering rather than a common-sense, values- and evidence-based approach. That is simply not being sufficiently serious and respectful about what has happened.

 

[Read more] (8 comments)


June 04, 2017 | Jim Killock

The London Attacks

Open Rights Group condemns the appalling attack at London Bridge; this is not only a violent assault on individual lives but an attack against the freedom and security we enjoy in the UK.

It is disappointing that in the aftermath of this attack, the Government’s response appears to focus on the regulation of the Internet and encryption.

This could be a very risky approach. If successful, Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.

But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the Internet are not the simple solution that Theresa May is claiming.

Real solutions—as we were forced to state only two weeks ago—will require attempts to address the actual causes of extremism. For instance, both Jeremy Corbyn and Theresa May have drawn attention to the importance of finding solutions to the drivers of terrorism in countries including Syria, Iraq and Libya.

Debating controls on the Internet risks distracting from these very hard and vital questions.

 

[Read more] (4 comments)


May 24, 2017 | Jim Killock

The Manchester attack

Open Rights Group wishes to express its sympathy for the victims of the vile and brutal attack in Manchester. We condemn these violent attacks, which seem even more abhorrent when deliberately targeted at children and young people.

We hope that law enforcement and intelligence agencies will help to bring those involved in these attacks to justice and we support their work combating terrorism. We believe that these agencies need powers of surveillance to do this.

However, we also believe that there must be limits to these powers in order to preserve the democratic values of freedom and liberty - the same values that terrorists want to undermine. This is the central challenge of the moment, in our view.

There are many emotions and reactions that flow from this event: solidarity, the need to comfort as best we can, the value we place on our communities, and the human aid that people have given to those directly affected. But there is also fear, hatred and a desire to do anything that could prevent such an attack from happening again.

The political response to this attack is complicated by the fact that it has taken place in the middle of an election. Campaigning has been put on hold, but politicians cannot help but be aware that their response will affect the outcome of the election - and this could see policies that exploit public fears.

The traditional response in the UK is to first commit to British values, and say that terrorists will never remove these; and then to try to reassert a sense of security and control by showing that security measures will be stepped up.

Often these attempts are highly misleading. Security measures can be helpful, but building a security state will never be enough to stop terrorism. Terrorism needs to be dealt with at source, through changes in politics and society. As long as we have failed states in Libya, Syria and elsewhere, we will not be safe. We do not wish to gloss over the complexity and difficulty of tackling these issues, but changes there are the first step to reducing the threats of terrorism.

Meanwhile, surveillance including mass surveillance appears to be leading to more information than can be effectively processed, with known individuals escaping investigation because they are too numerous for the authorities to pursue them all. In this case, even human resources may face limits, as expansion of staff numbers can lead to bureaucratisation and new bottlenecks. Terrorists can also adapt their behaviour to avoid surveillance technologies, by changing their tech, avoiding it altogether, or simplifying their operations to make them less visible.

This does not mean we should give up, nor does it mean that technology can play no role in surveillance. It does, however, mean that we should not assume that demands for more resources and powers will necessarily result in security.

ORG is concerned that the investigatory powers the Government uses ostensibly to keep us safe can themselves be exploited by criminals and terrorists.

It is worrying to hear that in the wake of these attacks, the Home Office wants to push ahead with proposals to force companies to weaken the security of their products and services through “Technical Capability Notices” (TCNs). These are notices that can be issued to a company to force them to modify their products and services so that the security agencies can use them to access a target’s communications.

The Government already has these powers on the statute book, as they were outlined in the Investigatory Powers Act, passed last December. To make the powers active, the Government must pass regulations that give more detail about how TCNs can be used.

Recently, the Home Office held a ‘targeted’ consultation about the new regulations. The draft was only sent to a few companies for their response, even though these powers could affect the digital security of people in the UK and beyond.

As a result, ORG leaked the proposals so that affected businesses and individuals could raise their concerns with the Home Office. Over 1,400 ORG supporters sent their comments to the Home Office and ORG also submitted a response that we published here.

Our core concern is that using TCNs to force companies to limit or bypass encryption, or otherwise weaken the security of their products, will put all of us at greater risk. Criminals could exploit the same weaknesses. The changes companies are required to make to their technology need only be ‘feasible’, rather than ‘safe’ or ‘sensible’ for users or providers.

The recent #WannaCry attack demonstrated how a vulnerability that the National Security Agency (NSA) exploited to access its targets’ communications was later used by criminals. TCNs involve different technologies, but the principle remains the same: Governments should be doing all they can to protect our digital security.

Another concern is that TCNs may be served on companies overseas, including WhatsApp, which is owned by Facebook. Facebook has assets in the UK and can easily be targeted for compliance. Others, such as Open Whisper Systems, who produce Signal, have no UK assets. The UK appears to be deliberately walking into an international dispute, where much of the legal debate will be entirely hidden from view, as the notices are served in secret, and it is not clear what appeal routes to public courts really exist. Other governments, from Turkey to China, will take note.

Powers must be proportionate, and agencies should not be given a blank cheque. Justification for and oversight of the use of TCNs and vulnerabilities is inadequate, so the risks cannot be properly assessed in the current legal frameworks. There is no regime for assessing the use of vulnerabilities including ‘zero days’.

We urge politicians to take a detailed and considered look at TCNs and the use of vulnerabilities, to ensure that the consequences of their use can be properly evaluated and challenged.

These will seem like narrow issues compared with Monday’s events. And that is true. The wider issue, however, is that we as a society must not react to these events by emulating our enemies, by treating all citizens as a threat, and by gradually removing British values such as the rule of law, due process and personal privacy.

[Read more]


May 22, 2017 | Jim Killock

Facebook censorship complaints could award government immense control

Facebook censorship complaints run both ways—and we should be careful when politicians press for more controls

The leaked Facebook Files, the social media company’s internal policies for content regulation published by the Guardian, show that, like a relationship status on Facebook, content moderation is complicated.

It is complicated because Facebook is a near-monopoly in the social media market, making them both a power player and a target for regulation. It is complicated because there is an uneasy balance to strike between what is law, what is code, and what is community decency.

It is complicated because Facebook finds itself in a media landscape determined to label it as either a publisher or a platform, when neither is a suitable title. And ultimately, it is complicated because we are talking about human interaction and regulation of speech at a scale never seen before.

Big player. Big target

Facebook are a monopoly. And that is a big problem. With almost 2 billion users on the site, operating in almost every country around the world, they hoard the data generated by a community of a size never before seen. The leaks show that even they seem unclear how best to police it.

It could be argued that, as a private company, they can create their terms and conditions as they see fit, but their global domination means that their decisions have a global impact on free speech. This impact creates obligations for them to uphold standards of free expression that are not normally expected of a private company.

Operating in so many countries also means that Facebook are an easy target for criticism from many different governments and media, who will blame them for things that go wrong because of their sheer scale. These critics can see an easy way to impose control by targeting Facebook through the media or regulation. This was most recently seen in the Home Affairs Committee report, where social media companies were accused of behaving irresponsibly in failing to police their platforms.

World Policing in the Community

Facebook’s business model is premised on users being on their site and sharing as much information as possible, so that the company can use personal data to sell highly targeted advertising. Facebook do not want to lose customers who are offended, which means their threshold for removal is offence, a much lower bar than illegality.

Facebook is not unregulated. The company has to comply with court orders when served, but, as the leaked files show, making judgements about content that is probably legal but offensive or graphic is much more difficult.

Being the community police force for the world is a deeply complicated position, even more so if your platform is often seen as the Internet.

Law versus community standards

Facebook will take down material reported to them that is illegal. However, the material highlighted by the Guardian as inappropriate for publication falls into a category of offensiveness, such as graphic material or sick jokes, rather than illegality.

Where people object to illegal material appearing and not being removed fast enough, we should also bear in mind the actual impact. For instance, how widely has it actually circulated? In social media, longevity and contacts are what tend to produce visibility for content. We suspect a lot of ‘extremist’ postings are not widely seen, as the accounts are swiftly deleted.

In both cases, there is a serious argument that it is society, not Facebook, that generates unwanted material. While Facebook can be targeted to remove it, this won’t stop its existence. At best, it might move off the platform and arrive in a less censored, probably less responsible environment, even one that caters to and encourages bad behaviour. 4chan is a good example of this, in that its uncensored message boards attract abuse, sick jokes and the co-ordination of attacks.

Ultimately, behaviour such as abuse, bullying and harassment needs to be dealt with by law enforcement. Only law enforcement can deliver protection and prosecutions, and work with individuals to correct their behaviour and reduce actual offending. Failing to take serious action against real offenders encourages bad behaviour.

Publisher v Platform

What happens when your idea of bringing the world together suddenly puts you in the position of a publisher? When people are no longer just sharing their holiday pictures, but organising protests, running campaigns, even publishing breaking news?

Some areas of the media have long delighted in the awkward positioning of Facebook as a publisher (subject to editorial controls and speech regulation) rather than a platform (a service where users can express ideas that are not representative of the service). It might be worth those media remembering that they too rely on “safe harbour” regulations designed to protect platforms for all the comments that their readers post below their articles. Placing regulatory burdens that create new legal liabilities for user-generated content would be onerous and likely to limit free expression, which no one should want to see.

Safe harbour arrangements typically allow user content to be published without liability, and place a duty on platforms to take down material when it is shown to be illegal. Such arrangements are only truly fair when courts are involved. Where an individual, or the police, can notify without a court, platforms are forced to become risk averse. Under the DMCA copyright arrangements, for instance, a user can contest their right for material to be re-published after takedown, but in doing so must accept the possibility of being taken to court. All of this places the burden of risk on the defendant rather than the accuser. Only a few of those accused will opt to take that legal risk, whereas normally accusers would be the ones who need to be careful about whom they take to court over their content.

Money to burn. People to hire

Facebook have enough money that they should be able to go further in their hiring of humans to do this job better. They appear to be doing that and should be trying to involve more human judgement in speech regulation, not less.   

Considering the other options on the market, more human involvement would seem the most reasonable approach. Facebook have tried, and failed miserably, to moderate content by algorithm.

However, the task of moderating content across so many different cultures and countries, which reportedly leaves human moderators only 10 seconds to decide whether to take down a piece of content, is massive and will only grow as Facebook expands.

We need to understand that moderation is rules-based, not principle-based. Moderators strictly match content against Facebook’s “rules” rather than working from principles about whether something is reasonable or not. The result is that decisions will often seem arbitrary or just bad. The need for rules rather than principles stems from making judgements at scale, and seems unavoidable.
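As a toy illustration of the difference (ours, not Facebook’s actual rule set), a rules-based matcher can only check posts against fixed phrases; it has no way to weigh context or intent, which is part of why the results so often look arbitrary.

```python
# Toy sketch of rules-based moderation (the phrases are hypothetical, not Facebook's rules):
# a post is flagged if it contains any listed phrase, regardless of context or intent.

BANNED_PHRASES = ["attack the town hall", "burn it down"]  # hypothetical rule list

def violates_rules(post: str) -> bool:
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# Both posts match the same rule, although only the first promotes violence;
# a principle-based judgement would treat them differently.
print(violates_rules("We should attack the town hall tonight"))                        # True
print(violates_rules("Police arrested a man who threatened to attack the town hall"))  # True
```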

Algorithms, to be clear, can only make rules-based approaches less likely to be sane, and more likely to miss human, cultural and contextual nuances. Judgement is an exclusively human capability; machine learning only simulates it. When a technologist embodies their or their employer’s view of what’s fair into a technology, any potential for the exercise of discretion is turned from a scale to a step and humanity is quantised. That quantisation of discretion is always in the interest of the person controlling the technology.

One possible solution to the rigidity of rules-based moderation is to create more formal flexibility, such as appeals mechanisms. However, Facebook are most likely to prefer to deal with exceptional cases as they come to light through public attention, rather than impose costs on themselves.

Manifesto pledges

Any push for more regulation, such as that suggested by the Conservative manifesto, is highly likely to encourage automation of judgements to reduce costs—and validates this demand being made by every other government. The Conservative pledges here seem to us to be a route straight to the Computer saying No.

Thus, if you are concerned about the seemingly arbitrary, opaque rules that Facebook sets for its content moderators, you should be doubly concerned by the Conservatives’ manifesto pledge to bring further state regulation to the Internet.

Governments have been the home of opaque and arbitrary rules for years, and the Conservatives, if elected, would deliver an internet where the state creates incentives to remove anything potentially objectionable (anything that could create adverse publicity, perhaps) and decides what level of security citizens should be able to enjoy from the platforms they use every day. That is not the future we want to see.

So we have a private monopoly whose immense power in deciding what content people view is concerning, but also a worrying proposal for too much state involvement in those decisions. A situation where you want to see better rules in place, but not rules that turn platforms into publishers, and a problem so vast that simply hiring more people would not solve it alone. Like we said, it’s complicated.

What is simple, however, is that Facebook presents a great opportunity for media stories, and for complaints followed by power grabs from governments seeking to police the acceptability of speech that they would never dare make illegal. We may regret it if these political openings translate into legislation.

 

 

[Read more] (2 comments)


May 16, 2017 | Mike Morel

The UK Government should protect encryption not threaten it

It is difficult to overstate the importance of encryption. It is a cornerstone of the modern digital economy: we rely on it whenever we use our digital devices or make transactions online. Physical infrastructure like power stations and transport systems depends on it too.


Encryption also strengthens democracy by underpinning digital press freedom. Whistleblowers can’t safely reveal official corruption to journalists without it.

Laws restricting encrypted communications have generally been associated with more authoritarian governments, but lately proposals to circumvent encryption have been creeping into western democracies. Former Prime Minister David Cameron attacked encryption after the Paris attacks in 2015, and Home Secretary Amber Rudd MP recently said that there should be a way around end-to-end encryption on devices like WhatsApp.

As it happens, Amber Rudd already has legislation that claims to give her the power to tell WhatsApp to remove “electronic protection” (read “encryption”). She can issue a technical capability notice (TCN) which instructs commercial software developers to produce work-arounds in their software without outlawing or limiting encryption itself. Just over a week ago, ORG leaked a secret Home Office consultation on the draft TCN regulation, which gives more detail about how this power can be used.

To be clear, this goes way beyond WhatsApp. The Government wants access to all UK telecommunications encompassing a wide variety of services. Any organisation that facilitates communications among 10,000 or more users could be issued a TCN including email account providers, data storage services, games companies, and (they claim) even overseas operators with enough UK users.

The current ransomware outbreak shows how software vulnerabilities used by security agencies can fall into the wrong hands. There is no reason to think backdoors intentionally created for Government access could not be exploited as well. Why start a digital arms race when we may be releasing new weapons to criminals and hostile governments?

The lack of transparency surrounding TCNs is another problem. The regulation makes no mention of oversight or risk assessment mechanisms, and the consultation’s secrecy reduces accountability even more. Sometimes the Government has good reason for secrecy, but this is not one of those times. When digital services are compromised, people must know, because it affects their privacy and security, and everyone has a right to protect themselves.

Business owners should be concerned because their products and customers could be seriously affected, and the process by which they might appeal a TCN is unclear. The only real ground for complaint appears to be “feasibility” — and many things may be ‘feasible’ but still a very bad idea.

From securing the economy to underpinning press freedom, strong encryption is vital. We weaken it at our peril, especially if we do so in secret. Tell the Home Office yourself before the secret consultation ends on 19 May.

See ORG’s detailed breakdown of the TCN regulation here.

[Read more] (1 comments)


May 13, 2017 | Jim Killock

NHS ransom shows GCHQ putting us at risk

The NHS ransom shows the problems with GCHQ’s approach to hacking and vulnerabilities. This must be made clear to MPs, who have given GCHQ sweeping powers in the IP Act that could see the same problems recur in the future.

Here are four points that stand out to us. These issues of oversight relating to hacking capabilities are barely examined in the Investigatory Powers Act, which concentrates oversight and warrantry on the balance to be struck in targeting a particular person or group, rather than on the risks surrounding the capabilities being developed.

GCHQ and the NSA knew about the problem years ago

Vulnerabilities, as we know from the Snowden documents, are shared between the NSA and GCHQ, as are the tools built to exploit them. These tools are then used to hack into computer equipment, as a stepping stone to getting at other data. These break-ins happen at all kinds of companies, sites and groups, who may be entirely innocent but useful to the security agencies in getting closer to their actual targets.

In this case, the exploit, called ETERNALBLUE, was leaked after a break-in or leak from the NSA’s partners this April. It affects older versions of Windows, including Windows XP. It has now been exploited by criminals to ransom organisations still running unpatched or out-of-date systems.

While GCHQ cannot be blamed for the NHS’s reliance on out-of-date software, the decision that the NSA and GCHQ made to keep this vulnerability secret, rather than trying to get it fixed, means they bear a significant share of the blame for the current NHS ransom.

GCHQ are in charge of hacking us and protecting us from hackers

GCHQ are normally responsible for ‘offensive’ operations, or hacking and breaking into other networks. They also have a ‘defensive’ role, through the National Cyber Security Centre, which is meant to help organisations like the NHS keep their systems safe from these kinds of attack.

GCHQ are therefore forced to trade off their use of secret hacking exploits against the risks these exploits pose to organisations like the NHS.

They have a tremendous conflict of interest, which, in ORG’s view, ought to be resolved by moving the UK’s defensive role out of GCHQ’s hands.

Government also needs to have a robust means of assessing the risks that GCHQ’s use of vulnerabilities might pose to the rest of us. At the moment, ministers can only turn to GCHQ to ask about the risks, and we assume the same is true in practice of oversight bodies and future Surveillance Commissioners. The obvious way to improve this and get more independent advice is to split National Cyber Security Centre from GCHQ.

GCHQ’s National Cyber Security Centre had no back up plan

We also need to condemn the lack of action from the NCSC and others once the exploit was known to be “lost” this April. Some remedial action was taken in the US: Microsoft was informed and created a patch in March, although it was not made freely available for older, unsupported systems until today.

Hoarding vulnerabilities is of course inherently dangerous, but apparently having no adequate US plan, and no UK-wide plan at all, to execute when they are lost is inexcusable. This is especially true given that this vulnerability is obviously capable of being used by self-spreading malware.

GCHQ are not getting the balance between offence and defence right

The bulk of GCHQ’s resources go into offensive capabilities, including hoarding data, analytics and developing hacking methods. There needs to be serious analysis of whether this is really producing the right results. This imbalance is likely to persist while GCHQ is in charge of both offence and defence, as it will always prioritise offence. Offence has also been emphasised by politicians who feel pressure to defend against terrorism, whatever the cost. Defence—such as ensuring that critical national infrastructure like the NHS is protected—is the poor relation of offensive capabilities. Perhaps the NHS ransom is the result.


[Read more] (8 comments)