

January 20, 2017 | Javier Ruiz

Lords Committee slams data sharing powers in Digital Economy Bill

The Delegated Powers and Regulatory Reform Committee of the House of Lords has made some very critical recommendations about the data sharing proposals in the Digital Economy Bill.

In a report published today the Committee asks for the “almost untrammeled” powers given to Ministers in the Bill to be severely curtailed, and for all Codes of Practice associated with these data sharing powers to be laid before Parliament in draft for full approval before coming into force.

The Committee “consider it inappropriate” for Ministers to have the powers to define lists of specified persons and non-specific purposes related to public service provision, fraud or debt. Instead, they argue that those given the powers to share data and the purposes for which it is used should be on the face of the Bill, with Ministers only able to make very limited additions based on a clear necessity.

We can see that the Government will resist such a move, as that level of flexibility appears central to their approach to data sharing. If they do ignore these recommendations, the Cabinet Office will at least need to include much stronger safeguards on the face of the Bill about the criteria and processes for inclusion in the data gateways.

The report also raises concerns with the onward disclosure of shared data, which is subject to very broad exemptions for the purposes of crime, anti-social behaviour or legal proceedings.

The Committee starkly sets out that the data shared under these powers for benign social services could be used to bring criminal proceedings against the same individuals without restriction. This was always a red line during the open policy-making, pre-legislative discussions in which ORG participated. ORG has proposed various amendments to narrow down these further reuses of data, but we may have to revisit our proposals to tighten them up further.

We particularly welcome the Committee’s recommendations on the Codes of Practice. The Government has so far refused to put key safeguards on the use of the powers on the face of the Bill, leaving these to the Codes. The Committee is in no doubt that the Codes are “legislative” in nature, despite the government’s argument that they are not legally enforceable.

The report demands that the Codes be laid in draft before Parliament for debate and affirmative approval, and not just presented for filing in the statute book. The Committee concedes that further modifications could be made by negative procedure. Clarity on the full legal status of the Codes is critical, and we can only hope the Government will heed these recommendations, which chime with those of many others including ORG.

The Committee also asks for the various so-called “Henry VIII powers” peppered throughout the Bill to be narrowed down. Such powers add a provision to a Bill enabling the Government to repeal or amend it after it has become an Act of Parliament, and are an anachronism meant to be used sparingly for very narrow purposes. The Committee finds that some of these powers could be useful here to stop data sharing and narrow down future provisions, but as drafted they could be used to expand the powers in the Bill without any accountability.

The report also tackles a fairly technical but potentially important point that ORG and others engaged in this process had so far missed: the so-called “dehybridisation clauses”. A Hybrid Instrument is a piece of legislation that disproportionately affects a particular group of people within a class. The clauses in the Bill simply state that this should be disregarded. This can be important due to an obscure provision in the House of Lords that gives those who are specially and directly affected by Hybrid Instruments

the opportunity to present their arguments against the SI [statutory instrument] to the House of Lords Hybrid Instruments Committee and then, possibly, to a select committee charged with reporting on its merits and recommending whether or not the SI should be approved by both Houses of Parliament. The hybrid instrument procedure is unique to the House of Lords and the process must be completed before the SI can be approved by both Houses.

We can see why the Government would want to remove this provision to speed up legislation, but it seems unfair and potentially abusive to simply decree that what may be a hybrid instrument should not be treated as such, thus denying those affected their right to make their case.



January 16, 2017 | Ed Johnson-Williams

Let's save 'backdoor' for the real thing

The Guardian reported on Friday last week that WhatsApp - owned by Facebook - has a “backdoor” that “allows snooping on encrypted messages”. The report was based on research by Tobias Boelter, published in April 2016. The Guardian has since changed the word "backdoor" in its article to "vulnerability" or "security vulnerability".

A few days before the Guardian article was published, the journalist contacted ORG for a quote. She couldn’t discuss the details of the alleged security flaw so we gave a generic quote about the importance of transparency from companies that offer end-to-end encryption and the dangers to encryption within the Investigatory Powers Act.

The vulnerability that was reported theoretically works like this. Say Ed is texting his dad on WhatsApp.

  1. Ed texts his dad on WhatsApp and his dad texts back - all good, happy families.
  2. Then Ed texts his dad again but his dad’s phone is off. Ed's message is still on Ed’s phone waiting to be sent.
  3. WhatsApp or somebody else with access to WhatsApp's servers registers Ed’s dad’s mobile number with WhatsApp on a different phone. This could be done by stealing Ed’s dad's SIM card or using vulnerabilities in the mobile phone network to re-route SMS confirmations.
  4. Ed's WhatsApp app now sees that the number that used to be linked to his dad’s phone is active again and automatically re-sends the message.
  5. The new phone receives the message that Ed intended to send to his dad. The message never reaches Ed’s dad’s phone.
  6. Depending on whether a non-default setting is enabled, Ed may receive a notification saying that his dad’s security code has changed because he reinstalled WhatsApp or switched phones.

This means that somebody collaborating with WhatsApp could theoretically read a small number of messages. This is very unlikely though and would be very easy to detect. This is not a backdoor that WhatsApp can use for routine access to users’ messages. And unless an app forces you to verify encryption keys with someone before you can send and receive messages with them, and again whenever they change their phone, this vulnerability will be present.
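
To make the design choice concrete, here is a minimal sketch - in Python, with invented names rather than WhatsApp's or Signal's actual code - of the two ways a client could handle a recipient's identity key changing while messages are queued: block and ask the user to verify the new key, or silently re-encrypt and resend, which is the behaviour described above.

    # Illustrative sketch only: invented names, no real cryptography.
    from dataclasses import dataclass, field

    def send_encrypted(text, key):
        # Stand-in for encrypting `text` to `key` and handing it to the server.
        print(f"sent {text!r} encrypted to {key}")

    @dataclass
    class Client:
        block_on_key_change: bool                    # True = refuse to resend until verified
        outbox: list = field(default_factory=list)   # (recipient number, key used, text)

        def queue_message(self, number, known_key, text):
            # The message is encrypted to the identity key we currently hold for this number.
            self.outbox.append((number, known_key, text))

        def flush(self, number, current_key, notify=print):
            # Called when `number` comes back online announcing identity key `current_key`.
            remaining = []
            for rcpt, key_used, text in self.outbox:
                if rcpt != number:
                    remaining.append((rcpt, key_used, text))  # queued for someone else
                    continue
                if key_used != current_key:
                    # The number has been re-registered on another device: the key changed.
                    if self.block_on_key_change:
                        notify(f"Security code for {rcpt} changed - verify before resending.")
                        remaining.append((rcpt, key_used, text))
                        continue
                    # WhatsApp-style default: silently re-encrypt to the new key and resend.
                    key_used = current_key
                send_encrypted(text, key_used)
            self.outbox = remaining

    # Ed queues a message while his dad's phone is off; the number is then
    # re-registered on a different phone, so the identity key changes.
    ed = Client(block_on_key_change=False)
    ed.queue_message("dad", known_key="key-A", text="Happy birthday!")
    ed.flush("dad", current_key="key-B")   # silently resent, readable on the new phone

With block_on_key_change set to True, the queued message stays put until the new key has been verified - safer, but an extra step that most users would never take, which is the usability trade-off discussed in the next paragraph.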

WhatsApp have made an intentional decision about usability. It means that - in the example given above - if Ed’s dad’s phone was off because it was broken, Ed’s dad could put his SIM card into a new phone and still receive the messages without anyone having to change anything.

It would be incredibly difficult for WhatsApp to use the vulnerability to read messages this way at scale without gaining a terrible reputation for not delivering messages. Lots of people would receive a notification saying that the security key of many of their intended recipients had changed. Messages would go missing. The risk to the company of actively tampering with someone's message stream is very high and would be very complicated to get right. And if you’re worried about law enforcement, they have other ways (such as hacking the phone) to target an individual WhatsApp user’s messages that would be cheaper, quicker, and more difficult for the target to detect.

Lots of people recommend Signal as an alternative to WhatsApp. Signal is a highly respected encrypted messaging app which is preferable to WhatsApp for many reasons. Unlike WhatsApp, Signal does not collect data about users and share that data with Facebook. Facebook’s business model is to collect as much data about people as possible to help sell advertising. And unlike WhatsApp, Signal’s code is open-source meaning it’s possible to verify that it’s working properly. Some people find Signal more difficult to use than WhatsApp.

But Signal are planning to adopt the same behaviour that was reported as a WhatsApp backdoor, in an attempt to make their app easier for people to use. As Matthew Green, Assistant Professor at Johns Hopkins University, said on Twitter in response to the Guardian’s article, “I wish we could put the word "backdoor" in a glass case and only bring it out when something is really deserving.”

It is a struggle to get people to use secure messaging tools. Facebook and WhatsApp’s business model leaves much to be desired and Signal does a lot more to respect the privacy of its users. But WhatsApp have been successful in getting millions of people to encrypt the contents of their messages end-to-end.

The UK’s Investigatory Powers Act gives the Government powers to serve companies with Technical Capability Notices requiring the “removal of electronic protection applied by a relevant operator”, and to force them to carry out hacking and intercept data on the Government’s behalf.

There are big fights ahead on encryption and we have to remain vigilant to those. Let’s save the word “backdoor” for the real thing.

Update: I fixed point 3 to say that if Ed's dad's SIM card were stolen, it could be used to re-register Ed's dad's WhatsApp account on a different phone. It used to say if 'Ed's SIM card' were stolen.



December 21, 2016 | Javier Ruiz

EU Court slams UK data retention surveillance regime

Here’s our quick overview of what the CJEU has told the UK and Sweden they must do to fix requirements for data retention.

The full judgment can be read here.

Generalised Data Retention

The CJEU has repeated arguments, made previously in the Digital Rights Ireland case, to rule that generalised data retention is disproportionate and unlawful.

The UK case did not ask about general data retention. In the original judgment that triggered this CJEU case, the UK High Court argued that general retention was acceptable as long as the safeguards were strong:

 “70. In oral argument Ms Rose modified her stance on point (i). She accepted that the CJEU cannot have meant that CSPs can only lawfully be required to retain the communications data of “suspects or persons whose data would contribute to the prevention, detection or prosecution of serious criminal offences”. Such a restriction would be wholly impracticable. Rather the Court must be understood to have held that a general retention regime is unlawful unless it is accompanied by an access regime which has sufficiently stringent safeguards to protect citizens’ rights set out in Articles 7 and 8 of the Charter.” (from the High Court judgment)

Unfortunately for the UK government, ORG and PI were there to argue the opposite, alongside the joined Swedish case brought by Tele2 Sverige AB, a telecoms company challenging the compatibility with EU law of generalised data retention orders in that country.

The CJEU has made it clear that generalised data retention is not acceptable:

 103 Further, while the effectiveness of the fight against serious crime, in particular organised crime and terrorism, may depend to a great extent on the use of modern investigation techniques, such an objective of general interest, however fundamental it may be, cannot in itself justify that national legislation providing for the general and indiscriminate retention of all traffic and location data should be considered to be necessary for the purposes of that fight (see, by analogy, in relation to Directive 2006/24, the Digital Rights judgment, paragraph 51).

Retention must somehow be restricted to a section of the public whose data is more likely to be of use to investigations, possibly by geography:

 111 As regard the setting of limits on such a measure with respect to the public and the situations that may potentially be affected, the national legislation must be based on objective evidence which makes it possible to identify a public whose data is likely to reveal a link, at least an indirect one, with serious criminal offences, and to contribute in one way or another to fighting serious crime or to preventing a serious risk to public security. Such limits may be set by using a geographical criterion where the competent national authorities consider, on the basis of objective evidence, that there exists, in one or more geographical areas, a high risk of preparation for or commission of such offences.

 Summed up in the ruling:

 1. Article 15(1) of Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications), as amended by Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009, read in the light of Articles 7, 8 and 11 and Article 52(1) of the Charter of Fundamental Rights of the European Union, must be interpreted as precluding national legislation which, for the purpose of fighting crime, provides for general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication.

This will come as a shocker to the UK government, which could be forgiven for safely assuming that at least the basic principles of retention would be accepted by the CJEU, given the opinion of the Advocate General and the views of UK courts.

The UK pioneered population-level data retention and drove the adoption of the original EU Data Retention Directive after the London bombings in 2005. It will now be forced to rethink its approach.

Access only allowed for serious crime:

The Court accepts that some data retention can be necessary and acceptable, as it had previously said in the Digital Rights Ireland case, but only for very limited purposes defined in the e-privacy directive. Within this narrower retention regime, access should be even more restricted.

The CJEU fully supports the ruling by the UK High Court, which triggered the case, that only serious crime is an acceptable purpose for accessing retained data.

The case hinges on the interpretation of Article 15 of the EU e-privacy Directive 2002/58, which sets out limitations to the confidentiality of communications. The UK government had argued that the purposes for which retention was acceptable were not restricted by those included in this article, but instead should cover the broader set of purposes in Article 13 of the Data Protection Directive 95/46 (now replaced by the GDPR):

 (e) an important economic or financial interest of a Member State or of the European Union, including monetary, budgetary and taxation matters;

(f) a monitoring, inspection or regulatory function connected, even occasionally, with the exercise of official authority in cases referred to in (c), (d) and (e);

(g) the protection of the data subject or of the rights and freedoms of others.

The CJEU rejected this point, saying that Article 15 sets out a narrow, closed list of the purposes for which data may be retained:

 90 It must, in that regard, be observed that the first sentence of Article 15(1) of Directive 2002/58 provides that the objectives pursued by the legislative measures that it covers, which derogate from the principle of confidentiality of communications and related traffic data, must be ‘to safeguard national security — that is, State security — defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic communication system’, or one of the other objectives specified in Article 13(1) of Directive 95/46, to which the first sentence of Article 15(1) of Directive 2002/58 refers (see, to that effect, judgment of 29 January 2008, Promusicae, C‑275/06, EU:C:2008:54, paragraph 53). That list of objectives is exhaustive, as is apparent from the second sentence of Article 15(1) of Directive 2002/58, which states that the legislative measures must be justified on ‘the grounds laid down’ in the first sentence of Article 15(1) of that directive. Accordingly, the Member States cannot adopt such measures for purposes other than those listed in that latter provision.

Furthermore, the CJEU says that even in the area of fighting crime, laws should be proportionate and access must be narrowed:

 115 As regards objectives that are capable of justifying national legislation that derogates from the principle of confidentiality of electronic communications, it must be borne in mind that, since, as stated in paragraphs 90 and 102 of this judgment, the list of objectives set out in the first sentence of Article 15(1) of Directive 2002/58 is exhaustive, access to the retained data must correspond, genuinely and strictly, to one of those objectives. Further, since the objective pursued by that legislation must be proportionate to the seriousness of the interference in fundamental rights that that access entails, it follows that, in the area of prevention, investigation, detection and prosecution of criminal offences, only the objective of fighting serious crime is capable of justifying such access to the retained data.

The new leaked e-Privacy Regulation maintains a similar list in its Article 11(1), so this ruling should stand:

 Union or Member State law may restrict by way of a legislative measure the scope of the obligations and rights provided for in Articles 5, 6, 7, and 8 of this Regulation when such a restriction respects the essence of the fundamental rights and is a necessary, appropriate and proportionate measure in a democratic society to safeguard national security (i.e. State security), defence, public security, and the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, or of unauthorised use of electronic communications systems. Any legislative measure referred to in paragraph 1 shall be in accordance with the Charter of Fundamental Rights of the European Union, in particular with Articles 7, 8, 10 and 52 thereof.

The IPA contains a much broader set of purposes for access to communications data by some 48 public authorities that include NHS trusts and the Gambling Commission. It is very hard to see how this can be squared with the ruling.

Prior review and authorisation by a court or independent administrative body

 The CJEU has also fully endorsed the UK High Court ruling that required independent authorisation for access to retained data:

 120 In order to ensure, in practice, that those conditions are fully respected, it is essential that access of the competent national authorities to retained data should, as a general rule, except in cases of validly established urgency, be subject to a prior review carried out either by a court or by an independent administrative body, and that the decision of that court or body should be made following a reasoned request by those authorities submitted, inter alia, within the framework of procedures for the prevention, detection or prosecution of crime.

This is a blow to the UK legal system, where authorisation is performed by a 'Designated Senior Officer', who is part of the same organisation that requests the data.

Other issues raised in the judgment:

The CJEU judgment also raises a few other issues that were not explicitly put to it by the UK Court of Appeal. However, they will be very important for any future legislation in this area.

Freedom of expression

The Court reiterates the points previously made in the Digital Rights Ireland case that data retention engages not just privacy but also freedom of expression, “one of the essential foundations of a pluralist, democratic society”.

93 Accordingly, the importance both of the right to privacy, guaranteed in Article 7 of the Charter, and of the right to protection of personal data, guaranteed in Article 8 of the Charter, as derived from the Court’s case-law (see, to that effect, judgment of 6 October 2015, Schrems, C‑362/14, EU:C:2015:650, paragraph 39 and the case-law cited), must be taken into consideration in interpreting Article 15(1) of Directive 2002/58. The same is true of the right to freedom of expression in the light of the particular importance accorded to that freedom in any democratic society. That fundamental right, guaranteed in Article 11 of the Charter, constitutes one of the essential foundations of a pluralist, democratic society, and is one of the values on which, under Article 2 TEU, the Union is founded.

This is important because it could make it harder to justify the blanket retention of Internet Connection Records, which could be deemed a 'reading list'. Measures that made ordinary citizens refrain from accessing materials or expressing opinions online could well impinge on the “essence of the right”. This would move the argument away from safeguards on access to the records and towards the broader direct impact of the measures themselves, in a way that an analysis focused purely on individual privacy would not.

Notification

Open Rights Group and other human rights groups have long argued that people whose data is accessed should be notified, once this would no longer affect investigations. Our calls have always been rejected on the grounds that investigations can go cold and be revived later on, and that notification would give too much information to suspects.

The CJEU has, almost unprompted, taken the opportunity to remind national courts that this is indeed a basic component of the legal framework around surveillance:

121 Likewise, the competent national authorities to whom access to the retained data has been granted must notify the persons affected, under the applicable national procedures, as soon as that notification is no longer liable to jeopardise the investigations being undertaken by those authorities. That notification is, in fact, necessary to enable the persons affected to exercise, inter alia, their right to a legal remedy, expressly provided for in Article 15(2) of Directive 2002/58, read together with Article 22 of Directive 95/46, where their rights have been infringed.

This would shake the secretive UK surveillance regime to its core, almost more than introducing independent authorisation, since the current black-box model could conceivably be maintained through secret court orders or by extending the role - and resources - of the Judicial Commissioners in the IPA. Having to notify discarded suspects would be a crack through which light may reach the darker corners of the current regime.

Given that there are over half a million requests a year for communications data, notification was perceived as introducing a huge administrative burden. It would also give visibility and raise social awareness of the extent of surveillance.

Only suspects' data can be accessed

In addition to rejecting generalised retention and narrowing down access to serious crime with independent authorisation, the CJEU has further established that as a rule only the data of people suspected of direct involvement in those crimes can be accessed. Accessing other people’s data must be an exception and also based on specific evidence of how this may help investigations.

119 Accordingly, and since general access to all retained data, regardless of whether there is any link, at least indirect, with the intended purpose, cannot be regarded as limited to what is strictly necessary, the national legislation concerned must be based on objective criteria in order to define the circumstances and conditions under which the competent national authorities are to be granted access to the data of subscribers or registered users. In that regard, access can, as a general rule, be granted, in relation to the objective of fighting crime, only to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime (see, by analogy, ECtHR, 4 December 2015, Zakharov v. Russia, CE:ECHR:2015:1204JUD004714306, § 260). However, in particular situations, where for example vital national security, defence or public security interests are threatened by terrorist activities, access to the data of other persons might also be granted where there is objective evidence from which it can be deduced that that data might, in a specific case, make an effective contribution to combating such activities.

The IPA contains powers for the bulk acquisition of communications data by the Security and Intelligence Agencies, which had been in place through secretive interpretations of previous legislation. MI5 has been getting a copy of all of the country’s phone calls, texts and possibly other data for decades. Clearly, this would not fit the criteria set out by the CJEU and we expect these practices to be challenged in court.

Retained data must be kept in the EU

This was a point raised in the original UK ruling and unsurprisingly it was ratified by the CJEU. It is worth repeating as a reminder of the dire consequences that leaving the EU data protection regime, including data retention, would have for the UK digital economy.

122 With respect to the rules relating to the security and protection of data retained by providers of electronic communications services, it must be noted that Article 15(1) of Directive 2002/58 does not allow Member States to derogate from Article 4(1) and Article 4(1a) of that directive. Those provisions require those providers to take appropriate technical and organisational measures to ensure the effective protection of retained data against risks of misuse and against any unlawful access to that data. Given the quantity of retained data, the sensitivity of that data and the risk of unlawful access to it, the providers of electronic communications services must, in order to ensure the full integrity and confidentiality of that data, guarantee a particularly high level of protection and security by means of appropriate technical and organisational measures. In particular, the national legislation must make provision for the data to be retained within the European Union and for the irreversible destruction of the data at the end of the data retention period (see, by analogy, in relation to Directive 2006/24, the Digital Rights judgment, paragraphs 66 to 68).

If you want to support our work in future cases, and help to ensure that this ruling is enforced, please join ORG today.



December 20, 2016 | Jim Killock

Is the government misleading the Lords about blocking Twitter?

Last week we reported that the UK government expect the BBFC to ask social media providers, such as Twitter, to block the use of their service by accounts that are associated with porn sites that fail to verify the age of their users.

The Bill is even worse than we illustrated. The definition of a “pornographic website” in Clause 15(2) is simply one that operates on a “commercial basis”. This could catch any site - including Twitter, Reddit or Tumblr - where pornography can be found. The practical limit would therefore come down purely to the discretion of the regulator, the BBFC, as to which commercial sites it wanted to force to use Age Verification. However, the BBFC does not seem to want to require Twitter or Reddit to apply age verification - at least, not yet.

However, we also got one part wrong last week. In relation to Twitter, Reddit and other websites where porn sites might promote their content, the Bill contains a power to notify these “ancillary services” but has no specific power to enforce the notifications.

In other words, they expect Twitter, Google, Facebook, Tumblr and other companies to voluntarily block accounts within the UK, without a specific legal basis for their action.

This would create a toxic situation for these companies. If they fail to “act” on the “notifications”, these services will leave themselves open to the accusation that they are failing to protect children, or actively “supplying” pornography to minors.

On the other hand, if they act on these notices, they will rightly be accused by ourselves and those that are censored of acting in an unaccountable, arbitrary manner. They will not have been legally obliged to act by a court; similar content will remain unblocked; and there will be no clear remedy for someone who wished to contest a “notification”. Liability for the blocks would remain with the company, rather than the BBFC.

The government has not been clear with the Lords that this highly uncertain situation - rather than the account blocks they have suggested - is the likely result of notifications to Twitter.

There are very good reasons not to block accounts after a mere notification. For instance, in this case, although sites can contest a classification at the BBFC, and an internal appeals process will exist, there is no external appeal available other than embarking on an expensive judicial review. It is not clear that a classification as pornography should automatically lead to action by ancillary services, not least because compliance would automatically result in the same content being made available again. To be clear, the bill does not aim to remove pornography from Twitter, Reddit or search engines.

Why, then, has the government drafted a bill with this power to notify “ancillary services” but no method of enforcement? The reason appears to be that payment providers in particular have a long-standing agreement amongst themselves to halt payments when they are notified that someone is taking payments for unlawful activity. Large online ad networks have a similar process for accepting notifications.

There is therefore no need to create enforcement mechanisms for these two kinds of “ancillary providers”. (There are pitfalls with their approach—it can lead to censorship and unwarranted damage to businesses—but let us leave that debate aside for now.)

It seems clear that, when the bill was written, there was no expectation that “ancillary providers” would include Twitter, Yahoo or Google, so no enforcement power was created.

The government, in their haste, has agreed with the BBFC that they should be able to notify Twitter, Google, Yahoo and other platforms. They have agreed that BBFC need not take on a role of enforcement through court orders.

The key point is that the Lords are being misled by the government as things stand. Neither the BBFC nor the government has explored with Parliamentarians what the consequences of expanding the notion of "ancillary providers” are.

The Lords need to be told that this change means that:

  1. the notices are unenforceable against Internet platforms;
  2. they will lead to public disputes with the companies;
  3. they make BBFC’s decisions relating to ancillary providers highly unaccountable, as legal responsibility for account blocks rests with the platforms.

It appears that the BBFC do not wish to be cast in the role of “national censor”. They believe that their role is one of classification, rather than enforcement. However, the fact that they also wish to directly block websites via ISPs rather flies in the face of their self-perception, as censorship is most clearly what they will be engaging in. Their self-perception is also not a reason to pass the legal buck onto Internet platforms who have no role in deciding whether a site fails to meet regulatory requirements.

This mess is the result of rushing to legislate without understanding the problems involved. The obvious thing to do is to limit the impact of the “ancillary services” approach by narrowing the definition to exclude all but payment providers and ad networks. The alternative—to create enforcement powers against a range of organisations—would need to establish full accountability for the duties imposed on ancillary providers in a court, something that the BBFC seems to wish to avoid.

Or of course, the government could try to roll back its mistaken approach entirely, and give up on censorship as a punishment: that would be the right thing to do. Please sign our petition if you agree.



December 16, 2016 | Javier Ruiz

ORG's first take on the leaked e-Privacy Regulations

The European Commission's proposed e-Privacy Regulations have leaked. We take a first look at what's in there.


The leaked e-Privacy Regulation (ePR) brings many improved protections to our communications data, which are now extended to communications devices and internet services, not just traditional telecom providers. At the same time this modernisation has brought other fundamental changes that could have less welcome consequences.

Here we focus on the basic changes to electronic communications. Most other analyses of the leaked ePR will probably focus on cookies and the impact on online advertising, and rightly so, as this is really important. We don’t have the space for a proper take on both here, but in the coming months we will also engage with those other areas: cookies, marketing, nuisance calls, as well as the enforcement aspects.

One point we have to stress is that the new ePR explicitly allows national legislation for the interception of communications as long as this is compliant with human rights.

It is important to remember that this is a leak of a European Commission version of the Regulation, which will then have to be amended and fought over by the European Parliament and the Council of Member States, so the final legislation could be different in many areas. There will also be a concerted lobbying campaign from industry to change the parts of this leak that they don’t like.

Whatever happens with Brexit, this Regulation will have an impact in the UK given the current commitment to keep UK data laws compatible with the EU in order to facilitate data flows, e-commerce and services.

Confidentiality of electronic communications

The leaked Regulation is concerned with the confidentiality of 'electronic communications data' meaning both the content and metadata of electronic communications.

The ePR establishes a general principle that nobody can interfere with or monitor electronic communications and that metadata shall be “erased or made anonymous as soon as the communication has taken place”.

The ePR also sets out several cases where metadata can be retained and used for lawful processing by providers, mainly around typical needs to provide security, quality of service, billing and access to emergency services. This is similar to the provisions in the current e-Privacy Directive.

There are some differences when it comes to other uses, such as analysing users’ data for commercial purposes. The ePR allows these activities on the basis of consent for specified purposes, provided these could not be achieved with anonymous data.

It also establishes that where there is a “high risk to rights and freedoms” the provider must perform a data protection impact assessment and consult with the ICO, in reference to part of the General Data Protection Regulation (GDPR) that was recently passed by the EU.

Consent in general is strengthened and brought in line with the GDPR. It is also explicit that consent for the use of electronic metadata must be user-friendly and separate from general T&Cs, and there is a ban on making services conditional on giving data access.

This is good, but it may not be enough. Communications data in the mobile phone age gives insights into our most intimate personal details, and we believe that impact assessments should be compulsory in all cases, as even the best consent system can be bent.

Location

One very important change is that location data has disappeared as a separate category in the new ePR and is now explicitly described as communications data. The current e-Privacy Directive sets stricter conditions for consent to reuse location data than for other types of metadata, and restricts its use to value-added services rather than marketing. That dates from the time when mobiles were coming into the mainstream and policymakers saw a high risk in anyone being able to know where you are at any time.

In the new version, location is just another piece of metadata. Location analytics has become mainstream and restricting the use of such data for mobile phone providers while Google and Apple get it from the handset didn’t really work. At the same time the potential value of collecting location data from mobile phones - whoever gets it, how and to what level of detail - is huge and continues to have high privacy risks.

Removing the different regimes for location and what used to be called “traffic data” is more consistent and will avoid complex debates on what is traffic and what is location. But it is still unclear whether this will be sufficient protection given the high interest in location analytics among industry.

Devices

The leaked ePR contains stronger provisions on the protection of data stored in devices and the extraction of data which should bring some real changes to the way the whole tech industry operates. There are even restrictions on using the processing power of end-users’ devices that could see blockchain technologies requiring some clear consent. There could be some issues with the implementation in some computer environments as it appears to be conceptualised around mobile devices run by corporates.

There are also detailed provisions on the tracking of devices, for example via public wifi in shopping malls or transport networks, where large notices must be displayed. In its guidance on Wifi Location Analytics, the UK ICO goes further and asks for personal identifiers to be hashed, which makes it more difficult to identify individuals in a dataset.
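
To give a rough picture of what that hashing involves - the ICO guidance does not prescribe a particular scheme, so the keyed hash and salt rotation below are assumptions for illustration - an operator could store a salted hash of each device's MAC address rather than the address itself:

    # Illustrative sketch only: the hash construction and salt policy are
    # assumptions, not a description of any operator's actual system.
    import hashlib
    import hmac
    import secrets

    salt = secrets.token_bytes(16)   # rotated periodically (e.g. daily) and then discarded

    def pseudonymise(mac_address: str) -> str:
        # Keyed hash of the MAC address; the raw address itself is never stored.
        return hmac.new(salt, mac_address.lower().encode(), hashlib.sha256).hexdigest()

    print(pseudonymise("aa:bb:cc:dd:ee:ff"))

While the same salt is in use, the same device always maps to the same value. That persistence is what makes movement analysis possible, and it is why hashed identifiers are pseudonymous rather than anonymous and should still be treated as personal data.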

It is very good that the recitals clarify that machine to machine communications of the kind involved in the internet of things and the coming 5G wave of hyper-connectivity are explicitly covered.

Conclusion

The new rules give companies more leeway in how they use our data while simultaneously tightening the rules on how consent is used, in alignment with broader data protection law. The new ePR seems particularly good for traditional telcos, which not only see their internet nemeses - WhatsApp, FaceTime and other communications providers - now included in the rules, but are also the main beneficiaries of these changes to electronic communications data.

The Commission is unapologetic about wanting to create a data market around the reuse of communications data with consent, in recital 23. Interestingly this is exactly the big pitch from Telefonica around reinventing itself as a data company and giving their customers more control, also followed to a lesser extent by Vodafone.

One area that we will be looking into is the use of anonymisation to process communications data, so far the preferred modus operandi of telcos, who are only now starting to move towards a consent model. The new Regulation’s requirement to delete or anonymise data “when no longer needed” appears clearer than the previous formulation, but we will see in practice whether this stops companies building pseudonymous profiles of their users.



December 16, 2016 | Jim Killock

How it works: website blocking in the Digital Economy Bill

We realised that what will and will not be blocked under the Digital Economy Bill is becoming increasingly hard to understand. So here is a handy guide.

Blocking takes two shapes, after the Lords debate.

Websites

Firstly, websites that either don’t use Age Verification, or supply pornography that the new national censor, the BBFC, deems “non-conventional” can be blocked. In addition, if they use an Age Verification technology that seems inadequate, such as credit cards, this could lead to a block, although we believe this would be less likely as it would seem a very harsh response.

Ancillary services

Secondly, “ancillary services” is now clarified to include Twitter or other platforms where an account is used to promote a pornographic website. Here, a block could only be applied if the BBFC has decided to sanction the website for non-compliance. This would mean it could block an account from a website that publishes “non-conventional” pornography, or one that doesn’t provide Age Verification, or only uses credit card verification. However, other similar accounts from sites that had not been reviewed cannot be blocked under this power.

Blocked Twitter feeds would not need to be displaying pornography, they might just provide links.

As a further example of where this might go, we have also included DNS results in the table. Provision of DNS results for a pornographic website could easily be included in the expansive concept of an “ancillary service”.

BBFC classification and blocking will be selective

To complicate matters further, blocking can only take place if the BBFC has decided to classify a website. So the whole process is limited by their capacity to review hundreds of thousands of pornographic websites. Furthermore, the BBFC cannot block “non-commercial” websites.

This incredibly complicated picture of course risks being perceived as extremely arbitrary. That is the inevitable result of pursuing censorship as a sanction for regulatory non-compliance, rather than limiting it to clearly illegal and harmful material.

Don’t forget to sign our petition against these proposals. 

Our summary of what will be blocked

Type of pornographic related content | Type of age verification | Can it be blocked | Will it be blocked in the UK
Major website with US style legal pornography (1) | Credit card or none | Yes | Yes
Website with BBFC-compliant content (2) | UK approved (4) | No | No
Website with BBFC-compliant content | Credit card only | Yes | Maybe
Non-commercial site with US style legal content | None | No | No
Major website with US style legal pornography | UK approved | Yes | Probably
First thousand websites by market share, reviewed by BBFC (3) | Credit card or none | Yes | Probably
Next three million websites by market share, not reviewed by BBFC | Credit card or none | No | No
Twitter feed for BBFC approved commercial website | None | No | No
Twitter feed for a website deemed non-compliant by the BBFC | None | Yes | Probably
Twitter feed for the millions of websites not classified by BBFC | None | No | No
Non-commercial Twitter feed | None | No | No
DNS result locating website | None | Yes | Possibly

Notes

(1) This table uses “US style legal content” as a shorthand for content that may not be legal in the UK, or legal but not approved by the BBFC.

(2) BBFC-compliant means approved by the BBFC, a more restrictive standard than what is legal in the UK.

(3) Or whatever number of websites the BBFC feels able to classify. We assume they will aim to cover market share, so 1,000 websites seems a reasonable number to target.

(4) By "UK approved" age verification we mean systems that meet BBFC requirements. These are currently undefined other than that they must verify age. Privacy and interoperability requirements are absent from the bill.



December 13, 2016 | Jim Killock

MPs leave it to House of Lords to sort out porn cock up

Plans, outlined in the Digital Economy Bill, to make the Internet safe for children are in a worse state than when the Bill was first published this Autumn. It’s now down to peers to sort them out as the Bill has its second reading in the House of Lords.

Although Labour raised the issue of privacy, nothing was changed, so there are still no privacy duties in the Bill. However, the Commons did find time to add powers for the regulator to block legal websites, through a poorly worded amendment from the government.

The Lords therefore have three issues to resolve: will age verification be safe? Will it lead to widespread censorship of legal content? And how can both age verification and website blocking be made safe and fair?

These aren’t easy questions and they ought to have been dealt with well before this bill reached Parliament. The privacy risks of data breaches and tracking of people around the Internet simply have to be addressed. We believe that privacy duties have to be written onto the face of the bill.

On website blocking, it’s clear that MPs are misleading themselves. The objective of website blocking appears to be to restrict access to websites that are not verifying the age of their users. However, the truth is that this is completely beyond the reach of the regulator.

The regulator, the BBFC, almost certainly have no intention of blocking more than a small fraction of the pornographic sites available. This would make pornography in general only slightly less accessible to someone under 18, as they will still be able to reach millions of other sites. It will however restrict access to specific, relatively popular sites. These sites could be aimed at specific communities or identities, which would be particularly harmful.

It is clear that website blocking is not likely to be a safety measure, but a punishment directed at non-compliant websites. Of course it will also punish the users of these websites. It is not clear to us that this approach is necessary or proportionate.

All this ought to make it clear that Age Verification isn’t a particularly wise policy.

The very least that needs to be done is for the regulator to apply a proportionality test to the blocking of any given website. This could also take into account issues relating to which ISPs might do the blocking, for instance so that ISPs that lack the capability to block are not asked to do so, or at the very least, not without compensation.

Another concern, which the BBFC itself raised, is whether websites will be held to its own classification standards or to the standard of what is legal to view in the UK. Understandably, the BBFC will seek to sanction websites that publish material it does not view as publishable – or classifiable – in the UK. However, the test needs to be the legal standard, rather than the BBFC’s view of what is legal or acceptable.

For this reason, there has to be a simple external appeals mechanism before any sanction is applied, and this too is missing.

Censorship, however you look at it, is a drastic step. While “improvements” can be made that might stop some particularly awful practices from developing, the vast majority of what gets blocked will be legal material that any adult has a right to access. This paradox – the censorship of legal content – won’t disappear just because the process is improved.

The only justification could be that there is a serious and widespread harm emerging that can be addressed; and while it is completely valid to discourage teenagers from accessing this content, it is far less clear that the proposed measures will work, or that alternative approaches, such as filtering for specific users, could not work instead. And articulating a valid social goal is still a long way off a rigorous demonstration of harm.

We will hear the first signs of whether the Lords will resolve any of these issues today. The debate starts late this afternoon and can be watched online.

For more details you can read our briefing. If you want to help with the campaign, please sign the petition.



November 25, 2016 | Ed Johnson-Williams

TfL needs to give passengers the full picture on WiFi collection scheme

Transport for London is running a trial that uses people's mobile phones to track crowd movement around 54 London Underground stations. We think they have to do a better job of communicating to passengers what the trial is, what the data will be used for, and how people can opt out.

When a device has WiFi turned on, it broadcasts a unique identifier called a MAC address. By tracking where they detect the MAC addresses of potentially millions of people’s devices a day, TfL want to analyse crowding and travel patterns to improve their services. TfL say they are not identifying individuals or monitoring browsing activity.
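
TfL have not published the technical detail of how the analysis works, but the basic idea can be sketched roughly as follows - the station names, timings and identifiers here are invented, and we assume device identifiers have already been pseudonymised rather than stored as raw MAC addresses:

    # Rough sketch of crowd and journey analysis from WiFi sightings; the data,
    # station names and identifiers are invented for illustration.
    from collections import Counter

    # Each sighting: (pseudonymised device id, station, minute of the day)
    sightings = [
        ("dev1", "London Bridge", 510), ("dev1", "Bank", 522),
        ("dev2", "London Bridge", 511), ("dev2", "Oxford Circus", 540),
        ("dev3", "Bank", 515),
    ]

    # Crowding: how many distinct devices were seen at each station.
    crowding = Counter()
    seen = set()
    for dev, station, _ in sightings:
        if (dev, station) not in seen:
            seen.add((dev, station))
            crowding[station] += 1

    # Travel patterns: order each device's sightings by time to infer a journey.
    journeys = {}
    for dev, station, minute in sorted(sightings, key=lambda s: s[2]):
        journeys.setdefault(dev, []).append(station)

    print(crowding)           # Counter({'London Bridge': 2, 'Bank': 2, 'Oxford Circus': 1})
    print(journeys["dev1"])   # ['London Bridge', 'Bank']

Nothing in this sketch requires knowing who a device belongs to, but the same persistent identifier is what links the sightings together - which is why how passengers are informed, and how they can opt out, matters.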

TfL WiFi data collection sign

TfL are alerting passengers to the scheme with this sign. (Text of the sign at the end of this blog.)

Unfortunately, it misses three crucial points to help passengers understand a) how the scheme works, b) all the purposes the data is being collected for, and c) how to opt out.

  1. TfL are tracking people's movement around London and around stations
  2. Passengers have to turn off WiFi on all the devices they are carrying to opt out. If they leave WiFi switched on but never use the WiFi network, they will still be tracked.
  3. The data will be used to find and set prices for advertising spaces in stations, in addition to improving services

How to complain

If you don't like this and want to complain, you can complain directly to TfL using Facebook, Twitter or email.

Tell TfL to ensure they properly inform passengers:

  • about how the scheme works and what they'll use the data for;
  • that passengers need to disable WiFi on all their devices to opt out.

The Information Commissioner’s Office (ICO) guidance on WiFi location analytics says that:

  • "Clear and prominent information is one way to alert individuals that certain processing is taking place."
  • "The information should clearly define...the defined purposes of the processing"
  • "Data controllers should consider the use of:
    • signage at the entrance to the collection area;
    • reminder information throughout the location where data is being collected;
    • information on their websites and in any sign-up or portal page of the Wi-Fi network they may be providing; and
    • detailed information to explain how individuals can control the collection of personal data using the settings on their device."

We will be asking the ICO for its view on the signage used by TfL to alert passengers to this scheme and whether it meets this guidance. We have already contacted TfL with the points made here.

There are a number of other issues with the way TfL has implemented this scheme.

It is very difficult for passengers to find out about the scheme beyond the limited information on TfL’s sign. The sign provides a link to tfl.gov.uk/privacy which is the Privacy Policy for the TfL website rather than their page about the WiFi data collection scheme. On a mobile screen - which nearly all passengers will be using in a station - the link to the page about the scheme is at the far bottom of the page. This is the page which includes all the details about what the data is being used for and how to opt out. It seems unlikely that many people will actually read this information.

Even if passengers turn off WiFi on their phone in an attempt to opt out, TfL may still track them using the MAC addresses broadcast by tablets or laptops they are carrying. Many tablets and laptops, including Apple iPads and MacBooks, broadcast a MAC address when WiFi is enabled - even when the device is in sleep mode. While it may be easy to disable WiFi on a phone, turning off WiFi on a laptop in a busy Tube station is much harder, so opting out fully is difficult.

It is not clear that passengers are alerted soon enough or often enough about the scheme. The signs at London Bridge Underground station are placed near the ticket barriers. They were not placed at the entrances to the station or throughout the station. London Bridge is only one of the 54 stations which are part of the study and signs may be placed differently in other stations.

Passengers will be within range of TfL’s WiFi quite some way before seeing the signs and may not see the signs in the crowded ticket barrier area. Travellers who enter the Underground at a station which isn't part of the scheme will be unlikely to see a sign and TfL may collect data about them without having informed them about the scheme.

To alleviate some of these issues, we would like TfL to:

  • ensure they inform all passengers about the scheme
  • inform passengers of all the purposes this data will be used for
  • tell passengers to turn off WiFi on all the devices they are carrying if they want to opt out of the scheme
  • ensure signs are placed at the entrances to stations and throughout the stations

ORG has previously warned of the privacy risks of TfL’s Oyster card system. Although the data in this new scheme is not linked to data in the Oyster system, it is clear that TfL has not lost its appetite for monitoring passengers.

Text of TfL's sign


WiFi data collection

We are collecting WiFi data at this station to test how it can be used to improve our services, provide better travel information and help prioritise investment.

We will not identify individuals or monitor browsing activity.

We will collect data between Monday 21 November and Monday 19 December

For more information visit: tfl.gov.uk/privacy
