December 16, 2016 | Javier Ruiz

ORG's first take on the leaked e-Privacy Regulations

The European Commission's proposed e-Privacy Regulations have leaked. We take a first look at what's in there.


The leaked e-Privacy Regulation (ePR) brings many improved protections to our communications data, which are now extended to communications devices and internet services, not just traditional telecom providers. At the same time this modernisation has brought other fundamental changes that could have less welcome consequences.

Here we focus on the basic changes to electronic communications. Most other analyses of the leaked ePR will probably focus on cookies and the impact on online advertising, and rightly so as this is really important. We don’t have the space for a proper take on both here, but in the coming months we will also engage with those other areas: cookies, marketing, nuisance calls, as well as the enforcement aspects.

One point we have to stress is that the new ePR explicitly allows national legislation for the interception of communications as long as this is compliant with human rights.

It is important to remember that this is a leak of a European Commission version of the Regulation, which will then have to be amended and fought over by the European Parliament and the Council of Member States, so the final legislation could be different in many areas. There will also be a concerted lobbying campaign from industry to change the parts of this leak that they don’t like.

Whatever happens with Brexit, this Regulation will have an impact in the UK given the current commitment to keep UK data laws compatible with the EU in order to facilitate data flows, e-commerce and services.

Confidentiality of electronic communications

The leaked Regulation is concerned with the confidentiality of 'electronic communications data' meaning both the content and metadata of electronic communications.

The ePR establishes a general principle that nobody can interfere with or monitor electronic communications and that metadata shall be “erased or made anonymous as soon as the communication has taken place”.

The ePR also sets out several cases where metadata can be retained and used for lawful processing by providers, mainly around typical needs to provide security, quality of service, billing and access to emergency services. This is similar to the provisions in the previous Regulation.

There are some differences when it comes to other uses, such as analysing users’ data for commercial purposes. The ePR allows these activities on the basis of consent for specified purposes, provided these could not be achieved with anonymous data.

It also establishes that where there is a “high risk to rights and freedoms” the provider must perform a data protection impact assessment and consult the supervisory authority (in the UK, the ICO), echoing provisions of the General Data Protection Regulation (GDPR) recently passed by the EU.

Consent in general is strengthened and brought in line with the GDPR. It is also explicit that consent for the use of electronic metadata must be user friendly and separate from general T&Cs, and there is a ban on making services conditional on giving access to data.

This is good, but it may not be enough. Communications data in the mobile phone age gives insights into our most intimate personal details, and we believe that an impact assessment should be compulsory in all cases, as even the best consent system can be bent.


One very important change is that location data has disappeared as a separate category in the new ePR and it is now explicitly described as communications data. In the current Regulation, there are stricter conditions for consent to reuse location data when compared with other types of metadata, and it restricts its use to value-added services and not marketing. That was at the time when mobiles were coming into the mainstream and policymakers saw a high risk involved in knowing where you are at any time.

In the new version, location is just another piece of metadata. Location analytics has become mainstream and restricting the use of such data for mobile phone providers while Google and Apple get it from the handset didn’t really work. At the same time the potential value of collecting location data from mobile phones - whoever gets it, how and to what level of detail - is huge and continues to have high privacy risks.

Removing the different regimes for location and what used to be called “traffic data” is more consistent and will avoid complex debates on what is traffic and what is location. But it is still unclear whether this will be sufficient protection given the high interest in location analytics among industry.


The leaked ePR contains stronger provisions on the protection of data stored in devices and the extraction of data which should bring some real changes to the way the whole tech industry operates. There are even restrictions on using the processing power of end-users’ devices that could see blockchain technologies requiring some clear consent. There could be some issues with the implementation in some computer environments as it appears to be conceptualised around mobile devices run by corporates.

There are also detailed provisions on the tracking of devices, for example in public wifi in shopping malls or transport networks, where large notices must be displayed. In its guidance for Wifi Location Analytics, the UK ICO does go further in asking for the hashing of personal identifiers though, which makes it more difficult to identify individuals in a dataset.
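For illustration, the kind of salted hashing the ICO guidance refers to might look like the minimal sketch below. The function name, salt handling and rotation policy are our assumptions for the example, not a description of any operator's actual practice; the point is that a stable salted hash still lets an operator count devices, while rotating the salt limits how long any one device can be linked.

```python
import hashlib
import secrets

# Hypothetical salt, rotated (say) daily; rotation stops the same device
# being linked across salt periods.
daily_salt = secrets.token_bytes(16)

def pseudonymise(mac: str, salt: bytes) -> str:
    """Replace a raw MAC address with a salted SHA-256 pseudonym."""
    return hashlib.sha256(salt + mac.lower().encode()).hexdigest()

p1 = pseudonymise("AA:BB:CC:DD:EE:01", daily_salt)
p2 = pseudonymise("AA:BB:CC:DD:EE:01", daily_salt)
print(p1 == p2)  # True: stable within one salt period, so footfall can still be counted
print(p1 == pseudonymise("AA:BB:CC:DD:EE:01", secrets.token_bytes(16)))  # False: a fresh salt breaks the link
```

Note that this pseudonymisation makes identifying a given individual harder, but the data remains personal data for as long as the salt is kept.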

It is very good that the recitals clarify that machine to machine communications of the kind involved in the internet of things and the coming 5G wave of hyper-connectivity are explicitly covered.


The new rules give companies more leeway in how they use our data while simultaneously tightening the rules on how consent is used, in alignment with broader data protection. The new ePR seems particularly good for traditional telcos, which not only see their internet nemeses - WhatsApp, FaceTime and other communications providers - brought within the rules, but are also the main beneficiaries of these changes to electronic communications data.

The Commission is unapologetic about wanting to create a data market around the reuse of communications data with consent, in recital 23. Interestingly this is exactly the big pitch from Telefonica around reinventing itself as a data company and giving their customers more control, also followed to a lesser extent by Vodafone.

One area that we will be looking into is the use of anonymisation to process communications data, so far the preferred modus operandi of telcos, who are only now starting to move towards a consent model. The new Regulation appears clearer than the previous formulation in requiring data to be deleted or anonymised “when no longer needed”, but we will see in practice whether this stops companies building pseudonymous profiles of their users.


December 16, 2016 | Jim Killock

How it works: website blocking in the Digital Economy Bill

We realised that what will and will not be blocked under the Digital Economy Bill is becoming increasingly hard to understand. So here is a handy guide.

Blocking takes two shapes, after the Lords debate.


Firstly, websites that either don’t use Age Verification, or that supply pornography the new national censor, the BBFC, deems “non-conventional”, can be blocked. In addition, if they use an Age Verification technology that seems inadequate, such as credit cards, this could lead to a block, although we believe this would be less likely as it would seem a very harsh response.

Ancillary services

Secondly, “ancillary services” is now clarified to include Twitter or other platforms where an account is used to promote a pornographic website. Here, a block could only be applied if the BBFC has decided to sanction the website for non-compliance. This would mean it could block an account from a website that publishes “non-conventional” pornography, or one that doesn’t provide Age Verification, or only uses credit card verification. However, other similar accounts from sites that had not been reviewed cannot be blocked under this power.

Blocked Twitter feeds would not need to be displaying pornography, they might just provide links.

As a further example of where this might go, we have also included DNS results in the table. Provision of DNS results for a pornographic website could easily be included in the expansive concept of an “ancillary service”.

BBFC classification and blocking will be selective

To complicate matters further, blocking can only take place if the BBFC has decided to classify a website. So the whole process is limited by their capacity to review hundreds of thousands of pornographic websites. Furthermore, the BBFC cannot block “non-commercial” websites.

This incredibly complicated picture of course risks being perceived as extremely arbitrary. That is the inevitable result of pursuing censorship as a sanction for regulatory non-compliance, rather than limiting it to clearly illegal and harmful material.

Don’t forget to sign our petition against these proposals. 

Our summary of what will be blocked

Type of pornographic related content | Type of age verification | Can it be blocked? | Will it be blocked in the UK?
Major website with US style legal pornography (1) | Credit card or none | Yes | –
Website with BBFC-compliant content (2) | UK approved (4) | No | –
Website with BBFC-compliant content | Credit card only | Yes | –
Non-commercial site with US style legal content | Any | No | –
Major website with US style legal pornography | UK approved | Yes | –
First thousand websites by market share, reviewed by BBFC (3) | Credit card or none | Yes | –
Next three million websites by market share, not reviewed by BBFC | Credit card or none | No | –
Twitter feed for BBFC approved commercial website | – | No | –
Twitter feed for a website deemed non-compliant by the BBFC | – | Yes | –
Twitter feed for the millions of websites not classified by BBFC | – | No | –
Non-commercial Twitter feed | – | No | –
DNS result locating a website | – | Possibly | –






(1) This table uses “US style legal content” as a shorthand for content that may not be legal in the UK, or legal but not approved by the BBFC.

(2) BBFC-compliant means approved by the BBFC, a more restrictive concept than legal in the UK.

(3) Or whatever number of websites the BBFC feels able to classify. We assume they will aim to cover market share, so 1,000 websites seems a reasonable number to target.

(4) By "UK approved" age verification we mean systems that meet BBFC requirements. These are currently undefined other than that they must verify age. Privacy and interoperability requirements are absent from the bill.


December 13, 2016 | Jim Killock

MPs leave it to House of Lords to sort out porn cock up

Plans, outlined in the Digital Economy Bill, to make the Internet safe for children are in a worse state than when the Bill was first published this Autumn. It’s now down to peers to sort them out as the Bill has its second reading in the House of Lords.

Although Labour raised the issue of privacy, nothing was changed, so there are still no privacy duties in the Bill. However, the Commons did find time to add powers for the regulator to block legal websites, through a poorly worded amendment from the government.

The Lords therefore have three issues to resolve: will age verification be safe? Will it lead to widespread censorship of legal content? And how can both age verification and website blocking be made safe and fair?

These aren’t easy questions and they ought to have been dealt with well before this bill reached Parliament. The privacy risks of data breaches and tracking of people around the Internet simply have to be addressed. We believe that privacy duties have to be written onto the face of the bill.

On website blocking, it’s clear that MPs are misleading themselves. The objective of website blocking appears to be to restrict access to websites that are not verifying the age of their users. However, the truth is that this is completely beyond the reach of the regulator.

The regulator, the BBFC, almost certainly has no intention of blocking more than a small fraction of the pornographic sites available. This would make pornography in general only slightly less accessible to someone under 18, as they will still be able to reach millions of other sites. It will however restrict access to specific, relatively popular sites. These sites could be aimed at specific communities or identities, which would be particularly harmful.

It is clear that website blocking is not likely to be a safety measure, but a punishment directed at non-compliant websites. Of course it will also punish the users of these websites. It is not clear to us that this approach is necessary or proportionate.

All this ought to make it clear that Age Verification isn’t a particularly wise policy.

The very least that needs to be done is for the regulator to make a proportionality test in relation to the blocking of any given website. This could also take into account issues relating to which ISPs might do the blocking, for instance so that ISPs that lack the capability to block are not asked to do so, or at the very least, not without compensation.

Another concern that the BBFC itself raised is whether its own classification standards are imposed on websites, or the standard of what is legal to view in the UK. Understandably, the BBFC will seek to sanction websites that are publishing material that it does not view as publishable – or classifiable – in the UK. However, this needs to be the legal standard, rather than the BBFC’s view of what is legal or acceptable.

For this reason, there has to be a simple external appeals mechanism before any sanction is applied, and this too is missing.

Censorship, however you look at it, is a drastic step. While “improvements” can be made that might limit some particularly awful practices from developing, the vast majority of what gets blocked will be legal material that any adult has a right to access. This paradox – the censorship of legal content – won’t disappear just because the process is improved.

The only justification could be that there is a serious and widespread harm emerging that can be addressed; and while it is completely valid to discourage teenagers from accessing this content, it is far less clear that the proposed measures will work, or that alternative approaches, such as filtering for specific users, would not work. And articulating a valid social goal is still a long way off a rigorous demonstration of harm.

We will hear the first signs of whether the Lords will resolve any of these issues today. The debate starts late this afternoon and can be watched online.

For more details you can read our briefing. If you want to help with the campaign, please sign the petition.


November 25, 2016 | Ed Johnson-Williams

TfL needs to give passengers the full picture on WiFi collection scheme

Transport for London is running a trial that uses people's mobile phones to track crowd movement around 54 London Underground stations. We think they have to do a better job of communicating to passengers what the trial is, what the data will be used for, and how people can opt out.

When a device has WiFi turned on, it broadcasts a unique identifier called a MAC address. By tracking where they detect the MAC addresses of potentially millions of people’s devices a day, TfL want to analyse crowding and travel patterns to improve their services. TfL say they are not identifying individuals or monitoring browsing activity.
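The kind of crowd analysis this enables can be sketched very simply. Everything in the example below is hypothetical - the station names, the data shape, and the assumption that device identifiers have already been pseudonymised - but it shows how repeated sightings of the same identifier turn into origin-destination counts:

```python
from collections import Counter

# Hypothetical observations: (pseudonymised device id, station), in time order.
observations = [
    ("dev1", "Euston"), ("dev1", "Oxford Circus"),
    ("dev2", "Euston"), ("dev2", "Oxford Circus"),
    ("dev3", "Bank"),   ("dev3", "Euston"),
]

# Group each device's sightings into a journey.
journeys: dict[str, list[str]] = {}
for device, station in observations:
    journeys.setdefault(device, []).append(station)

# Count (first station seen, last station seen) pairs across all devices.
od_pairs = Counter(
    (stations[0], stations[-1])
    for stations in journeys.values()
    if len(stations) > 1
)
print(od_pairs.most_common(1))  # [(('Euston', 'Oxford Circus'), 2)]
```

Note that even though no individual is named, the analysis depends on being able to recognise the same device at different places, which is exactly why the opt-out question matters.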

TfL WiFi data collection sign

TfL are alerting passengers to the scheme with this sign. (Text of the sign at the end of this blog.)

Unfortunately, it misses three crucial points that would help passengers understand a) how the scheme works, b) all the purposes the data is being collected for, and c) how to opt out.

  1. TfL are tracking people's movement around London and around stations
  2. Passengers have to turn off WiFi on all the devices they are carrying to opt out. If they leave WiFi switched on but never use the WiFi network, they will still be tracked.
  3. The data will be used to find and set prices for advertising spaces in stations, in addition to improving services

How to complain

If you don't like this and want to complain, you can complain directly to TfL using Facebook, Twitter or email.

Tell TfL to ensure they properly inform passengers:

  • about how the scheme works and what they'll use the data for;
  • that passengers need to disable WiFi on all their devices to opt out.

The Information Commissioner’s Office (ICO) guidance on WiFi location analytics says that:

  • "Clear and prominent information is one way to alert individuals that certain processing is taking place."
  • "The information should clearly define...the defined purposes of the processing"
  • "Data controllers should consider the use of:
    • signage at the entrance to the collection area;
    • reminder information throughout the location where data is being collected;
    • information on their websites and in any sign-up or portal page of the Wi-Fi network they may be providing; and
    • detailed information to explain how individuals can control the collection of personal data using the settings on their device."

We will be asking the ICO for its view on the signage used by TfL to alert passengers to this scheme and whether it meets this guidance. We have already contacted TfL with the points made here.

There are a number of other issues with the way TfL has implemented this scheme.

It is very difficult for passengers to find out about the scheme beyond the limited information on TfL’s sign. The sign links to the privacy policy for the TfL website rather than to their page about the WiFi data collection scheme. On a mobile screen - which nearly all passengers will be using in a station - the link to the page about the scheme is at the very bottom of the page. That page contains all the details about what the data is being used for and how to opt out. It seems unlikely that many people will actually read this information.

Even if passengers turn off WiFi on their phone in an attempt to opt out, TfL may still track them using the MAC addresses broadcast by tablets or laptops they are carrying. Many tablets and laptops, including Apple iPads and MacBooks, broadcast a MAC address when WiFi is enabled - even when the device is in sleep mode. While people may be able to disable WiFi on their phone easily, turning off WiFi on a laptop in a busy Tube station is much harder, making it difficult to opt out.

It is not clear that passengers are alerted soon enough or often enough about the scheme. The signs at London Bridge Underground station were placed near the ticket barriers, not at the entrances to the station or throughout the station. London Bridge is only one of the 54 stations in the study, and signs may be placed differently in other stations.

Passengers will be within range of TfL’s WiFi quite some way before seeing the signs and may not see the signs in the crowded ticket barrier area. Travellers who enter the Underground at a station which isn't part of the scheme will be unlikely to see a sign and TfL may collect data about them without having informed them about the scheme.

To alleviate some of these issues, we would like TfL to:

  • ensure they inform all passengers about the scheme
  • inform passengers of all the purposes this data will be used for
  • tell passengers to turn off WiFi on all the devices they are carrying if they want to opt out of the scheme
  • ensure signs are placed at the entrances to stations and throughout the stations

ORG has previously warned of the privacy risks of TfL’s Oyster card system. Although the data in this new scheme is not linked to data in the Oyster system, it is clear that TfL has not lost its appetite for monitoring passengers.

Text of TfL's sign

WiFi data collection

We are collecting WiFi data at this station to test how it can be used to improve our services, provide better travel information and help prioritise investment.

We will not identify individuals or monitor browsing activity.

We will collect data between Monday 21 November and Monday 19 December

For more information visit:


November 25, 2016 | Pam Cowburn

No one expects spam for Christmas

Debenhams, Topshop, Argos and Next are just some of the High Street shops that have started to offer their customers e-receipts when they pay for goods.


The benefits of an emailed receipt, at least according to the shops, are that it's more convenient for customers, who don't have to worry about losing their receipts, and of course it's better for the environment to use less paper.

But according to a Daily Mail investigation, many of these shops could be breaking data protection law because they are failing to give customers the full picture of how their email addresses are being used. The Mail found that shop staff did not always explain that email addresses could be used for marketing, and that some customers were still sent marketing emails even though they had expressly asked not to receive them.

In some instances, this could be a lack of training for staff on the tills. However, it’s clear that shops see e-receipts as an opportunity to gather data about their customers. The Mail reported that one sales rep “admitted to getting a bonus if he collected email addresses”. The benefits to businesses are not just that they can build their email lists but that they can get insights into their customers based on their purchases.

This kind of email collection is not just taking place in shops. ORG was recently contacted by Nullig, who was bombarded with unsolicited marketing emails after she bought something over the phone from Debenhams. She told us:

“I was asked over the telephone for my email address in case there were any issues with the delivery of my order. I was not asked if my email could be used for any other purposes and yet almost immediately I started getting marketing spam from Debenhams. It took 13 emails and three months of complaining for this to stop.”

The legal position

The law is very clear. At the point that email addresses are collected, customers need to be given 'a simple means of refusing' any future direct marketing emails. The Information Commissioner has also clarified that: “Whenever customer information is collected there must be a clear explanation given of how their information will be used.” Shops need to ensure that staff are fully trained to ensure that they comply with the law.

What can you do?

You can refuse an e-receipt and ask for a paper one.

If you prefer e-receipts, you don’t have to sign up for marketing emails. The sales assistant should explain how your email will be used. If they don’t, tell them that you don’t want your email to be used for marketing.

If you are unhappy about how your data has been collected and used, contact the ICO.


November 24, 2016 | Javier Ruiz

DEBill loophole will allow energy companies to be given your tax and benefits data

Over the last two years, ORG has taken part in discussions about data sharing with the Government, convened by the Cabinet Office. This included examining ways for low income households to get support with their energy bills.

ORG, of course, supports the idea of helping people lower their energy bills, but we are concerned that for this to happen, companies will be given sensitive information about their customers. During the discussions, ORG called for a solution that prevented data from being shared with companies. Unfortunately these demands have not been fully translated into the proposed Digital Economy Bill and the draft Codes of Practice.

As a result, the proposals that are currently in front of Parliament could see energy companies being given huge amounts of sensitive personal data about their customers.

Who gets energy discounts?
People who receive certain kinds of benefits are entitled to discounts on their energy bills. This could be because they are on a low income, receive support for a particular health condition and/or have young children.

However, to date the government has refused to take full responsibility for deciding who qualifies for the discount, other than pensioners, who get it automatically.

Each company has its own bafflingly complex criteria and asks for different documents to prove that you are eligible. For example, OVO and EON ask for Medical Exemption certificates but British Gas does not.

As well as complicating the process, this system acts as a barrier to consumers getting the best price deal. The Government promotes switching suppliers to stimulate lower prices but having to provide personal documents to prove they are eligible for a discount every time they change supplier is likely to deter many people from switching.

The obvious solution is to simplify the process, using data sharing powers to extend the automatic discounts that are currently given to pensioners to other disadvantaged groups.

How would automatic discounts work?
In a meeting on 6 January 2016, Alan Clifford from the recently disbanded Department for Energy and Climate Change (DECC) presented their proposals to expand the automatic provision of direct energy bill support for citizens living in fuel poverty.

Clifford explained that the new data sharing powers would build on existing powers under the Welfare Reform Act 2012 to allocate benefits automatically. Data sharing would be mainly used to enable access to HMRC tax credits data and information on the housing stock that would tell which houses are the coldest (e.g. from the Valuation Office Agency).

In this model, this data would not be shared with companies. Government departments or a specific contractor would work out who was entitled to the discount.
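The model described above can be sketched in a few lines. The record structures and field names below are purely illustrative assumptions, not a description of any actual government dataset; the point is the architecture: eligibility is computed inside government, and only a yes/no flag per customer ever reaches a supplier.

```python
# Hypothetical government-held records; fields are illustrative only.
benefit_records = {"alice": {"tax_credits": True}, "bob": {"tax_credits": False}}
cold_home_register = {"alice": True, "bob": True}

def eligible(person: str) -> bool:
    """Government-side eligibility check combining benefits and housing data."""
    return benefit_records[person]["tax_credits"] and cold_home_register[person]

# Only the boolean result per customer leaves the department:
flags_for_supplier = {person: eligible(person) for person in benefit_records}
print(flags_for_supplier)  # {'alice': True, 'bob': False}
```

Under this design the underlying tax credits and housing data never reach the energy company, which is exactly the guarantee we want written into the Bill.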

However, this intention is not reflected in the wording of the Digital Economy Bill and the accompanying Code of Practice. Although these codes say that this is how it’s intended to work, the wording is not tight enough.

There is nothing restraining the types of data to be shared or explaining how the process will be automated.

The risks to customers
The worry is that the government may fail to unify the discount system and instead feel obliged to share personal data with the companies so that they can work out eligibility themselves. As there is no legal barrier to direct data sharing, we believe there is a significant risk that the government will go back on its promises not to do this.

While energy companies only need to know that their customers are entitled to a discount, under the current proposals there is nothing to stop much more detailed data being shared. Tax credits and housing data could be shared with energy companies so that they can each continue to make discount calculations in their own way.

Each company would need different information, which means that they are likely to receive much more than they actually need to know about their customers’ health and economic status. It would also mean that the data belonging to people who do not qualify for the discounts could be accessed by companies, just in order to run the assessments.

One process, maximum privacy
It needs to be written into the Bill that the new powers will only be used to tell energy companies which customers must be given an automatic discount on their bill, nothing more and nothing less.

Given the advanced state of development of these proposals we should have more transparency from government on which organisation will carry out the data matching, how they will unify the eligibility criteria for all the energy companies and the exact datasets that will be used in the process. And it would not hurt to give households an opt out of receiving discounts, even if this is taken up by a small minority.

Without changes to ensure a clear, simplified process, run by an accountable organisation rather than energy companies, the Digital Economy Bill will undermine the privacy of millions of energy customers.



November 20, 2016 | Jim Killock

Website blocking will open up age verification to credit card fraud

When you legislate at break-neck speed, and fail to consult, things will go wrong. This is absolutely the case with Age Verification (AV) in the Digital Economy Bill, which now seems set to include website blocking to bolster use of AV technologies. This is likely to lead to high risks of credit card fraud and privacy abuse.

Currently the BBFC are pinning their hopes on being able to specify some kind of privacy and safety standard through their ability to regulate “arrangements” that deliver age verified material. Sites must deliver pornographic material:

in a way that secures that, at any given time, the material is not normally accessible by persons under the age of 18

The regulator can then issue guidance on:

types of arrangements for making pornographic material available that the regulator will treat as complying

The claim is that this mechanism allows the guidance to specify what kind of AV is private and secure.

However, if the BBFC are told to block "non-compliant" websites, in practice they will have to accept any system that websites use that verifies age. To do otherwise would be highly unfair: why should a site with legal material that uses its own AV system end up blocked by the BBFC?

This will especially apply to systems that require registration or credit card checks. There are plenty of paysites already, of course. These are not privacy friendly, as they strongly identify the user to the website - and they have to do this to minimise fraudulent payment card transactions. That’s all right as a matter of choice, but dangerous when it is done purely as a means of age verification.

If asking for credit card details becomes common or permissible, and a credible ask in the minds of UK citizens, then the government will have created a gold mine for criminals operating scam porn sites targeted at the UK, inviting people to supply their credit card details for “Age Verification”. In fact this could extend to all manner of sites that a criminal could claim were “blocked until you prove you’re over 18”.


Once credit card details are harvested, in return for some minimal or copyright-infringing porn access at a scam porn site, criminals can of course resell them for fraud. Another easy-to-understand abuse of this system would be criminals typo-squatting on relevant domain names and asking for a credit card to gain access. Anything that normalises the entry of credit card details into pages where the user isn’t making a payment will increase the fraudulent use of such cards. And if a website is validating credit cards to prove age, but not verifying them, then the internationally agreed standards to protect credit card data are unlikely to apply to it.
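To see why “validating” a card number proves nothing about a person, consider the Luhn checksum that payment card numbers satisfy: any syntactically well-formed number passes it, regardless of who, if anyone, holds the card. A minimal sketch (4111111111111111 is a well-known test number, not a real card):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: checksum passes
print(luhn_valid("4111111111111112"))  # False: a single-digit typo fails
```

A scam site performing this kind of check learns that the number is plausible - and therefore worth reselling - while learning nothing whatsoever about the visitor's age.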

During the committee stage of the Digital Economy Bill, we argued that the AV regulator should be highly specific about the privacy and anonymity protections, alongside the cyber security consequences. We argued for a single system with perhaps multiple providers, that would be verifiable and trusted. The government on the other hand believes that market-led solutions should be allowed to proliferate. This makes it hard for users to know which are safe or genuine. 

If website blocking becomes part of the enforcement armoury, then websites that employ unsafe but effective, or novel and unknown, AV systems will be able to argue that they should not be blocked. The BBFC is likely to have to err on the side of caution - it would be an extreme step to block an age-verifying website just because it hadn’t employed an “approved” system.

The amount of website blocking that takes place will add to the scamming problem and open up new opportunities for innovative criminals. The BBFC seems to be set to have an ‘administrative’ power to order ISPs to block. If this is the case, the policy would appear to be designed to block many websites, rather than a small number. The more blocking of sites that users encounter, the more they will get used to the idea that age verification is in use for pornography or anything that could possibly be perceived as age-restricted, and therefore trust the systems they are presented with. If this system is not always the same, but varies wildly, then there are plenty of opportunities for scams and criminal compromise of poorly-run Age Verification systems.

Security and privacy problems can be minimised, but are very, very hard to avoid if the government goes down the website blocking route. What MPs need to know right now is that they are moving too fast to predict the scale of the problems they are opening up.



November 09, 2016 | Jim Killock

Donald Trump will exert a great deal of control over GCHQ’s operations

The NSA and GCHQ are virtually joined at the hip. GCHQ shares nearly all the data it collects, and relies on US technology for its key operations.

“If there were a crisis in the relationship between the UK and the US, what risks would our shared intelligence arrangements pose?”

We asked this question in our 2015 report about the Snowden leaks. We might be about to find out the answer.

Chapter 5 of our report details the technological and data sharing integration. The Snowden documents show that Britain’s GCHQ and America’s NSA work very closely together. They are integrated in a way that means it is difficult for our Parliament to hold GCHQ to account. We rely so much on US technology and data that it poses questions for our sovereignty.

GCHQ is virtually a branch office of the NSA. It hoovers up around 30% of Internet traffic from the UK and EU, and shares it all with the USA. The agencies use the same shared hacking tools. They use the same core data analytics platforms, like XKEYSCORE.

Is sharing of UK citizens’ “bulk data” with a Trump government safe? Will Trump threaten the UK with the removal of key technologies, if our government steps out of line? Will he push the UK into taking ever greater risks and intrusions as the price for this close relationship?

GCHQ helped the NSA by tapping Google’s cables and harvesting vast amounts of personal data, according to the Snowden documents. Doing this would have been illegal in the USA, but the NSA got the data anyway. GCHQ hacked into the heart of Belgian telecoms at Belgacom, with US co-operation. The NSA even paid GCHQ £100m to keep its data harvesting operations open when their budget was cut.

Will Trump be asking GCHQ to do more of the same? Is our government capable of resisting these requests, when they are made in secret, and the cost of resistance could be cutting off tools they rely on?

Oversight of this state of dependency between the UK and USA is woeful in the UK. If we want our future to be safe, it is time to rethink how surveillance is governed and overseen.

Please join ORG today and help us keep fighting against extreme surveillance laws like the Investigatory Powers Bill.

