Blog


November 25, 2016 | Ed Johnson-Williams

TfL needs to give passengers the full picture on WiFi collection scheme

Transport for London is running a trial that uses people's mobile phones to track crowd movement around 54 London Underground stations. We think they have to do a better job of communicating to passengers what the trial is, what the data will be used for, and how people can opt out.

When a device has WiFi turned on, it broadcasts a unique identifier called a MAC address. By tracking where they detect the MAC addresses of potentially millions of people’s devices a day, TfL want to analyse crowding and travel patterns to improve their services. TfL say they are not identifying individuals or monitoring browsing activity.
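To make the mechanism concrete, here is a minimal sketch of how any receiver in range can log these broadcasts. It assumes a Linux machine with the scapy library installed and a wireless card already switched into monitor mode; the interface name is hypothetical.

    # Illustrative sketch only: log the MAC addresses that nearby devices
    # broadcast in WiFi probe requests.
    from datetime import datetime
    from scapy.all import sniff, Dot11ProbeReq

    def log_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            # addr2 is the MAC address of the transmitting device
            print(datetime.now().isoformat(), pkt.addr2)

    sniff(iface="wlan0mon", prn=log_probe, store=False)

An operator that records sightings like these at access points across 54 stations can correlate one MAC address across locations and reconstruct a device's movements between them.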

TfL WiFi data collection sign

TfL are alerting passengers to the scheme with this sign. (Text of the sign is at the end of this blog.)

Unfortunately, it misses three crucial points that passengers need in order to understand a) how the scheme works, b) all the purposes the data is being collected for, and c) how to opt out.

  1. TfL are tracking people's movement around London and around stations
  2. Passengers have to turn off WiFi on all the devices they are carrying to opt out. If they leave WiFi switched on but never use the WiFi network, they will still be tracked.
  3. The data will be used to find and set prices for advertising spaces in stations, in addition to improving services

How to complain

If you don't like this and want to complain, you can do so directly to TfL using Facebook, Twitter or email.

Tell TfL to ensure they properly inform passengers:

  • about how the scheme works and what they'll use the data for;
  • that passengers need to disable WiFi on all their devices to opt out.

The Information Commissioner’s Office (ICO) guidance on WiFi location analytics says that:

  • "Clear and prominent information is one way to alert individuals that certain processing is taking place."
  • "The information should clearly define...the defined purposes of the processing"
  • "Data controllers should consider the use of:
    • signage at the entrance to the collection area;
    • reminder information throughout the location where data is being collected;
    • information on their websites and in any sign-up or portal page of the Wi-Fi network they may be providing; and
    • detailed information to explain how individuals can control the collection of personal data using the settings on their device."

We will be asking the ICO for its view on the signage used by TfL to alert passengers to this scheme and whether it meets this guidance. We have already contacted TfL with the points made here.

There are a number of other issues with the way TfL has implemented this scheme.

It is very difficult for passengers to find out about the scheme beyond the limited information on TfL’s sign. The sign links to tfl.gov.uk/privacy, which is the Privacy Policy for the TfL website rather than the page about the WiFi data collection scheme. On a mobile screen - which nearly all passengers in a station will be using - the link to the page about the scheme is at the very bottom of the page. That page contains all the details about what the data is being used for and how to opt out. It seems unlikely that many people will actually read this information.

Even if passengers turn off WiFi on their phone in an attempt to opt out, TfL may still track them using the MAC addresses broadcast by tablets or laptops they are carrying. Many tablets and laptops, including Apple iPads and MacBooks, broadcast a MAC address when WiFi is enabled - even when the devices are in Sleep mode. While passengers may find it easy to disable WiFi on their phone, turning off WiFi on a laptop in a busy Tube station is much harder, making it hard to opt out.

It is not clear that passengers are alerted soon enough or often enough about the scheme. The signs at London Bridge Underground station were placed near the ticket barriers, not at the entrances to the station or throughout the station. London Bridge is only one of the 54 stations taking part in the study, and signs may be placed differently in other stations.

Passengers will be within range of TfL’s WiFi quite some way before seeing the signs and may not see the signs in the crowded ticket barrier area. Travellers who enter the Underground at a station which isn't part of the scheme will be unlikely to see a sign and TfL may collect data about them without having informed them about the scheme.

To alleviate some of these issues, we would like TfL to:

  • ensure they inform all passengers about the scheme
  • inform passengers of all the purposes this data will be used for
  • tell passengers to turn off WiFi on all the devices they are carrying if they want to opt out of the scheme
  • ensure signs are placed at the entrances to stations and throughout the stations

ORG has previously warned of the privacy risks of TfL’s Oyster card system. Although the data in this new scheme is not linked to data in the Oyster system, it is clear that TfL has not lost its appetite for monitoring passengers.

Text of TfL's sign


WiFi data collection

We are collecting WiFi data at this station to test how it can be used to improve our services, provide better travel information and help prioritise investment.

We will not identify individuals or monitor browsing activity.

We will collect data between Monday 21 November and Monday 19 December

For more information visit: tfl.gov.uk/privacy



November 25, 2016 | Pam Cowburn

No one expects spam for Christmas

Debenhams, Topshop, Argos and Next are just some of the High Street shops that have started to offer their customers e-receipts when they pay for goods.

Mary and Christ with tins of spam, no one expects spam for Christmas

The benefits of an e-receipt, at least according to the shops, are that it's more convenient for customers, who don't have to worry about losing their receipts, and of course it's better for the environment to use less paper.

But according to a Daily Mail investigation, many of these shops could be breaking data protection law because they are failing to give customers the full picture of how their email addresses are being used. The Mail found that shop staff did not always explain that email addresses could be used for marketing, and that some customers were sent marketing emails even though they had expressly asked not to receive them.

In some instances, this could be a lack of training for staff on the tills. However, it’s clear that shops see e-receipts as an opportunity to gather data about their customers. The Mail reported that one sales rep “admitted to getting a bonus if he collected email addresses”. The benefits to businesses are not just that they can build their email lists but that they can get insights into their customers based on their purchases.

This kind of email collection is not just taking place in shops. ORG was recently contacted by Nullig, who was bombarded with unsolicited marketing emails after she bought something over the phone from Debenhams. She told us:

“I was asked over the telephone for my email address in case there were any issues with the delivery of my order. I was not asked if my email could be used for any other purposes and yet almost immediately I started getting marketing spam from Debenhams. It took 13 emails and three months of complaining for this to stop.”

The legal position

The law is very clear. At the point that email addresses are collected, customers need to be given 'a simple means of refusing' any future direct marketing emails. The Information Commissioner has also clarified that: “Whenever customer information is collected there must be a clear explanation given of how their information will be used.” Shops need to ensure that staff are fully trained so that they comply with the law.

What can you do?

You can refuse an e-receipt and ask for a paper one.

If you prefer e-receipts, you don’t have to sign up for marketing emails. The sales assistant should explain how your email will be used. If they don’t, tell them that you don’t want your email to be used for marketing.

If you are unhappy about how your data has been collected and used, contact the ICO.



November 24, 2016 | Javier Ruiz

DEBill loophole will allow energy companies to be given your tax and benefits data

Over the last two years, ORG has taken part in discussions about data sharing with the Government, convened by the Cabinet Office. These included examining ways for low-income households to get support with their energy bills.

ORG, of course, supports the idea of helping people lower their energy bills, but we are concerned that for this to happen, companies will be given sensitive information about their customers. During the discussions, ORG called for a solution that prevented data from being shared with companies. Unfortunately these demands have not been fully translated into the proposed Digital Economy Bill and the draft Codes of Practice.

As a result, the proposals that are currently in front of Parliament could see energy companies being given huge amounts of sensitive personal data about their customers.

Who gets energy discounts?
People who receive certain kinds of benefits are entitled to discounts on their energy bills. This could be because they are on a low income, receive support for a particular health condition and/or have young children.

However, to date the Government has refused to take full responsibility for deciding who qualifies for the discount, other than pensioners, who get it automatically.

Each company has its own bafflingly complex criteria and asks for different documents to prove that you are eligible. For example, OVO and EON ask for Medical Exemption certificates but British Gas don’t.

As well as complicating the process, this system acts as a barrier to consumers getting the best deal. The Government promotes switching suppliers to stimulate lower prices, but having to provide personal documents to prove eligibility for a discount every time they change supplier is likely to deter many people from switching.

The obvious solution is to simplify the process, using data sharing powers to extend the automatic discounts that are currently given to pensioners to other disadvantaged groups.

How would automatic discounts work?
In a meeting on 6 January 2016, Alan Clifford from the recently disbanded Department for Energy and Climate Change (DECC) presented their proposals to expand the automatic provision of direct energy bill support for citizens living in fuel poverty.

Clifford explained that the new data sharing powers would build on existing powers under the Welfare Reform Act 2012 to allocate benefits automatically. Data sharing would mainly be used to enable access to HMRC tax credits data and to information on the housing stock that would indicate which houses are the coldest (e.g. from the Valuation Office Agency).

In this model, this data would not be shared with companies. Government departments or a specific contractor would work out who was entitled to the discount.
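To illustrate the separation this model relies on, here is a hypothetical sketch (all names and record layouts are invented): the matching runs inside government or its contractor, and suppliers receive only a yes/no flag per customer, never the underlying tax or housing data.

    # Hypothetical sketch of the data-minimising model described above.
    def assess_eligibility(tax_credit_records, cold_home_records):
        """Runs inside government or its contractor; sees the raw data."""
        on_tax_credits = {r["customer_id"] for r in tax_credit_records}
        in_cold_home = {r["customer_id"] for r in cold_home_records}
        return {cid: True for cid in on_tax_credits & in_cold_home}

    def share_with_supplier(flags, supplier_customer_ids):
        """All the supplier learns: a boolean per customer, nothing more."""
        return {cid: flags.get(cid, False) for cid in supplier_customer_ids}

Nothing about a customer's income, health or housing leaves the matching body; the supplier simply applies the discount where the flag is True.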

However, this intention is not reflected in the wording of the Digital Economy Bill and the accompanying Code of Practice. Although these codes say that this is how it’s intended to work, the wording is not tight enough.

There is nothing restraining the types of data to be shared or explaining how the process will be automated.

The risks to customers
The worry is that the government may fail to unify the discount system and instead feel obliged to share personal data with the companies so that they can work out eligibility themselves. As there is no legal barrier to direct data sharing, we believe there is a significant risk that the government will go back on its promises not to do this.

While energy companies only need to know that their customers are entitled to a discount, under the current proposals there is nothing to stop much more detailed data being shared. Tax credits and housing data could be shared with energy companies so that they can each continue to make discount calculations in their own way.

Each company would need different information, which means that they are likely to receive much more than they actually need to know about their customers’ health and economic status. It would also mean that the data belonging to people who do not qualify for the discounts could be accessed by companies, just in order to run the assessments.

One process, maximum privacy
It needs to be written into the Bill that the new powers will only be used to tell energy companies which customers must be given an automatic discount on their bill, nothing more and nothing less.

Given the advanced state of development of these proposals, we should have more transparency from government on which organisation will carry out the data matching, how the eligibility criteria will be unified across all the energy companies, and the exact datasets that will be used in the process. It would also not hurt to give households an opt-out from receiving discounts, even if only a small minority take it up.

Without changes to ensure a clear, simplified process, run by an accountable organisation rather than energy companies, the Digital Economy Bill will undermine the privacy of millions of energy customers.

 



November 20, 2016 | Jim Killock

Website blocking will open up age verification to credit card fraud

When you legislate at break-neck speed, and fail to consult, things will go wrong. This is absolutely the case with Age Verification (AV) in the Digital Economy Bill, which now seems set to include website blocking to bolster use of AV technologies. This is likely to lead to high risks of credit card fraud and privacy abuse.

Currently the BBFC are pinning their hopes on being able to specify some kind of privacy and safety standard through their ability to regulate “arrangements” that deliver age verified material. Sites must deliver pornographic material:

in a way that secures that, at any given time, the material is not normally accessible by persons under the age of 18

The regulator can then issue guidance on:

types of arrangements for making pornographic material available that the regulator will treat as complying

The claim is that this mechanism allows the guidance to specify what kind of AV is private and secure.

However, if the BBFC are told to block "non-compliant" websites, in practice they will have to accept any system that a website uses to verify age. To do otherwise would be highly unfair: why should a site with legal material, which uses its own AV system, end up blocked by the BBFC?

This will especially apply to systems that require registration or credit card checks. There are of course plenty of paysites already. These are not privacy-friendly, as they strongly identify the user to the website - and they have to do this to minimise fraudulent payment card transactions. That’s alright as a matter of choice, but dangerous when it is done purely as a means of age verification.

If asking for credit card details becomes common or permissible - a credible ask in the minds of UK citizens - then the government will have created a gold mine for criminals operating scam porn sites targeted at the UK, inviting people to supply their credit card details for “Age Verification”. In fact you could see this being extended to all manner of sites that a criminal could claim were “blocked until you prove you’re over 18”.

Verified by Visa fraud

Once credit card details are harvested, in return for access to some minimal or copyright-infringing porn at a scam site, criminals can of course resell them for fraud. Another easy-to-understand abuse: criminals could typo-squat on relevant domain names such as youporm.com and ask for a credit card to gain access. Anything that normalises the entry of credit card details into pages where the user isn’t making a payment will increase the fraudulent use of such cards. And if a website is merely checking that card details look valid to prove age, rather than verifying them with the card issuer, then the internationally agreed standards that protect credit card data are unlikely to apply to it.
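To see how little such a "validation" proves: card numbers satisfy the Luhn checksum, which anyone can compute offline. A minimal sketch follows; passing this check proves neither ownership of a real card nor the holder's age.

    # Illustrative sketch: the Luhn checksum that card numbers satisfy.
    def luhn_valid(card_number: str) -> bool:
        digits = [int(d) for d in card_number if d.isdigit()]
        total = 0
        # From the right, double every second digit; subtract 9 if over 9.
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))  # True - a well-known test number

A scam site can "validate" a card this way without ever contacting a payment network, while still harvesting the full card number for resale.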

During the committee stage of the Digital Economy Bill, we argued that the AV regulator should be highly specific about the privacy and anonymity protections, alongside the cyber security consequences. We argued for a single system with perhaps multiple providers, that would be verifiable and trusted. The government on the other hand believes that market-led solutions should be allowed to proliferate. This makes it hard for users to know which are safe or genuine. 

If website blocking becomes part of the enforcement armoury, then websites that employ unsafe but effective, or novel and unknown, AV systems will be able to argue that they should not be blocked. The BBFC is likely to have to err on the side of caution - it would be an extreme step to block an age-verifying website just because it hadn’t employed an “approved” system.

The amount of website blocking that takes place will add to the scamming problem and open up new opportunities for innovative criminals. The BBFC seems set to have an ‘administrative’ power to order ISPs to block. If so, the policy would appear to be designed to block many websites rather than a small number. The more blocked sites users encounter, the more they will get used to the idea that age verification is in use for pornography, or anything that could plausibly be perceived as age-restricted, and the more they will trust the systems they are presented with. If the system is not always the same but varies wildly, there are plenty of opportunities for scams and criminal compromise of poorly-run Age Verification systems.

Security and privacy problems can be minimised, but are very, very hard to avoid if the government goes down the website blocking route. What MPs need to know right now is that they are moving too fast to predict the scale of the problems they are opening up.

 



November 09, 2016 | Jim Killock

Donald Trump will exert a great deal of control over GCHQ’s operations

The NSA and GCHQ are virtually joined at the hip. GCHQ shares nearly all the data it collects, and relies on US technology for its key operations.

“If there were a crisis in the relationship between the UK and the US, what risks would our shared intelligence arrangements pose?”

We asked this question in our 2015 report about the Snowden leaks. We might be about to find out the answer.

Chapter 5 of our report details the technological and data sharing integration. The Snowden documents show that Britain’s GCHQ and America’s NSA work very closely together. They are integrated in a way that means it is difficult for our Parliament to hold GCHQ to account. We rely so much on US technology and data that it poses questions for our sovereignty.

GCHQ is virtually a branch office of the NSA. It hoovers up around 30% of Internet traffic from the UK and EU, and shares it all with the USA. The agencies use the same shared hacking tools and the same core data analytics platforms, such as XKEYSCORE.

Is sharing of UK citizens’ “bulk data” with a Trump government safe? Will Trump threaten the UK with the removal of key technologies, if our government steps out of line? Will he push the UK into taking ever greater risks and intrusions as the price for this close relationship?

According to the Snowden documents, GCHQ helped the NSA by tapping Google’s cables and harvesting vast amounts of personal data. Doing this would have been illegal in the USA, but the NSA got the data anyway. GCHQ hacked into the heart of Belgian telecoms at Belgacom, with US co-operation. The NSA even paid GCHQ £100m to keep its data harvesting operations open when GCHQ’s budget was cut.

Will Trump be asking GCHQ to do more of the same? Is our government capable of resisting these requests, when they are made in secret, and the cost of resistance could be cutting off tools they rely on?

In the UK, oversight of this state of dependency on the USA is woeful. If we want our future to be safe, it is time to rethink how surveillance is governed and overseen.

Please join ORG today and help us keep fighting against extreme surveillance laws like the Investigatory Powers Bill.



November 03, 2016 | Jim Killock

Age verification for porn sites is tricky so let's try censorship

A cross-bench group of MPs has tabled an amendment to block pornographic websites that fail to provide ‘age verification’ technologies.

The amendment has been tabled because MPs understand that age verification cannot be imposed upon the entire mostly US-based pornographic industry by the UK alone. In the USA, age verification has been seen by the courts as an infringement on the right of individuals to receive and impart information. This is unlikely to change, so use of age verification technologies will be limited at best.

However, the attempt to punish websites by blocking them is also a punishment inflicted on the visitors to those websites. Blocking them is a form of censorship: an attempt to restrict access to them for everyone.

When material is restricted in this way, it needs to be done for reasons that are both necessary for the goal and proportionate to the aim. It has to be effective in order to be proportionate.

The goal is to protect children, although the level of harm has not been established. According to Ofcom: “More than nine in ten parents in 2015 said they mediated their child’s use of the internet in some way, with 96% of parents of 3-4s and 94% of parents of 5-15s using a combination of: regularly talking to their children about managing online risks, using technical tools, supervising their child, and using rules or restrictions.” (1)

Meanwhile, 70% of households have no children. These factors make the necessity and proportionality of both age verification and censorship quite difficult to establish. The issue affects the other 30% of households, who can choose to apply filters and use other strategies to keep their children safe online.

It is worth remembering also that the NSPCC and others tend to accept that teenagers are likely to continue to access pornography despite these measures. They focus their concerns on 9-12 year olds coming across inappropriate material, despite a lack of evidence about the volume of such incidents, or that harm has resulted. While it is very important to ensure that 9-12 year olds are safe online, it seems more practical to focus attention directly on their online environment, for instance through filters and parental intervention, than to attempt to make the entire UK Internet conform to standards that are acceptable for this age group.

That MPs are resorting to proposals for website blocking tells us that the age verification proposals themselves are flawed. MPs should be asking about the costs and privacy impacts, and why such a lack of thought has gone into this. Finally, they should be asking what they can do to help children through practical education and discussion of the issues surrounding pornography, which will not go away, with or without attempts to restrict access.

(1) Ofcom report on internet safety measures: Strategies of parental protection for children online, Ofcom, December 2015, p. 7: http://stakeholders.ofcom.org.uk/binaries/internet/fourth_internet_safety_report.pdf



November 02, 2016 | Jim Killock

Facebook is right to sink Admiral's app

Firstcarquote aimed to stick Admiral’s famous spyglass right up your Facebook feed.

Late yesterday, on the eve of Firstcarquote’s launch, Facebook revoked the application’s permission to use Facebook data.

According to Admiral’s press release, their app would use “social data personality assessments, matched to real claims data, to better understand first time drivers and more accurately predict risk.” So young people could offer up their Facebook posts in the hope of getting a reduction in their car insurance.

However, their application has been found to be in breach of Facebook's Platform Policy section 3.15, which states:

Don’t use data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.

Firstcarquote’s site says:

“We were really hoping to have our sparkling new product ready for you, but there’s a hitch: we still have to sort a few final details.”

Like persuading Facebook to change their Platform Policy.

There are significant risks in allowing the financial or insurance industry to base assessments on our social media activity. We might be penalised for our posts or denied benefits and discounts because we don’t share enough or have interests that mark us out as different and somehow unreliable.  Whether intentional or not, algorithms could perpetuate social biases that are based on race, gender, religion or sexuality. Without knowing the criteria for such decisions, how can we appeal against them? Will we start self-censoring our social media out of fear that we will be judged a high risk at some point in the future?

These practices could not only change how we use platforms like Facebook but also have the potential to undermine our trust in them. It is sensible for Facebook to continue to restrict these activities, despite patents indicating that they may themselves wish to monetise Facebook data in this kind of way. 

Insurers and financial companies who are beginning to use social media data need to engage in a public discussion about the ethics of these practices, which allow a very intense examination of factors that are entirely non-financial.

Companies like Admiral also need to think about how using such fluid personal information leaves their system vulnerable to being gamed.  How hard would it be to work out what “likes” Admiral views as favourable, or unfavourable, and alter your profile accordingly? What we regard as a chilling effect could also turn out to be an incentive to cheat.
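Probably not hard at all. Here is a toy sketch, with entirely invented likes and weights, of how an applicant could discover the "cheap" likes by trial and error, without ever seeing the insurer's model:

    # Toy sketch with invented weights: probing a likes-based risk score.
    TOY_WEIGHTS = {
        "chess": -0.4,        # negative weight = lowers predicted risk
        "poetry": -0.3,
        "energy drinks": 0.5,
        "street racing": 0.9,
    }

    def risk_score(likes):
        """Hypothetical insurer-side scorer: baseline plus like weights."""
        return 1.0 + sum(TOY_WEIGHTS.get(like, 0.0) for like in likes)

    # Applicant-side: request a quote, add one like, request again, and
    # keep whatever lowers the score. The weights are never revealed.
    profile = {"energy drinks"}
    for candidate in ["chess", "poetry", "knitting"]:
        if risk_score(profile | {candidate}) < risk_score(profile):
            profile.add(candidate)
    print(profile)  # now padded with 'chess' and 'poetry'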

We must also recognise that these problems may confront us in the future, as a result of the forthcoming changes brought in by the General Data Protection Regulation (GDPR). The government is clear that this will enter UK law regardless of Brexit, which is sensible.

The GDPR creates many new rights for people, one of which is the famous right to delete your data, and another is the right to obtain all of your information at no cost, in electronic format, called “data portability”.

Data portability creates significant risks as well as benefits. It could be very hard to stop some industries from attempting to abuse the trust of individuals: asking them to share their data wholesale to obtain discounts or favourable deals, while perhaps not being completely upfront about the downsides to the consumer.

There are extra protections in the GDPR around profiling, and particularly important is the right to have information deleted if you find you have “overshared”.

Nevertheless, Admiral’s application shows a lack of understanding of the risks and responsibilities in parts of the financial industry. Indeed, Admiral appear not to have done the basics: reading Facebook’s terms and conditions, or considering the capacity for their product to be gamed. If this disregard is symptomatic, it may point to a need for sector-specific privacy legislation for the financial industry, to further protect consumers from abuse through the use of inappropriate or unreliable data.

 



October 27, 2016 | Jim Killock

Now we want censorship: porn controls in the Digital Economy Bill are running out of control

The government’s proposal for age verification to access pornography is running out of control. MPs have worked out that attempts to verify adults’ ages won’t stop children from accessing non-compliant pornographic websites, so their proposed answer is to start censoring these websites.

That’s right: in order to make age verification technologies “work”, some MPs want to block completely legal content from access by every UK citizen. It would have a massive impact on the free expression of adults across the UK. The impact for sexual minorities would be particularly severe.

This only serves to illustrate the problems with the AV proposal. Age verification was always likely to be accompanied by calls to block “non-compliant” overseas websites, and also to be extended to more and more categories of “unsuitable” material.

We have to draw a line. Child protection is very important, but let’s try to place this policy in some context:

  • 70% of UK households have no children

  • Take-up of ISP filters is around 10-30% depending on the ISP - roughly in line with expectations, and already restricting content in the majority of households with children (other measures may be restricting access in other cases).

  • Most adults access pornography, including a large proportion of women.

  • Less than 3% of children aged 9-12 are believed to have accessed inappropriate material

  • Pornography can and will be circulated by young people via email, portable media and private messaging systems

  • The most effective protective measure is likely to be education that helps young people understand and regulate their own behaviour - which the government refuses to make compulsory

MPs have to ask whether infringing on the right of the entire UK population to receive and impart legal material is a proportionate and effective response to the challenges they wish to address.

Censorship is an extreme response that should be reserved for the very worst, most harmful kinds of unlawful material: it impacts not just the publisher but the reader. Yet this is supposed to be a punishment targeted at publishers, in order to persuade sites to “comply”.

If website blocking were to be rolled out to enforce AV compliance, the regulator would be forced to choose between blocking a handful of websites, and failing to “resolve” the accessibility of pornography, or trying to censor thousands of websites, with the attendant administrative burden and increasing likelihood of errors.

You may ask: how likely is this to become law? Right now, Labour seem to consider this approach quite reasonable. If Labour did support these motions in a vote, together with a number of Conservative rebels, this amendment could easily be added to the Bill.

Another area where the Digital Economy Bill is running out of control is the measures targeting services that “help” pornography publishers. The Bill tries to place duties on “ancillary services”, such as card payment providers or advertising networks, to stop non-compliant publishers from making money from UK customers. However, the term is vague. They are defined as someone who:

provide[s], in the course of a business, services which enable or facilitate the making available of pornographic material or prohibited material on the internet by the [publisher]

Ancillary services could include website hosts, search engines, DNS services, web designers, hosted script libraries, furniture suppliers … this needs restriction just for the sake of some basic legal certainty.

Further problems arise for services such as Twitter, which operate on the assumption that adults can use them to circulate whatever they like, including pornography. It is unclear if or when they might be caught by the provisions. They are also potentially “ancillary providers” who could be forced to stop “supplying” their service to pornographers serving UK customers. They might therefore be forced to block adult content accounts for UK users, with or without age verification.

The underlying problem starts with the strategy of controlling access to widely used and legal content through legislative measures. This is not a sane way to proceed. It has led, and will lead, to further calls for control and censorship as the first steps fail. More calls to “fix” the holes follow, and the UK ends up on a ratchet of increasing control. Nothing quite works, so more fixes are needed. The measures get increasingly disproportionate.

Website blocking needs to be opposed, and kept out of the Bill.

 
