EE and the sale of user data: does anonymisation work?

EE said in response that “most” of the data consists of large, aggregated datasets of around 50 users. However, EE’s customers currently don’t know how or when their data might be aggregated or made available in an anonymised form.

Anonymising datasets rarely prevents re-identification. For instance, Nature highlights research showing “in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier’s antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals.”
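To see why so few points suffice, here is a rough sketch of that “unicity” measurement on purely synthetic traces. All the numbers (user count, antenna count, trace length) are invented for illustration, and the traces are random rather than real mobility data; the 95% figure above comes from actual carrier records, which are more regular still:

```python
import random

random.seed(42)

N_USERS = 1000      # people in the synthetic dataset (illustrative)
N_ANTENNAS = 200    # spatial resolution: one cell per antenna (illustrative)
N_HOURS = 24 * 7    # one week of hourly locations

# Each user's trace: the antenna they were seen at, for each hour.
traces = [[random.randrange(N_ANTENNAS) for _ in range(N_HOURS)]
          for _ in range(N_USERS)]

def unicity(points: int, trials: int = 500) -> float:
    """Fraction of sampled users uniquely identified by `points`
    random (hour, antenna) observations drawn from their own trace."""
    unique = 0
    for _ in range(trials):
        target = random.randrange(N_USERS)
        hours = random.sample(range(N_HOURS), points)
        obs = [(h, traces[target][h]) for h in hours]
        # Count how many users in the whole dataset match all observations.
        matches = sum(1 for t in traces
                      if all(t[h] == a for h, a in obs))
        if matches == 1:
            unique += 1
    return unique / trials

for p in (1, 2, 4):
    print(f"{p} point(s): {unicity(p):.0%} of users uniquely identified")
```

Even on these featureless random traces, unicity climbs steeply: one observed point matches many users, but by four points essentially every sampled user is the only match in the dataset. No names or phone numbers are needed, which is the point the research makes about “anonymised” location data.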

Cambridge research on network identification shows similar results.

In response to these publicly aired concerns, the CEO of Ipsos MORI offered data to researchers:

Ben Page, Ipsos MORI ‏@benatipsosmori
@PlanetJamie39 @PaulbernalUK @patrick_kane_ I don’t see why not. Should publish peer reviewed paper on this data

But there are better answers to this problem than waiting for a public outcry. These are:

  1. Ask for users’ permission before offering their anonymised data, and make this a legal requirement in data protection law, which is helpfully being debated right now.
  2. Open anonymisation techniques to peer review, so the best brains can help spot mistakes. Such approaches are already standard in security software, e-voting software, and of course in Open Source software more widely.
  3. Offer “responsible disclosure” mechanisms so that people who spot mistakes can report them, letting data providers fix the problem.

Mobile companies are not the only ones playing with fire in this way. Government data initiatives, which cover personal health, education and benefits data, are even more worrying.

If you want to do something today, why not ask your MEP for strong data protection, as a first step?