The Digital Self

Speech given to the EU's Civil Liberties Committee.

5 March 2009

Jim Killock, Executive Director, Open Rights Group

The major concern of the Open Rights Group has been privacy impacts, particularly the problems arising in the relationships between individuals, business and government. We deal with these issues primarily in a UK context.

The previous panel will have dealt with the regulatory landscape, but from our point of view it is worth noting again that many of the threats to individuals actually emanate from poor interventions and poor enforcement of our rights at state level.

Enhancements of human rights and the digital self

But to start, I’d like to briefly review some of the opportunities and enhancements afforded to the digital self on the internet.

We can see that the first effects of widespread internet access have been, on the whole, extremely positive, reinforcing the exercise of many of our fundamental rights, particularly freedom of expression.

Wider aspects, including political participation, democratic engagement and - to an extent - transparency have all been enhanced.

Governments have tended to be slow in taking up the opportunities. In the UK, much government data is still not available online. There is currently a campaign from the Guardian newspaper to ‘Free our Data’ so we can enjoy the full democratic and economic benefits of the studies and information the government collects and pays for. There are notable parallels in the US.

But society moves quicker than governments, and blogging, social networking and the collection of public information in Wikipedia have all helped produce a new generation of opportunities, reinforcing and widening participation in what we traditionally call ‘civil society’. The ability to make political interventions has become part of everyday online activity for thousands of citizens.

Projects such as mySociety have been key to leveraging engagement between citizens and government, with voting records, petition sites and means to contact elected officials.

Data and Privacy Threats

However, as the digital age has permeated society, we have found other effects, particularly on privacy, which have the potential to undermine our human rights.

These changes are as much to do with legal frameworks, and with a lack of care and attention to these issues, as they are to do with the potential of technology for good or ill.

Let’s first look at the basics of digital technology. In digital form, data is cheaper and cheaper to store. It is easier to copy. It is more difficult to destroy. And it is easier to keep for long periods.

For government and private industry, data is easier to analyse, and it can be collected and stored in more and more situations. The temptation to link, share, analyse and store information about us becomes ever greater.

There are many threats to individuals that emerge from this:

  • There are potential problems with identity fraud and crime.
  • There are clear problems with managing consent around use of our data.
  • There is the potential for abuse of our privacy on an individual level.
  • There is the potential for miscarriages of justice.

And there is the pervasive sense of citizens being judged on their past behaviour and circumstances – being treated as statistics.

CRM (Customer Relationship Management) systems, which amass our data and predict our likely future behaviour on the basis of our past behaviour, may minimise risks to businesses, but they also reduce human dignity. We have probably all experienced being denied a service at some time or other on the basis of our prior circumstances, rather than our personal need today.

Thanks to a popular comedy sketch show, this has become a pervasive joke in the UK: “Computer says no.”

Applying the same logic to service provision by governments is a severe worry. It may be possible, for instance, to identify potential criminals, or perhaps the likely long-term unemployed, through data mining or data patterns. But would it be wise? And when would finding information on individuals or communities turn into simple discrimination? Clear lines may be difficult to find.

Mission creep and unregulated surveillance

Governments are also finding that new services offer unrelated surveillance potential. Rather than thinking through and legislating privacy regulations to cover these new services, governments often succumb to the temptation to use these technologies for reasons that go far beyond those for which they are introduced.

For example, the Oyster Card, Transport for London (TfL)'s electronic ticketing system, retains centralised logs of individuals' journey details on an eight-week rolling basis, before anonymising the data and retaining it for research purposes.

Such data have never been collected before, and have the potential to present a detailed picture of an individual’s private life. The merits of centrally storing such detailed personalised data are not immediately clear from the perspective of functionality, so it is unclear why this feature was built into the system. Why, if the data are primarily being used for research purposes (as TfL states), are they not anonymised immediately?
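
To make the alternative concrete, here is a minimal sketch, in Python, of how journey records could be pseudonymised at the point of collection rather than after eight weeks. The schema and field names are hypothetical, not TfL’s actual system; the point is simply that research value can be preserved without storing card identifiers centrally.

    # Hypothetical sketch: anonymising journey records at the point of
    # collection, rather than after an eight-week retention window.
    # Field names are illustrative, not TfL's actual schema.
    import hashlib
    import secrets

    # A random per-deployment salt; if it is discarded or rotated, even
    # the operator cannot re-link records to a card number.
    SALT = secrets.token_bytes(16)

    def anonymise(record: dict) -> dict:
        """Strip the card identifier before the record is stored centrally."""
        token = hashlib.sha256(SALT + record["card_id"].encode()).hexdigest()
        return {
            "pseudonym": token,  # still supports aggregate research queries
            "entry_station": record["entry_station"],
            "exit_station": record["exit_station"],
            "timestamp": record["timestamp"],
        }

    journey = {"card_id": "0123456789", "entry_station": "Brixton",
               "exit_station": "Victoria", "timestamp": "2009-03-05T08:42"}
    print(anonymise(journey))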

A Freedom of Information request to TfL revealed that between August 2004 and March 2006 TfL received 436 requests from the police for Oyster card information. Of these, 409 requests were granted and the data was released to the police.

In this example, a whole new data set, capable of revealing detailed pictures of individuals’ private lives, has been created without public consultation or even a justifiable public function. Through it, the police have gained greater investigative powers, again without public or Parliamentary consent.

Transaction records, such as the data collected by TfL’s Oyster Card scheme, are a powerful surveillance tool, and their collection, storage and sharing should ideally not take place at all.

This detailed record of people’s movements extends far beyond the monitoring powers of the secret police in the less technologically equipped authoritarian states of thirty years ago.

Where such records are collected, routine access must be strictly limited to a few fully accountable individuals within the organisation. Sharing with law enforcement agencies must continue to take place only on a case-by-case basis. This is an established principle in European data protection law; in the UK, it is transposed as the second principle of the Data Protection Act.
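
What “strictly limited and fully accountable” access could mean in practice can be sketched in a few lines of Python. The names here are hypothetical: a gate that admits only a short list of named officers, requires a specific case reference for every query, and writes each decision to an audit log.

    # Illustrative sketch of accountable access to transaction records.
    import logging

    logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    # Deliberately short list of named, accountable officers.
    AUTHORISED_OFFICERS = {"officer.a", "officer.b"}

    def run_query(query):
        return []  # placeholder for the real records database

    def fetch_records(requester, case_ref, query):
        """Every request must name a requester and cite a specific case."""
        if requester not in AUTHORISED_OFFICERS:
            logging.warning("REFUSED %s case=%s query=%s", requester, case_ref, query)
            raise PermissionError("not an authorised officer")
        if not case_ref:
            raise ValueError("access must cite a specific case")
        logging.info("GRANTED %s case=%s query=%s", requester, case_ref, query)
        return run_query(query)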

We are concerned that future projects, such as the UK's National Identity Register, or the Schengen Information System II, will tend to over-collect transaction records without thinking through the ethical implications of owning such detailed pictures of individuals' lives.

Judicial oversight should also be reinforced, along with appeals mechanisms, which are poorly understood and little used.

We see this approach in the UK again with automatic number plate recognition (ANPR) technologies. What data will be retained, and who will be able to access tracking information? We are concerned that the civil liberties implications are again being sidelined. Already, the decision has been made to retain the information collected for between one and five years, without a serious public debate. Only Privacy International, which has complained to the ICO, has mounted a serious challenge to this additional accumulation of data.

In this context, the pressure being applied by the UK government for large-scale data sharing, and its potential for surveillance, again raises serious civil liberties concerns. A think tank close to the government recently outlined the potential of data mining techniques for national security. The question of proportionality should come into play in all of these debates, limiting these activities.

Data Security and databases

Governments have also shown they are capable of creating more prosaic threats to our digital selves, in their repeated accidental data losses, which have ultimately led to serious threats of identity fraud.

But we shouldn’t regard these data losses as simply being accidents that can be dealt with by better training, or bolt-on security measures.

It is often the underlying technological products and the design of the systems themselves that have led to the problems.

We find that computer security, on which privacy depends, is complicated, difficult and easily ignored. Bruce Schneier, the well-known security expert, says of security and encryption: “the mathematics are impeccable, the computers are vincible, the networks are lousy and the people are abysmal.”

Privacy and security design are system-wide problems. The latent potential for losses in very large databases with large numbers of users is obvious. As we said in our submission to the UK government on data sharing:

“To a trained computer scientist, building secure databases that centrally hold large-scale datasets and grant access to many hundreds of users is intuitively infeasible.”

Unfortunately this is a message that politicians do not seem able or willing to hear. We went on to say:

“Unfortunately, in the UK, trained computer scientists are poorly represented in key positions such as Parliament, the senior civil service and the media. Indeed, those trained computer scientists who do raise questions about the Government's plans to centralise ever-larger data sets in databases to which hundreds of civil servants have access have historically been sidelined or ridiculed by this Government, which eagerly listens to the sales pitches of commercial technology providers.”

These problems remain the case. On the point about trained computer scientists within government, the Information Commissioner’s Office confirmed, at a meeting arranged by the Open Rights Group only last weekend, that it has yet to hire computer technology specialists, although it is finally reviewing this approach and looking to hire “when resources allow”. In this context, other EU data protection authorities, such as the Schleswig-Holstein state authority and the European Data Protection Supervisor, do a better job.

Security is not easy to retrofit. We need to get it right first time round. So we were glad to hear the ICO call for ‘Privacy by Design’ last November. But we are less glad to hear unofficial complaints from IT specialists that specifications for the security components of major government IT projects remain low.
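
A back-of-envelope calculation shows why databases with many hundreds of users are “intuitively infeasible” to secure. Assume, purely for illustration, that each authorised user independently carries a small annual probability p of compromise through phishing, carelessness or coercion; the probability that at least one account is breached in a year is then 1 - (1 - p)^N, which races towards certainty as the user base N grows:

    # Assumed independence and an illustrative p; the numbers are not a
    # measurement, only an intuition pump.
    p = 0.01  # annual chance any one authorised user is phished or careless
    for n in (10, 100, 500, 1000):
        print(f"{n:>5} users: P(at least one compromise) = {1 - (1 - p)**n:.1%}")

At a p of just one per cent, a database with a thousand authorised users is all but certain to suffer at least one compromised account each year.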

Interception of communications data

Government interception, storage and examination of communications data is another area of serious concern for the Open Rights Group and many other civil liberties organisations.

In the UK, ISPs and telecoms companies retain traffic data for set lengths of time, in compliance with UK law, laundered into European law by the UK and her allies. This, we are told, is vital for combating terrorism and crime, though little evidence is offered. The data is accessed for a whole range of other purposes, with insufficient judicial oversight and weak appeals mechanisms.

Data retention is sold as maintaining government surveillance capacity to access communications data. In some circumstances, such as VOIP telephony, this may be true, but in others, such as email, SMS and web browsing, this represents a massive extension of government powers.

Twenty years ago, nobody was sending and receiving 50-200 letters a day, people were not commonly members of twenty or thirty interest groups, and we did not gossip with friends via post-it notes and send them across town.

Nobody was recording what books and newspapers you read.

Yet now all of these communications are potentially available through electronic records. These are extensions of our private lives into electronic and semi-public forums. Retaining parts of this data represents a massive de facto intrusion into our privacy, especially if it is examined, but in some sense even if it is not.
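
To see why retention alone is intrusive, consider what a single retained traffic-data record might contain. The fields below are illustrative, not any operator’s actual schema; note that there is no message content at all:

    # Illustrative example of retained 'traffic data' for a single email:
    # no content, yet sender, recipient, timing and size alone map out
    # relationships and habits over time.
    retained_record = {
        "from": "member@example.org",
        "to": "organiser@campaign-group.example",
        "timestamp": "2009-03-05T08:42:00Z",
        "size_bytes": 18432,
        "client_ip": "203.0.113.7",
    }
    # A year of such records is a social graph of who talks to whom,
    # how often and when -- the intrusion exists before anyone reads it.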

The idea that digital footprints of these activities exist is permeating into the public consciousness. That may over time be altering our behaviour – and not in a good way. Have you ever stumbled upon an indecent image? What if I visit the wrong sort of radical Islamic website? What if I look at an extreme nationalist website? How might any of this be construed, if someone decided to look?

You do not need to be paranoid to start limiting your behaviour if you feel there may be a reason you might be monitored.

In a time of stress, such as the Troubles in Northern Ireland, or for groups on the edges of political respectability, one can easily imagine that one’s sense of privacy, or one’s actual privacy, may be invaded.

Business and privacy

From this rather depressing picture I’d like to turn to the private sector.

In the private sector, the pressures are different, and in a sense we might expect greater responsibility with our data, since maintaining customer confidence is extremely important. Unlike with government, if we don’t like what a company does, we can go elsewhere. So it was gratifying to see consumer power force Facebook to revert its Terms of Service last month, after widespread public protest at Facebook’s claimed right to use personal data for commercial purposes, even after the termination of user accounts.

However, there is another imperative to make increasing use of personal data, which is simply to make new profits. While it is reasonable to want to make money, it is of the utmost importance that data are stored and handled transparently, with consent, and with the possibility of easy revocation.

There are some basic regulatory problems here. Companies are frequently based outside the EU, especially in the USA, where data protection regimes are very different. Access to US databases is guaranteed to the US government under the Patriot Act, whatever the Federal Trade Commission Safe Harbor provisions or Binding Corporate Rules say. In the US, data belongs to the data holder, rather than to the individual.

Secondly, people simply don’t read Terms and Conditions, or shrink-wrapped click-through agreements, at all. And then the terms get changed, often without notice. This whole set-up needs serious re-examination, to make licensing of all kinds easier to understand.

The Creative Commons licence icons are a simple-to-understand model of how legalese can be translated into user-friendly terms. I believe some thinking is being done along these lines by some of those involved in the Internet Bill of Rights project. If successful, this work should be taken up and promoted at EU level.

There are the beginnings of a serious market in technologies that enhance people’s personal privacy. PGP/GPG email encryption is a useful technology that could gain wider use. Microsoft, for example, has acquired the U-Prove technology, which attempts to provide proof of identity while minimising the personal data transferred. As private sector contributions, these are significant and useful.
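
As a sketch of how accessible such technology already is, the following Python fragment uses the python-gnupg wrapper to encrypt a message so that only the holder of the recipient’s private key can read it. It assumes the GnuPG binary is installed and the recipient’s public key has already been imported; the address is illustrative.

    import gnupg  # the python-gnupg wrapper; needs the gpg binary installed

    gpg = gnupg.GPG()  # uses the default local keyring
    # 'alice@example.org' is illustrative; her public key must already be
    # in the keyring for encryption to succeed.
    encrypted = gpg.encrypt("Meet at 6pm.", recipients=["alice@example.org"])
    if encrypted.ok:
        print(str(encrypted))  # ASCII-armoured ciphertext, safe to email
    else:
        print("encryption failed:", encrypted.status)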

Behavioural advertising

On Tuesday, the IAB (Internet Advertising Bureau) released its UK Code of Conduct, covering both Phorm and cookie-based systems. It sets out an ‘opt out’ regime for behavioural advertising which, in our view, seriously undermines the principle of opting out, by forcing users to opt out again each time they change browser, change computer or delete their cookies. In effect, this becomes a form of ‘pester-ware’.

In Data Protection, we believe that once a choice is made, that choice should be respected. Users should not have to reassert their choice. If the technology does not support this, then the technology should be constructed and used differently.
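
The fragility is easy to see in outline. In the hypothetical sketch below (Python, using the Flask web framework; the cookie name and handlers are invented for illustration), the absence of a cookie is treated as consent, so a user who clears their cookies or changes browser is silently profiled again:

    from flask import Flask, request

    app = Flask(__name__)

    def record_for_profiling(req):
        pass  # placeholder for the profiling pipeline

    @app.route("/")
    def page():
        # The opt-out lives in a cookie, so its absence is ambiguous: a
        # deliberate choice to be tracked, a cleared cookie jar, or a new
        # browser all look identical -- and the default is to profile.
        if request.cookies.get("bt_optout") == "1":
            return "No profiling for this request."
        record_for_profiling(request)
        return "Behaviourally targeted page."

    if __name__ == "__main__":
        app.run()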

However, it is just as important to look at the type of data being collected and ask whether opt-in or opt-out is more appropriate. We do not believe that ‘opting out’ is reasonable when the information collected is your browsing history, or parts of it – in effect, a window into your soul.

This is information you should actively choose to hand over. Not asking users to make that choice, and then pushing the system upon those who don’t like the idea, seriously risks undermining the reputation of participating brands.

Being given the right to opt out is not the same thing as giving consent; no data should be collected for profiling without the specific informed consent of the subject and of anyone else whose data is used. Where sensitive personal data is collected – which under EU rules includes medical and health data, sexual history, religious beliefs and trade union membership – opt-in is the legally required standard.

This new code highlights the problems of voluntary regulation: it sets minimum standards that suit companies rather than consumers, and which in our opinion bend Data Protection compliance beyond commonsense notions of consent.

The technology Phorm similarly breaks the idea of consent.

At one end, the ISP’s user may give informed consent. Web communications, however, are a two-way process: I ask for content, you send me content.

Websites may contain personal, although publicly available, data; or even data that is hidden from general public view.

Phorm does not ask for the consent of website owners to have their communications data examined, but instead treats websites as purely public ‘broadcast’ material, like a newspaper or television programme.

If you visit Facebook, a web forum, MySpace or YouTube, you will see that this is not the case: the content is highly personalised to the individual user.

The owners of those websites should be able to protect their customers from having their data intercepted and examined. And if they agree to Phorm’s use of their websites, they should be obliged to inform their customers that they have agreed to allow Phorm to examine their data.

Unfortunately Phorm only acts to gain consent from one end of the communication, which is why we believe that Phorm may be an illegal technology.
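
The asymmetry can be stated in a few lines. In this deliberately simplified Python sketch (not Phorm’s actual implementation), the profiler checks consent from the subscriber only, while the material it mines is the website’s half of the conversation, which may be personalised to a logged-in user:

    def profile_exchange(subscriber_opted_in, request_url, response_body):
        """Hypothetical profiler: consent is checked at one end only."""
        if not subscriber_opted_in:
            return set()
        # The response body is the website's half of the conversation --
        # often personalised to a logged-in user -- yet neither the site
        # nor the people whose data it carries were ever asked.
        return {word.lower() for word in response_body.split() if len(word) > 5}

    print(profile_exchange(True, "https://example.org/messages",
                           "Private message from Alice about Saturday"))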

So again, we find advertisers stretching data protection and consent beyond commonsense notions.

The European Commission is investigating the UK government’s approval of the Phorm system.

Summary

Governments need to place privacy at the heart of their considerations. They need to hire IT specialists at all levels.

The private sector needs to be held properly to account, by data protection officials and by citizens, and must also make its relations with consumers more transparent. Companies need to reflect on the damage to their brands that data and privacy problems create. Intrusive technologies like Phorm should simply never be on a serious brand’s agenda.

A sense of liability for data breaches and security problems needs to be established. The economic model should perhaps begin to reflect the risks inherent in poorly secured software. ‘Privacy by design’ should be our mantra. We welcome the idea of placing human rights at the centre of the debate about the future of the internet, but add that enforcement of our existing rights is the key problem, as Steve Peers’ report emphasises and the data retention debate shows.