May 29, 2014 | Javier Ruiz

How will government share your data?

Today ORG attended an important meeting between government and civil society groups to discuss Data Sharing across government.

The Cabinet Office has started an early pre-consultation process looking at removing barriers to sharing or linking different databases across government departments. The rationale is that this can help Government “design and implement evidence based policy, for example to tackle social mobility, assist economic growth and prevent crime”.

Open Policy Making

This engagement is part of the new “open government” approach, where groups such as ORG, Big Brother Watch, MedConfidential and No2ID are consulted very early in the process. This means that many things under discussion may never happen, and it would be pointless to air them. This approach is quite new, so ORG has agreed not to disclose detailed discussions until the proposals take more shape, in order to allow a safe space for frank debate. There is a public paper outlining the proposals so far on the website, and we have asked for more information to be published more often, including minutes of meetings and evidence presented. The process is open to anyone, and we could certainly do with more participation from civil society groups.

Concerns about Data Sharing

After the PR disaster around the release of medical information in the care.data programme, and more recently the sharing of tax data with private companies by HMRC, the government is acutely aware of the sensitivity of these proposals. And for good reason: connecting databases gives government officials a richer picture of an individual’s life. This is a clear interference with the right to privacy that must be shown to be necessary and proportionate.

Data sharing within government tends to be a complicated process involving lengthy legalities. Some of this friction may be unnecessary formality, but part of the friction is also a safeguard against abuses. There is a public interest in making government more efficient, but removing too many checks and balances could also remove basic protections.

Moving to an extreme sharing-by-default position could fundamentally transform the relationship between citizens and the state, almost as much as the introduction of a national ID card.

Some will argue that the prize is worth the risk. Underlying these proposals is an assumption that more and better data will automatically translate into better outcomes. But this is far from clear, and we will be looking for detailed explanations of how exactly more data sharing will help and what exact changes are needed. The ideas considered so far include both new legislation and practical measures. New laws should only be introduced when it is clear that the problem cannot be solved by simpler means.

But we don’t have to support the status quo either. From what we’ve heard so far, data sharing can clearly be improved. The whole thing is perceived as arcane by public employees who have not been trained on how data protection works. Nobody in government knows how many data sharing agreements there are in place, and streamlining the process could allow for more transparency and consistency. It could even lead to less data being shared but used more efficiently.

Data sharing should be based on a general principle of consent. Where possible this should be individual, informed, lawful consent, though that is clearly not applicable in areas such as taxes and criminal justice. Other cases will require a social consent, much as policing in the UK is based on consent. But this is complicated. Perceptions of privacy are context dependent. We must be careful not to assume that a willingness to share personal details on social media automatically translates into lower concerns about the sharing of data on tax, health, education or social security. Privacy attitudes are also heavily dependent on exposure and direct experience, such as a media scandal or a close relative suffering identity theft. So what appears to be acceptable today may cause outrage tomorrow.

Government Proposals for Data Sharing

There are three main strands covered by the current proposals; all the information is on the Data Sharing website.

1. Research and statistics

This strand brings together two distinct proposals that relate to existing policy development elsewhere:

Office for National Statistics (ONS) to access more data from public authorities

There has been a long consultation on the future of the census, which has recommended an end to the paper questionnaires, with a predominantly online census from 2021 supplemented by further use of administrative and survey data.

ONS would be receiving more data held by other parts of Government. The Statistics and Registration Services Act 2007 could be amended to authorise the disclosure of information held by public authorities to ONS for statistical purposes. The Cabinet Office argues that “information from HMRC, for example, could allow ONS to improve the quality and speed of estimates of GDP”.

Sharing of de-identified data for research

In many cases, research on Government and public body data is limited to the analysis of single data sets. A report by the Administrative Data Taskforce, Improving Access for Research and Policy, recommended a model of data sharing that allowed for cross-linked research on de-identified data.

The government has presented several examples where such research could be useful:

“identifying pathways to success, and barriers to social mobility by linking data on education, employment status and income. Improve energy efficiency and save citizens money by linking data on energy use with property data; Help deliver targeted crime prevention strategies.”

Improving evidence-based policy and national statistics are worthy goals, but there should be proper safeguards against re-identification and a guarantee that any sharing will ultimately benefit the public.

Something we have learnt from the recent data sharing scandals is that taxpayers and users of the NHS don’t necessarily care about the technical details of how their identities are protected. They are angry about commercial entities profiting from their personal data - even if de-identified - and worried about negative consequences, such as hikes in insurance premiums.

Current proposals will need to address these very real concerns, which may fall outside the remit of privacy legislation. For example, using statistical data for targeted crime prevention strategies could easily turn into unfair profiling of sectors of the population, even if no individual is ever identified.
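To make the re-identification concern concrete: one common (though by itself insufficient) safeguard is to check that every combination of "quasi-identifiers" in a released dataset is shared by several records, so no individual stands out. A minimal sketch in Python; the field names and records here are hypothetical, purely for illustration:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size over all quasi-identifier combinations.

    A dataset is k-anonymous if every combination of quasi-identifier
    values (e.g. postcode district, age band) appears at least k times,
    so no record can be singled out on those fields alone.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical de-identified records: names removed, quasi-identifiers kept.
records = [
    {"postcode": "BN1", "age_band": "30-39", "benefit": "housing"},
    {"postcode": "BN1", "age_band": "30-39", "benefit": "none"},
    {"postcode": "SW1", "age_band": "60-69", "benefit": "pension"},
]
k = k_anonymity(records, ["postcode", "age_band"])
# k == 1 here: the single SW1/60-69 record could be re-identified by
# anyone who knows one person of that age in that postcode district.
```

Even a dataset that passes such a check can leak information once it is linked with other datasets, which is precisely what these proposals would encourage; hence the need for safeguards beyond the technical.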

2. Tailored public services

The heading of “tailored services” is slightly confusing, as it would appear to relate to the delivery of personalised services to individuals already in receipt of benefits. But our understanding is that it includes mixing datasets to identify and refine target groups. This has completely different privacy implications.

The Cabinet Office defines the proposals very broadly as new “powers to allow organisations to share data around specific groups of citizens who use multiple public services for the purposes of improving their health, education and employment”.

Examples presented by government include:

  • Data sharing between departments and local authorities to target energy efficiency measures and fuel poverty grants, reducing mortality rates and hospital admissions amongst vulnerable groups;

  • Better identification of families requiring more assistance and targeting of services and support, reducing costs to government and delivering better outcomes for those most in need.

The idea is to create a framework for new data sharing channels that are flexible and broad enough to survive specific policy initiatives but narrow enough to be clearly focused on specific outcomes. But each new data sharing channel would still need to comply with data protection, so this flexibility should be limited.

For now we are exploring what a generic new instrument for data sharing would look like, and trying to understand the existing frameworks and obstacles to sharing. Ultimately, the intrusiveness of a specific sharing arrangement will depend on the exact datasets and access involved in each proposal. This makes it very difficult to discuss a generic new "power" for data sharing.

The government is exploring safeguards with civil society, including “transparency of data shares so that the public are fully informed of the process”. But this is not enough. Transparency is important, and for ORG one of the best outcomes of this process would be clearer processes and some form of register of data sharing. But transparency is no substitute for protection against harms in the first place. In some cases, not sharing may be the best safeguard.

In our meetings we are also finding that many of the problems with data may not be directly related to a lack of sharing, but to the implementation and use of data. For example, we heard complaints about file formats that could not be opened without specialist software. In other cases, where sharing is an issue, we have heard that the law itself is not the problem. Clashes between departmental cultures and refusals to implement what is already legally available seem to be important issues that could be solved without creating a new "data sharing legal power".

There are many open questions about which agencies would be covered, and proposals need to be analysed individually to ensure that there is a need for, or benefit in, data sharing. For some people, concerns about stigmatisation and potential profiling may outweigh any benefits of being included in a programme. These people should have the choice not to be part of the process. This has been happening with free school meals, where many parents of eligible children prefer not to tell the schools.

By focusing on public services we are generally dealing with vulnerable groups. We have to be careful to avoid paternalistic attitudes that create a two-tier system where some citizens have lower privacy protections than others based on socio-economic circumstances.

3. Fraud, error and debt (FED)

The government believes that data sharing would dramatically reduce the estimated £37 billion lost to FED each year. They describe the status quo as “an inconsistent patchwork quilt of legislation that is difficult and time-consuming to navigate”.

The idea is that new “permissive gateways” would allow new datasets to be shared as needed, while limiting the organisations involved and the purposes. This will have to include at least DWP and HMRC, but the idea seems to be quite a flexible and very ambitious system where

“any public authority or organisation providing services of a public nature on behalf of a public organisation could apply to join the lists of those who can share data for these purposes. The addition would be made by secondary legislation.”

Everyone has to contribute their fair share to the public finances, but these proposals have clear and huge privacy implications. We would hope to see well-developed evidence of benefits to justify such an intrusive system; so far we have only seen projections of huge savings from limited pilot studies by the Cabinet Office’s FED Task Force.

We need to decide what level of FED, with a corresponding level of intrusion, we as a society are prepared to accept. Making FED completely disappear is virtually impossible without creating a totalitarian dystopia. Besides, a lot of tax money appears to be lost to legal avoidance schemes by companies and high net worth individuals. It is unclear how much these would be affected by a rewrite of the privacy rule book for ordinary citizens.

These proposals could have popular appeal. It is increasingly socially unacceptable to abuse social security benefits, although a lot less so to avoid paying the right amount of tax. But in any case, it is unclear how targeted this data sharing would be. It is very possible that the personal data of large groups of the population would be shared and processed to find the minority of people defrauding the exchequer.

The special case of HMRC

Many of the above proposals involve data sharing from HMRC, and we believe that this should be dealt with separately. There is clear social concern after The Guardian published an article alerting readers to proposed changes to HMRC's statutes that allegedly would allow it to sell data to commercial companies. More than 300,000 people have signed petitions by ORG, 38 Degrees and SumofUs asking HMRC to reconsider.

While other ministries can share data under common law powers, HMRC is unique in its legal constraints to protect taxpayer confidentiality, as explained in their consultation documents for the proposed changes:

HMRC was created by the Commissioners for Revenue and Customs Act 2005 (CRCA). This legislation provides strong protection for the information that HMRC holds. HMRC officials are prohibited from sharing information except in the limited circumstances set out in the CRCA. This legislation enshrines the core principle of what is often described as ‘taxpayer confidentiality’.

The CRCA prohibition on disclosure applies to all of HMRC’s information including non-identifying (general, aggregate or anonymised) information as well as information on identifiable individuals or legal entities. As a result, it is arguable that for non-identifying information the current disclosure restrictions afford more protection than is necessary.

These fundamental changes should be properly discussed against concerns that private companies will access taxpayers’ data.

There could be some good reasons to share more HMRC data: for example, aggregated postcodes of the home and workplace of employees could be used to improve transport planning. And the gender pay gap is a lot smaller in Scandinavia, where tax information is publicly available. But it is not clear that all data sharing will have a public benefit. Selling such unique data to credit agencies and other big institutions will only entrench the asymmetry of information against citizens.

We need more people to help shape data sharing

ORG are coming into the process with an open mind and trying to influence the outcomes. We would love to be able to say that we helped create sensible proposals, but we will never endorse something that goes against our principles.

The pre-consultation process is open to anyone with an interest. At present this is largely civil society organisations, mainly focused on privacy, but we would encourage more people to join in. It would be particularly useful to have more participation from groups and individuals directly affected or working with the target groups - vulnerable families, NEETs, ex-offenders; experts in taxes and fraud, etc. Privacy advocates normally lack the detailed knowledge of these domains.

The meeting today marks the end of the initial round of the open policy making process. We will be discussing the results so far and our next steps as soon as the notes are published on the Data Sharing website.


May 22, 2014 | Jim Killock

Please vote for a digital rights candidate today

Today is European election day, and if you're still not sure who to vote for, you might want to check out the map to see if any of your MEP candidates have committed to protecting digital rights.

wepromise is a Europe-wide campaign, which ORG and 35 other organisations have signed up to. It's a simple idea: candidates promise to support a Charter of 10 digital rights principles, citizens promise that they will vote for candidates who have signed the Charter.

It's really important that we get as many pro-digital rights MEPs as possible so that we can push digital rights up the European agenda. Key issues like retaining our privacy online, controlling our personal data, owning the media we buy, and having a free and open Internet are often most fiercely debated in the European Parliament. Europe is arguably the most important layer of government for digital issues, so it is vital that we elect people who are willing to listen.

WePromise is a great way of finding out which MEPs are willing to make a stand. But we've also been questioning and filming MEP candidates directly at political hustings in Brighton, Bristol, London, Manchester, Norwich and Sheffield. Some candidates have been very impressive; others perhaps less so. It makes a difference who we elect, which is much less about their political allegiance than about the work they are prepared to do when they get to the Parliament.

Please make sure you vote today, and vote for candidates that care about digital rights!


May 15, 2014 | Javier Ruiz

Landmark ruling by European Court on Google and the "Right to be Forgotten"

The European Court of Justice has concluded that Google has to delete search results linking to outdated but lawful content in order to protect the data protection rights of individuals.

The European Court of Justice has published a landmark ruling forcing Google to remove some search results related to Mr Costeja González, a Spanish national, after he claimed the linked information was outdated and irrelevant, giving a wrong impression of him. The links pointed to an archived newspaper page containing a public notice by a tax authority for the auction of Mr Costeja's home to cover debts related to a business.

The ruling has very far reaching implications and has generated conflicting opinions among digital rights advocates. It introduces some very positive - and some quite dangerous developments.

It is good news that internet companies such as Google, that operate in Europe but are headquartered elsewhere, will now have to comply with data protection laws and take responsibility for the data they process.

The court has also upheld that the "right to be forgotten" exists in European privacy law. But it has not fully considered the need to balance this "right" with the right to freedom of expression. This could create the potential for abuse by individuals who wish to hide damaging information. The court may have created a weak spot for censorship: rather than trying to get a website taken down, where the bar for proving libel or other harms is high, it might be easier to ask Google to remove the search results under data protection laws.

It is worth noting that the ruling will have an immediate impact in Spain, where hundreds of similar requests are awaiting resolution, but it will take some time to spread across other European legal systems. For now the ruling creates a precedent, and an incentive for Google to agree to requests for the removal of personal information without full consideration of freedom of expression.

1. Google has to fully comply with European data protection

Google has long claimed that it did not have data protection obligations in Europe because Google Inc is the US company that holds the data, while local subsidiaries in Spain or the UK only run commercial activities. For example, Google offers you the option to download some personal data such as emails via their automated tools, but not to request all the information they hold on you, as they would have to do under EU law. The ruling demolishes this position and makes Google responsible if they "advertise and sell" in a member state.

2. Search engines are data controllers

Search engines have generally been seen as simply reproducing existing information. Most discussions on Google and personal data have looked at services such as email, location, etc. but not search results. The ruling defines the activities of search engines in relation to webpages with personal information - indexing, storing and making available - as "processing" under the terms of EU data protection law. Furthermore, Google is the "controller" that "determines the purposes and means of the processing".

3. The "right to be forgotten" already exists in European law

The "right to be forgotten" is based on the premise that outdated and irrelevant information can give a distorted picture of an individual, for example, preventing them from getting a job. This is a very real concern, but there is also a need to preserve a record of social history. Archivists are concerned that this right could mean rewriting history.

There have been fierce debates about the introduction of this right in new legislation, but the court has cut through the knot and found that no new legislation is needed. Existing laws, requiring the personal information that companies hold on people to be relevant and accurate, can be used to enforce this right.

The court did not fully engage with all the problematic wider implications of this right, as it simply considered search results and not the actual deletion of records. Many feel this is a cop-out. Such a major ruling on the "right to be forgotten" should lay out criteria for when and how obsolete or distorting personal information should be removed from the public record, or at least made less accessible. This could mean, for example, stopping the indexing of public records and online archives by search engines and other processors. But it could also mean taking whole digital archives offline.

4. Being a data controller has far reaching implications

Labelling Google a data controller for search results goes beyond the right to delete information. It creates a seismic shift in the responsibilities that Google has to the people whose information appears in its searches. Given that Google indexes pretty much the whole public internet, this could affect anyone who is named on any website. For example, EU law places constraints on controllers around the export of data to countries with lower privacy protections. Data controllers have to give "data subjects" a copy of all the information they hold on them. There is even speculation that this could mean people can now object to receiving adverts when they use the platform.

It remains to be seen how this can work in practice with millions of results involving potentially hundreds of people sharing the same name.

5. Publicly available information is subject to data protection

This ruling is a reminder that many internet intermediaries are not exempt from data protection responsibilities. Just because personal information is publicly available does not mean that you can do with it as you wish. This issue was considered in relation to open data by the Article 29 Working Party.

According to the court, the right to data protection of individuals trumps the "mere economic interest of the manager of the search engine" unless there is an explicit public interest. This will be important for many online projects processing personal data.

The court also mentions the need to consider the right to access the information of internet users, particularly if the person affected has a role in public life. But this balancing of privacy and freedom of expression is not really explored in the ruling.

What is clear from this and previous rulings is that public figures will have to accept a lower expectation of privacy, and generally should not be able to get their information deleted so easily.

6. Search engines have a separate responsibility from publishers

It appears surprising that the ruling supports the Spanish data authorities in allowing the original offending article to remain in place, while forcing Google to delete references to it. The ECJ makes a clear distinction between search engines and the publication of the information. This is consistent with the application of the principles above, but it creates a potential weak spot for online censorship.

A key aspect of this ruling is that it doesn't relate to libellous or defamatory information. It censors lawful content containing personal information because that content may cause detriment to individuals when processed by search engines:

  • search engines can combine lawful information to generate a completely new insight. The court sees the search results relating to a person as a personal profile. This is not a neutral list of links because the information is organised (e.g. ranking, possibly removal of duplicate results, etc.)

  • search engines provide access to outdated information that before would simply disappear into dusty archives nobody visits, but now lingers on in accessible webpages. Without search engines you would need to know what you were looking for and make a special visit.


7. Google will now be tempted to remove links rather than contest requests

It is hard to evaluate the balance of competing rights involved in these cases. The ruling does not help Google decide in future cases. How old do websites have to be to become irrelevant, how public should a person be, how do you judge the public interest?

Should Google decide on this balance of rights? It is very unclear how the rights of the publisher will be safeguarded in an internal process by a private company. As a general principle, removal of websites, or search links, should be decided by a legal authority, not a business.

We are particularly concerned that the path of least resistance for Google will be to automate the removals. For Google it will be cheaper to delete links automatically and let others complain later on, than to consider the balance of rights in every request.

If any content has to be censored, with due process and consideration for the right to freedom of expression, this should be more consistent across the board.

It may not be the intention, but the ruling appears to create a lower barrier for censoring search results than for hosting. Freedom of expression in the 21st century is not just about the right to publish, but also about being found online.


May 14, 2014 | Ed Johnson-Williams

ORG hands in petition saying no to HMRC's tax data sell off

We handed in our tax data sell-off petition to HMRC earlier today, along with ORG Advisory Council member Julian Huppert MP and campaign groups 38 Degrees and SumofUs. The Guardian's just put a story up covering the petition hand-in.

Over 300,000 people signed petitions, which were started by ORG, 38 Degrees and SumofUs after we found out that HMRC was considering sharing anonymised tax data for commercial research. We're concerned that under these plans it is very difficult to give or withdraw consent about what happens to our tax data. It is also not at all clear that selling tax data to companies is truly in the public interest.

ORG is currently engaged with HMRC and the Cabinet Office in discussions around the sharing of personal data held by the Government. We'll keep you updated with how that's going.

Thanks to everyone who signed the petition.

Handing in the HMRC petition



May 13, 2014 | Jason Kitcat

Guest blog: Estonia and the risks of internet voting

In my capacity as an ORG Advisory Council member I've been working with an independent team of election observers researching the Internet voting systems used by Estonia. Why should anyone in the UK be interested in this?

Two reasons. Firstly, Estonia is regularly held up as a model of e-government and e-voting that many countries, including the UK, wish to emulate. Secondly, after years of e-voting being off the UK agenda (thanks in part to ORG's previous work in this area), the chair of the Electoral Commission recently put the idea of e-voting for British elections back in play.

Before our or any other government leaps to copy the Estonian model, our team wanted to better understand the strengths and weaknesses of the Estonian system. So several of us monitored the internet voting in operation for Estonia's October 2013 municipal elections as official observers accredited by the Estonian National Election Committee. Subsequently the team used the openly published source code and procedures for the Estonian system to build a replica in a lab environment at the University of Michigan. This enabled detailed analysis and research to be undertaken on a replica of the real system.

Despite the system being built on Estonia's impressive national ID smartcard infrastructure, we were able to find very significant flaws in the Estonian internet voting system, which they call "I-voting". Several serious problems were identified:

Obsolete threat model

The Estonian system uses a security architecture that may have been adequate when the system was introduced a decade ago, but it is now dangerously out of date. Since the time the system was designed, state-level cyberattacks have become a very real threat. Recent attacks by China against U.S. companies, by the U.S. against Iran, and by the U.K. against European telecoms demonstrate the proliferation and sophistication of state-level attackers. Estonia itself suffered massive denial-of-service attacks in 2007 attributed to Russia.

Estonia’s system places extreme trust in election servers and voters’ computers — all easy targets for a foreign power. The report demonstrates multiple ways that today’s state-level attackers could exploit the Estonian system to change votes, compromise the secret ballot, disrupt elections, or cast doubt on the fairness of results.

Abundant lapses in operational security and procedures

Observation of the way the I-voting system was operated by election staff highlighted a lack of adequate procedures for both daily operations and handling anomalies. This creates opportunities for attacks and errors to occur and makes it difficult for auditors to determine whether correct actions were taken.

Close inspection of videos published by election officials reveals numerous lapses in the most basic security practices. They appear to show the workers downloading essential software over unsecured Internet connections, typing secret passwords and PINs in full view of the camera, and preparing election software for distribution to the public on insecure personal computers, among other examples. These actions indicate a dangerously inadequate level of professionalism in security administration that leaves the whole system open to attack and manipulation.

Serious vulnerabilities demonstrated

The authors reproduced the e-voting system in their laboratory using the published source code and client software. They then attempted to attack it, playing the role of a foreign power (or a well resourced candidate willing to pay a criminal organization to ensure they win). The team found that the Estonian I-voting system is vulnerable to a range of attacks that could undetectably alter election results. They constructed detailed demonstration attacks for two such examples:

Server-side attacks: Malware that rigs the vote count

The e-voting system places complete trust in the server that counts the votes at the end of the election process. Votes are decrypted and counted entirely within the unobservable “black box” of the counting server. This creates an opportunity for an attacker who compromises this server to modify the results of the vote counting.

The researchers demonstrated that they can infect the counting server with vote-stealing malware. In this attack, a state-level attacker or a dishonest election official inserts a stealthy form of infectious code onto a computer used in the pre-election setup process. The infection spreads via software DVDs used to install the operating systems on all the election servers. This code ensures that the basic checks used to ensure the integrity of the software would still appear to pass, despite the software having been modified. The attack’s modifications would replace the results of the vote decryption process with the attacker’s preferred set of votes, thus silently changing the results of the election to their preferred outcome.
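The "basic checks" the report refers to are essentially fingerprint comparisons: a cryptographic hash of the installed software is compared against a known-good value published in advance. A minimal sketch of the technique in Python (the file paths and reference value would be supplied by election officials; nothing here is taken from the actual Estonian tooling):

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 fingerprint of a file, reading in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """True only if the file matches the published known-good hash."""
    return sha256_of(path) == expected_hex
```

The attack above works despite such checks because the malware controls the machine performing them: it can simply report the expected fingerprint regardless of what is actually installed. A hash comparison is only meaningful when run from an independent, trusted environment against a reference value obtained through a separate channel.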

Client-side attacks: A bot that overwrites your vote

Client-side attacks have been proposed in the past, but the team found that constructing fully functional client-side attacks is alarmingly straightforward. Although Estonia uses many security safeguards — including encrypted web sites, security chips in national ID cards, and smartphone-based vote confirmation — all of these checks can be bypassed by a realistic attacker.

A voter’s home or work computer is attacked by infecting it with malware, as millions of computers are every year. This malicious software could be delivered by pre-existing infections (botnets) or by exploiting operational security lapses to compromise the voting client before voters download it. The attacker’s software would observe a citizen voting and silently steal the PIN codes required to use the voter’s ID card. The next time the citizen inserts the ID card — say, to access their bank account — the malware can use the stolen PINs to cast a replacement vote for the attacker’s preferred candidate. This attack could be replicated across tens of thousands of computers. Preparation could begin well in advance of the election by using a replica of the I-voting system, as the team did for their tests.

Insufficient transparency to establish trust in election outcomes

Despite positive gestures towards transparency — such as releasing portions of the software as open source and posting many hours of videos documenting the configuration and tabulation steps — Estonia’s system fails to provide compelling proof that election outcomes are correct. Critical steps occur off camera, and potentially vulnerable portions of the software are not available for public inspection. (Though making source code openly available is not sufficient to protect the software from flaws and attacks.) Many potential vulnerabilities and forms of attack would be impossible to detect based on the information provided to the public. So while the researchers applaud attempts at transparency, ultimately too much of how the I-voting system operates is invisible for it to convince skeptical voters or candidates of the outcomes.

To illustrate this point, the team filmed themselves carrying out exactly the same procedural steps that real election officials show in nearly 24 hours of videos from the 2013 elections. However, due to the presence of malware injected by the team before the recordings started, their count produced a dishonest result.

Recommendation: E-voting should be withdrawn

After studying other e-voting systems around the world, the team was particularly alarmed by the Estonian I-voting system. It has serious design weaknesses that are exacerbated by weak operational management. It has been built on assumptions which are outdated and do not reflect the contemporary reality of state-level attacks and sophisticated cybercrime. These problems stem from fundamental architectural problems that cannot be resolved with quick fixes or interim steps.

While we believe e-government has many promising uses, the Estonian I-voting system carries grave risks — elections could be stolen, disrupted, or cast into disrepute. In light of these problems, our urgent recommendation is that to maintain the integrity of the Estonian electoral process, use of the Estonian I-voting system should be immediately discontinued.

Our work shows that despite a decade of experience and advanced e-government infrastructure, Estonia is unable to provide a secure e-voting system. We believe other countries, including the UK, should learn from this that voting is a uniquely challenging service to provide online whilst maintaining the fundamental requirements of fair elections: secrecy of the vote, security and accuracy. The significant cost of attempting to build such a system would be better directed at other forms of e-government, which can provide greater and more reliable benefits for citizens without risking the sanctity of elections.

Read and watch more about this work at



May 08, 2014 | Jim Killock

Lobby tries to kill private copying with demand for iPod tax

For well over ten years we have been arguing for a private copying exception, to legalise the everyday consumer behaviour of copying music to computer disks. Despite the fact that copyright industry groups have always said they'd never sue anyone, they claim that an exception would cause substantial damage that requires compensation.

Right now, both the private copying exception and parody appear to be delayed. The draft Statutory Instruments are now being discussed by a joint committee and the government in a rather opaque process.

The argument from publisher lobby groups is that European law requires compensation for economic harm arising from copyright exceptions. The UK government has so far, reasonably, argued that any harm would be minimal. Negligible might be more accurate. The change to the law would have little impact on people's behaviour. It would merely legalise what many people already do: copy the music they have legally bought from one device to another.

So what would the damage be? How many people will stop buying second copies of music if an exception is introduced? Probably nearly nobody, we imagine.

To put it another way, how much should you have to pay for a private copy of your own music and films? The BPI says that with a private copying exception, “fair compensation must be granted to rights holders”. UK Music says that “the exception cannot lawfully be made without fair compensation”.

The British Copyright Council says that "The private copying exception does not include a fair compensation mechanism as required by EU law (Article 5(2)(b) Information Society Directive); the harm by private copying is neither minimal nor priced in [to existing sales] … The BCC supports the introduction of a private copying exception for protected works in the UK, but any such exception should provide for fair compensation to rights owners which is limited to copying from physical products.” 

What could compensation look like? In Spain, between 2008 and 2011, any “non-excluded” hard disk carried a €12 levy; a mobile phone €1.10; a 70ppm photocopier €227; and multifunction printers from €7.95 to €10. Disks used to boot computers were excluded.
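Per-unit levies like these add up quickly across an ordinary purchase. As a rough sketch using the Spanish figures quoted above (the device mix is hypothetical):

```python
# Per-unit levies under the 2008-11 Spanish scheme, in euros,
# as quoted in the article. The device mix below is hypothetical.
LEVY = {
    "hard_disk": 12.00,
    "mobile_phone": 1.10,
    "photocopier_70ppm": 227.00,
    "multifunction_printer": 7.95,  # lower bound of the €7.95-€10 range
}

def total_levy(devices: dict[str, int]) -> float:
    # Sum levy * quantity over a mix of devices.
    return sum(LEVY[name] * qty for name, qty in devices.items())

# A small office buying 10 phones, 5 external disks and one copier:
office = {"mobile_phone": 10, "hard_disk": 5, "photocopier_70ppm": 1}
print(round(total_levy(office), 2))  # 298.0
```

Nearly €300 in levies on one modest order of hardware illustrates why such a scheme reads as a tax on devices rather than compensation for any measurable harm.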

It is hard to see charges like this as anything except a tax on innovation and investment. It could easily affect mobile phones, tablets and portable hard disks, hitting the cheaper end of the market and poorer customers especially hard.

The Spanish law was killed in 2011 after massive pressure. Over 3 million Spaniards signed a petition to kill it. We're certain the UK doesn't want that fight. But will they bow to lobby pressure, and kill the private copying exception to avoid a fight over an iPod tax?

No politician is likely to agree to a levy for damage that barely exists, in return for a change in the law that merely reflects real behaviour that nobody is going to be prosecuted for. The real victim will be the legitimacy of copyright law: yet again, the copyright lobby groups are resisting change that could improve the perception of their industry and the laws that support it.


April 16, 2014 | Jim Killock

Quiz your MEP candidates on digital rights

Europe makes many of the laws that are shaping privacy and restricting surveillance. Data Protection, for instance, should guarantee that interception is lawful, rather than arbitrary.

Last week, the European Court of Justice declared the Data Retention Directive invalid, which has huge implications for our claim that UK law governing surveillance is inadequate.

The European Parliament also investigated the Snowden allegations, and took evidence from Edward Snowden himself.

After investigations, the Parliament agreed that data protection “safe harbor” agreements with the USA should be suspended and said that the activities of GCHQ and the NSA “appear illegal”.

It was the Parliament, too, that struck down the ACTA treaty, and recently voted to protect net neutrality.

Europe matters for digital rights and our campaign to end mass surveillance in the UK. That's why we are taking part in the campaign, asking you and candidates to pledge to support digital rights; and why we are asking you to come to the nearest digital rights hustings for EU Parliamentary candidates in May. With the election coming, we can put pressure on candidates to tell us what they will do to protect the right to privacy and free speech if they are elected.

Digital Rights European elections debates


When: Tuesday 6th May, 6:30 - 8:30 pm
Where: The Main Hall, The Friends Meeting House, Mount Street


When: Thursday 8th May, 6:30 - 8:30 pm
Where: St Mary's, Bramall Lane, S2 4QZ


When: Friday 9th May, 6:30 - 8:30 pm
Where: St Werburghs Community Centre


When: Monday 12th May, 6:30 - 8:30 pm
Where: Norwich Quaker Meeting House, NR2 1EW


When: Thursday 15th May, 6:30 - 9:30pm
Where: Shoreditch Village Hall, 33 Hoxton Square, N1 6NN


When: Friday 16th May, 6:30 - 8:30 pm
Where: BMEP Centre, 10A Fleet Street, Brighton, BN1 4ZE


April 15, 2014 | Jim Killock

Help us to re-start the debate about internet filters

At times the campaign to prevent internet filters has bordered on the surreal, such as when the Deputy Children’s Commissioner Sue Berelowitz said, ‘no one should be panicking – but why should there not be a moral panic?’ Or the time when Helen Goodman MP thought parents weren’t capable of switching on filters themselves because, ‘the minute you talk about downloading software, my brain goes bzzzz’. And who can forget Claire Perry MP dismissing overblocking as, ‘a load of cock’?

Against this background of moral outrage and technological incompetence, ORG has been trying to make people aware that filters don’t work, are dangerous for internet freedom and could give parents a false sense of security when it comes to their children’s use of the internet.

But now it looks like Claire Perry has won. Every major internet service provider in the UK is promoting filters that block websites containing material that isn’t appropriate for children. This means that your internet service provider gets to decide what you can or can’t see online, regardless of how old you are.

No laws were passed for this to happen. There was no debate in parliament, just a series of closed meetings, following a report by Claire Perry MP that was sponsored by the Christian charity Safermedia and the radio broadcaster Premier Christian Media.

This has been done in the name of keeping children safe from pornography, although the filters include a whole load of other categories, including web forums, alcohol, smoking, suicide and anorexia. No one knows exactly which sites are on the list. Recently, the government asked to add secret extremist website lists to the blacklist as well, so we can only expect that this list will grow and grow. Then there’s the problem that plenty of sites get blocked by mistake - from churches (they mention wine!) to political blogs that have been miscategorised as hate speech. And a lot of sites that children should have access to - such as sites on sexual health - are also blocked. Once your website is on a blocked list, there’s no easy way to get off it.

Let’s be honest, no one wants their kids seeing porn or stuff that might upset them, but David Cameron’s suggestion of "one click to protect your whole home and keep your children safe" is deeply irresponsible. It may come as a surprise to Cameron, but parents might need to act like grown-ups when it comes to adult content. Talking about porn, extremism or self-harming sites might not come naturally to most of us. But we have a responsibility to equip our children with the skills they need to navigate their way in the digital world - just as we do in the non-digital world. Filters don’t do that.

If parents want to switch on filters, that is their choice. But it should be an informed choice and there are alternatives to blanket filters, such as device-level filters, which are more effective.

If parents don’t want filters, they shouldn’t be made to feel ashamed or that they are failing as a parent because they’ve decided to take responsibility for how their kids use the internet. If you don’t have kids, then there is absolutely no reason you should feel pressurised into switching them on. Filters are harmful for people who are browsing for information about domestic violence, safe sex or drugs, but they are not going to stop a tech-savvy teenager who is determined to find adult content.

If it turns out the public don’t want filters to censor what they see online, then politicians will start asking for blocks that are even harder to switch off. They will continue to claim that filters can solve every social ill. We have to discredit this ridiculous idea. We don’t have to put up with censorship just to make their lives easier.


To get this message across, we want to produce a high-quality, funny film that will re-start the debate about why filters are a bad idea. It will cost us £12,000 to get this campaign off the ground.

We have launched a campaign on Indiegogo to help raise the money we need and we have less than four weeks to raise it.

Support this film so we can show exactly how stupid filters are.

Update: In a couple of instances, the word default was used in this article. They have now been removed. April, 29th, 2014.
