
Blog


June 07, 2014 | Jim Killock

No transparency for the UK in Vodafone's transparency report

Yesterday’s transparency report from Vodafone raised a very intriguing question: why did Vodafone feel obliged to redact aggregate surveillance statistics from their UK report?

Vodafone’s argument for publishing these statistics where they can is that “The need for governments to balance their duty to protect the state and its citizens against their duty to protect individual privacy is now the focus of a significant global public debate. We hope that – despite the shortcomings … – the country-by-country disclosures in this report will help inform that debate.”

They note, however, that it is not legal to disclose aggregate statistics or other information in many of the 29 countries in which they operate. Although Google, Twitter, Yahoo and others do publish aggregate information about the UK, Vodafone's report states that the law in many states is not clear:

In many countries, there is a lack of legal clarity regarding disclosure of the aggregate number of law enforcement demands. We have therefore contacted governments to ask for guidance. Some have responded, and their views are summarised in this report.

But more importantly, Vodafone have chosen not to publish statistics about the volume of their own communications data requests, as the UK government does this already:

We believe governments should be encouraged and supported in seeking to adopt this approach [publishing aggregate statistics] consistently across our countries of operation. We have therefore provided links to all aggregate statistics currently published by governments in place of our own locally held information (where disclosure is legally permissible at all) and are already engaged in discussions with the authorities in a number of countries to enhance the level of transparency through government disclosure in future.

Separately, where the authorities currently do not publish aggregate statistical information but where we believe we can lawfully publish in our own right, we have disclosed the information we hold for our own local operations.

In other words, as the UK publishes a single aggregate Comms Data statistic, Vodafone believe they should not duplicate it and confuse the picture.

For the UK, Vodafone state:

[Note 1] Section 19 of the Regulation of Investigatory Powers Act 2000 prohibits disclosing the existence of any lawful interception warrant and the existence of any requirement to provide assistance in relation to a warrant. This duty of secrecy extends to all matters relating to warranted lawful interception. Data relating to lawful interception warrants cannot be published. Accordingly, to publish aggregate statistics would be to disclose the existence of one or more lawful interception warrants.

[Note 2] The Interception of Communications Commissioner’s Office publishes statistical information related to lawful interception and communications data demands issued by agencies and authorities.

It is not clear whether this is Vodafone's interpretation of RIPA or the government's, nor whether it is really true that "to publish aggregate statistics would be to disclose the existence of one or more lawful interception warrants" and thereby violate Section 19 of RIPA.

We do not agree with Vodafone that it could be confusing for them to publish their own figures for requests. It is, we believe, important for everyone to be clear about the volume and kind of requests they are getting, including the errors and rejections of requests that are made. Showing that companies and governments are roughly in agreement about what is happening helps us understand the bigger picture of law enforcement activity. The UK government has been notoriously resistant to the idea of improving transparency and will probably remain so. It is unrealistic to expect them to improve without outside pressure, which means companies must publish what they can.

Transparency of course is not a solution to mass surveillance. It is just a precondition for a sensible debate, and re-establishing trust. At this point, it seems that the UK government is still trying to perpetuate a culture of secrecy.

UPDATE: This article has been edited to reflect Vodafone's explanation, set out in the report, of their choice not to publish UK and other aggregate statistics.

[Read more]


June 05, 2014 | Ed Paton Williams

Snowden: one year on and still no action by the British government

Last weekend I was on holiday in Hamburg. I got chatting to a German man in a cafe who asked me, as people do in casual conversation, about my work. I told him about ORG's work challenging the UK's surveillance of the Internet. He started talking about how angry he was at the way the UK and USA's surveillance has forced him to think differently when he uses the Internet.

He now finds himself always double-checking that what he searches for on Google, or what he writes on Facebook, or what he sends in an email couldn't be misconstrued by an intelligence agency as something suspicious. He thought it was wrong that he has to worry about who's watching him. He didn't put it in these terms but he'd identified the UK and USA's surveillance as a breach of his everyday freedoms of expression, thought and association as well as his privacy.

It's a year since The Guardian published the first of many news stories about the scale of GCHQ and the NSA's intrusion into our private lives. Based on the revelations of whistleblower Edward Snowden, the stories had global implications, exposing the insecurity of the Internet, straining relationships between the US and its allies and raising questions about who has control over the agencies that purport to protect our freedoms.

And as my conversation in Germany showed, surveillance has damaged global freedom of expression, affecting the way we think when we use the Internet. There have been other consequences for free speech in the UK as well. We have fallen five places in the Freedom House world ranking of countries' press freedom. This was a result of legal threats made by the Government against The Guardian, the destruction of hard drives in the newspaper's offices and the detention of David Miranda, the partner of Glenn Greenwald - one of the journalists who broke the Snowden story.

Despite this, and unlike in the US or the rest of Europe, there has been limited public and political debate in the UK. The issue continues to be conveniently ignored by the UK government and sidelined by most of our mainstream media.

In a new film, Classified, launched by ORG today, we expose the failure of the Government to oversee the agencies that are scooping up massive amounts of our personal data in the name of national security. MPs including Dominic Raab, David Davis, Julian Huppert and Tom Watson admit that they didn't know about the extent of mass surveillance until The Guardian published Snowden's revelations. As the leader of the Green Party, Natalie Bennett, points out, when the democratically elected people "who are supposed to control our security services didn't know...it is extremely disturbing". (Download the torrent here)

Our film shows that those charged with holding the agencies to account do not appear to have the knowledge and expertise to do their job properly. We need a proper inquiry and new legislation that will protect our rights and ensure that there is both judicial and political oversight of surveillance.

One year on, it's still not too late to demand change. MPs tell us that the best way to get their attention is constituents telling them in their own words why they care about an issue, so please help us by signing the Don't Spy on Us petition and then writing a brief email to your MP.

[Read more]


June 04, 2014 | Ruth Coustick-Deal

Big announcement: Strengthening ORG’s legal work

Our new Legal Director started this week and she is about to begin a series of new legal actions to defend your privacy and free speech.

ORG’s first full-time Legal Director started work this week. ORG is extremely grateful for the generous help of its supporters, who have made this new role possible. ORG sought new members and funding through its #ORGLawFund campaign. Thanks to the commitment of old and new supporters, we reached a total of 2,100 supporters this year, which allowed us to hire a Legal Director to work full time.

In this post our new Legal Director, Elizabeth Knight, discusses ORG’s upcoming legal work:

"I am delighted to be starting as ORG’s new Legal Director. I’m looking forward to working for such a dynamic organisation, and helping with ORG's vitally important and high profile work. I hope that having a full time Legal Director will allow ORG to increase its impact through litigation and bring legal expertise to its already strong policy work.

A bit about me: I'm a solicitor. I have experience of working for NGOs, as well as in the City and for the Government. Most recently I spent four months at Amnesty International, where I authored a major advocacy document and worked on issues around surveillance. I hold a Masters degree in Human Rights and have undertaken an internship at the UN, working on international law and human rights. I was also awarded a pro bono fellowship by my previous firm and worked at a human rights NGO in South Africa. I practise litigation, which has included human rights, judicial review and intellectual property work.

There are a lot of exciting legal projects planned. One of the major issues I will be working on is Error 451 and copyright blocking orders.

This campaign aims to establish a transparent format for legal website blocks, including details of the legal basis, the court order and the organisation responsible for the block. There are many problems with copyright infringement court orders: they are indefinite, partly private, and lack both a complaint mechanism and any requirement that information be made available to the general public.
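To make the proposed format concrete, here is a minimal sketch, in Python using only the standard library, of the kind of response an ISP could return for a blocked page. The 451 status code is currently only an IETF draft, and the metadata shown (legal basis, court order URL, "blocked-by" link identifying the blocking organisation) is our illustrative assumption rather than any settled standard.

    # Minimal sketch of a transparent "Error 451" block page.
    # The 451 status code is an IETF draft at the time of writing; the
    # legal basis, court order URL and "blocked-by" link are illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class BlockedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = (b"Blocked by court order.\n"
                    b"Legal basis: s97A Copyright, Designs and Patents Act 1988\n"
                    b"Court order: https://example.org/orders/2014-001\n")
            self.send_response(451, "Unavailable For Legal Reasons")
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            # Identify the organisation implementing the block
            self.send_header("Link", '<https://isp.example.net>; rel="blocked-by"')
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8451), BlockedHandler).serve_forever()

The point is not the exact fields, but that everything a user needs to understand or challenge the block is available on the blocked page itself.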

We intend to obtain lists of, and copies of, court orders, transcribe the orders and promote the Error 451 code to ISPs. This project has the potential to set best practice internationally and ORG is very enthusiastic about it. If you are interested in volunteering to help transcribe and publish court orders, please contact me as we would welcome your help!

Another major area of work in the near future is campaigning on data retention in the UK, following the striking down of the Data Retention Directive by the Court of Justice of the European Union.

In our view there is now no legal basis for data retention in the UK. Our planned approach is to ask our supporters to write to their ISP and then potentially complain to the Information Commissioner in the event that the ISP refuses to cease retaining the supporter’s data. It may also include court action. More details on this new campaign will be available shortly!

I look forward to updating you soon with news of legal developments." -Elizabeth Knight

 

Thank you once again to all our supporters for making this position possible. We will keep you updated on our legal actions.
If you're not already on our mailing list and would like to learn more about our work through regular updates, please sign up on our home page.

 

[Read more] (1 comment)


May 29, 2014 | Javier Ruiz

How will government share your data?

Today ORG attended an important meeting between government and civil society groups to discuss Data Sharing across government.

The Cabinet Office has started an early pre-consultation process looking at removing barriers to sharing or linking different databases across government departments. The rationale is that this can help Government “design and implement evidence based policy, for example to tackle social mobility, assist economic growth and prevent crime”.

Open Policy Making

This engagement is part of the new “open government” approach, where groups such as ORG, Big Brother Watch, MedConfidential and No2ID are consulted very early in the process. This means that many things under discussion may never happen and it would be pointless to air them. This approach is quite unusual, so ORG has agreed not to disclose detailed discussions until things take more shape, in order to allow a safe space for frank discussion. There is a public paper outlining the proposals so far on the website DataSharing.org.uk, and we have asked for more information to be published more often, including minutes of meetings and evidence presented. The process is open to anyone, and we could certainly do with more participation from civil society groups.

Concerns about Data Sharing

After the PR disaster around the release of medical information in the care.data programme, and more recently the sharing of tax data with private companies by HMRC, the government is acutely aware of the sensitivity of these proposals. And for good reason: connecting databases gives government officials a richer picture of an individual’s life. This is a clear interference with the right to privacy that must be shown to be necessary and proportionate.

Data sharing within government tends to be a complicated process involving lengthy legalities. Some of this friction may be unnecessary formality, but part of the friction is also a safeguard against abuses. There is a public interest in making government more efficient, but removing too many checks and balances could also remove basic protections.

Moving to an extreme sharing by default position could fundamentally transform the relationship between citizens and the state, almost as much as the introduction of a national ID card.

Some will argue that the prize is worth the risk. Underlying these proposals is an understanding that more and better data will automatically translate into better outcomes. But this is far from clear and we will be looking for detailed explanations of how exactly more data sharing will help and what exact changes are needed. The ideas considered so far include both new legislation and practical measures. New laws should only come into place when it's clear that the problem cannot be solved by simpler means.

But we don’t have to support the status quo either. From what we’ve heard so far, data sharing can clearly be improved. The whole thing is perceived as arcane by public employees who have not been trained on how data protection works. Nobody in government knows how many data sharing agreements there are in place, and streamlining the process could allow for more transparency and consistency. It could even lead to less data being shared but used more efficiently.

Data sharing should be based on a general principle of consent. This should be individual, informed, lawful consent where possible and applicable, which it clearly is not in areas such as taxes and criminal justice. Other cases will require a social consent, much as policing in the UK is based on consent. But this is complicated. Perceptions of privacy are context-dependent. We must be careful not to assume that a willingness to share personal details on social media automatically translates into lower concern about the sharing of data on tax, health, education or social security. Privacy is also heavily dependent on exposure and direct experience, such as a media scandal or a close relative suffering identity theft. So what appears to be OK today may cause outrage tomorrow.

Government Proposals for Data Sharing

There are three main strands covered by the current proposals; all the information is at http://datasharing.org.uk

1. Research and statistics

This strand brings together two distinct proposals that relate to existing policy development elsewhere:

Office for National Statistics (ONS) to access more data from public authorities

There has been a long consultation on the future of the census, which has recommended an end to the paper questionnaires, with a predominantly online census from 2021 supplemented by further use of administrative and survey data.

ONS would be receiving more data held by other parts of Government. The Statistics and Registration Services Act 2007 could be amended to authorise the disclosure of information held by public authorities to ONS for statistical purposes. The Cabinet Office argues that “information from HMRC, for example, could allow ONS to improve the quality and speed of estimates of GDP”.

Sharing of de-identified data for research

In many cases, research on Government and public body data is limited to the analysis of single datasets. A report by the Administrative Data Taskforce, Improving Access for Research and Policy, recommended a model of data sharing that allowed for cross-linked research on de-identified data.

The government has presented several examples where such research could be useful:

“identifying pathways to success, and barriers to social mobility by linking data on education, employment status and income. Improve energy efficiency and save citizens money by linking data on energy use with property data; Help deliver targeted crime prevention strategies.”

Improving evidence-based policy and national statistics are worthy goals, but there should be proper safeguards against re-identification and a guarantee that any sharing will ultimately benefit the public.
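The re-identification risk is easy to illustrate: even with names stripped, a handful of quasi-identifiers (postcode, birth year, sex) can be enough to link a "de-identified" record to a named record in a public dataset. A toy sketch in Python, with all records invented:

    # Toy illustration of re-identification by linking a "de-identified"
    # dataset to a public one on shared quasi-identifiers. All records
    # are invented; real linkage attacks use the same join, at scale.
    health = [  # names removed, quasi-identifiers retained
        {"postcode": "BN1 1AA", "birth_year": 1972, "sex": "F", "diagnosis": "asthma"},
        {"postcode": "SW1A 2AA", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
    ]
    electoral_roll = [  # a public dataset that carries names
        {"name": "Jane Example", "postcode": "BN1 1AA", "birth_year": 1972, "sex": "F"},
        {"name": "John Sample", "postcode": "SW1A 2AA", "birth_year": 1985, "sex": "M"},
    ]

    KEYS = ("postcode", "birth_year", "sex")

    def reidentify(anonymous, public):
        """Join the two datasets on the shared quasi-identifiers."""
        return [
            {"name": p["name"], "diagnosis": a["diagnosis"]}
            for a in anonymous
            for p in public
            if all(a[k] == p[k] for k in KEYS)
        ]

    print(reidentify(health, electoral_roll))
    # [{'name': 'Jane Example', 'diagnosis': 'asthma'},
    #  {'name': 'John Sample', 'diagnosis': 'diabetes'}]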

Something we have learnt from the recent data sharing scandals is that taxpayers and users of the NHS don’t necessarily care about the technical details of how their identities are protected. They are angry about commercial entities profiting from their personal data - even if de-identified - and worried about negative consequences, such as hikes in insurance premiums.

Current proposals will need to address these very real concerns, which may fall outside the remit of privacy legislation. For example, using statistical data for targeted crime prevention strategies could easily turn into unfair profiling of sectors of the population, even if no individual is ever identified.

2. Tailored public services

The heading of “tailored services” is slightly confusing, as it would appear to relate to the delivery of personalised services to individuals already in receipt of benefits. But our understanding is that it includes mixing datasets to identify and refine target groups. This has completely different privacy implications.

The Cabinet Office defines the proposals very broadly, as new “powers to allow organisations to share data around specific groups of citizens who use multiple public services for the purposes of improving their health, education and employment”.

Examples presented by government include:

  • Data sharing between departments and local authorities to target energy efficiency measures and fuel poverty grants, reducing mortality rates and hospital admissions amongst vulnerable groups;

  • Better identification of families requiring more assistance and targeting of services and support, reducing costs to government and delivering better outcomes for those most in need.

The idea is to create a framework for new data sharing channels that are flexible and broad enough to survive specific policy initiatives but narrow enough to be clearly focused on specific outcomes. But each new data sharing channel would still need to comply with data protection, so this flexibility should be limited.

For now we are exploring what a generic new instrument for data sharing would look like, and trying to understand the existing frameworks and obstacles to sharing. Ultimately, the intrusiveness of a specific sharing arrangement will depend on the exact datasets and access involved in each proposal. This makes it very difficult to discuss a generic new "power" for data sharing.

The government is exploring safeguards with civil society, including “transparency of data shares so that the public are fully informed of the process”. But this is not enough. Transparency is important, and for ORG one of the best outcomes of this process would be clearer processes and some form of register of data sharing. But transparency is no substitute for protection against harms in the first place. In some cases, not sharing may be the best safeguard.

In our meetings we are also finding that many of the problems with data may not be directly related to a lack of sharing, but to the implementation and use of data. For example, we heard complaints about file formats that could not be opened without specialist software. In other cases where sharing is an issue, we have heard complaints that the law itself is not the problem. Clashes between departmental cultures and refusals to implement what is already legally available seem to be important issues that could be solved without creating a new "data sharing legal power".

There are many open questions about which agencies would be covered, and proposals need to be analysed individually to ensure that there is a need for, or benefit in, data sharing. For some people, concerns about stigmatisation and potential profiling may outweigh any benefits they would get by being included in a programme. These people should have the choice not to be part of the process. This has been happening with free school meals, where many parents of eligible children prefer not to tell the schools.

Because these proposals focus on public services, we are generally dealing with vulnerable groups. We have to be careful to avoid paternalistic attitudes that create a two-tier system where some citizens have lower privacy protections than others based on socio-economic circumstances.

3. Fraud, error and debt (FED)

The government believes that data sharing would dramatically reduce the estimated £37 billion lost to FED each year. They describe the status quo as “an inconsistent patchwork quilt of legislation that is difficult and time-consuming to navigate”.

The idea is that new “permissive gateways” would allow any datasets that need to be shared to flow, while limiting the organisations involved and the purposes. This will have to include DWP and HMRC at least, but the idea seems to be a quite flexible and very ambitious system where

“any public authority or organisation providing services of a public nature on behalf of a public organisation could apply to join the lists of those who can share data for these purposes. The addition would be made by secondary legislation.”

Everyone has to contribute their fair share to the public finances, but these proposals have clear and huge privacy implications. We would hope to see well-developed evidence of benefits to justify such an intrusive system, and so far we have only seen projections of huge savings from limited pilot studies by the Cabinet Office’s FED Task Force.

We need to decide what level of FED, with a corresponding level of intrusion, we as a society are prepared to accept. Making FED completely disappear is virtually impossible without creating a totalitarian dystopia. Besides, a lot of tax money appears to be lost to legal avoidance schemes by companies and high net worth individuals. It is unclear how much these would be affected by a rewrite of the privacy rule book for ordinary citizens.

These proposals could have popular appeal. It is increasingly socially unacceptable to abuse social security benefits, although a lot less so to avoid paying the right amount of tax. But in any case, it is unclear how targeted this data sharing would be. It is very possible that the personal data of large groups of the population would be shared and processed to find the minority of people defrauding the exchequer.

The special case of HMRC

Many of the above proposals involve data sharing from HMRC, and we believe that this should be dealt with separately. There is clear public concern after The Guardian published an article warning of proposed changes to HMRC's statutes that would allegedly allow it to sell data to commercial companies. More than 300,000 people have signed petitions by ORG, 38 Degrees and SumofUs asking HMRC to reconsider.

While other ministries can share data under common law powers, HMRC is unique in its legal constraints to protect taxpayer confidentiality, as explained in their consultation documents for the proposed changes:

HMRC was created by the Commissioners for Revenue and Customs Act 2005 (CRCA). This legislation provides strong protection for the information that HMRC holds. HMRC officials are prohibited from sharing information except in the limited circumstances set out in the CRCA. This legislation enshrines the core principle of what is often described as ‘taxpayer confidentiality’.

The CRCA prohibition on disclosure applies to all of HMRC’s information including non-identifying (general, aggregate or anonymised) information as well as information on identifiable individuals or legal entities. As a result, it is arguable that for non-identifying information the current disclosure restrictions afford more protection than is necessary.

These fundamental changes should be properly discussed against concerns that private companies will access taxpayers’ data.

There could be some good reasons to share more HMRC data: for example, aggregated postcodes of employees' homes and workplaces could be used to improve transport planning. And the gender pay gap is a lot smaller in Scandinavia, where tax information is publicly available. But it is not clear that all data sharing will have a public benefit. Selling such unique data to credit agencies and other big institutions would only entrench the asymmetry of information against citizens.

We need more people to help shape data sharing

ORG are coming into the process with an open mind and trying to influence the outcomes. We would love to be able to say that we helped create sensible proposals, but we will never endorse something that goes against our principles.

The pre-consultation process is open to anyone with an interest. At present this is largely civil society organisations, mainly focused on privacy, but we would encourage more people to join in. It would be particularly useful to have more participation from groups and individuals directly affected by, or working with, the target groups (vulnerable families, NEETs, ex-offenders), as well as experts in taxes, fraud and related fields. Privacy advocates normally lack detailed knowledge of these domains.

The meeting today marks the end of the initial round of the open policy making process. We will be discussing the results so far and our next steps as soon as the notes are published on the Data Sharing website.

[Read more]


May 22, 2014 | Jim Killock

Please vote for a digital rights candidate today

It's the European elections today and, if you're still not sure who to vote for, you might want to check out the WePromise.eu map to see if any of your MEP candidates have committed to protecting digital rights.

WePromise.eu is a Europe-wide campaign which ORG and 35 other organisations have signed up to. It's a simple idea: candidates promise to support a Charter of 10 digital rights principles, and citizens promise to vote for candidates who have signed the Charter.

It's really important that we get as many pro-digital rights MEPs as possible so that we can push digital rights up the European agenda. Key issues like retaining our privacy online, controlling our personal data, owning the media we buy, and having a free and open Internet are often most fiercely debated in the European Parliament. Europe is arguably the most important layer of government for digital issues, so it is vital that we elect people who are willing to listen.

WePromise is a great way of finding out which MEPs are willing to make a stand. But we've also been questioning and filming MEP candidates directly at political hustings in Brighton, Bristol, London, Manchester, Norwich and Sheffield. Some candidates have been very impressive, others perhaps less so. It makes a difference who we elect, and that is much less about their political allegiance than about the work they are prepared to do when they get to the Parliament.

Please make sure you vote today, and vote for candidates that care about digital rights!

[Read more]


May 15, 2014 | Javier Ruiz

Landmark ruling by European Court on Google and the "Right to be Forgotten"

The European Court of Justice has concluded that Google has to delete search results linking to outdated but lawful content in order to protect the data protection rights of individuals.

The European Court of Justice has published a landmark ruling forcing Google to remove some search results related to Mr Costeja González, a Spanish national, after he claimed the linked information was outdated and irrelevant, giving a wrong impression of him. The links pointed to an archived newspaper page containing a public notice by a tax authority for the auction of Mr Costeja's home to cover debts related to a business.

The ruling has very far-reaching implications and has generated conflicting opinions among digital rights advocates. It introduces some very positive developments, and some quite dangerous ones.

It is good news that internet companies such as Google that operate in Europe but are headquartered elsewhere will now have to comply with data protection laws and take responsibility for the data they process.

The court has also upheld that the "right to be forgotten" exists in European privacy law. But it has not fully considered the need to balance this "right" with the right to freedom of expression. This creates the potential for abuse by individuals who wish to hide damaging information. The court may have created a weak spot for censorship: where the bar for proving libel or other harms is too high to get a website removed, it might be easier to ask Google to remove the search results under data protection laws instead.

It is worth noting that the ruling will have an immediate impact in Spain, where hundreds of similar requests are awaiting resolution, but it will take some time to spread across other European legal systems. For now the ruling creates a precedent, and an incentive for Google to agree to requests for the removal of personal information without full consideration of freedom of expression.

1. Google has to fully comply with European data protection

Google has long claimed that it did not have data protection obligations in Europe, because Google Inc is the US company that holds the data, while local subsidiaries in Spain or the UK only run commercial activities. For example, Google offers you the option to download some personal data, such as emails, via its automated tools, but not to request all the information it holds on you, as it would have to do under EU law. The ruling demolishes this position and makes Google responsible if they "advertise and sell" in a member state.

2. Search engines are data controllers

Search engines have generally been seen as simply reproducing existing information. Most discussions on Google and personal data have looked at services such as email, location, etc. but not search results. The ruling defines the activities of search engines in relation to webpages with personal information - indexing, storing and making available - as "processing" under the terms of EU data protection law. Furthermore, Google is the "controller" that "determines the purposes and means of the processing".

3. The "right to be forgotten" already exists in European law

The "right to be forgotten" is based on the premise that outdated and irrelevant information can give a distorted picture of an individual, for example, preventing them from getting a job. This is a very real concern, but there is also a need to preserve a record of social history. Archivists are concerned that this right could mean rewriting history.

There have been fierce debates about the introduction of this right in new legislation, but the court has cut through the knot and found that no new legislation is needed. Existing laws, requiring the personal information that companies hold on people to be relevant and accurate, can be used to enforce this right.

The court did not fully engage with all the problematic wider implications of this right, as it simply considered search results and not the actual deletion of records. Many feel this is a cop-out. Such a major ruling on the "right to be forgotten" should lay out some criteria for when and how obsolete or distorting personal information should be removed from the public record, or at least made less accessible. This could mean, for example, stopping the indexing of public records and online archives by search engines and other processors. But it could also mean taking whole digital archives offline.

4. Being a data controller has far reaching implications

Labelling Google a data controller for search results goes beyond the right to delete information. It creates a seismic shift in the responsibilities that Google has to the people whose information appears in its searches. Given that Google indexes pretty much the whole public internet, this could affect anyone who is named on any website. For example, EU law places constraints on controllers around the export of data to countries with lower privacy protections. Data controllers have to give "data subjects" a copy of all the information they hold on them. There is even speculation that this could mean people can now object to receiving adverts when they use the platform.

It remains to be seen how this can work in practice with millions of results involving potentially hundreds of people sharing the same name.

5. Publicly available information is subject to data protection

This ruling is a reminder that many internet intermediaries are not exempt from data protection responsibilities. Just because personal information is publicly available does not mean that you can do with it as you wish. This issue was considered in relation to open data by the Article 29 Working Party.

According to the court, the right to data protection of individuals trumps the "mere economic interest of the manager of the search engine" unless there is an explicit public interest. This will be important for many online projects processing personal data.

The court also mentions the need to consider the right to access the information of internet users, particularly if the person affected has a role in public life. But this balancing of privacy and freedom of expression is not really explored in the ruling.

What is clear from this and previous rulings is that public figures will have to accept a lower expectation of privacy, and generally should not be able to get their information deleted so easily.

6. Search engines have a separate responsibility from publishers

It appears surprising that the ruling supports the Spanish data authorities in allowing the original offending article to remain in place, while forcing Google to delete references to it. The ECJ makes a clear distinction between search engines and the publication of the information. This is consistent with the application of the principles above, but it creates a potential weak spot for online censorship.

A key aspect of this ruling is that it doesn't relate to libellous or defamatory information. It censors lawful content that contains personal information because, when processed by search engines, that content may still cause detriment to individuals:

  • search engines can combine lawful information to generate a completely new insight. The court sees the search results relating to a person as a personal profile. This is not a neutral list of links because the information is organised (e.g. ranking, possibly removal of duplicate results, etc.)

  • search engines provide access to outdated information that before would simply disappear into dusty archives nobody visits, but now lingers on in accessible webpages. Without search engines you would need to know what you were looking for and make a special visit.

     

7. Google will now be tempted to remove links rather than contest requests

It is hard to evaluate the balance of competing rights involved in these cases, and the ruling does not help Google decide future ones. How old does information have to be to become irrelevant? How public should a person be? How do you judge the public interest?

Should Google decide on this balance of rights? It is very unclear how the rights of the publisher will be safeguarded in an internal process by a private company. As a general principle, removal of websites, or search links, should be decided by a legal authority, not a business.

We are particularly concerned that the path of least resistance for Google will be to automate the removals. For Google it will be cheaper to delete links automatically and let others complain later on, than to consider the balance of rights in every request.

If any content has to be censored, with due process and consideration for the right to freedom of expression, this should be more consistent across the board.

It may not be the intention, but the ruling appears to create a lower barrier for censoring search results than for hosting. Freedom of expression in the 21st century is not just about the right to publish, but also about being found online.

[Read more]


May 14, 2014 | Ed Paton Williams

ORG hands in petition saying no to HMRC's tax data sell off

We handed in our tax data sell-off petition to HMRC earlier today, along with ORG Advisory Council member Julian Huppert MP and campaign groups 38 Degrees and SumofUs. The Guardian's just put a story up covering the petition hand-in.

Over 300,000 people signed petitions, which were started by ORG, 38 Degrees and SumofUs after we found out that HMRC was considering sharing anonymised tax data for commercial research. We're concerned that under these plans it is very difficult to give or withdraw consent about what happens to our tax data. It is also not at all clear that selling tax data to companies is truly in the public interest.

ORG is currently engaged with HMRC and the Cabinet Office in discussions around the sharing of personal data held by the Government. We'll keep you updated with how that's going.

Thanks to everyone who signed the petition.

Handing in the HMRC petition

 

[Read more]


May 13, 2014 | Jason Kitcat

Guest blog: Estonia and the risks of internet voting

In my capacity as an ORG Advisory Council member I've been working with an independent team of election observers researching the Internet voting systems used by Estonia. Why should anyone in the UK be interested in this?

Two reasons. Firstly, Estonia is regularly held up as a model of e-government and e-voting that many countries, including the UK, wish to emulate. Secondly, after years of e-voting being off the UK agenda (thanks in part to ORG's previous work in this area), the chair of the Electoral Commission recently put the idea of e-voting for British elections back in play.

Before our or any other government leaps to copy the Estonian model, our team wanted to better understand the strengths and weaknesses of the Estonian system. So several of us monitored the internet voting in operation for Estonia's October 2013 municipal elections as official observers accredited by the Estonian National Election Committee. Subsequently the team used the openly published source code and procedures for the Estonian system to build a replica in a lab environment at the University of Michigan. This enabled detailed analysis and research to be undertaken on a replica of the real system.

Despite the system being built on Estonia's impressive national ID smartcard infrastructure, we were able to find very significant flaws in the Estonian internet voting system, which they call "I-voting". Several serious problems were identified:

Obsolete threat model

The Estonian system uses a security architecture that may have been adequate when the system was introduced a decade ago, but it is now dangerously out of date. Since the time the system was designed, state-level cyberattacks have become a very real threat. Recent attacks by China against U.S. companies, by the U.S. against Iran, and by the U.K. against European telecoms demonstrate the proliferation and sophistication of state-level attackers. Estonia itself suffered massive denial-of-service attacks in 2007 attributed to Russia.

Estonia’s system places extreme trust in election servers and voters’ computers — all easy targets for a foreign power. The report demonstrates multiple ways that today’s state-level attackers could exploit the Estonian system to change votes, compromise the secret ballot, disrupt elections, or cast doubt on the fairness of results.

Abundant lapses in operational security and procedures

Observation of the way the I-voting system was operated by election staff highlighted a lack of adequate procedures for both daily operations and handling anomalies. This creates opportunities for attacks and errors to occur and makes it difficult for auditors to determine whether correct actions were taken.

Close inspection of videos published by election officials reveals numerous lapses in the most basic security practices. They appear to show the workers downloading essential software over unsecured Internet connections, typing secret passwords and PINs in full view of the camera, and preparing election software for distribution to the public on insecure personal computers, among other examples. These actions indicate a dangerously inadequate level of professionalism in security administration that leaves the whole system open to attack and manipulation.

Serious vulnerabilities demonstrated

The authors reproduced the e-voting system in their laboratory using the published source code and client software. They then attempted to attack it, playing the role of a foreign power (or a well-resourced candidate willing to pay a criminal organization to ensure they win). The team found that the Estonian I-voting system is vulnerable to a range of attacks that could undetectably alter election results. They constructed detailed demonstration attacks for two such examples:

Server-side attacks: Malware that rigs the vote count

The e-voting system places complete trust in the server that counts the votes at the end of the election process. Votes are decrypted and counted entirely within the unobservable “black box” of the counting server. This creates an opportunity for an attacker who compromises this server to modify the results of the vote counting.

The researchers demonstrated that they can infect the counting server with vote-stealing malware. In this attack, a state-level attacker or a dishonest election official inserts a stealthy form of infectious code onto a computer used in the pre-election setup process. The infection spreads via the software DVDs used to install the operating systems on all the election servers. The code ensures that the basic checks used to verify the integrity of the software still appear to pass, despite the software having been modified. The attack’s modifications would replace the results of the vote decryption process with the attacker’s preferred set of votes, thus silently changing the results of the election to their preferred outcome.
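The false assurance given by such integrity checks can be shown in a few lines: a hash comparison only attests to whatever the reference hashes were computed from, so an attacker who controls the setup step can simply regenerate them from the tampered software. A minimal sketch in Python, where the byte strings stand in for the real server binaries:

    # Sketch: why an integrity check run inside a compromised setup
    # process proves nothing. The byte strings stand in for binaries.
    import hashlib

    def sha256_of(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # An honest check compares the deployed software against hashes
    # taken from trusted, offline reference media.
    reference_hash = sha256_of(b"genuine vote-counting software")
    deployed = b"tampered vote-counting software"
    print(sha256_of(deployed) == reference_hash)   # False: tampering caught

    # But if the attacker controls the setup process, the reference
    # hashes are regenerated from the tampered software itself, and the
    # same check appears to pass while verifying nothing.
    attacker_hash = sha256_of(deployed)
    print(sha256_of(deployed) == attacker_hash)    # True: check "passes"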

Client-side attacks: A bot that overwrites your vote

Client-side attacks have been proposed in the past, but the team found that constructing fully functional client-side attacks is alarmingly straightforward. Although Estonia uses many security safeguards — including encrypted web sites, security chips in national ID cards, and smartphone-based vote confirmation — all of these checks can be bypassed by a realistic attacker.

A voter’s home or work computer is attacked by infecting it with malware, as millions of computers are every year. This malicious software could be delivered by pre-existing infections (botnets) or by exploiting operational security lapses to compromise the voting client before it is downloaded by voters. The attacker’s software would be able to observe a citizen voting and could silently steal the PIN codes required to use the voter’s ID card. The next time the citizen inserts the ID card — say, to access their bank account — the malware can use the stolen PINs to cast a replacement vote for the attacker’s preferred candidate. This attack could be replicated across tens of thousands of computers. Preparation could begin well in advance of the election by using a replica of the I-voting system, as the team did for their tests.

Insufficient transparency to establish trust in election outcomes

Despite positive gestures towards transparency — such as releasing portions of the software as open source and posting many hours of videos documenting the configuration and tabulation steps — Estonia’s system fails to provide compelling proof that election outcomes are correct. Critical steps occur off camera, and potentially vulnerable portions of the software are not available for public inspection. (Though making source code openly available is not sufficient to protect the software from flaws and attacks.) Many potential vulnerabilities and forms of attack would be impossible to detect from the information provided to the public. So while the researchers applaud the attempts at transparency, ultimately too much of how the I-voting system operates is invisible for it to convince skeptical voters or candidates that the outcomes are honest.

To illustrate this point, the team filmed themselves carrying out exactly the same procedural steps that real election officials show in nearly 24 hours of videos from the 2013 elections. However, due to the presence of malware injected by the team before the recordings started, their count produces a dishonest result.

Recommendation: E-voting should be withdrawn

After studying other e-voting systems around the world, the team was particularly alarmed by the Estonian I-voting system. It has serious design weaknesses that are exacerbated by weak operational management. It has been built on assumptions which are outdated and do not reflect the contemporary reality of state-level attacks and sophisticated cybercrime. These problems stem from fundamental architectural problems that cannot be resolved with quick fixes or interim steps.

While we believe e-government has many promising uses, the Estonian I-voting system carries grave risks — elections could be stolen, disrupted, or cast into disrepute. In light of these problems, our urgent recommendation is that to maintain the integrity of the Estonian electoral process, use of the Estonian I-voting system should be immediately discontinued.

Our work shows that, despite a decade of experience and an advanced e-government infrastructure, Estonia has been unable to provide a secure e-voting system. Other countries, including the UK, should learn from this that voting is a uniquely challenging service to provide online whilst maintaining the fundamental requirements of fair elections: secrecy of the vote, security and accuracy. The significant costs of attempting to build such a system would be better directed at other forms of e-government, which can provide greater and more reliable benefits for citizens without risking the sanctity of elections.

Read and watch more about this work at https://estoniaevoting.org

 

[Read more] (1 comment)

