February 04, 2019 | Javier Ruiz

ORG calls for public participation in digital trade policy after Brexit

A key aspect of Brexit is the future of trade policy. The Government has committed to leaving the UK’s customs union with the EU in order to enter into a myriad of independent trade deals with countries across the world. We don’t want to get into a discussion about the merits of this approach or whether it is likely to succeed, but, assuming it goes ahead, we believe that transparency and participation are critical requirements for the development of future trade agreements after Brexit.

ORG is interested in trade because these agreements include provisions that severely affect digital rights such as privacy and access to information. Copyright and other forms of IP have been part of trade deals for over 20 years, but countries such as the US now want to expand the scope of trade negotiations to include a whole raft of new issues, including algorithmic transparency and data flows.

The UK Department for International Trade is already pre-negotiating deals with the US, Australia and New Zealand, and is engaging with interested parties in some sectors, such as IP, which is very positive. ORG is participating in some of these discussions.

Our concern, as we get closer to actual trade negotiations, is that there will be pressure to keep most of the information confidential. Historically, trade deals have been shrouded in secrecy, with the executive branch of government claiming exclusive prerogative as part of its role in maintaining international relations. In recent decades, as trade issues have expanded into many socio-economic spheres - such as digital, labour or environmental regulations - and generated vigorous debates, this lack of transparency has become unsustainable. Even as recently as the ACTA and TPP negotiations, civil society was forced to rely on leaks for information and on public media interventions for engagement. This secrecy did not stop many trade agreements from being derailed, and in some cases it fuelled further public concern.

We recognise that some information needs to remain confidential, but believe that very high levels of transparency and public participation are possible, and indeed necessary, in these unprecedented circumstances. The blocking by the House of Lords of the Trade Bill until the Government provides more information on how international trade deals will be struck and scrutinised after Brexit points to the need for change.

The current situation in Parliament and elsewhere demonstrates the difficulty of finding a social consensus around Brexit and the kind of trade policy that should follow from it. The limited public debate on trade deals so far has quickly led to concerns about food safety, with headlines about chlorinated chicken, and about the takeover of public services, particularly the NHS. In this situation we think that transparency, including access to draft texts and positions, will be critical to maintaining legitimacy.

Fortunately, things are changing elsewhere. The WTO has improved its external transparency over the years, particularly when compared to negotiations on bilateral agreements: documents are available online, and there are solid NGO relationships and public engagement activities. We hope that the UK will go even further than the WTO and become a world leader in enabling public participation in trade policy.

[Read more]

February 01, 2019 | Mike Morel

Response to IAB statement

By: Jim Killock (Open Rights Group), Johnny Ryan (Brave), Katarzyna Szymielewicz (Panoptykon Foundation), Michael Veale (University College London)


We have taken note of media reports regarding an update to complaints made by ad-blocking browser developer Brave and Polish activist group Panoptykon Foundation to a number of European data protection authorities.


In addition to complaints in Ireland and Poland, the IAB should be aware that complaints have also been made to the UK Information Commissioner in this matter, by Jim Killock, Executive Director of the Open Rights Group, and Michael Veale of University College London. 


The IAB statement says:

“As with previous submissions made by Brave et al., we believe that: (1) the complaints are fundamentally misdirected at IAB Europe or the IAB Tech Lab; and (2) they fail to demonstrate any breach of EU data protection law.

“Technical standards developed by IAB Tech Lab are intended to facilitate the effective and efficient functioning of technical online advertising processes, such as real-time bidding. IAB Europe’s Transparency & Consent Framework helps companies engaged in online advertising to meet certain requirements under EU data protection and privacy law, such as informing users about how their personal data is processed. The responsibility to use technologies and do business in compliance with applicable laws lies with individual companies.”


The IAB proceed on a misunderstanding of the law and the facts. The complaints, as submitted by our legal team, Ravi Naik of ITN Solicitors, with the assistance of a leading QC, detail widespread and significant breaches of the data protection regime. Those initial complaints from September 2018 have been built on with the further material served on 28 January 2019, Data Protection Day.

Furthermore, the IAB proceed on the basis of an overly restrictive interpretation of how a data controller is defined. Much as Google tried to avoid liability for search before the ECJ, the IAB cannot seek to avoid accountability for its own system.

The facts make clear that the IAB is a controller, and liable as such. The IAB defines the structure of the OpenRTB system. Both the IAB and Google structures could – and should – be remedied to have due regard to the rights of data subjects. Whether the structure is so remedied is within the IAB’s and Google’s control.

The IAB system provides for the inclusion of personal data in the bid request, some of it very intimate. Indeed, the IAB explicitly recommends the inclusion of personal data in the bid request. For example, it “strongly recommends” that ID codes unique to the person visiting a website be included. [1] It even goes so far as to warn companies using its system that they will earn less money if they do not include these personal data. [2]

The IAB does this in the knowledge that it is unable to exercise any control over what happens to personal data broadcast billions of times a day by its system.  An internal IAB TechLab document from May 2018 confirms that “there is no technical way to limit the way data is used after the data is received by a vendor for decisioning/bidding” once an ad auction broadcast has been made. [3] The same document notes that “thousands of vendors” receive these data. [4]


The IAB statement further claims: “The Content Taxonomy Mapping document cited by the complainants does not, as Brave and Panoptykon seem to contend, demonstrate that taxonomies of data types that would qualify as special categories of personal data (and are subject to stricter protections under EU data protection law) are used by individual companies; nor can it be considered to prove or demonstrate that any companies making use of those taxonomies are doing so without complying with applicable EU data protection or other law.”


Yet when categorised content is used by a person, the categories stick to that person and become personal data. This helps other players profile "the human using the device", as the IAB puts it.

The example bid requests in Google’s developer documentation (Google also uses the IAB RTB standard) speak for themselves. They contain the following personal data: [5]

  • pseudonymised user IDs, which can be “matched” against for re-identification, 

  • IP address, 

  • GPS coordinates (latitude and longitude), 

  • ZIP code, 

  • machine and operating system version details, 

  • and categories (“publisher verticals”). 

The IAB’s own documentation includes an example bid request that contains the personal data of a young female, using a specific iPhone 6s, loading a particular URL, and with several IDs that allow ad auction companies to identify her. [6] The bid request also shows her GPS coordinates at that instant. (Would a woman on her own on a street at night be comfortable knowing that her GPS coordinates were being sent to random parties?)
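
To make this concrete, here is a minimal, purely illustrative sketch of the kinds of fields described above. It is not the actual OpenRTB, AdCOM or Google protobuf schema: the field names and values are our own approximations, invented only to show how this data sits together in a single request.

```python
# Purely illustrative sketch of the personal data described above as appearing
# in RTB bid requests. NOT the real OpenRTB, AdCOM or Google protobuf schema;
# all field names and values here are hypothetical.

example_bid_request = {
    "user": {
        "id": "a1b2c3d4e5",          # pseudonymous ID that can be "matched" for re-identification
        "gender": "F",
        "year_of_birth": 1995,
    },
    "device": {
        "ip": "203.0.113.7",         # IP address
        "model": "iPhone 6s",        # device details
        "os": "iOS",
        "os_version": "12.1",        # operating system version details
        "geo": {"lat": 51.5033, "lon": -0.1196, "zip": "SE1"},  # GPS coordinates and postcode
    },
    "site": {
        "page": "https://example.org/article",  # the URL being loaded
        "categories": ["news"],                 # content categories ("publisher verticals"), placeholder value
    },
}

# In the RTB model described above, a request like this can be broadcast to a
# large number of bidders before an ad is ever shown.
print(example_bid_request["device"]["geo"])
```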


The IAB statement also argues: “The complaints are akin to attempting to hold road builders accountable for traffic infractions, such as speeding or illegal parking, that are committed by individual motorists driving on those roads. Using this analogy, the complainants’ purported finding that EU data protection law is being breached is comparable to someone pointing out that an automobile is technically capable of exceeding the speed limit, or parking in a restricted area, and adducing this fact as ‘evidence’ that it actually does. A technical standard may be misused to violate the law or used in a legally compliant way, just as a car may be driven faster than the speed limit or driven at or below that limit. The mere fact that misuse is possible cannot reasonably be used as evidence that it is actually happening. And the whole purpose of the Transparency & Consent Framework is to ensure it does not.”


The IAB has failed to protect people’s data, which are broadcast billions of times a day using the system that it defines and encourages its members to use. It cannot claim to be a bystander. By defining and promoting the system, it plays a role in determining the purposes and means of how that data is processed. Using the IAB’s own metaphor - which presents them as road builders or car producers who cannot be held liable for traffic infractions - it is clear that the IAB is the authority that sets the traffic rules for its private roads. It bears responsibility when those rules conflict with the law.

1 “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018.

2 “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018.

3 “Pubvendors.json v1.0: Transparency & Consent Framework”, IAB TechLab, May 2018.

4 Ibid.

5 “Authorized Buyers Real-Time Bidding Proto”, Google, 23 January 2018.

6 “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018.

[Read more]

January 28, 2019 | Ed Johnson-Williams

Public Understanding of GDPR

Today is the 13th annual Data Protection Day in Europe and the first since the General Data Protection Regulation (GDPR) came into force last May. We are publishing new research today about the public understanding of GDPR and what it means for how organisations communicate about how they use data.

Over the last couple of years, we've seen a lot of attention given to data protection. For the most part, the debate has focussed on helping businesses with legal compliance.

But data protection is more than a compliance hoop for organisations to jump through. It is also about the rights it gives to individuals to know about, understand, and control information about them.

Today, we are publishing research in a new report, Public Understanding of GDPR: How companies, regulators, and civil society can support data protection rights.

We look at:

  • the ways members of the general public think about GDPR and their data protection rights;
  • how regulators, civil society organisations and others who support data protection rights can best communicate to make data protection relevant to the general public;
  • what organisations should do to communicate better with individuals about data protection and their rights.

The report follows several rounds of interviews, user research, and website usability testing. It is also informed by our experience of creating a website called Data Rights Finder with Projects By IF. Data Rights Finder makes the content of privacy policies easier to understand and helps people exercise their data protection rights.

You can read the report in full here. It is also available in full as a PDF.

We are grateful to the Information Commissioner’s Office (ICO) for funding this research through their Research Grants Programme.

Below is a summary of the findings in this project. 

Do people understand their rights around data protection?

Our research indicated that:

  1. The British public’s awareness of their data protection rights is low. People are surprised when they become aware of the rights they have.
  2. Awareness of consent as a basis for collecting and processing user data is relatively high, but understanding of what consent means is low. The other bases for processing data are not well-known.
  3. People do not think about their lives in terms of the rights they have. They do not think first about their data protection rights and then about what problems they have that they could solve with those rights. Instead, they realise they have a problem they want to deal with and then look for ways of dealing with their problem.

Making data protection relevant to people

Considering the way people understand data protection, these are some points to consider for regulators, civil society organisations and others when communicating in support of data protection rights:

  1. Provide information and context for data protection rights. Expect members of the public to require examples of the situations in which they might find data protection rights useful or vital to solving a problem or improving their life in some way.
  2. Offer services or tools that are problem-focussed rather than rights-focussed. Services or tools that help people use their data protection rights will likely resonate with more people if it is clear which specific problems the service helps with.
  3. Make time to undertake user-centred research to understand how your target audiences think about data protection and the problems in their life. This will help you show how data protection rights can be helpful. Test your messages and products with real people from your audiences.

How organisations can communicate well about data protection rights

From our experience of analysing organisations’ privacy policies to create Data Rights Finder, and talking to people about data protection issues, we have these recommendations for how organisations can communicate better about data protection to individuals:

  1. Provide electronic means, such as an email address or contact form, to contact your data protection officer. We found several well-known companies that only provided a postal address as the route through which to exercise a data protection right.

  2. Explain how the data protection rights interact with the particular activities or business that your organisation does. Help the individuals involved to know what their rights are, how those rights are relevant to their relationship with your organisation, and finally, how and why individuals would use those rights.
  3. Use plain English to describe how you use data. Tell people clearly what data you collect and what you will use it for. Test how easy it is to find, read, and comprehend the information you provide about how you use data.
  4. As much as possible, use a granular, rather than a bundled, approach to gaining consent to collect and process personal data. It is not always reasonable to expect people to give consent to everything in your privacy policy at the very beginning of their relationship with you. Just-in-time information and consent is one way to address this.
  5. Link the data you say you collect with the purpose you will use it for. Make it clear which data is being used for which purpose.
  6. Consider alternatives to the name ‘privacy policy’. Research in America consistently finds that people misunderstand what is meant by the name ‘privacy policy’. Phrases like “How we use data” may offer a better alternative.
  7. Contribute to and run trials of machine-readable standards about how you use data. Organisations often present information about how they use data in inconsistent and unstructured ways, which makes it difficult to scrutinise and compare their practices. Organisations should collaborate on and test machine-readable standards for communicating how they use data; a sketch of what such a description might look like follows below.
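
The sketch below is purely hypothetical: there is no established standard for this, and every field name is an assumption made for illustration. It simply shows how the recommendations above (linking data to purpose, naming a lawful basis, giving an electronic contact) could be expressed in a structured, machine-readable form.

```python
# Hypothetical, minimal machine-readable "how we use data" description.
# No agreed standard exists; all field names and values are illustrative.
import json

data_use_statement = {
    "organisation": "Example Ltd",
    "contact": {"data_protection_officer": "dpo@example.com"},
    "processing": [
        {
            "data": ["email address", "order history"],
            "purpose": "fulfilling and delivering orders",   # data linked to a specific purpose
            "lawful_basis": "contract",
            "retention": "6 years",
        },
        {
            "data": ["browsing behaviour"],
            "purpose": "personalised marketing",
            "lawful_basis": "consent",
            "retention": "until consent is withdrawn",
        },
    ],
    "rights": ["access", "rectification", "erasure", "objection"],
}

# Emit the statement as JSON so other tools could consume and compare it.
print(json.dumps(data_use_statement, indent=2))
```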

[Read more]

December 20, 2018 | Javier Ruiz

Law Commission report asks for complete reform of online offensive communications

Earlier this year the Law Commission published a comprehensive review of the law around abusive online communications, with some important recommendations that support ORG's views, particularly around the problems with the term "grossly offensive".


[Image: word cloud of terms from the Law Commission report]

In 2019 ORG will be doing more work on the regulation of online content and free expression, as there are various important government initiatives in the area that could impact the rights of internet users. Earlier this year the Law Commission was asked to conduct an analysis of the criminal law in relation to offensive and abusive online communications. Its report makes some sensible recommendations for thorough reform of the relevant legislation. The government now has to respond in full and agree to move on with further pre-legislative work.

ORG believes that the current level of online abuse - particularly against vulnerable groups: women, ethnic minorities, migrants, transgender people or those with disabilities, among others - is unacceptable, but we also consider that there are some areas where restrictions on online speech are not consistent and may impinge on free expression. This is a difficult area, and the thorough approach of the Law Commission is welcome. ORG staff met the Law Commissioner for extensive discussions.

Their focus is whether the criminal law provides equivalent protection online and offline. Excluded from the scope are terrorism offences, child sexual abuse and exploitation offences, online fraud and contempt of court.

The key offences considered are the “communications offences” under the Malicious Communications Act 1988 (“MCA 1988”) and s127 of the Communications Act 2003, but the report also looks more widely at harassment and public order. The Law Commission sees it as positive that these offences do not require evidence of “actual harms” and are therefore easier to prosecute. This actually supports our argument about their use as “consolation prizes”.

The report recommends tightening the scope of the offences. The MCA 1988 is unclear on whether it covers public internet fora or only messages directed to a specific person. s127 does not cover private networks, such as Bluetooth or LANs. An interesting point raised is that s127 would cover materials stored in the cloud but never actually sent. This could have huge implications if followed through by prosecutors.

The Law Commission engages in a thorough discussion of the term “grossly offensive”, mentioning that this is one of the issues we raised with them. The report tracks ongoing problems with defining the term since its inception in the Post Office (Protection) Act 1884, including the relation of “grossly offensive” to obscenity, vulgarity and vilification, and criticises its impact on freedom of expression. The term continues to lead to controversial prosecutions despite the introduction of Crown Prosecution Service guidelines, which have proved insufficient because the underlying proscribed conduct is very broadly interpreted and the concepts are malleable.

We welcome the report’s recognition that “grossly offensive communication may in fact be more broadly criminalised online than in the offline world”, as long argued by ORG, due to the pervasive record created by online communications and their much broader reach. The Law Commission diplomatically argues for the term to be removed through a further review.

The report extends those criticisms to the notions of “obscenity” and “indecency” and criticises the details of a wider array of related offences, recommending a thorough review, including the potential impact on private online communications of the Obscene Publications Act 1959.

Throughout the report, we see a pattern where the potential negative impact of various offences could be even worse if the criminal justice system decided to apply them with the simplistic principle that everything that applies offline should equally apply online.

The report makes recommendations on various other areas, such as harassment and stalking, hate crime and the non-consensual disclosure of private information, including sexually explicit images. There is also an analysis of the online environment and the challenges it poses to policing, which explores some of the technical difficulties in establishing the identities of perpetrators, but not in great detail. Thankfully, there are no proposals for changes to anonymity online or demands for increased data retention.

Importantly, the report also details the harms caused by offensive communications, particularly to women, and digital rights advocates should read it carefully to increase our self-awareness and avoid potentially insensitive arguments in our quest for due process and the protection of civil liberties.

[Read more]

September 20, 2018 | Javier Ruiz

Machine learning and the right to explanation in GDPR

One of the rights in GDPR is the right to explanation. Here we take a look at some of the debates about the right and how it can be implemented.

This blogpost is a small section of a much larger research report Debates, awareness, and projects about GDPR and data protection. The report complements the launch of the Digital Rights Finder tool delivered by Projects by IF and Open Rights Group. We highlight some of the most interesting and important debates around GDPR (General Data Protection Regulation).

There is some concern about the practical feasibility of implementing the right to explanation in GDPR in the context of complex data processing such as big data, artificial intelligence and machine learning. (See this section of the report for more on debates about the existence of the right to explanation.)

Lilian Edwards and Michael Veale argue that a right to an explanation is not the remedy to harms caused to people by algorithmic decisions. They also argue that the narrowly-defined right to explanation in GDPR of “meaningful information about the logic of processing” is not compatible with how modern machine learning technologies are being developed.

The problems to tackle here are discrimination and fairness. Machine learning systems are designed to discriminate - in the sense of differentiating between cases - but some forms of discrimination are socially unacceptable, and such systems need to be constrained. The general obligation of fairness in data protection provides the basis for requiring some level of insight into the functioning of algorithms, particularly in profiling.

One of Edwards and Veale’s proposals is to partially decouple transparency from accountability and redress: it is not always a necessary step. They argue that people trying to tackle data protection issues want an action, not an explanation. The actual value of an explanation is not to relieve or redress the emotional or economic damage suffered, but to help people understand why something happened and to ensure the mistake does not happen again.

Within this more limited sense, problems remain in defining transparency in the context of algorithmic accountability. For example, providing the source code of algorithms may not be sufficient, and may create other problems in terms of privacy disclosures and the gaming of technical systems. They argue that an auditing approach could be more successful instead: looking at the external inputs and outputs of a decision process, rather than at its inner workings - “explaining black boxes without opening them”.

The authors see the right to explanation as providing some grounds for explanations about specific decisions. They present two types of algorithmic explanation that could be provided: model-centric explanations (MCEs) and subject-centric explanations (SCEs), which seem broadly aligned with explanations about systems and about decisions respectively.

SCEs are seen as the best way to provide some remedy, although with severe constraints if the data is just too complex. Their proposal is to break down the full model and focus on particular issues through pedagogical explanations of a particular query, “which could be real or could be fictitious or exploratory”. These explanations will necessarily involve trade-offs with accuracy to reduce complexity.
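
As a minimal sketch of what such a pedagogical, subject-centric explanation could look like in practice, under our own assumptions: a black-box decision function is probed around one particular (possibly fictitious) query, and the single-feature changes that would flip the outcome are reported. The stub model, thresholds and feature names below are invented for illustration and do not come from Edwards and Veale.

```python
# Sketch of a pedagogical, subject-centric explanation: probe a black-box
# decision function around one query and report which single-feature changes
# would flip the outcome. The model is a stand-in stub, not a real system.

def model(applicant: dict) -> bool:
    """Stub black-box decision: approve if a simple score clears a threshold."""
    score = 2 * applicant["income"] - 5 * applicant["defaults"] + applicant["years_at_address"]
    return score >= 60

def explain_by_probing(applicant: dict, alternatives: dict) -> list:
    """Return the single-feature changes that would flip the model's decision."""
    baseline = model(applicant)
    flips = []
    for feature, values in alternatives.items():
        for value in values:
            probe = dict(applicant, **{feature: value})
            if model(probe) != baseline:
                flips.append((feature, applicant[feature], value))
    return flips

query = {"income": 25, "defaults": 2, "years_at_address": 1}  # a fictitious, exploratory query
what_ifs = {
    "income": [30, 35, 40],
    "defaults": [0, 1],
    "years_at_address": [5, 10],
}

print("decision:", model(query))
for feature, old, new in explain_by_probing(query, what_ifs):
    print(f"changing {feature} from {old} to {new} would change the decision")
```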

Their main concern seems to be to avoid creating a “transparency fallacy”, where, similarly to the “consent fallacy”, people are given an illusion of control that does not exist, instead of being offered practical remedies to stop harmful data practices.

There is growing interest in the explanation of technical decision-making systems in the field of human-computer interaction design. Practitioners in this field criticise efforts to open the black box in terms of mathematically interpretable models as removed from cognitive science and from the actual needs of people. An alternative approach would be to allow users to explore the system’s behaviour freely through interactive explanations. This is quite similar to the proposals by Edwards and Veale.

A complementary approach has been put forward by Andrew Selbst and Solon Barocas, who argue that the increasing calls for explainability of automated decision-making systems rely on an intuitive approach that will not work with machine learning. ML is both inscrutable and non-intuitive. Inscrutability is the black box problem - the inability to understand the inner cogs of a model - while non-intuitiveness means being unable to grasp the rules the model follows, even if we were able to open the box. Accountability requires not only knowledge of the process, but also of whether it is justified, or fair.

Selbst and Barocas argue that lawyers and scholars asking for explanations will be disappointed because intuition cannot deal with the truly novel insights produced through machine learning that associate data in patterns that completely escape human logic and imagination.

Their alternative proposal is to focus accountability on the processes around ML models, not the models themselves. Policies and documentation of intent and design choices should be made available, some by default, such as impact assessment, and others in the context of a complaint or regulatory action. This approach chimes with the main thrust of GDPR, which puts accountability at the fore.

In summary, the right to an explanation as defined in GDPR may be harder than expected to implement. This does not invalidate the basic premise that individuals have a right to know what is being done with their data, but – particularly with novel machine learning techniques – it means that we need to look beyond simple calls for transparency.

[Read more]

September 11, 2018 | Ed Johnson-Williams

Helping IoT developers to assess ethics, privacy, and social impact

GDPR has brought a renewed focus to assessing the privacy risks of using personal data. How can we also assess the ethical and social impacts of how organisations use data?

GDPR (General Data Protection Regulation) introduces a mandatory Data Protection Impact Assessment. This is to help organisations to identify and minimise the data protection risks of a project to individuals. But there are other consequences to collecting and using personal data beyond privacy and data protection considerations. We should also be thinking about the ethical and societal outcomes of what we do with data. Open Rights Group (ORG) is exploring these issues as part of the VIRT-EU consortium alongside the London School of Economics, Uppsala University, Polytechnic University of Turin, and Copenhagen Institute for Interaction Design.

The project is researching Internet of Things (IoT) development and development culture. It is also creating tools and frameworks to help foster ethical thinking among IoT developers. One of these tools will be the Privacy Ethical and Social Impact Assessment (PESIA), which augments and interacts with the Data Protection Impact Assessment from GDPR. The PESIA is being developed predominantly by Alessandro Mantelero at the Polytechnic University of Turin with the help of ORG. It will be a voluntary, self-assessment tool to help organisations who collect and process personal data to assess the wide variety of risks and repercussions related to how they use data.

While Privacy Impact Assessments and Data Protection Impact Assessments look primarily at issues around privacy, the PESIA extends beyond that by including the ethical dimensions and societal consequences of using data. Privacy- and data protection-focussed assessments seldom address issues like the possibility of discrimination affecting individuals and groups as a result of decisions made using big data. This is despite Recital 75 of the GDPR, which informs the Article 35 impact assessment and highlights "social disadvantage" and "discrimination" as potential consequences of data processing. These considerations will be integrated within the PESIA.

The PESIA emphasises a public, participatory approach where a product or service engages its users and people it affects in the process of identifying the issues around relevant privacy, ethical, and social impacts. This contrasts with Privacy Impact Assessments and Data Protection Impact Assessments which, for the most part, are carried out internally and are not necessarily easily accessible for users and customers.

We hope that the PESIA will help developers to integrate ethical and social values into their work and the devices and services they create. Previous attempts to create a Social Impact Assessment have highlighted the significant investment of time and other resources that can be required for organisations to assess the impact of their products and services. For this reason, the PESIA aims to be easy to implement by people within the organisation itself – at least for the early stages of the analysis. Some involvement by experts may sometimes be necessary to assess social consequences of data processing.

The PESIA will not be a box-ticking exercise where organisations assess privacy-, ethical- and social impact-related issues without fully engaging with the consequences of their work. Instead, our goal in developing the PESIA is that the tool will help IoT developers gain a clearer understanding of the ways in which their use of data has an impact on society and of the ethical implications of their work. Being a voluntary self-assessment, the PESIA leaves developers to decide how to address the issues that are raised in the process of thinking through these implications.

Because ethical and social values can vary significantly between countries and cultures, one of the most challenging aspects of developing the PESIA is defining the ethical and social values that the assessment uses. This is quite a significant shift from some aspects of conventional data protection assessments where technological considerations such as data security can be much more easily generalised across countries and cultures.

The VIRT-EU project is looking to address this challenge by separating out the PESIA into layers. The foundational layer presents the common values of international human rights charters and other law including the EU Charter of Fundamental Rights. It also brings in the case law of decisions regarding data processing made by data protection authorities, the European Data Protection Board (previously the Article 29 Data Protection Working Party), the European Court of Justice and the European Court of Human Rights. This layer should directly allow us to secure common ground across different contexts. The PESIA assessment will provide short cases and examples to help developers (who usually do not have a legal background) to understand these questions.

The second layer deals with the diversity of ethical and social values across Europe. The project is undertaking a comparative analysis of the decisions of different national data protection authorities on similar cases to identify values that may have informed and led to different outcomes in those cases. Specific values will not necessarily be obvious in those legal decisions and we do not aim to be able to describe or define national values.

The third and most specific layer involves what sorts of values are enacted by IoT developer communities. Our colleagues at the London School of Economics and at the IT University of Copenhagen are trying to learn from IoT developers through active participation in those communities. In doing so, they hope to gain an understanding of their work practices and the ethical and social values they enact in the course of technology development.

We will be developing the PESIA assessment over the next year and then trialling it with IoT developers. We hope these trial sessions will provide feedback about the content and process of the assessment. If you are a UK-based IoT developer and are interested in participating in these trial sessions, please contact ORG researchers at and we will contact you closer to the time.

[Read more]

September 04, 2018 | Mike Morel

We need copyright reform that does not threaten free expression

The controversial Copyright Directive is fast approaching another pivotal vote on 12 September. For the third time in half a year MEPs will decide whether Article 13 - or something even worse - will usher in a new era, where all content is approved or rejected by automated gatekeepers.

Seen through an economic lens, the Directive’s journey is a battle between rights holders and tech giants. Yet a growing chorus of ordinary citizens, Internet luminaries, human rights organisations and creatives has rightly expanded the debate to encompass the missing human dimension.

Open Rights Group opposes Article 13 - or any new amendments proposing similar ideas - because it poses a real threat to the fundamental right to free speech online.

Free speech defenders claimed a victory over industry lobbyists this summer when MEPs rejected plans to fast-track the Directive, and a lasting triumph is now within reach. UK residents are in an especially strong position to make a difference because many of their MEPs remain undecided. Unlike in some other EU states, voting patterns in the UK are not falling strictly along party lines.

This time new amendments will be added, and the underlying principles of Article 13 will again face a vote. They include:

Changes to Internet platform liability

If Internet platforms become directly liable for user content, they will become de facto copyright enforcers. This will leave them little choice but to introduce general monitoring of all user content with automated filters. Companies are not fit to police free speech. To avoid penalties they will err on the side of caution and over-block user content.

The implicit or explicit introduction of upload filters

Everything we know about automated filters shows they struggle to comprehend context. Yet identifying the vital legal exceptions to copyright that enable research, commentary, creative works, parody and more requires a firm grasp of context. An algorithm’s poor judgement will cause innocent speech to be routinely blocked along with copyright violations.

The introduction of general monitoring

General monitoring of all user content is a step backwards for a free and open Internet. It is also technically infeasible to monitor every form of content covered by Article 13's extraordinarily wide mandate, which includes text, audio, video, images and software.

Outspoken Article 13 cheerleader Axel Voss MEP said: “Censorship machines is not what we intend to implement and no one in the European Parliament wants that.” But whatever the intention, automated filters are what the proposals would deliver in practice. This is what happens when copyright reform is pursued with little consideration for human rights.

The proposals within Article 13 would change the way the Internet works, from free and creative sharing to a system where anything can be removed without warning by computers. This is far too high a price to pay for copyright enforcement. We need a copyright reform that does not sacrifice fundamental human and digital rights.

If you’re a UK resident, please contact your MEP before they vote Wednesday 12 September. You can use our tool to reach them:

If you live outside the UK, try this website:

[Read more]

August 21, 2018 | Javier Ruiz

The right of access in GDPR: What are the debates?

There are many debates about the right of access in GDPR, but most of them have yet to be discussed widely outside academia. Here we cover some of those debates, including around subject access requests and privacy protections.

This blogpost is a small section of a much larger research report Debates, awareness, and projects about GDPR and data protection. The report complements the launch of the Digital Rights Finder tool delivered by Projects by IF and Open Rights Group. Here we highlight some of the most interesting and important debates around GDPR (General Data Protection Regulation).

The right of access to personal information, codified in Article 15 of GDPR, is one of the key elements of the European data protection framework, enabling the exercise of other rights and providing for the fundamental “right to data”. This right is set to expand, as GDPR removes several practical barriers to its exercise, such as the payment of fees. This new situation presents some potential challenges, particularly for organisations implementing automated tools to support the exercise of the right.

Third parties

The use of third parties for subject access requests (SARs) is fairly common - for example, solicitors acting on behalf of their clients. The removal of the associated fees in GDPR will almost certainly trigger a huge increase in the use of bulk third-party SARs as part of a growing contestation of data practices.

Many of the tools we discuss in the report facilitate SARs in ways that do not require the third party to make the request - for example, by creating templates or pre-populated emails that the data subject sends from their own email client. In these cases there is no real third party, although the facilitators will bear some responsibility if the texts they provide are inaccurate or somehow cause the request to fail.
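
As a rough illustration of this template-based approach - not a description of how any particular tool actually works - the sketch below pre-populates a subject access request that the data subject would send themselves from their own email client. The wording, addresses and field names are hypothetical, and the template text is not legal advice.

```python
# Minimal sketch of the "pre-populated template" approach: the tool only fills
# in a SAR for the data subject to send from their own email client, so no
# third party handles the request itself. All details below are illustrative.

from string import Template
from urllib.parse import quote

SAR_TEMPLATE = Template(
    "Dear $organisation,\n\n"
    "I am writing to make a subject access request under Article 15 of the GDPR.\n"
    "Please provide a copy of all personal data you hold about me, together with\n"
    "the purposes of processing, the recipients of the data and its retention period.\n\n"
    "Details to help you locate my records: $identifiers\n\n"
    "Yours faithfully,\n$name"
)

def mailto_link(to_address: str, organisation: str, name: str, identifiers: str) -> str:
    """Build a mailto: link that opens the user's own email client, pre-filled."""
    body = SAR_TEMPLATE.substitute(organisation=organisation, name=name, identifiers=identifiers)
    return f"mailto:{to_address}?subject={quote('Subject access request')}&body={quote(body)}"

print(mailto_link("dpo@example.com", "Example Ltd", "A. Person", "account email a.person@example.net"))
```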

In other cases, the intermediary will communicate directly with the organisation holding the data. This is perfectly admissible, and the ICO has provided guidance on the matter that essentially says it is the responsibility of the third party to demonstrate that they are entitled to act on behalf of the data subject. The ICO also says that if a data controller is concerned that the subject may not fully understand the implications of sharing the data with a third party, it can send the data to the subject directly.

Organisations carrying out SARs will need to ensure they document their entitlement and that the people on whose behalf they act are fully aware of the implications, including what these organisations may want to do with the data. In some cases these third parties will want to analyse the responses for research or other purposes, or the SARs may be part of some broader complaint or legal action. This will create a new set of data protection obligations for the third party.

SARs involving children require particular care, as in principle the child should be the recipient of the response if he/she is mature enough – which can be complicated to assess.

Coordination of subject access requests

There are many projects that attempt to coordinate subject access requests targeting a company or a sector. There are concerns among some privacy activists that this could be used by some data controllers to reject the requests as excessive or manifestly unfounded, or attempt to charge a fee.

In principle each request should be considered independently, and the organisation will have to demonstrate its grounds for rejection. Batch requests are fairly common and should not be considered excessive.

The debate centres on whether a company can try to reject a coordinated SAR campaign as unfounded if they can argue that the individuals are using the SARs as a punitive tool or for reasons unrelated to data protection, for example in order to reverse engineer a database.

Recital 63 GDPR states that the right of access is there “in order to be aware of, and verify, the lawfulness of the processing”, which could be understood in fairly narrow terms of data protection. However, Art 15 GDPR simply states that individuals have the right to obtain information without any consideration as to the purposes. Given that recitals are not legally binding, it seems that there are no strong grounds for such rejection, but national courts may take a different view.

Repeated requests by the same person are a different matter, and may be considered excessive more easily if not enough time has passed or it is unlikely that enough has changed to deserve another request.

Privacy protections that hinder access

One potential pitfall, and a controversial topic around the right of access, is the extent to which privacy-protecting practices may hinder the exercise of this and other data rights. Companies increasingly use pseudonymisation, data compartmentalisation and other technical measures that make the data harder to exploit in the event of a security breach. In some cases not even company employees can fully access the data and link it to a named individual.

These practices generally fall under the rubric of privacy – or data protection – by design, which is part of GDPR and something that normally is perceived in positive terms. The problems arise when the person trying to access the data is not a hostile third party, but the person whose data was processed in the first place.

Michael Veale, Reuben Binns and Jef Ausloos have argued that these privacy by design techniques focus exclusively on protecting confidentiality and the identification of individuals, but the data is still potentially re-identifiable by third parties with enough capabilities. At the same time the added difficulties in identifying specific individuals make it very difficult to exercise data subject rights, such as access, erasure and objection.
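
To make this tension concrete, here is a minimal sketch under our own assumptions - it does not describe any particular company's system. Records are stored under a keyed hash of a device identifier; once the key is rotated or destroyed, even the controller struggles to find the records in response to an access request, yet the pseudonymised data itself persists and may still be re-identifiable from auxiliary information such as unique location traces.

```python
# Illustrative sketch (our own, not any company's actual design) of how
# pseudonymisation can frustrate a subject access request while the data lives on.

import hmac, hashlib, secrets

def pseudonym(device_id: str, key: bytes) -> str:
    """Replace a raw identifier with a keyed hash before storage."""
    return hmac.new(key, device_id.encode(), hashlib.sha256).hexdigest()

key = secrets.token_bytes(32)
store = {pseudonym("aa:bb:cc:dd:ee:ff", key): {"journeys": ["Station A -> Station B"]}}

# Later, the data subject makes an access request quoting their device identifier.
# If the hashing key has since been rotated or destroyed, the lookup fails ...
new_key = secrets.token_bytes(32)
print(store.get(pseudonym("aa:bb:cc:dd:ee:ff", new_key)))   # None: record cannot be found

# ... yet the pseudonymised records still exist, and a party holding enough
# auxiliary data (e.g. unique location traces) might still re-identify them.
print(store.get(pseudonym("aa:bb:cc:dd:ee:ff", key)))       # the data is still there
```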

The authors document their own research with two case studies. In one case, involving wifi data collected by TfL on the London Underground and used to track movements, subject access requests could not work because the data had been masked using cryptographic techniques. However, it has been demonstrated that location traces are so unique that re-identification is very easy.

The authors also attempted to obtain recordings made by Siri, the personal assistant provided in Apple products. Apple said it was unable to provide the recordings it holds because they cannot be linked to the individual, as they are stored under different identifiers and there is no retrieval mechanism. The authors made various proposals for how information systems could be engineered to better support rights while preserving privacy, and for how to manage any trade-offs involved.

A similar case has been documented by Paul-Olivier Dehaye, who asked Facebook for certain information held on him and was refused because the data in question was kept in backup storage and was not accessible.

The above are only some of the possible points of contention around the right of access to personal data. Once GDPR sinks in we will have a better understanding of whether these issues become widespread obstacles to the exercise of the right, or whether new issues appear.

[Read more]