Blog


July 05, 2018 | Mike Morel

MEPs hold off Article 13's Censorship Machine

In a major triumph for free speech over digital censorship, MEPs voted today to REJECT fast-tracking the EU Copyright Directive, which contains the controversial Article 13.

The odds were steep, but thanks to everyone who contacted their representatives, MEPs got the message that Article 13 and its automated “upload filters” would be a catastrophe for free expression online.

In the final days, support for Article 13 collapsed, as pressure from real people like you convinced MEPs that they needed a rethink.

Today’s victory is a rude awakening to industry lobbyists who expected Article 13 to pass quietly under the radar. We can expect a fierce battle in the coming months as the Copyright Directive returns to the EU Parliament for further debate.

Today’s vote preserves our ability to speak, create, and express ourselves freely on the Internet platforms we use every day. Instead of rolling over and putting computers in charge of policing what we say and do, we’ve bought ourselves some time to foster public debate about the wisdom, or lack thereof, behind automated censorship.

Today’s battle is won but the filter wars against free speech continue. We need your help, because the proposal is not dead yet. We will be fighting it again in September. If you haven’t already, join thousands of ORG members who make our work possible. Become a member today!

[Read more]


June 26, 2018 | Ed Johnson-Williams

A new GDPR digital service: the crowdsourced ideas

ORG supporters sent in some great ideas for a new digital service about rights under GDPR. We take a look at some of the best ones.

A few months ago we put out a call for ideas for a new digital service that would help people use their rights under the General Data Protection Regulation (GDPR).

[Image: post-it notes laid out with a big light bulb in the middle]

The ideas people sent in were really varied and we could have gone with many of them. Some were brilliant, but fell outside the scope of the project or were too complex for us to do justice to. We didn't want to let good ideas go to waste, so we thought we'd tell you about a few of them here. Hopefully someone will spot a project they want to build!

Currently, we're working with Projects by IF on a tool that aims to make it easier for consumers to understand and exercise their rights under GDPR, starting by making privacy policies easier to understand. We're focusing first on businesses in the fintech sector, as this is an area of innovative and complex data practices, but we plan to expand that over time. This work is funded by a grant from the Information Commissioner's Office (ICO).

Here are some of the ideas supporters sent in.

Young person's data cleaner

This idea was to help teenagers understand and then 'clean' their digital identity using the right to erasure under GDPR. People would see visualisations of information pulled in from their key online accounts. They would then be able to use different websites' facilities for requesting that data relating to them be deleted – possibly by using links that jump straight to the relevant functionality on a website.
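By way of illustration only (this sketch is ours, not part of the original idea), here is the kind of catalogue such a tool might be built around: a mapping from each service to its erasure-request route. The service names, data categories and URLs below are hypothetical placeholders, not real endpoints.

```python
from dataclasses import dataclass

@dataclass
class ErasureRoute:
    """Where a service lets users exercise the GDPR right to erasure."""
    service: str
    data_categories: list[str]  # what the visualisation would show for this account
    erasure_url: str            # deep link to the service's deletion/request page

# Hypothetical catalogue; real entries would have to be researched and maintained.
CATALOGUE = [
    ErasureRoute("ExampleSocial", ["posts", "photos", "location history"],
                 "https://social.example/settings/delete-my-data"),
    ErasureRoute("ExampleVideo", ["watch history", "comments"],
                 "https://video.example/privacy/erasure-request"),
]

def erasure_links(selected_services: set[str]) -> list[str]:
    """Return the links a user would follow to 'clean' the accounts they select."""
    return [route.erasure_url for route in CATALOGUE if route.service in selected_services]

print(erasure_links({"ExampleSocial"}))
# ['https://social.example/settings/delete-my-data']
```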

A consumer compensation tool

This tool idea would make it easier for people to claim compensation after a data breach. It would use a form that asked for all the information required to make a claim and would then submit the claim. Ideally, an organisation would take note that lots of people were making a claim and could handle all the claims en masse in a consistent manner. Users could say that they only wanted a nominal amount of compensation or that they wanted to donate the compensation to a rights-based cause such as Open Rights Group or to the site itself.

GDPR benchmark

This idea from William Heath and others was to build a tool which helps people easily make a Subject Access Request (SAR) to an organisation, and also lets them give feedback about how good or bad their experience was. The website would present aggregated ratings of the quality of data that organisations typically send back to people.
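As a very rough sketch of the aggregation step (the rating scale, field names and organisations here are our own illustrative assumptions, not part of the original proposal):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical feedback records: (organisation, rating out of 5) left by people
# after making a Subject Access Request through the tool.
feedback = [
    ("ExampleBank", 4),
    ("ExampleBank", 2),
    ("ExampleTelecom", 5),
]

def benchmark(records):
    """Aggregate per-organisation ratings of how well SARs were handled."""
    by_org = defaultdict(list)
    for organisation, rating in records:
        by_org[organisation].append(rating)
    return {
        organisation: {"average_rating": round(float(mean(ratings)), 1),
                       "responses": len(ratings)}
        for organisation, ratings in by_org.items()
    }

print(benchmark(feedback))
# {'ExampleBank': {'average_rating': 3.0, 'responses': 2},
#  'ExampleTelecom': {'average_rating': 5.0, 'responses': 1}}
```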

GDPR Quiz

This idea also came from William Heath and others. It was to create a social media quiz that helps people better understand their rights and learn how to use them. It would link to good resources and would frame everything in positive language, reassuring people and encouraging them to use their rights in a constructive way.

Open Rights Group recently launched a quiz in response to this great suggestion, so take a look!

[Read more]


June 13, 2018 | Alex Haydock

Victory for Open Rights Group in Supreme Court web blocking challenge

The Supreme Court ruled today that trade mark holders must bear the cost of blocking websites which sell counterfeit versions of their goods, rather than passing those costs on to Internet service providers.

[Image: Jim, Alex and Myles at the Supreme Court]

This decision comes in the case of Cartier v BT & Others, in which the jeweller Cartier sought a court order requiring ISPs to block websites which sold goods infringing their trade marks. ORG have been intervening in the case along with help from solicitor David Allen Green.

Lower courts have already ruled in this case that trade mark holders such as Cartier have the right to request blocking injunctions against websites in the same way as copyright holders are already able to. The courts decided that general powers of injunction were sufficient for trade mark holders to request such blocks, even though copyright holders have a specific power granted by the Copyright, Designs and Patents Act 1988.

The question for the Supreme Court today was whether a trade mark holder who obtained a blocking injunction against infringing sites would be required to indemnify ISPs against the costs of complying with those injunctions.

If trade mark holders were able to demand blocks which ISPs were required to pay for, this could open the door to large-scale blocking of many kinds of websites. ORG was concerned that ISPs would pass those costs on to their customers, and that increasingly trivial blocks could be requested with little justification, even where blocking would not be economically justifiable.

Today, the Supreme Court ruled unanimously that the rights-holders would be required to reimburse the ISPs for reasonable costs of complying with blocking orders.

The Court based their judgment on a few factors.

  • Firstly, that there is a general principle in law that an innocent party is entitled to be reimbursed for their reasonable costs when complying with orders of this kind.

  • Secondly, the Court rejected the suggestion that it was fair for ISPs to contribute to the costs of enforcement because they benefit financially from the content which is available on the internet, including content which infringes intellectual property rights.

Although ISPs have been complying with orders to block copyright-infringing sites for a number of years, the issue of who bears the cost of implementing such blocking has not been challenged until today.

We expect that, in future, ISPs may try to use this ruling to argue that they should be remunerated for the cost of complying with copyright injunctions as well as trade mark injunctions.

This is a very welcome judgment for ORG. ISPs are now administering lists of around 2,500 domain blocking injunctions. ORG’s Blocked! tool is currently tracking over 1,000 sites which have been blocked using these orders. If rights-holders are now required to bear costs, then we should see better administration of the blocks by the ISPs themselves.

[Read more]


May 08, 2018 | Jim Killock

The government is acting negligently on privacy and porn AV

Last month, we responded to the BBFC’s Age Verification consultation. Top of our concerns was the lack of privacy safeguards to protect the 20 million plus users who will be obliged to use Age Verification tools to access legal content.

We asked the BBFC to tell government that the legislation is not fit for purpose, and that they should halt the scheme until privacy regulation is in place. We pointed out that card payments and email services are both subject to stronger privacy protections than Age Verification.

The government’s case for non-action is that the Information Commissioner and data protection fines for data breaches are enough to deal with the risk. This is wrong: firstly, because fines cannot address the harm created by the leaking of people’s sexual habits; and secondly, because data breaches are only one aspect of the risks involved.

We outlined over twenty risks from Age Verification technologies. We pointed out that Age Verification contains a set of overlapping problems. You can read our list below. We may have missed some: if so, do let us know.

The government has to act. It has legislated this requirement without properly evaluating the privacy impacts. If and when it goes wrong, the blame will lie squarely at the government’s door.

The consultation fails to properly distinguish between the different functions and stages of an age verification system. The risks associated with each are separate but interact. Regulation needs to address all elements of these systems. For instance:

  1. Choosing a method of age verification, whereby a user determines how they wish to prove their age.

  2. The method of age verification, where documents may be examined and stored.

  3. The tool’s approach to returning users, which may involve either:

    1. attaching the user’s age verification status to a user account or log-in credentials; or

    2. providing a means for the user to re-attest their age on future occasions.

  4. The re-use of any age verified account, log-in or method over time, and across services and sites.

The focus of attention has been on the method of pornography-related age verification, but this is only one element of privacy risk we can identify when considering the system as a whole. Many of the risks stem from the fact that users may be permanently ‘logged in’ to websites, for instance. New risks of fraud, abuse of accounts and other unwanted social behaviours can also be identified. These risks apply to 20-25 million adults, as well as to teenagers attempting to bypass the restrictions. There is a great deal that could potentially go wrong.

Business models, user behaviours and potential criminal threats need to be taken into consideration. Risks therefore include:

Identity risks

  1. Collecting identity documents in a way that allows them to be correlated with the pornographic content viewed by a user represents a serious risk to personal and potentially highly sensitive data.

Risks from logging of porn viewing

  1. A log-in from an age-verified user may persist on a user’s device or web browser, creating a history of views associated with an IP address, location or device, thus easily linked to a person, even if stored ‘pseudonymously’.

  2. An age verified log-in system may track users across websites and be able to correlate tastes and interests of a user visiting sites from many different providers.

  3. Data from logged-in web visits may be used to profile the sexual preferences of users for advertising. Tool providers may encourage users to opt in to such a service with the promise of incentives such as discounted or free content.

  4. The current business model for large porn operations is heavily focused on monetising users through advertising, exacerbating the risks of re-use, recirculation and re-identification of web visit data.

  5. Any data that is leaked cannot be revoked, recalled or adequately compensated for, leading to reputational, career and even suicide risks.

Everyday privacy risks for adults

  1. The risk of pornographic web accounts and associated histories being accessed by partners, parents, teenagers and other third parties will increase.

  2. Companies will trade off security for ease-of-use, so may be reluctant to enforce strong passwords, two-factor authentication and other measures which make it harder for credentials to leak or be shared.

  3. Everyday privacy tools used by millions of UK residents, such as ‘private browsing’ modes, may become more difficult to use due to the need to retain log-in cookies, increasing the data footprint of people’s sexual habits.

  4. Some users will turn to alternative methods of accessing sites, such as using VPNs. These tools have their own privacy risks, especially when hosted outside of the EU, or when provided for free.

Risks to teenagers’ privacy

  1. If age-verified log-in details are acquired by teenagers, personal and sexual information about them, such as particular videos viewed, may be shared, including among their peers. This could lead to bullying, outing or worse.

  2. Child abusers can use access to age verified accounts as leverage to create and exploit a relationship with a teenager (‘grooming’).

  3. Other methods of obtaining pornography would be incentivised, and these may carry new and separate privacy risks. For instance the BitTorrent network exposes the IP addresses of users publicly. These addresses can then be captured by services like GoldenEye, whose business model depends on issuing legal threats to those found downloading copyrighted material. This could lead to the pornographic content downloaded by young adults or teenagers being exposed to parents or carers. While copyright infringement is bad, removing teenagers’ sexual privacy is worse. Other risks include viruses and scams.

Trust in age verification tools and potential scams

  1. Users may be obliged to sign up to services they do not trust or are unfamiliar with in order to access specific websites.

  2. Pornographic website users are often impulsive, with lower risk thresholds than in other transactions, and the sensitivity of the transactions involved makes them less likely to report fraud. Pornography users are therefore particularly vulnerable targets for scammers.

  3. The use of credit cards for age verification in other markets creates an opportunity for fraudulent sites to engage in credit card theft.

  4. Use of credit cards for pornography-related age verification risks teaching people that this is normal and reasonable, opening up new opportunities for fraud, and going against years of education asking people not to hand card details to unknown vendors.

  5. There is no simple means to verify which particular age verification systems are trustworthy, and which may be scams.

Market related privacy risks

  1. The rush to market means that the tools that emerge may be of variable quality and take unnecessary shortcuts.

  2. A single pornography-related age verification system may come to dominate the market and become the de-facto provider, leaving users no real choice but to accept whatever terms that provider offers.

  3. One age verification product which is expected to lead the market — AgeID — is owned by MindGeek, the dominant pornography company online. Allowing pornographic sites to own and operate age verification tools leads to a conflict of interest between the privacy interests of the user, and the data-mining and market interests of the company.

  4. The online pornography industry as a whole, including MindGeek, has a poor record of privacy and security, littered with data breaches. Without stringent regulation prohibiting the storage of data which might allow users’ identity and browsing to be correlated, there is no reason to assume that data generated as a result of age verification tools will be exempt from this pattern of poor security.

[Read more]


April 30, 2018 | Alex Haydock

The latest ruling against the Snooper's Charter is welcome, but the Courts need to do more

Last Friday - 27 April 2018 - the High Court delivered its judgment in a challenge brought by human rights organisation Liberty against the mass surveillance powers of the Investigatory Powers Act.

The Investigatory Powers Act was the Government’s answer to the expiry of DRIPA, the infamous emergency surveillance legislation rushed through Parliament in a matter of days back in 2014. DRIPA automatically expired at the end of 2016, so the Government ensured that most of the powers it contained were written into permanent legislation with the Investigatory Powers Act.

Liberty launched their challenge following the Court of Justice of the European Union’s judgment against DRIPA, in which the Court had ruled that the bulk and non-targeted surveillance powers found in DRIPA were not compatible with EU law.

In their judgment last Friday, the High Court agreed with some of the points the CJEU had raised in their ruling, and judged that parts of the Investigatory Powers Act were similarly unlawful under EU law because:

  • Access to retained data was not limited to the purposes of combating “serious crime”; and
  • Access to retained data was not subject to prior review by a court or administrative body.

The Court issued the Government with a deadline of 1 November 2018 to ensure that the Investigatory Powers Act’s surveillance provisions were brought into line with EU law.

As can be seen from the wording of the points above, the Court mostly took issue with the fact that data which had been collected under the surveillance regime could be accessed without proper safeguards, and did not condemn the collection of the data in the first place.

The Court rejected Liberty’s argument that the Investigatory Powers Act amounted to mass surveillance of the sort that the CJEU had ruled unlawful in Watson:

"In the light of this analysis of the structure and content of Part 4 of the 2016 Act, we do not think it could possibly be said that the legislation requires, or even permits, a general and indiscriminate retention of communications data."

We are disappointed that the Court stopped short of ruling that the indiscriminate collection of surveillance data was unlawful. The CJEU have been quite clear in their opinion that the mass retention of data and the associated chilling effects result in a disproportionate intrusion into human rights.

While the Court claims that the powers found in the Investigatory Powers Act do not mandate indiscriminate retention of internet records, the reality is that the UK already has in place the indiscriminate retention of telecoms records for all major providers, and one of the functions of the IPA is to expand this to internet records.

The courts need to curtail the Government’s ability to use these powers to retain all internet and phone records, by returning to the original CJEU ruling and taking on board the CJEU’s advice on how to do so.

For a closer look at the High Court’s recent ruling, Graham Smith has written a fantastic blog post on the topic.

This recent ruling offers some promise, but is only a minor victory and the battle against mass surveillance is far from over. Liberty are currently crowdfunding for the next stage of their legal challenge.

[Read more]


April 25, 2018 | Alex Haydock

We asked the BBFC to warn the Government about the dangers of age verification

Today we are publishing our response to the BBFC consultation on age verification for online pornography. We're calling on the BBFC to tell the Government about the dangers of the policy.

On the 23rd April 2018, the British Board of Film Classification (BBFC) closed their consultation on their age verification guidelines for online pornography. The consultation called for the public’s views on the guidance that the BBFC plan to issue to the providers of age verification tools.

Under the Digital Economy Act, websites will soon have to ensure that all UK users are aged 18 or over before allowing them to view pornographic content. As the age verification regulator, it is the BBFC’s job to dictate how these age verification systems should work.

Open Rights Group submitted a response and highlighted a number of issues with the proposed age verification system. Today we are publishing our full 22-page consultation response, which you can find linked in this blog post. We are also grateful to all the members and supporters who used our online tool to submit their own responses to the BBFC. We counted over 500!

Open Rights Group’s Recommendations to the BBFC

In our consultation response, we raised a number of concerns with age verification systems. Most notably, we suggested that:

  • The aim of age verification is defined as the “protection of children”; however, under scrutiny, it is clear that the scheme will be unable to achieve this aim.
  • This consultation indicates that the BBFC intend to consider material which ought to be out-of-scope for an age verification system, such as extreme pornography and child abuse material.
  • The BBFC also indicate that they intend to consider the effectiveness of a response to a non-compliant person before issuing it, but do not indicate an intent to consider the proportionality of that response.
  • The scheme as a whole lacks any specific and higher level of privacy protection, despite the existence of unique problems. In particular, any data breaches cannot be properly compensated for in terms of reputational, career and relationship consequences.
  • The scheme risks infringing free expression rights by granting the BBFC web blocking powers.
  • The ability of the BBFC to give notice to ancillary service providers creates legal uncertainty and incentivises disproportionate actions on non-UK persons.
  • As a whole, the age verification scheme fails to understand the limitations faced by the BBFC in terms of regulating overseas providers in a fair and proportionate manner.

Our full response and next steps

If you wish to read our full consultation response, you can download the full PDF here.

The BBFC will now spend some time in the coming weeks considering submissions to the consultation, and may choose to amend some of their guidance in response. The guidance must then be approved by both Houses of Parliament before it becomes official.

Other responses

Some other responses to the BBFC’s consultation can be found below:

[Read more]


April 24, 2018 | Jim Killock

Google or CTIRU: who is fibbing about terror takedowns?

Today, Google released their latest transparency report for Youtube takedowns. It contains information about the number of government requests for terrorist or extremist content to be removed. For a number of years, the government has promoted the idea that terrorist content is in rampant circulation, and that the amount of material is so abundant that the UK police alone are taking down up to 100,000 pieces of content a year.

[Image: CTIRU statistics graphic: 249,091 pieces of material removed as of 2016]

These referrals, to Google, Facebook and others, come from a unit hosted at the Metropolitan Police, called CTIRU, or the Counter-Terrorism Internet Referrals Unit. This unit has very minimal transparency about its work. Apart from claiming to have removed over 300,000 pieces of terrorist-related content over a number of years, it refuses to say how large its workforce or budget are, and has never defined what a piece of content is.

Google and Twitter publish separate takedown request figures for the UK that must be largely from CTIRU. The numbers are much smaller than the tens of thousands that might be expected at each platform given the CTIRU figures of around 100,000 removals a year. For instance, Google reported 683 UK government takedown requests, covering 2,491 items, for January to June 2017.

Google and Twitter’s figures suggest that CTIRU files perhaps 2,000-4,000 removal requests a year, covering maybe 12,000 items at most, which would mean CTIRU’s own figure is inflated by something in the region of 1,000%.
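A back-of-the-envelope calculation, using only the estimates quoted above (none of these numbers come from CTIRU itself), shows the scale of the gap:

```python
# Rough check of the discrepancy, using the estimates quoted in this post.
ctiru_claimed_per_year = 100_000   # removals per year implied by CTIRU's own claims
platform_items_per_year = 12_000   # upper estimate implied by Google/Twitter reports

ratio = ctiru_claimed_per_year / platform_items_per_year
print(f"CTIRU's figure is roughly {ratio:.0f}x the platform-reported total")
print(f"(an apparent over-count of about {(ratio - 1) * 100:.0f}%)")
# Roughly 8x, i.e. an over-count somewhere in the region of 700-1,000%
# once the uncertainty in both estimates is allowed for.
```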

A number of CTIRU requests have been published on the takedown transparency database Lumen. These sometimes include more than one URL for takedown. However, this alone does not explain the disparity.

Perhaps a ‘piece of terrorist content’ is counted as that ‘piece’ viewed by each person known to follow a terrorist account, or perhaps everything on a web page is counted as a piece of terrorist content, meaning each web page might contain a terrorist web font, terrorist Javascript and terrorist CSS file.

Nonetheless, we cannot discount the possibility that the methodologies for reporting at the companies are in some way flawed. Without further information from CTIRU, we simply don’t know whose figures are more reliable.

There are concerns that go beyond the statistics. CTIRU’s work is never reviewed by a judge, and there is no appeals process to ask CTIRU to stop trying to remove a website or content. It compiles a secret list of websites to be blocked on the public estate, such as schools, departmental offices or hospitals, supplied to unstated companies via the Home Office. More or less nothing else is known, except for the headline figure.

Certainly, CTIRU do not provide the same level of transparency as Google and other companies claim to be providing. 

People have tried extracting further information from CTIRU, such as the content of the blacklist, but without success. Ministers have refused to supply financial information to Parliament, citing national security. ORG is the latest group to ask for information, requesting a list of statistics and a list of documents, and was turned down on grounds of national security and crime prevention. In the case of statistics, CTIRU are currently claiming they hold no statistics other than their overall takedown figure, which, if true, seems astoundingly lax from even a basic management perspective.

The methodology for calculating CTIRU’s single statistic needs to be published, because what we do know about CTIRU is meaningless without it. Potentially, Parliament and the public are being misled; otherwise, misreporting by Internet platforms needs to be corrected.

 

[Read more]


April 04, 2018 | Henry Prince

Filters are for coffee and water, not copyright.

The EU’s draft Directive on Copyright is a massive problem. It is an attempt by the EU to reform copyright legislation - in practice it effectively involves a huge ‘censorship machine’ which would filter all uploads from people in the EU to determine if they are infringing on copyright.

It would be the largest internet filter Europe has ever seen - reading every single piece of text uploaded to the internet, and watching every video. An algorithm would decide whether what you want to post will be seen or not.

In practice, the vague wording of the draft Directive would make a huge number of online platforms uncertain about whether or not they are breaking the law. This means that many platforms are likely to err on the side of aggressive filtering rather than getting embroiled in long and extremely expensive legal battles.

Not all user-generated content sites are Google/Youtube. Many fringe culture sites, like LGBTQ+ dating apps, are smaller operations that would sooner limit their users’ activities than risk being taken to court. Wouldn’t this homogenise the rich cultural landscape that we benefit from in the EU? Surely, in this age of fierce fighting for gender equality, we shouldn’t be allowing new laws that unfairly restrict the activities of minority groups.

Sites wishing to stay active and comply with these new regulations could face the crippling costs of implementing the new measures, such as scanning and identifying all uploaded images or songs (very expensive content identification technology) and getting permission from the owners to host and give access to all the content. It seems to be a lose-lose scenario for sites that have enabled people all over the world to communicate and share content on this unprecedented scale.

The current system regulating copyright infringement online is a negligence-based liability regime. This means, so long as platforms take action when notified of illegal content, they can’t be held liable for it. This ‘notice and take down’ system places the responsibility of tackling infringement on rights owners and leaves platforms open for users to communicate in a free and uninhibited way. Article 13 would flip that system and place the burden of monitoring user-uploads on the platforms. Social media sites, video sharing sites and even dating sites would be liable for any unlicensed uploads on their servers.

But what prompted this new directive? The major music corporations are upset because Youtube is the biggest streaming site, yet the revenue from Youtube is significantly less than from other streaming services like Spotify. The trouble is, their arguments never acknowledge that Youtube isn’t just a place for streaming music; it’s a platform where you can learn pretty much any skill. Plus, Youtube provides added value to musicians by having the biggest audience (by far) and by being a video platform. Why should Youtube pay the same as Spotify when it has that extra all-important feature? I would certainly argue that current digital content ecosystems are flawed, BUT this new directive is a totally barbaric solution.

The changes pushed for by the music industry would have much broader and far more severe implications than the damages they allegedly suffer at present. Article 13 applies to ALL types of content that people upload on the internet; this means everyone from the industry-leading content creators – the award winners – to all the empowered user-creators who create and remix in their spare time. ALL types of content also means anything protected by copyright, i.e. photos, videos, podcasts, software code, articles, music recordings and so on. Everything.

Making a mental list of the sites that host any one of these types of user-uploaded content is like trying to list all 50 states in the United States; it takes ages and you never finish because there are always some you can’t think of, but that you know exist. Innocent bystanders like Wikipedia, which hosts text and images, could be subjected to these new rules. Wikipedia is a not-for-profit; how can it be fair to impose these heavy burdens on the world’s first free online information repository?

In practice, this new directive could also affect what we choose to say online. Exceptions to copyright that allow us to make limited uses of works without permission exist for our benefit. They include research, private study, commentary, criticism, education, parody and more. Article 13 would obscure these exceptions, which stem from fundamental rights like the right to education and freedom of expression. Sound good for democracy? The fact is, an automated content filter can’t tell the difference between someone uploading a work for review and someone just uploading it. By placing the burden of policing content on platforms, we would see them play it safe and over-compensate by blocking legitimate content, to protect themselves from lawsuits.
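To see why, consider a deliberately simplified sketch of how an upload filter works: it matches what is in an upload against a database of rights-holder fingerprints and blocks on a match, but nothing in that pipeline tells it why the material is being used. (Real systems use perceptual fingerprinting rather than the toy matching below; the structural point is the same.)

```python
# Deliberately simplified stand-in for a content-identification filter.
# The filter sees *what* is in an upload, never *why* it is being used.

RIGHTS_HOLDER_DB = {"film-clip-123", "song-456"}  # fingerprints registered by rights holders

def upload_filter(segments_detected: set[str]) -> str:
    """Block any upload containing a fingerprinted segment, regardless of context."""
    return "blocked" if segments_detected & RIGHTS_HOLDER_DB else "published"

# A straightforward upload of the whole film clip:
print(upload_filter({"film-clip-123"}))                          # blocked
# A review quoting thirty seconds of the same clip under a copyright exception:
print(upload_filter({"film-clip-123", "reviewer-commentary"}))   # also blocked
```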

The killer is, all these risks and dangers come from a set of laws lobbied for by music honchos who mainly have it in for Youtube. But Youtube is already complying - it already has licenses with rights owners, Content ID technology in place, and usage data reports for creators. So… will the reforms even affect their newfound nemesis? It seems as though they’re pushing forward, crossing their fingers and toes in the hopes of getting a bigger slice of the Youtube pie, without giving a second thought to the collateral damage.

That’s why the Open Rights Group are asking you to write to your MEP and tell them to SAY NO to Article 13!

[Read more]