

March 11, 2019 | Amy Shepherd

Brexit, Data Privacy and the EU Settled Status Scheme

The UK is expected to leave the EU in less than four working weeks’ time. Roughly 3 million EU citizens reside in the UK. These two facts are not unrelated.

The EU Settled Status Scheme (“the scheme”) provides the administrative route through which all EU nationals must apply to remain in the UK after 30 June 2021, in the event of a deal, or 31 December 2020, in the event of no deal.

Operation of the scheme relies heavily on an automated data check: enter your national insurance number on the online portal and the Home Office will use HM Revenue & Customs (HMRC) and/or Department for Work and Pensions (DWP) data to determine whether you have reached the required five years of continuous residence to qualify for “Settled Status”. If the data says ‘no’, you’ll be invited to accept an outcome of “Pre-Settled Status” or to upload further documents evidencing your residence for manual checking by a Home Office caseworker.
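
The Home Office has not published the logic behind this check, which is precisely the problem discussed below. Purely to illustrate the kind of automated eligibility test being described, here is a minimal sketch in Python; every function name, data field and threshold in it is our assumption, not the actual system.

    # Purely illustrative: the Home Office has not published its rules, so every
    # function name, data field and threshold below is an assumption.
    from datetime import date

    def assess(record_dates, today=date(2019, 3, 11)):
        """Crude walk-back over HMRC/DWP activity dates: sixty covered months
        (five years), tolerating a handful of gaps, counts as 'settled'."""
        covered = {(d.year, d.month) for d in record_dates}
        streak, gaps = 0, 0
        year, month = today.year, today.month
        while True:
            if (year, month) in covered:
                streak += 1
            else:
                gaps += 1
                if gaps > 6:  # assumed tolerance for gaps, not an official figure
                    return "pre-settled status offered; upload further evidence for manual checking"
            if streak >= 60:  # five years of covered months
                return "settled status"
            month -= 1
            if month == 0:
                year, month = year - 1, 12

    # Example: roughly four years of monthly records falls short of the test.
    records = [date(y, m, 15) for y in range(2015, 2019) for m in range(1, 13)]
    print(assess(records))  # pre-settled status offered; upload further evidence ...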

The system is supposedly designed to operate smoothly and effectively. So far, however, it fails to meet either of these objectives.

Open Rights Group (ORG) and the Immigration Law Practitioners Association (ILPA) have been working together to examine how the data check operates, to press the Home Office to share more information about the data processing involved, and to propose changes that would remedy the current deficiencies.
 
The main concern our research has explored is the opaque manner in which the HMRC/DWP data is processed by the Home Office to produce an output result.

As the system presently operates, applicants aren’t fully informed about what data is reviewed in the process of deciding whether to grant them settled status, nor are they told how the Home Office’s algorithm processes that data to calculate an output result. This lack of information means that applicants who are refused settled status cannot easily identify why, or locate errors in the data or the process that they might want to challenge.

Ideally, all applicants should be presented at the outcome stage with a printable document or web page listing the data checked and the logical process that data underwent to inform the assessment of eligibility.

There is also no clear picture of what data is retained by the Home Office after the outcome is issued, with whom this data might be shared, and to what purposes it will or might be put. The Home Office states that the “raw data” provided by HMRC/DWP “disappears” as soon as an outcome is produced, but this does not answer the question of what it is doing or plans to do with the data created through the application process.

The scheme is expected to come into full effect on 30 March 2019 and millions of people will need to register to remain in the UK post-2020/2021. The issues with how the system operates are leading to a lack of trust by potential applicants, a lack of safeguards for vulnerable groups and potentially poor data protection, all of which hinder the scheme from working effectively.

The Home Office has specific legal duties to make sure the data check is conducted lawfully. These include a duty to give reasons for check outcomes, to put safeguards in place to limit how much data is collected and how it is retained and shared, to ensure caseworkers have meaningful manual oversight of the automated parts of the process, and to provide public information on how the check really works so that potential applicants can understand how the scheme is likely to apply in their case.

We’ve reached out to the Home Office with a range of questions and proposals for action but have yet to receive any substantive response. We hope that they will engage with us soon. The problems we’ve identified are largely not difficult to fix and it would be beneficial to make the necessary changes before full roll-out at the end of the month. This positive action would also significantly assist in building trust and breaking down entry barriers to citizens already worried about how so many aspects of Brexit will affect them.

For more information, or to share additional information with us, please see ILPA’s research here, our advocacy briefing here or contact amy@openrightsgroup.org



March 08, 2019 | Jim Killock

Informal Censorship: The Internet Watch Foundation (IWF)

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

The IWF, as a de facto Internet censor, has been popular with some people and organisations and controversial with others. The IWF deals with deeply disturbing material, which is relatively easy to identify and usually uncontroversial to remove or block. This is unusual in content regulation.

Nevertheless, its partners’ systems for blocking have in the past created visible disruption to Internet users, including in 2008 to Wikipedia across the UK.[1] Companies employing IWF lists have often blocked material with no explanation or notice to people accessing a URL, causing additional problems. Some of its individual decisions have also been found wanting. Additional concerns have been created by apparent mission creep, as the IWF has sought or been asked to take on new duties. Its decisions are not subject to prior independent review, and it is unclear that ISP-imposed restrictions are compatible with EU law, in particular the EU Open Internet regulations, which indicate that a legal process should be in place to authorise any blocking.[2] Ideally, this would be resolved by the government providing a simple but independent authorisation process that the IWF could access.

The IWF has made some good and useful steps to resolve some of these issues.

It has chosen to keep its remit narrow and restricted to child abuse images published on websites. This reduces the risk of over-reach. The IWF model is not appropriate where decisions are likely to be controversial.

It has an independent external review of its organisation, albeit not a well-publicised one, which could be a good avenue for people to give specific and confidential feedback.

The IWF has an appeals process, and an independent legal expert to review its decisions on appeal. The decisions it makes have the potential for widespread public effect on free expression, so could be subject to judicial review, which the IWF has recognised.

A significant disadvantage of the IWF process is that the external review applies legal principles, but is not itself a legal process, so does not help the law evolve. This weakness is found in other self-regulatory models.

There is also a lack of information about appeals, why decisions were made, and how many were made. There is incomplete information about how the process works.[3] Appeal findings are not made public.[4] Notifications are not always placed on content blocks by ISPs. This is currently voluntary.

Recommendations to IWF:

1. Adopt Freedom of Information principles for information requests
2. Ensure that the IWF’s external evaluation process is visible and accessible by third parties
3. Ensure that processes are clearly documented
4. Require notices to be placed at blocked URLs
5. Publish information about appeals, such as: the numbers made internally and externally each year; whether successful or not; and the reasoning in particular decisions

[1] See https://en.wikipedia.org/wiki/Internet_Watch_Foundation_and_Wikipedia
[2] See reference 38 below
[3] http://web.archive.org/web/20180605155231/https://www.iwf.org.uk/sites/default/files/inline-files/Content%20assessment%20appeal%20flow%20chart%20process.pdf 

[4] The BBFC, for instance, operates a system of publishing the reasoning behind its content classification appeals.



March 05, 2019 | Jim Killock

Informal Internet Censorship: The Counter Terrorism Internet Referral Unit (CTIRU)

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

The CTIRU’s work consists of filing notifications of terrorist-related content to platforms, for them to consider for removal. The unit says it has secured the removal of over 300,000 pieces of extremist content.

Censor or not censor?

The CTIRU considers its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip the platform of liability protection. Platforms may have “actual knowledge” of potentially criminal material if they receive a well-formed notification, with the result that they would be regarded in law as the publisher from that point on.[1]

At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of the material notified. That still leaves considerable scope for error. Errors could unduly restrict the speech of individuals, including journalists, academics, commentators and others who hold normal, legitimate opinions.

A handful of CTIRU notices have been made public via the Lumen transparency project.[2] Some of these show very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying members of the UKIP party as cartoon figures and as vile racists and homophobes, was considered to be an act of violent extremism, and two notices were filed by the CTIRU to have it removed for extremism. However, it is hard to see how the site could fall within the CTIRU’s remit, as its content is clearly fictional.

In other cases, we believe the CTIRU had requested removal of extremist material that had been posted in an academic or journalistic context. [3]

Some posters, for instance at wordpress.com, are notified by the service’s owner, Automattic, that the CTIRU has asked for content to be removed. This gives a user greater potential to contest or object to requests. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from making requests to remove lawful material, which companies might still action, even though removing legal material is clearly beyond the CTIRU’s remit.

When content is removed, there is no requirement to tell people viewing the content that it has been removed because it may be unlawful, what those laws are, or that the police asked for it to be removed. People who may have seen the content, or who return to view it again, are not warned that it may have been intended to draw them into illegal and dangerous activities, nor are they given advice about how to seek help.

There is also no external review, as far as we are aware. External review would help limit mistakes. Companies regard the CTIRU as quite accurate, citing a 70 or 80% success rate for its requests. That still leaves potentially a lot of requests that should not have been filed, and that might not have been accepted if put before a legally trained and independent professional for review.

Since many companies perform little or no review, and requests for the same content are filed to many companies, some of which will remove it in error and some of which will not, any errors at all should be concerning.

Crime or not crime?

The CTIRU is organised as part of a counter-terrorism programme and claims that its activities warrant operating in secrecy, including rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime.

However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be unlawful.

Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is “tipped off” that someone is watching what they are doing. Some platforms forward notices to posters, and the CTIRU does not suggest that this is problematic.

Secondly, even if the material is archived, a notification results in the destruction of evidence. Account details, IP addresses and other evidence normally vital for investigations are destroyed.

This suggests that law enforcement has little interest in prosecuting the posters of the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law enforcement grounds, as it is sold by politicians as a silver bullet in the fight against terrorism. [4]

Beyond these considerations, because removing material has an impact on free expression, and because the police may make mistakes, the CTIRU’s work should be treated as content removal rather than as a secretive matter.

Statistics

Little is known about the CTIRU’s work, but it claims to be removing up to 100,000 “pieces of content” from around 300 platforms annually. This statistic is regularly quoted to parliament and presented as evidence that major platforms are failing to remove content responsibly. It has therefore had a great deal of influence on the public policy agenda.

However, the statistic is inconsistent with transparency reports at major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much further analysis is needed to understand the impact of these URL removals, as the implication is that they must be hosted on small, relatively obscure services. [5]
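
The counting method described in footnote [5], where one piece of material mirrored on 100 file-sharing sites counts as 100 removals, is easy to illustrate. The sketch below uses entirely made-up names and figures; it simply shows how per-URL counting produces a much larger headline number than a per-item count would.

    # Illustrative only: the CTIRU counts removals per URL, so one item
    # mirrored on many sites is counted many times (see footnote [5]).
    notices = [
        {"item": "magazine-issue-12",
         "url": f"https://filehost{i}.example/magazine-issue-12"}
        for i in range(100)
    ]

    removals_by_url = len({n["url"] for n in notices})    # 100 "removals"
    removals_by_item = len({n["item"] for n in notices})  # 1 unique item

    print(removals_by_url, removals_by_item)              # 100 1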

Additionally, the CTIRU claims that no other management statistics are routinely created about its work. This seems implausible, but also, assuming it is true, negligent. For instance, the CTIRU should know its success and failure rate, or the categorisation of the different organisations or belief systems it is targeting. A failure to collect routine data implies that the CTIRU is not ensuring it is effective in its work. We find this position, produced in response to our Freedom of Information requests, highly surprising and something that should be of interest to parliamentarians.

Lack of transparency increases the risks of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government’s legitimate calls for greater transparency on these matters at platforms, it should apply the same standards to its own work.

Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much in the same way as CCTV and Biometrics have a Commissioner. Companies should publish notifications, redacted if necessary, to the Lumen database or elsewhere. Companies should make the full notifications available for analysis to any suitably-qualified academic, using the least restrictive agreements practical.

FoIs, accountability and transparency

Because the CTIRU is situated within a terrorism-focused police unit, its officers assume that their work is focused on national security matters and the prevention and detection of crime. The Metropolitan Police therefore routinely decline requests for information related to the CTIRU.

The true relationship between CTIRU content removals and matters of national security and crime prevention is likely to be subtle, rather than direct and instrumental. If the CTIRU’s removals are instrumental in preventing crime or national security incidents, then the process should not be informal.

On the face of it, the CTIRU’s position that it only files informal requests for possible content removal, and that this activity is nonetheless a matter of national security and crime prevention that means transparency requests must be denied, seems illogical and inconsistent.

The Open Rights Group has filed requests for information about key documents held, staff and finances, and available statistics. So far, only one has been successful: a request to confirm the meaning of a “piece of content”.

During our attempts to gain clarity over the CTIRU’s work, we asked for a list of statistics that are kept on file, as discussed above. This request for information was initially turned down on grounds of national security. However, on appeal to the Information Commissioner, the CTIRU later claimed that no such statistics existed. This appears to suggest that the Metropolitan Police did not trouble to find out about the substance of the request, but simply declined it without examining the facts because it was a request relating to the CTIRU.[6]

We recommend that the private sector takes specific steps to help improve the situation with CTIRU.

Recommendations to Internet platforms:

1. Publication of takedown requests at Lumen
2. Open academic analysis of CTIRU requests

[1] European E-Commerce Directive (2000/31/EC) https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32000L0031
[2] https://www.lumendatabase.org
[3] Communication with Automattic, the publishers of wordpress.com blogs
[4] https://www.theguardian.com/uk-news/2017/sep/19/theresa-may-will-tell-internet-firms-to-tackle-extremist-content and https://www.bbc.co.uk/news/uk-42526271 for instance
[5] https://www.whatdotheyknow.com/request/ctiru_statistical_methodology “A terrorist group may circulate one product (terrorist magazine or video) – this same product may be uploaded to 100 different file-sharing websites. The CTIRU would make contact with all 100 file sharing websites and if all 100 were removed, the CTIRU would count this as 100 removals.”
[6] Freedom of Information Act 2000 (FOIA) Decision notice: Commissioner of the Metropolitan Police Service; ref FS50722134 21 June 2018 https://ico.org.uk/media/action-weve-taken/decision-notices/2018/2259291/fs50722134.pdf



February 28, 2019 | Jim Killock

Informal Internet Censorship: Nominet domain suspensions

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

In December 2009, Nominet began to receive and act on bulk law enforcement requests to suspend the use of certain .uk domains believed to be involved in criminal activity. [1] At the request of the Serious and Organised Crime Agency (SOCA), Nominet subsequently consulted on creating a formal procedure to use when acceding to these requests and to provide for appeals and other safeguards. [2] Nominet’s consultations failed to reach consensus, with many participants including ORG arguing that law enforcement should seek injunctions to seize or suspend domains, not least because it became apparent that the procedure would be widely used once available. [3]

As with any system of content removal at volume, mistakes will be made. These pose potential damage to individuals and businesses.

Nominet formalised their policy in 2014. [4] It can suspend any domain that it believes is being used for criminal activity; in practice this means any domain it is notified about by a UK law enforcement agency.

A domain may be regarded as property or intellectual property. It can certainly represent an asset with tradeable value well beyond the cost of registration fees.

Many countries require a court process for such actions, including the USA and Denmark. Such actions usually result in control of the domain being passed to the litigant. The EU is asking for every member state to have a legal power for domain suspension or seizure relating to consumer harms. [5]

Some domains are used by criminals, as with any communications tool. There is a case for a suspension or seizure procedure to exist, although it should be understood that seizing or suspending a domain represents disruption for a website owner, rather than a means to cease their activities. For instance, it would not be difficult for the owner of rolexreplicas.co.uk to register replicarolex.co.uk and use the new domain to serve the same website.

Although Nominet failed to get agreement about a procedure for suspension requests, it has continued to accede to them; requests have roughly doubled in number each year since 2014, totalling over 16,000 in 2017. [6] The reason requests have doubled is unclear, and ORG has not been given clear answers. It may be in part because the cost of domain registration decreases over time, in part because detection has improved, and in part because it becomes necessary for a criminal enterprise to register new domains once existing ones are suspended. Parties we spoke to agreed that it is unlikely that the number of criminals is doubling.

Around eight authorities have been using the domain suspension process, one of which, National Trading Standards, is legally a private company and not subject to Freedom of Information Act requests.

Nominet does not require any information from these organisations; it simply requires them to request suspensions in writing. For instance, they are not asked to publish a policy explaining when the organisation might ask for domains to be suspended, or what level of evidence is required before acting.

Several of the organisations making requests were unable to supply a policy, or refused to supply information about their policy, when we made Freedom of Information requests. [7] The National Crime Agency refused to respond, as it is not subject to the Freedom of Information Act. National Trading Standards spoke to us, but did not supply a policy; it is not subject to the Act. The Fraud and Linked Crime Online (FALCON) Unit at the Metropolitan Police Service confirmed that it has no policy, but decides on an ad hoc basis. The National Fraud Intelligence Bureau at the City of London Police, which suspended over 2,700 domains last year, says: “We do not have a formal Policy”. [8]

Nominet’s process is:

  1. The agency concerned files a request to Nominet, citing the domains it believes are engaged in criminal activity. This may be one or a list of thousands of domains.

  2. Nominet ensures the owners are notified and given a period to remove anything contravening the law.

  3. If there is a response from a domain owner, the law enforcement agency is asked to review its decision.

  4. If there is no response from a domain owner, the domains are suspended.

  5. Any further complaints are referred back to the law enforcement agency.

There is no independent appeals mechanism. If a domain owner asks for a domain suspension to be reconsidered, they are referred back to the police or agency that made the request, who can revisit the decision. As most of the agencies have no policy, or will not publicise it, this does not seem to be a procedure that would give confidence to people whose domains are wrongly targeted.
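
The listed steps, together with the referral-back loop just described, can be summarised in a short sketch. This is purely illustrative: the real process is administrative, and all of the names and stubbed decisions below are ours, not Nominet’s.

    def handle_suspension_request(domains, agency_review, owner_responses):
        """domains: domain names cited by the agency; owner_responses: domains
        whose owners replied; agency_review: a callable standing in for the
        requesting agency's own re-check (there is no independent reviewer)."""
        outcomes = {}
        for domain in domains:
            # Step 2: owner notified and given a period to remove offending content.
            if domain in owner_responses:
                # Steps 3 and 5: any dispute goes back to the agency that made the request.
                outcomes[domain] = agency_review(domain)
            else:
                # Step 4: no response, so the domain is suspended.
                outcomes[domain] = "suspended"
        return outcomes

    # Example: the requesting agency simply reaffirms its own decision.
    print(handle_suspension_request(
        ["rolexreplicas.co.uk", "example.co.uk"],
        agency_review=lambda d: "suspension upheld by the requesting agency",
        owner_responses={"example.co.uk"},
    ))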

This is in contrast to the Internet Watch Foundation’s (IWF) procedure, which provides an appeal process in which, once the IWF has made an internal review of its original decision, an independent retired judge considers whether the material should in fact be removed or blocked, or left published.

The IWF’s decisions are relatively simple compared to the range of concerns advanced to Nominet by the various agencies involved. This makes it all the more surprising that there is no independent review of the grounds for a suspension. It seems unlikely that the police and agencies will always be able to review their own work and check whether their initial decision was correct without bias or without repeating their error. It also seems unlikely that everyone who wishes to complain would have confidence in the police’s ability to review a complaint.

Ultimately, the decision to suspend a domain is Nominet’s. Nominet owes its customers, domain owners, a trustworthy process that ensures that domain owners are able to have their voices heard if they believe a mistake has been made. Asking the police to review their request does not meet a standard of independence and robust review.

There is also a lack of transparency for potential victims as a result of Nominet’s policy to suspend domains rather than seize them. Suspensions simply make domains fail to work. A domain seizure would allow agencies to display “splash pages” warning visitors about the operation with which they may have done business. If goods are dangerous, such as unlicensed medicines or replica electronics, this may be important.

In our view, an independent prior decision and an independent reviewer are needed for Nominet’s process to be legitimate, fair and transparent, along with splash pages giving sufficient warning to prior customers. Domain seizure processes should replace informal suspension requests and the process should be established by law.

Because some improvements can be made by Nominet that fall short of a fully accountable, court-supervised process, we propose these as short term measures.

Recommendations to Nominet

1. Adopt Freedom of Information principles

2. Ask the government for a legal framework for domain seizure based on court injunctions

3. Require notices to be placed after seizures to explain the legal basis and outline any potential dangers to consumers posed by previous sales made via the domain. This could include contact details for anyone wishing to understand any risks to which they may have been exposed

4. Short term: Offer an independent review panel

5. Short term: Require government organisations to publish their policies relating to domain suspension requests

6. Short term: Publish the list of suspended domains, including the agency that made the request and the laws cited

7. Short term: Require government organisations to take legal responsibility for domain suspension requests

[1] http://www.dailymail.co.uk/news/article-1233016/Over-thousand-scam-websites-targeting-Christmas-shoppers-shut-online-raid-Scotland-Yards-e-crime-unit.html Over a thousand scam websites targeting Christmas shoppers shut down after an online raid by Scotland Yard’s e-crime unit, 4 December 2009, dailymail.co.uk

[2] http://web.archive.org/web/20111113021751/http://www.nominet.org.uk/news/releases/?contentId=8216 Nominet calls on stakeholders to get involved in policy process, nominet.org.uk, 09 February 2011 (webarchive)

[3] https://www.theregister.co.uk/2011/11/25/nominet_domain_takedowns/ ISP outcry halts cybercops’ automatic .UK takedown plan, The Register, 25 November 2011

[4] https://www.nominet.uk/nominet-formalises-approach-to-tackling-criminal-activity-on-uk-domains/

[5] Regulation (EU) 2017/2394 of the European Parliament and of the Council of 12 December 2017 on cooperation between national authorities responsible for the enforcement of consumer protection laws and repealing Regulation (EC) No 2006/2004 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2017.345.01.0001.01.ENG&toc=OJ:L:2017:345:TOC

https://wiki.openrightsgroup.org/wiki/Consumer_Protection_Cooperation_Regulation

[6] https://wiki.openrightsgroup.org/wiki/Nominet/Domain_suspension_statistics has a table of statistics derived and referenced from Nominet’s transparency reports

[7] The results of our FoI requests for domain suspension policies are summarised with references at https://wiki.openrightsgroup.org/wiki/Nominet/Domain_suspension_statistics

[8] https://www.whatdotheyknow.com/request/national_fraud_intelligence_bure#incoming-1115354



February 27, 2019 | Jim Killock

We met to discuss BBFC's voluntary age verification privacy scheme, but BBFC did not attend

Today Open Rights Group met a number of age verification providers to discuss the privacy standards that they will be meeting when the scheme launches, slated for April. Up to 20 million UK adults are expected to sign up to these products.

We invited all the AV providers we know about, and most importantly, the BBFC, at the start of February. BBFC are about to launch a voluntary privacy standard which some of the providers will sign up to. Unfortunately, BBFC have not committed to any public consultation about the scheme, relying instead on a commercial provider to draft the contents with providers, but without wider feedback from privacy experts and people who are concerned about users.

We held the meeting close to the BBFC’s offices so that it would be convenient for them to send someone able to discuss this with us. We have been asking for meetings with BBFC about the privacy issues in the new code since October 2018, but had not received any reply or acknowledgement of our requests until this morning, when BBFC said they would be unable to attend today’s roundtable. This is very disappointing.

BBFC’s failure to consult the public about this standard, or even to meet us to discuss our concerns, is alarming. We can understand that BBFC is cautious and does not wish to overstep its relationship with its new masters at DCMS. BBFC may be worried about ORG’s attitude towards the scheme, and we certainly are critical. However, it is not responsible for a regulator to fail to talk to its potential critics.

We are very clear about our objectives. We are doing our best to ensure that the risks to adult users of age verification technologies are minimised. We do not pose a threat to the scheme as a whole: listening to us can only make the pornographic age verification scheme more likely to succeed and, for instance, help it avoid catastrophic failures.

Privacy concerns appear to have been recognised by BBFC and DCMS as a result of consultation responses from ORG supporters and others, which resulted in the voluntary privacy standard. These concerns have also been highlighted by Parliament, whose regulatory committee expressed surprise that the Digital Economy Act 2017 had contained no provision to deal with the privacy implications of pornographic age verification.

Today’s meeting was held to discuss:

  1. What the scheme is likely to cover; and what it ideally should cover;

  2. Whether there is any prospect of making the scheme compulsory;

  3. What should be done about non-compliant services;

  4. What the governance of the scheme should be in the long term, for instance whether it might be suitable to become an ICO-backed code, or to complement such a code.

As we communicated to BBFC in December 2018, we have considerable worries about the lack of consultation over the standard they are writing, which appears to be truncated in order to meet the artificial deadline of April this year. This is what we explained to BBFC in our email:

  1. Security requires as many perspectives to be considered as possible.

  2. The best security standards, e.g. PCI DSS, are developed in the open and iterated.

  3. The standards will be best if those with most to lose are involved in the design.

    1. For PCI DSS, the banks and their customers have more to lose than the processors

    2. For Age Verification, site users have more to lose than the processors; however, only the processors seem likely to be involved in setting the standard.

We look forward to BBFC agreeing to meet us to discuss the outcome of the roundtable we held about their scheme, and to discuss our concerns about the new voluntary privacy standard. Meanwhile, we will produce a note from the meeting, which we believe was useful. It covered the concerns above, and issues around timing, as well as strategies for getting government to adjust their view of the absence of compulsory standards, which many of the providers want. In this, BBFC are a critical actor. ORG also intends as a result of the meeting to start to produce a note explaining what an effective privacy scheme would cover, in terms of scope, risks to mitigate, governance and enforcement for participants.



February 26, 2019 | Matthew Rice

The missing piece from the DCMS report? Themselves

In all the outrage and column inches generated by DCMS' Disinformation and 'fake news' report, campaigners for political office and representatives of political parties have failed to acknowledge one of the most critical actors in personal data and political campaigning: themselves.

The Disinformation and ‘fake news’ report from the House of Commons Digital, Culture, Media and Sport (DCMS) Committee splashed onto front pages, news feeds and timelines on 18 February. And what a response it provoked. Parliamentarians are once again talking tough against the role of big tech in one of the key areas of our life: the democratic process. The custodians of the democratic tradition are vociferously calling time on “out of control” social media platforms and demanding further regulation.

In all the outrage and column inches generated by the report, however, the campaigners for political office and representatives of political parties who have been angrily vocal about platforms, adverts and elections have failed to acknowledge one of the most critical actors in personal data and political campaigning: themselves.

PLATFORMS ARE JUST ONE PIECE OF THE PUZZLE

The DCMS report is damning in its critique of platforms’ cavalier approach to data protection, both during election cycles and generally. It homes in on the role of Facebook in the Cambridge Analytica scandal and condemns, in scathing terms, Facebook’s business focus on data as a commodity to be wielded for commercial leverage. It expresses disapproval and disappointment and paints a vivid portrait of how platforms presumptively treat our personal data as their property to use, abuse and barter away. The regulatory proposals focus sharply on creating liability for tech companies and subjecting them to a compulsory code of ethics.

Whilst these are all strong and largely welcome outcomes, to focus on the platforms alone is to see only one piece of the puzzle.

The debate over the use of personal data in political campaigns goes far beyond a call on platforms to remove harmful content. Political ads might be ultimately served by online platforms, but the messages themselves and their targets are created and set by political parties, their affiliated campaigning organisations, and, increasingly, data analytic companies on behalf of the parties or campaigners.

Parts of the report comment on the responsibility of political parties and campaigners in electioneering. However, media coverage and commentary has so far limited itself to emphasising the role of social media companies. This is disappointing messaging. The role of the key political actors in using and misusing data is no less significant than that of the platforms, and their culpability needs to be equally subject to critical scrutiny.

POLITICAL ACTORS NEED REGULATING TOO

The first layer of responsibility that political actors have for data processing is in gathering and storing target audience data. Some audience data will inevitably come from online platforms, but it will often be accompanied by or added to data separately held by parties and campaigners. Increasingly, political parties purchase this data from data brokers, and those transactions are highly questionable, arguably to the point of breaching data protection standards.

Next, whilst spending on “digital” in election campaigning is increasing, the specificity of what “digital” means is decreasing. Is this just online ad buys? Is it data collection? Message development? A bit of both? All of that and some more via third parties? At present, it's nearly impossible to tell where the money is going. Election finance regulations in Britain urgently need to be redrafted with tighter spending and reporting requirements that reflect the contemporary change in where and how advertising money is distributed.

The actors involved in political campaigns are changing too: gone are the days of straightforward political parties and designated campaign organisations. 21st century electioneering involves a shadowy web of non-registered campaigners that can have decisive influence, or at least spend decisive amounts of money, on digitally-targeted and harder to track campaigns. Transparency is sorely lacking in how these campaigners operate, at a significant risk to fair and open democratic processes.

DATA PROCESSING INCURS DEMOCRATIC DUTIES

Platform responsibility for data protection and fair election campaigning is important, but the problem doesn’t begin and end with Facebook. A bigger discussion on the responsibility of political parties when it comes to handling personal data is required.

The Data Protection Act 2018 handed political parties wide powers to process personal data if it “promotes democratic engagement” or is part of their “political activities”, including campaigning and fundraising. What this means is unclear. One obvious impact, though, is that parties will no longer have to work so hard to justify their processing of personal data. This sets a worryingly lax data protection standard.

The democratic trust deficit existing in Britain today is not corrected by riding an anti-social-media wave. It begins with admitting that powerful political parties, and their affiliates, have a core responsibility when it comes to the collection, combination and use of personal data in political campaigns.

The DCMS report offers a powerful opportunity to look at data processing in elections through a wide lens and do some vitally needed internal housekeeping. It could be used, as we intend to use it, to publicly cement the duty of parliament and its occupants to uphold data protection standards, go beyond just the letter of the law, and carefully consider whether the same system that is used to sell us shoes and holidays and 10% off vouchers can be used to facilitate open and informed democratic debates.

NEW REPORTS, NEW RULES, NEW OPPORTUNITIES FOR ACTION

Open Rights Group is just beginning to work on Data and Democracy, and we expect to find opportunities to raise the above points as the DCMS report continues to be further digested. Other outputs examining this issue are also expected soon. In particular, the Information Commissioner’s Office asked for views late last year on a code of practice for the use of personal information in political campaigns; ORG responded, calling for greater clarity on the new “political” bases for processing sensitive personal data and explaining what is appropriate in terms of targeting political messaging and how electoral roll data should be used. These are topics worthy of discussion and we look forward to seeing ICO guidance in the near future.

The DCMS report is a strongly positive step towards setting new rules for political campaigning. The question we ask is: to whom are these rules going to apply? For fair campaigning and truly democratic data protection, we hope the list comprises more than just tech companies.



February 19, 2019 | Jim Killock

Formal Internet Censorship: BBFC pornography blocking

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

1. Administrative blocking powers

The Open Rights Group is particularly concerned that the BBFC, as the age verification regulator, has been given a general administrative power to block pornographic websites where those sites do not employ an approved age verification mechanism. We doubt that it is in a good position to judge the proportionality of blocking; it is simply not set up to make such assessments. Its expertise is in content classification, rather than free expression and fundamental rights assessments. [1]

In any case, state censorship powers should always be restrained by the need to seek an independent decision. This provides accountability and oversight of particular decisions, and allows the law to develop a picture of necessity and proportionality.

The BBFC’s blocking powers are aimed not at content but at the lack of age verification (AV) in some circumstances. Thus they are a sanction, rather than a protective measure. The BBFC does not seek to prevent the availability of pornography to people under 18, but rather to reduce the revenues of site operators in order to persuade them to comply with UK legislative requirements.

This automatically leads to a risk of disproportionality, as the block will be placed on legal content, reducing access for individuals who are legally entitled to view it. For instance, this could lead to some marginalised sexual communities finding content difficult to access. Minority content is by definition harder to find, so censoring that legal content is likely to affect minorities disproportionately. It is unclear why a UK adult should be prevented from accessing legal material.

At another level, the censorship will easily appear irrational and inconsistent. An image blocked on a website that lacks AV could be available on Twitter or Tumblr, or on a non-commercial site.

The appeals mechanisms for BBFC blocks are also unclear. In particular, it is not clear what happens when an independent review is completed but the appellant disagrees with the decision.

2. BBFC requests to “Ancillary Service Providers”

Once section 14 of the Digital Economy Act (DEA) 2017 is operational, the BBFC will send requests to an open-ended number of support services for pornographic sites that omit age verification.[2] The BBFC hopes that, once notified, these services will comply with its request to cease service. Complying with a notice could put these services in legal jeopardy, as they could be in breach of contract if they cease business with a customer without a legal basis for the decision. If they are companies based outside of the UK, no law is likely to have been broken.

Furthermore, some of the “services”, such as “supplying” a Twitter account, might apply to a company with a legal presence in the UK, but the acts (tweeting about pornography) would be lawful, including sharing pornographic images without age verification.

If a voluntary notice is acted on, however, then free expression impacts could ensue, with little or no ability for end users to ask the BBFC to cease and desist in issuing notices, as the BBFC will believe it is merely asking for voluntary measures for which it has no responsibility.

This is an unclear process and should be removed from the Digital Economy Act 2017.

Recommendations to government:

1. The BBFC’s blocking powers should be removed.

2. Remove the obligation on the BBFC to notify ASPs for voluntary measures.

Recommendations to BBFC:

1. Ask for the application of the FoI Act to the BBFC's statutory work.

[1] This report does not cover privacy concerns, but it is worth noting that privacy concerns could easily lead to a chilling effect, whereby UK residents are dissuaded from accessing legal material because of worries about being tracked or their viewing habits being leaked.

Robust privacy regulation could reduce this risk, but the government has chosen to leave age verification technologies entirely to the market and general data protection law. This leaves age verification (AV) for pornography less legally protected than card transactions and email records.

See https://www.openrightsgroup.org/about/reports/response-to-bbfc-age-verification-consultation

and https://www.openrightsgroup.org/blog/2018/the-government-is-acting-negligently-on-privacy-and-porn-av

[2] Digital Economy Act 2017 s14 http://www.legislation.gov.uk/ukpga/2017/30/section/14/enacted



February 15, 2019 | Jim Killock

Formal Internet Censorship: Copyright blocking injunctions

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

Open-ended powers

Copyright-blocking injunctions have one major advantage over every other system except defamation: they require a legal process to take place before they are imposed. This affords some accountability and ensures that necessity and proportionality are considered before restrictions are put in place.

However, as currently configured, copyright injunctions leave room for problems. We are confident that court processes will be able to resolve many of these. A further advantage of a process led by legal experts is that it is likely to ensure that the rights of all parties are respected, and that appeals processes in higher courts and the application of human rights instruments can ensure that problems are dealt with over time.

Copyright blocking injunctions are usually open-ended. There is not usually an end date, so they are a perpetual legal power. The injunction is against the ISPs. Rights-holders are allowed under the standard terms of the injunctions to add new domains or IP addresses that are in use by an infringing service without further legal review. ISPs and rights-holders do not disclose what exactly is blocked.

It has been reported that around 3,800 domains [1] are blocked by 31 injunctions, against around 179 sites or services. [2]

The government is preparing to consult on making copyright blocking an administrative process. We believe this would be likely to reduce accountability for website blocking and to extend its scope. At present, website blocking takes place where it is cost effective for private actors to ask for blocks. Administrative blocking would place the cost of privately-demanded blocking onto the UK taxpayer, making it harder for economic rationality to constrain blocking. Without an economic rationale, and with widening numbers of blocks, it would be harder to keep mistakes in check.

38% of observed blocks in error

Open Rights Group has compiled public information about clone websites that might be blocked, for instance the many websites that have presented full copies of the Pirate Bay website.

We ran tests on these domains to identify which are blocked on UK networks. As of 25 May 2018, we found 1,073 blocked domains. Of these, 38% had been blocked in error. [3]

To be clear, each block would generally have been valid when it was initially requested and put in place by the ISP, but many of these blocks were not removed once the websites ceased to infringe copyright law.

The largest group of errors identified concerned websites that were no longer operational. The domains were parked or for sale, that is, flagged as not in use (151); not resolving (76); broken (63); inactive (41); or used for abusive activities such as “click-fraud” (78). [4] We also detected three or four that were in active, unrelated legitimate use [5], and several that could be infringing but did not seem to be subject to an injunction, yet were blocked in any case. [6]

That means a total of 409 out of 1,075 domains, or 38%, were being blocked with no current legal basis.
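
The arithmetic behind that figure can be reproduced directly from the category counts above (a minimal sketch; the counts are those reported as of 25 May 2018):

    # Recomputing the error rate quoted above from the reported category counts.
    error_categories = {
        "parked or for sale": 151,
        "not resolving": 76,
        "broken": 63,
        "inactive": 41,
        "abusive, e.g. click-fraud": 78,
    }
    errors = sum(error_categories.values())                 # 409
    observed_blocks = 1075                                  # legal blocks found in the tests
    print(errors, round(100 * errors / observed_blocks))    # 409 38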

These errors could occur for a number of reasons. Nearly all of the domains would have been blocked while they were in use by infringing services. However, over time they will have fallen into disuse, and some were then reused by other services. In some cases, the error lies with ISPs failing to remove sites after notification by rights-holders that they no longer need to be blocked. In other cases, the rights-holders have not been checking their block lists regularly enough. While only a handful of the erroneous blocks have been particularly significant, it is wrong for parked websites and domains for sale to be blocked by injunction. It is also concerning that the administration of these blocks appears very lax. A near-40% error rate is not acceptable.

To be clear, there is no legal basis for a block against a domain that is no longer being used for infringement. The court injunctions allow blocks to be applied when a site is in use by an infringing service, but it is accepted by all sides that blocks must be removed when infringing uses cease.

Open Rights Group is concerned about its inability to check the existing blocks. What is or is not blocked should not be a secret, even if that is convenient for rights-holders. Without the ability to check, it is unlikely that independent and thorough checking will take place. Neither the ISPs nor the rights-holders have a particular incentive to add to their costs by making thorough checks. As of the end of July 2018, most of the mistakes remained unresolved, after three months of notice and a series of meetings with ISPs to discuss the problem. By October 2018 the error rate had reduced to nearer 30%, but progress in resolving these blocks remains very slow. [7]

Many blocking regimes do not offer the flexibility to add on further blocks, but require rights-holders to return to court. The block lists are entirely public in many European countries.

ISPs should at a minimum publish lists of domains that they have “unblocked”. This would allow us and others to test and ensure that blocks have been removed.

Poor notifications by ISPs

A further concern is that the explanations of website blocks, and of how to deal with errors, are very unclear. This has no doubt contributed to the large proportion of incorrect blocks.

At present, some basic information about the means to challenge the injunction at court is available. However, in most cases this is not what is really needed. Rather, a website user or owner needs information about the holder of the injunction and how to ask them to correct an error. This information is currently omitted from notification pages.

Notifications should also include links to the court judgment and any court order sent to the ISP. This would help people understand the legal basis for blocks.

Our project blocked.org.uk includes this information where available. We also generate example notification pages.

While ISPs could implement these changes without instruction from courts, they have been reluctant to improve their practice without being told. Open Rights Group’s interventions in the Cartier court cases helped persuade the courts to specify better information on notification pages, but we believe there is some way to go before they are sufficiently explanatory.

Proposal for administrative blocking

The government is considering administrative blocking of copyright-infringing domains. This poses a number of problems. The current system requires rights-holders to prioritise asking for blocks where it is cost effective to do so. This keeps the censorship of websites to that which is economically efficient to require, rather than allowing the task to expand beyond levels which are deemed necessary.

As we see with the current system, administering large lists of website blocks efficiently and accurately is not an easy task. Expanding this task at the expense of the taxpayer could amount to unnecessary levels of work that are not cost efficient. It will be very hard for a government body to decide “how much” blocking to ask for, as its primary criterion will be ensuring material is legal. Unfortunately, there are very large numbers of infringing services and domains with very small or negligible market penetration.

Secondly, it makes no sense for a growing system of censorship to keep what is blocked secret from the public. Administrative systems will need to be seen to be accurate, not least because sites based overseas will need to know when and why they are blocked in the UK in order to be able to appeal and remove the block. This may be resisted by rights-holder organisations, who have so far shown no willingness to make the block lists public. Administrative blocking could be highly unaccountable and much more widespread than at present, leading to hidden, persistent and unresolvable errors.

Thirdly, combining wide-scale pornography blocking with widening copyright blocking risks making the UK a world leader in Internet censorship. Once the infrastructure is further developed, it will open the door to further calls for Internet censorship and blocking through lightweight measures. This is not an attractive policy direction.

Recommendations to government:

  1. Future legislation should specify the need for time limits to injunctions and mechanisms to ensure accuracy and easy review

  2. Open-ended, unsupervised injunction and blocking powers should not be granted

  3. Administrative blocking should be rejected

Recommendations to courts and parties to current injunction:

  1. Current injunction holders and ISPs must urgently reduce the error rates within their lists, as incorrect blocks are unlawful

  2. Courts should reflect on the current problems of accuracy in order to ensure future compliance with injunctions

  3. It should be mandatory for blocking notices to link to legal documents such as a judgment and court order

  4. It should be mandatory for blocking notices to explain who holds the injunction to block the specific URL requested

  5. Assurance should be given that there is transparency over what domains are blocked

  6. ISPs and right-holders should be required to check block lists for errors 

References

[1] https://torrentfreak.com/uks-piracy-blocklist-exceeds-3800-urls-170321/

[2] See https://wiki.451unavailable.org.uk/wiki/Main_Page and https://www.blocked.org.uk/legal-blocks for the lists of sites.

[3] https://www.blocked.org.uk/legal-blocks/errors maintains the error rates; results as of 4 June 2018 are available here: http://web.archive.org/web/20180604092443/https://www.blocked.org.uk/legal-blocks/errors.

Reports and data can be downloaded from https://www.blocked.org.uk/legal-blocks

[4] These categories are defined as follows: (i) parked or for sale: the site displays a notice explaining that the domain is for sale, or has a notice saying the domain is not configured for use; (ii) not resolving means that DNS is not configured, so the URL does not direct anywhere; (iii) broken means that a domain resolves but returns an error, such as a 404 or database error; (iv) inactive means that the site resolves, does not return an error, and returns a blank page or similar, but does not appear to be configured for use; (v) abusive means that the domain is employed in some kind of potentially unlawful or tortious behaviour other than copyright infringement.

[5] A blog and a website complaining about website blocking, for instance. These were not functional when we completed the review.

[6] See also our press release: https://www.openrightsgroup.org/press/releases/2018/nearly-40-of-court-order-blocks-are-in-error-org-finds

[7] https://www.blocked.org.uk/legal-blocks/errors Errors on 10 October 2018 stood at 362 domains out of 1,128.
