Blog


May 09, 2019 | Pascal Crowe

More than money - How to tame online political ads

The Electoral Commission’s Director of Regulation, Louise Edwards, recently put out a call for new laws to regulate online political adverts. She argued that the adverts need to show clearly and directly who has paid for them. [1] Whilst knowing who has paid for online ads is important, it’s only part of the picture. The whole process of online political advertising needs to be more tightly regulated.

Political parties target ads online by using personal data to include or exclude potential voters. Reaching only a narrow slice of the population in this way drives down spending. In addition, automated messaging is becoming both cheaper and more sophisticated. Together, these practices will significantly reduce the amount of money campaigns need.

To regulate online political advertising effectively, we need to look beyond campaign spending. It’s equally crucial to have greater transparency over parties’ use of personal data. Consequently, we should be looking at organisations beyond the Electoral Commission, for example the Information Commissioner’s Office.

Transparency is critical. Both political actors that use online advertising, and the platforms that facilitate them, should be forced to come clean on their sources of personal data and how their targeting works. Public reporting can be supported, to a degree, by initiatives such as Facebook’s online ad library. The limited data that Facebook provides, however, allows shady individuals who pay for ads online to conduct ‘astroturf’ campaigns hidden behind shell companies.

Open Rights Group is concerned that narrowly targeted online political advertising is contributing to the polarisation of democratic discourse. When parties’ messaging is designed only to be seen by the people already most likely to vote for them, it becomes less about consensus and increasingly geared towards riling up supporters in order to drive them to the ballot box.

Britain’s political discourse has never been totally impartial. But rarely has it been more fractured. Properly regulating online political ads would be a first step towards repairing it.

[1] https://www.bbc.co.uk/news/business-48174817

 



April 08, 2019 | Jim Killock and Amy Shepherd

The DCMS Online Harms Strategy must “design in” fundamental rights

After months of waiting and speculation, the Department for Digital, Culture, Media and Sport (DCMS) has finally published its White Paper on Online Harms - now appearing as a joint publication with the Home Office. The expected duty of care proposal is present, but substantive detail on what this actually means remains sparse: it would perhaps be more accurate to describe this paper as pasty green.

Read the White Paper here.

Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content, and to make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the ‘harm’ that social media causes. But its proposals fail to explain how harms to free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about which content is pushed or downgraded are all geared towards excluding illegal activity and creating open and welcoming shared spaces. The White Paper does not elaborate on what the proposed duty would entail. If it’s drawn narrowly, so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it’s drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can’t always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
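
To make the limitation concrete, consider a toy keyword filter. This is a deliberately simplified sketch, not any platform's actual system, and the blocklist entry is invented for illustration: the filter cannot tell incitement from journalism quoting it or parody mocking it, so it blocks all three.

```python
# Toy upload filter: a naive substring match against a blocklist.
# Hypothetical example only; real moderation systems are more complex,
# but the underlying problem of context-blindness is the same.

BANNED_PHRASES = {"attack the infidels"}  # invented blocklist entry

def naive_upload_filter(post: str) -> bool:
    """Return True if the post would be blocked before publication."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

posts = [
    "Attack the infidels!",                                        # incitement
    'The seized leaflet urged readers to "attack the infidels".',  # journalism
    "Only a cartoon villain would say 'attack the infidels'.",     # parody
]

for post in posts:
    print(naive_upload_filter(post), "-", post)  # all three are blocked
```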

DCMS’s policy is underpinned by societally-positive intentions, but in its drive to make the internet “safe”, the government seems not to recognise that ultimately its proposals don’t regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall “risk”. But must the risk always be to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.

DCMS’s approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens’ free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the duty of care changes the way social media platforms work, that change will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation about how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.

The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to “design in safety”. As the process of consultation now begins, we call on DCMS to “design in fundamental rights”. Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.



March 19, 2019 | Jim Killock

Jeremy Wright needs to act to avert disasters from porn age checks

Age Verification for porn websites is supposed to be introduced in April 2019. Age Verification comes with significant privacy risks, and the potential for widespread censorship of legal material.

The government rejected Parliamentary attempts to include privacy powers over age verification tools, so DCMS have limited possibilities right now. Last summer, the BBFC consulted on its draft advice to website operators, called Guidance on Age Verification Arrangements. That consultation threw up all the privacy concerns yet again. In response, the BBFC and DCMS agreed to include a voluntary privacy certification scheme.

Unfortunately, there are two problems with this. Firstly, it is voluntary. It won’t apply to all operators, so consumers will sometimes benefit from the scheme, and sometimes they won’t. It is unclear why it is acceptable to government and the BBFC that some consumers should be put at greater risk by unregulated products.

There is nothing to stop an operator from leaving the voluntary scheme so it can make its data less private, more shareable, or more monetisable. It’s voluntary, after all.

Secondly, the scheme is being drawn up hastily, without public consultation. It is a very risky business for a regulator to produce a complex and pivotal security and privacy standard with a limited field of view. It is talking to vendors, but not the public who are going to be using these products. Security experts, of whom there are many who might help, are unable to engage.

This haste to create a privacy scheme seems to be driven by the government’s desire to commence age verification as fast as possible. That risks producing a substandard privacy scheme that effectively misleads consumers, who will assume it provides a robust and permanent level of protection.

DCMS and Jeremy Wright could solve this right now

They need to do two things:

  1. Tell industry that government will legislate to make the Privacy Certification scheme compulsory;

  2. Announce a public consultation on BBFC's Privacy Certification scheme.

That may involve a short delay to this already delayed scheme. But that is better, surely, than risking damage to the privacy, personal lives and careers of millions of UK people regularly visiting these websites.



March 14, 2019 | Javier Ruiz

US red lines for digital trade with the UK cause alarm

The US government has published its negotiating objectives for a trade deal with the UK, which include some worrying proposals on digital trade, including a ban on the disclosure of source code and algorithms, and potential restrictions on data protection.


Trade negotiations between the US and the UK have recently received a lot of attention due to the publication of the official negotiating objectives of the US Government, which set out in sometimes candid detail the areas of interest and priorities. The US document is mainly written in coded “trade-speak”, with seemingly innocuous terms such as “procedural fairness” or “science-based” masking huge potential impacts on a wide range of areas, from farming to NHS prescriptions. The document also sets out the priorities for the US around Digital Trade with the UK, with proposals that would affect the digital rights of people in the UK.

The UK started “non-negotiating” a trade agreement with the US soon after the country voted to leave the EU in 2016. While technically not allowed to enter formal negotiations on trade until it leaves the bloc at the end of this month, the UK government has conducted five official bilateral meetings and sent several business delegations, not counting the ongoing activity of UK officials in Washington.

A public consultation last year saw many consumer and rights groups raise concerns about a potential UK-US agreement, including ORG. We are worried about the inclusion of “Digital Trade” - also misleadingly termed “E-commerce” - in negotiations, which could lead to entrenched domination by US online platforms, lower privacy protections and more restrictions in access to information.

Last month a group of 76 countries, including the US, the EU and China, announced their intentions to start negotiations on “trade-related aspects of electronic commerce” at the World Trade Organisation (WTO). Once more this has led to widespread concern among civil society groups such as the Transatlantic Consumer Dialogue, of which ORG is a member. The proposed agenda covers non-controversial improvements, such as the use of e-signatures or fighting spam, but it also includes proposals similar to those presented by the US in its digital trade objectives. These proposals would severely impact internet regulation by controlling the building blocks of digital technology: data flows, source code and algorithms.

What the US wants from the UK in digital trade

Keeping source code and algorithms confidential

The US wants to stop the UK government from “mandating the disclosure of computer source code or algorithms”. This is one of the most concerning aspects of the new digital trade agenda, already found in other recent trade agreements, and criticised by groups such as Third World Network. Restricting the disclosure of source code and algorithms is problematic for various reasons. In particular, the UK government has been pioneering open source software, despite some setbacks, and these clauses could be used to challenge any public procurement perceived to give preference to open source.

There are growing concerns about potential unfairness and bias in decisions made or supported by the use of algorithms, from credit to court sentencing, including the status of EU citizens after Brexit. Preventing the disclosure of algorithms would hamper efforts to develop new forms of technological transparency and accountability. The EU GDPR includes a right for individuals in certain circumstances to be informed of the logic of the systems making decisions that significantly affect them, in a potential conflict with the US digital trade proposals.

Maintaining cross-border data flows

Another objective of the US in its trade negotiations with the UK is to ensure that the UK “does not impose measures that restrict cross-border data flows and does not require the use or installation of local computing facilities”.

These demands are becoming a central feature of contemporary trade negotiations, and encapsulate the key aspect of the global Digital Trade agenda: ensuring that data flows towards the large US-based Silicon Valley giants that currently dominate the Internet outside China and Russia.

Additionally, as we said in our response to the government consultation on the US trade deal last year, these requirements could openly clash with the EU General Data Protection Regulation (GDPR), which prohibits unrestricted data transfers. Wilbur Ross, US Commerce Secretary, has openly called GDPR an unnecessary barrier to trade. Agreeing to US demands would put the UK in a double bind that could jeopardise data flows to and from the EU.

Limiting online platform liability for third-party content

The US will also try to limit the liability of online platforms for third-party content excluding intellectual property, with caveats allowing “non-discriminatory measures for legitimate public policy objectives or that are necessary to protect public morals”. This is one topic that receives widespread sympathy from digital rights advocates, as policymakers across Europe try to open a new debate on Internet liability protections that could see online providers being forced to increase censorship over their users. We recently heard this argument in the report on Internet regulation by the House of Lords. Leveraging trade policy to advance a progressive digital rights agenda may seem a good idea, but unfortunately the positives tend to be bundled with other worrying proposals, and trade negotiators lack the expertise required, so subtleties can be lost and mistakes made.

The wording in the US document reflects agreed exemptions in international trade rules, which have been applied on very few occasions. The exemption has been used by the US, to try to restrict online gambling from the Caribbean island of Antigua; by China, to try to control the influx of foreign ideas into the country; and by the EU, to restrict the importation of products made from seals. In most cases the claim was either not successful or required modifications to the policy.

The concept of “public morals” is far from clear and, as we can see from these cases, it can be applied quite broadly. It is meant to encompass human rights and environmental concerns, without mentioning them, but there is no agreement on how universal such morals have to be. This shows the dangers of bringing more spheres of human activity under the umbrella of trade. The UK is preparing to regulate harms to UK-based users of social media platforms, which will impact US companies, and it is unclear whether this activity could be considered a trade barrier and consequently defended under the public morals exemption. In our view, regulating online harms should not be linked to trade negotiations but examined on its own merits.

Preventing border taxes on digital products

The US wants to ensure that digital products imported into the country (e.g., software, music, video, e-books) are not taxed at the border. Right now, digital goods are mainly classified by their physical characteristics rather than their content, so DVDs and “laser-disks” (including CDs) are counted separately by UK customs and are generally exempt from customs duties, although importers need to pay VAT. This exemption may become less relevant as imports of tangible digital goods decline globally compared to those distributed electronically. DVD sales are being displaced by online streaming, and e-books are almost exclusively bought online, with Amazon accounting for almost 90% of market share in the UK.

Goods transmitted electronically are currently exempt from customs duties thanks to a WTO moratorium in place since 1998, which is now being challenged by developing countries led by India and South Africa, who argue that it causes them unfair revenue losses given the massive growth of online trade over the past 20 years.

The US wants to avoid any supposed discrimination against its digital products. Given the importance of the Silicon Valley giants, many measures designed to deal with large internet companies will appear to target US businesses. We are not sure yet about the specific agenda under this item in the UK context, but it is likely that they have in mind proposals to increase the taxation of tech firms. The US government has described EU proposals in this direction as “discriminatory”. It is therefore likely that the UK’s own plans to tax digital services will clash with US demands. The distinction between products and services can be confusing in the digital sphere, but it is critically important in trade. In many cases, consumers do not own the music, films or e-books they “buy” online; they merely have a licence to the content governed by terms and conditions, which is closer to a service. UK consumer law has tried to deal with this confusion by creating specific protections for download purchases, called “digital content not on a tangible medium”, but it is not clear how this would affect trade categories.

What’s next?

The negotiations are advancing apace but it is difficult to predict what will happen. As the US document shows, behind the rhetoric there are hard economic interests that could slow down the process.

The above are only the official top-level demands from the US government: US business groups are lining up to include many other issues. A recent public US government hearing in Washington on the negotiating objectives saw calls for full liberalisation of services, particularly financial services, among other issues that included access to the UK labour market for US workers. The hearing stressed that the economic relationship is important for both countries, not just the UK. The UK is the US’s largest partner in services trade and the largest buyer of its digital services, and the two countries are each other’s largest foreign direct investors. The UK is one of the few countries that does more trade in services with the US than in goods.

Despite the issues raised, the publication of the US document provides some level of transparency and enables public debate. We hope that the UK government will follow suit and publish its own negotiating objectives. Unfortunately, our experience in other bilateral areas, such as surveillance, indicates that the level of public accountability of the heavily politicised US federal government is not generally matched by Whitehall’s circumspect civil service. The advisory group created by the Department for International Trade (DfIT) for discussions on trade policy around Intellectual Property is a very encouraging step. A similar space should be created by DfIT where digital trade issues can be discussed with the attention they deserve.





March 11, 2019 | Amy Shepherd

Brexit, Data Privacy and the EU Settled Status Scheme

The UK is expected to leave the EU in less than four working weeks’ time. Roughly 3 million EU citizens reside in the UK. These two facts are not unrelated.

The EU Settled Status Scheme (“the scheme”) provides the administrative route through which all EU nationals must apply to remain in the UK after 30 June 2021, in the event of a deal, or 31 December 2020, in the event of no deal.

Operation of the scheme relies heavily on an automated data check: enter your national insurance number on the online portal and the Home Office will use HM Revenue & Customs (HMRC) and/or Department for Work and Pensions (DWP) data to identify if you’ve reached the required five years of continuous residence to qualify for “Settled Status”. If the data says ‘no’, you’ll be invited to accept an outcome of “Pre-Settled Status” or to upload further documents evidencing your residence for manual checking by a Home Office caseworker.
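
The Home Office has not published the logic behind this check, so any reconstruction is speculative. Purely as an illustrative sketch, with rules and thresholds that are our assumptions rather than the real system, the automated stage might amount to something like this:

```python
# Hypothetical sketch of an automated residence check; the Home Office's
# actual algorithm is not public, so every rule here is an assumption.
from datetime import date

REQUIRED_YEARS = 5  # continuous residence needed for Settled Status

def active_months(records: list[date]) -> set[tuple[int, int]]:
    """Collapse HMRC/DWP records (e.g. payslip or benefit dates) into the
    set of (year, month) pairs showing some activity."""
    return {(d.year, d.month) for d in records}

def check_settled_status(records: list[date], today: date) -> str:
    """Count consecutive months of activity backwards from today and stop
    at the first gap; a crude stand-in for 'continuous residence'."""
    active = active_months(records)
    year, month, run = today.year, today.month, 0
    while (year, month) in active:
        run += 1
        year, month = (year, month - 1) if month > 1 else (year - 1, 12)
    if run >= REQUIRED_YEARS * 12:
        return "Settled Status"
    # The data says 'no': the applicant is offered Pre-Settled Status or
    # may upload further documents for manual caseworker review.
    return "Pre-Settled Status (or submit further evidence)"
```

Even this crude model shows why the lack of transparency matters: a single month with no HMRC or DWP activity, such as a career break or a period of caring, silently ends the count, and an applicant told ‘no’ cannot see which month broke the run.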

The system is supposedly designed to operate smoothly and effectively. So far, however, it fails to meet either of these objectives.

Open Rights Group (ORG) and the Immigration Law Practitioners Association (ILPA) have been working together to examine how the data check system operates, to put questions to the Home Office pressing them to share more information about the data processing in the system, and to propose outcomes which would remedy current deficiencies.
 
The main concern our research has explored is the opaque manner in which the HMRC/DWP data is processed by the Home Office to produce an output result.

As the system presently operates, applicants aren’t fully informed about what data is reviewed in the process of deciding whether to grant them settled status, nor are they told how the Home Office algorithm is applied to calculate an output result. This lack of information means that applicants who are refused settled status cannot easily identify why, or locate any errors in the data or the process which they could challenge.

Ideally, all applicants should be presented at the outcome stage with a printable document or web-page listing the data checked and what logical process the data underwent in order to inform the assessment of eligibility.

There is also no clear picture about what data is retained by the Home Office after the output decision is issued, with whom this data might be shared and to what purposes it will or might be put. The Home Office states that the “raw data” provided by HMRC/DWP “disappears” as soon as an outcome is produced, but this does not answer the question of what it is doing or plans to do with the data created through the application process.

The scheme is expected to come into full effect on 30 March 2019 and millions of people will need to register to remain in the UK post-2020/2021. The issues with how the system operates are leading to a lack of trust by potential applicants, a lack of safeguards for vulnerable groups and potentially poor data protection, all of which hinder the scheme from working effectively.

The Home Office has specific legal duties to make sure the data check is conducted lawfully. These include a duty to give reasons for check outcomes, to put safeguards in place to limit how much data is collected and how it is retained and shared, to ensure caseworkers have meaningful manual oversight of the automated parts of the process, and to provide public information on how the check really operates so that potential applicants can understand how the scheme is likely to operate in their case.

We’ve reached out to the Home Office with a range of questions and proposals for action but have yet to receive any substantive response. We hope that they will engage with us soon. The problems we’ve identified are largely not difficult to fix and it would be beneficial to make the necessary changes before full roll-out at the end of the month. This positive action would also go a long way towards building trust and lowering barriers for citizens already worried about how so many aspects of Brexit will affect them.

For more information, or to share additional information with us, please see ILPA’s research here, our advocacy briefing here or contact amy@openrightsgroup.org



March 08, 2019 | Jim Killock

Informal Censorship: The Internet Watch Foundation (IWF)

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

The IWF, as a de facto Internet censor, has been popular with some people and organisations and controversial with others. The IWF deals with deeply disturbing material, which is relatively easily identified and usually uncontroversial to remove or block. This makes it unusual among content regulation regimes.

Nevertheless, their partners’ systems for blocking have in the past created visible disruption to Internet users, including in 2008 to Wikipedia across the UK.[1] Companies employing IWF lists have often blocked material with no explanation or notice to people accessing a URL, causing additional problems. Some of its individual decisions have also been found wanting. Additional concerns have been created by apparent mission creep, as the IWF has sought or been asked to take on new duties. Its decisions are not subject to prior independent review, and it is unclear that ISP-imposed restrictions are compatible with EU law, in particular the EU Open Internet regulations, which indicate that a legal process should be in place to authorise any blocking.[2] Ideally, this would be resolved by the government providing a simple but independent authorisation process that the IWF could access.

The IWF has made some good and useful steps to resolve some of these issues.

It has chosen to keep its remit narrow, restricted to child abuse images published on websites, which reduces the risk of over-reach. The IWF model is not appropriate where decisions are likely to be controversial.

It has an independent external organisational review, albeit one that is not well publicised, which could be a good avenue for people to give specific and confidential feedback.

The IWF has an appeals process, and an independent legal expert to review its decisions on appeal. The decisions it makes have the potential for widespread public effect on free expression, so could be subject to judicial review, which the IWF has recognised.

A significant disadvantage of the IWF process is that the external review applies legal principles, but is not itself a legal process, so does not help the law evolve. This weakness is found in other self-regulatory models.

There is also a lack of information about appeals, why decisions were made, and how many were made. There is incomplete information about how the process works.[3] Appeal findings are not made public.[4] Notifications are not always placed on content blocks by ISPs. This is currently voluntary.

Recommendations to IWF:

1. Adopt Freedom of Information principles for information requests
2. Ensure that the IWF’s external evaluation process is visible and accessible by third parties
3. Ensure that processes are clearly documented
4. Require notices to be placed at blocked URLs
5. Publish information about appeals, such as: the numbers made internally and externally each year; whether successful or not; and the reasoning in particular decisions

[1] See https://en.wikipedia.org/wiki/Internet_Watch_Foundation_and_Wikipedia
[2] See reference 38 below
[3] http://web.archive.org/web/20180605155231/https://www.iwf.org.uk/sites/default/files/inline-files/Content%20assessment%20appeal%20flow%20chart%20process.pdf 

[4] The BBFC, for instance, operates a system of publishing the reasoning behind its content classification appeals.



March 05, 2019 | Jim Killock

Informal Internet Censorship: The Counter Terrorism Internet Referral Unit (CTIRU)

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

The CTIRU’s work consists of filing notifications of terrorist-related content with platforms, for them to consider removing. It says it has secured the removal of over 300,000 pieces of extremist content.

Censor or not censor?

The CTIRU considers its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip the platform of liability protection. Platforms may gain “actual knowledge” of potentially criminal material if they receive a well-formed notification, with the result that they would be regarded in law as the publisher from this point on.[1]

At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of its requests. That still shows considerable scope for errors, which could unduly restrict the speech of individuals, including journalists, academics, commentators and others who hold normal, legitimate opinions.

A handful of CTIRU notices have been made public via the Lumen transparency project.[2] Some of these show some very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying the UKIP party as cartoon figures, but also as vile racists and homophobes, was treated as violent extremism. Two notices were filed by the CTIRU to have it removed for extremism. However, it is hard to see how the site could fall within the CTIRU’s remit, as its content is clearly fictional.

In other cases, we believe the CTIRU had requested removal of extremist material that had been posted in an academic or journalistic context. [3]

Some posters, for instance at wordpress.com, are notified by the service’s owners, Automattic, that the CTIRU has asked for content to be removed. This affords a greater potential for a user to contest or object to requests. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from making requests to remove lawful material, which might still be actioned by companies, despite the fact that the CTIRU would be attempting to remove legal material, which is clearly beyond its remit.

When content is removed, there is no requirement to tell people who try to view it that it was removed because it may be unlawful, which laws it may have broken, or that the police asked for its removal. Nor is any advice given to people who saw the content, or who return to view it again, about the possibility that it was intended to draw them into illegal and dangerous activities, or about how to seek help.

There is also no external review, as far as we are aware. External review would help limit mistakes. Companies regard the CTIRU as quite accurate, and cite a 70 or 80% acceptance rate for its requests. That is potentially a lot of requests that should not have been filed, however, and that might not have been accepted if put before a legally-trained and independent professional for review.

Many companies will perform little or no review, and requests for the same content are filed with many companies at once, so the same material will sometimes be removed in error and sometimes not. Any errors at all should therefore be concerning.

Crime or not crime?

The CTIRU is organised as part of a counter-terrorism programme, and claims its activities warrant operating in secrecy, including rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime.

However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be unlawful.

Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is “tipped off” that someone is watching what they are doing. Some platforms forward notices to posters, and the CTIRU does not suggest that this is problematic.

Secondly, even if the material is archived, a notification results in destruction of evidence. Account details, IP addresses and other evidence normally vital for investigations are destroyed.

This suggests that law enforcement has little interest in prosecuting the posters of the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law enforcement grounds, as it is sold by politicians as a silver bullet in the fight against terrorism. [4]

Beyond these considerations, because there is an impact on free expression if material is removed, and because police may make mistakes, their work should be seen as relating to content removal rather than as a secretive matter.

Statistics

Little is known about the CTIRU’s work, but it claims to be removing up to 100,000 “pieces of content” from around 300 platforms annually. This statistic is regularly quoted to parliament, and is given as an indication of major platforms’ irresponsibility in failing to remove content. It has therefore had a great deal of influence on the public policy agenda.

However, the statistic is inconsistent with transparency reports at major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much further analysis is needed to understand the impact of these URL removals, as the implication is that they must be hosted on small, relatively obscure services. [5]

Additionally, the CTIRU claims that there are no other management statistics routinely created about its work. This seems somewhat implausible, but also, assuming it is true, negligent. For instance, the CTIRU should know its success and failure rate, or the categorisation of the different organisations or belief systems it is targeting. An absence of collection of routine data implies that the CTIRU is not ensuring it is effective in its work. We find this position, produced in response to our Freedom of Information requests, highly surprising and something that should be of interest to parliamentarians.

Lack of transparency increases the risks of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government’s legitimate calls for greater transparency on these matters at platforms, it should apply the same standards to its own work.

Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much in the same way as CCTV and Biometrics have a Commissioner. Companies should publish notifications, redacted if necessary, to the Lumen database or elsewhere. Companies should make the full notifications available for analysis to any suitably-qualified academic, using the least restrictive agreements practical.

FoIs, accountability and transparency

Because the CTIRU is situated within a terrorism-focused police unit, its officers assume that their work is focused on national security matters and prevention and detection of crime. The Metropolitan Police therefore routinely decline requests for information related to the CTIRU.

The true relationship between CTIRU content removals and matters of national security and crime prevention is likely to be subtle, rather than direct and instrumental. If the CTIRU’s removals are instrumental in preventing crime or national security incidents, then the process should not be informal.

On the face of it, the CTIRU’s position that it only files informal requests for possible content removal, and that this activity is also a matter of national security and crime prevention that mean transparency requests must be denied, seems illogical and inconsistent.

Open Rights Group has filed requests for information about key documents held, staff and finances, and available statistics. So far, only one has been successful: a request to confirm the meaning of a “piece of content”.

During our attempts to gain clarity over the CTIRU’s work, we asked for a list of statistics that are kept on file, as discussed above. This request for information was initially turned down on grounds of national security. However, on appeal to the Information Commissioner, the CTIRU later claimed that no such statistics existed. This appears to suggest that the Metropolitan Police did not trouble to find out about the substance of the request, but simply declined it without examining the facts because it was a request relating to the CTIRU.[6]

We recommend that the private sector takes specific steps to help improve the situation with CTIRU.

Recommendations to Internet platforms:

1. Publication of takedown requests at Lumen
2. Open academic analysis of CTIRU requests

[1] European E-Commerce Directive (2000/31/EC) https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:32000L0031
[2] https://www.lumendatabase.org
[3] Communication with Automattic, the publishers of wordpress.com blogs
[4] https://www.theguardian.com/uk-news/2017/sep/19/theresa-may-will-tell-internet-firms-to-tackle-extremist-content and https://www.bbc.co.uk/news/uk-42526271 for instance
[5] https://www.whatdotheyknow.com/request/ctiru_statistical_methodology “A terrorist group may circulate one product (terrorist magazine or video) – this same product may be uploaded to 100 different file-sharing websites. The CTIRU would make contact with all 100 file sharing websites and if all 100 were removed, the CTIRU would count this as 100 removals.”
[6] Freedom of Information Act 2000 (FOIA) Decision notice: Commissioner of the Metropolitan Police Service; ref FS50722134 21 June 2018 https://ico.org.uk/media/action-weve-taken/decision-notices/2018/2259291/fs50722134.pdf



February 28, 2019 | Jim Killock

Informal Internet Censorship: Nominet domain suspensions

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

In December 2009, Nominet began to receive and act on bulk law enforcement requests to suspend the use of certain .uk domains believed to be involved in criminal activity. [1] At the request of the Serious and Organised Crime Agency (SOCA), Nominet subsequently consulted on creating a formal procedure to use when acceding to these requests, providing for appeals and other safeguards. [2] Nominet’s consultations failed to reach consensus, with many participants including ORG arguing that law enforcement should seek injunctions to seize or suspend domains, not least because it became apparent that the procedure would be widely used once available. [3]

As with any system of content removal at volume, mistakes will be made. These pose potential damage to individuals and businesses.

Nominet formalised their policy in 2014. [4] It can suspend any domain that it believes is being used for criminal activity; in practice this means any domain it is notified about by a UK law enforcement agency.

A domain may be regarded as property or intellectual property. It can certainly represent an asset with tradeable value well beyond the cost of registration fees.

Many countries require a court process for such actions, including the USA and Denmark. Such actions usually result in control of the domain being passed to the litigant. The EU is asking for every member state to have a legal power for domain suspension or seizure relating to consumer harms. [5]

Some domains are used by criminals, as with any communications tool. There is a case for a suspension or seizure procedure to exist, although it should be understood that seizing or suspending a domain represents disruption for a website owner, rather than a means to cease their activities. For instance, it would not be difficult for the owner of rolexreplicas.co.uk to register replicarolex.co.uk and use the new domain to serve the same website.

Although Nominet failed to get agreement about a procedure for suspension requests, it has continued to accede to them; requests have roughly doubled in number each year since 2014, totalling over 16,000 in 2017. [6] Why requests keep doubling is unclear, and ORG has not been given clear answers. It may be partly because the cost of domain registration decreases over time, partly because detection has improved, and partly because a criminal enterprise must register new domains once its existing ones are suspended. Parties we spoke to agreed that it is unlikely that the number of criminals is doubling.

Around eight authorities have been using the domain suspension process, one of which, National Trading Standards, is legally a private company and not subject to Freedom of Information Act requests.

Nominet does not require any information from these organisations; it simply requires them to request suspensions in writing. For instance, they are not asked to publish a policy explaining when the organisation might ask for domains to be suspended, or what level of evidence is required before acting.

Several of the organisations making requests were unable to supply a policy, or refused to supply information about their policy, when we made Freedom of Information requests. [7] The National Crime Agency refused to respond, as it is not subject to the Freedom of Information Act. National Trading Standards spoke to us, but did not supply a policy; it is not subject to the Act either. The Fraud and Linked Crime Online (FALCON) Unit at the Metropolitan Police Service confirmed that it has no policy, but decides on an ad hoc basis. The National Fraud Intelligence Bureau at City of London, which suspended over 2,700 domains last year, says: “We do not have a formal Policy”. [8]

Nominet’s process, sketched in code after this list, is:

  1. The agency concerned files a request to Nominet, citing the domains it believes are engaged in criminal activity. This may be one or a list of thousands of domains.

  2. Nominet ensures the owners are notified and given a period to remove anything contravening the law.

  3. If there is a response from a domain owner, the law enforcement agency is asked to review its decision.

  4. If there is no response from a domain owner, the domains are suspended.

  5. Any further complaints are referred back to the law enforcement agency.
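
Viewed schematically, the striking feature of this process is that no step is independent of the requesting agency. The following sketch models our reading of the published steps above; it is illustrative only, not Nominet’s own system:

```python
# Schematic model of the suspension workflow above. Every complaint path
# loops back to the agency that made the original request: there is no
# independent appeal step anywhere in the flow.

class Agency:
    """Stand-in for a law enforcement agency filing suspension requests."""
    def review(self, domain: str) -> bool:
        # The agency re-checks its own request; most agencies publish no
        # policy governing how this decision is made.
        return True  # placeholder outcome: request upheld

def notify_owner(domain: str) -> None:
    print(f"Notified owner of {domain}; period given to remedy the issue.")

def process_request(agency: Agency, domains: list[str],
                    owners_who_responded: set[str]) -> list[str]:
    """Steps 1-4: notify owners, suspend domains whose owners stay silent."""
    suspended = []
    for domain in domains:
        notify_owner(domain)
        if domain in owners_who_responded:
            agency.review(domain)      # step 3: agency reviews itself
        else:
            suspended.append(domain)   # step 4: no response, suspend
    return suspended

def complain(agency: Agency, domain: str) -> bool:
    # Step 5: further complaints go back to the requesting agency too.
    return agency.review(domain)
```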

There is no independent appeals mechanism. If a domain owner asks for a domain suspension to be reconsidered, they are referred back to the police or agency that made the request, who can revisit the decision. As most of the agencies have no policy, or will not publicise it, this does not seem to be a procedure that would give confidence to people whose domains are wrongly targeted.

This is in contrast to the Internet Watch Foundation (IWF)’s procedure, which provides an appeal process with an independent retired judge to consider whether in fact material should be removed or blocked, or left published, once the IWF has made an internal review of its original decision.

The IWF’s decisions are relatively simple compared to the range of concerns advanced to Nominet by the various agencies involved. That makes the absence of any independent review of the grounds for a suspension all the more surprising. It seems unlikely that the police and agencies will always be able to review their own work and check whether their initial decision was correct without bias or without repeating their error. It also seems unlikely that everyone who wishes to complain will have confidence in the police’s ability to review a complaint.

Ultimately, the decision to suspend a domain is Nominet’s. Nominet owes its customers, domain owners, a trustworthy process that ensures that domain owners are able to have their voices heard if they believe a mistake has been made. Asking the police to review their request does not meet a standard of independence and robust review.

There is also a lack of transparency for potential victims as a result of Nominet’s policy to suspend domains rather than seize them. Suspensions simply make domains fail to work. A domain seizure would allow agencies to display “splash pages” warning visitors about the operation with which they may have done business. If goods are dangerous, such as unlicensed medicines or replica electronics, this may be important.

In our view, an independent prior decision and an independent reviewer are needed for Nominet’s process to be legitimate, fair and transparent, along with splash pages giving sufficient warning to prior customers. Domain seizure processes should replace informal suspension requests and the process should be established by law.

Because some improvements can be made by Nominet that fall short of a fully accountable, court-supervised process, we propose these as short term measures.

Recommendations to Nominet

1. Adopt Freedom of Information principles

2. Ask the government for a legal framework for domain seizure based on court injunctions

3. Require notices to be placed after seizures to explain the legal basis and outline any potential dangers to consumers posed by previous sales made via the domain. This could include contact details for anyone wishing to understand any risks to which they may have been exposed

4. Short term: Offer an independent review panel

5. Short term: Require government organisations to publish their policies relating to domain suspension requests

6. Short term: Publish the list of suspended domains, including the agency that made the request and the laws cited

7. Short term: Require government organisations to take legal responsibility for domain suspension requests

[1] http://www.dailymail.co.uk/news/article-1233016/Over-thousand-scam-websites-targeting-Christmas-shoppers-shut-online-raid-Scotland-Yards-e-crime-unit.html Over a thousand scam websites targeting Christmas shoppers shut down after an online raid by Scotland Yard’s e-crime unit, 4 December 2009, dailymail.co.uk

[2] http://web.archive.org/web/20111113021751/http://www.nominet.org.uk/news/releases/?contentId=8216 Nominet calls on stakeholders to get involved in policy process, nominet.org.uk, 09 February 2011 (webarchive)

[3] https://www.theregister.co.uk/2011/11/25/nominet_domain_takedowns/ ISP outcry halts cybercops’ automatic .UK takedown plan, The Register, 25 November 2011

[4] https://www.nominet.uk/nominet-formalises-approach-to-tackling-criminal-activity-on-uk-domains/

[5] Regulation (EU) 2017/2394 of the European Parliament and of the Council of 12 December 2017 on cooperation between national authorities responsible for the enforcement of consumer protection laws and repealing Regulation (EC) No 2006/2004 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2017.345.01.0001.01.ENG&toc=OJ:L:2017:345:TOC

https://wiki.openrightsgroup.org/wiki/Consumer_Protection_Cooperation_Regulation

[6] https://wiki.openrightsgroup.org/wiki/Nominet/Domain_suspension_statistics has a table of statistics derived and referenced from Nominet’s transparency reports

[7] The results of our FoI requests for domain suspension policies are summarised with references at https://wiki.openrightsgroup.org/wiki/Nominet/Domain_suspension_statistics

[8] https://www.whatdotheyknow.com/request/national_fraud_intelligence_bure#incoming-1115354
