Clean IT: bad policy making

Thanks to an EDRi leak, European proposals for widespread action against “terrorism” were revealed last week, with press coverage in the Telegraph and elsewhere.

The project – Clean IT – moved swiftly to deny that it had been a closed process, a denial that is only partly true. It also tried to play down the significance of the document it had produced, saying it was “for discussion” (even though page one of the leaked document describes the contents as ‘detailed recommendations’).

The plans include measures for upload filtering and corporate censorship, as well as procedures for flagging dubious content.

The first of these – “upload filtering” – has significant commercial backing, according to EDRi, but would pose a huge privacy problem. Effectively, all content uploaded by every user would have to be machine-read as it was submitted to a service like Google Docs, in case it contained “terrorist” material.
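To make the privacy problem concrete, here is a minimal, purely illustrative sketch of what such an upload filter implies. The flagged-terms list and the scan_upload hook are hypothetical, not drawn from the Clean IT document; the point is structural – the service must machine-read every document from every user, private or not, before accepting it.

```python
# Hypothetical sketch of server-side upload filtering; not taken from the
# Clean IT document. Illustrates that every upload from every user must
# be machine-read before it can be stored.

FLAGGED_TERMS = {"example-flagged-phrase"}  # hypothetical supplied list


def scan_upload(user_id: str, document_text: str) -> bool:
    """Return True if the upload may be stored, False if it is blocked."""
    lowered = document_text.lower()
    # Every document, including private drafts, passes through this check.
    return not any(term in lowered for term in FLAGGED_TERMS)


print(scan_upload("alice", "Draft letter to my MP"))         # True
print(scan_upload("bob", "... example-flagged-phrase ..."))  # False
```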

The other discussions, focusing on terms and conditions, try to pass responsibility for free speech to companies rather than courts. Civil society groups have argued strongly that this kind of approach is dangerous. Companies are cautious and naturally want to avoid being held liable for their users’ content, so relying on T&Cs is likely to lead to over-cautious decisions about which content to act against. Asking companies to use T&Cs is lazy – it allows governments to get policies put in place without legislation or safeguards.

It is also an approach that civil society groups have stressed should be avoided, in their submissions to the European Commission’s consultation on ‘notice and action’ – a process to which Clean IT is seemingly not connected.

In the UK, similar ideas are being considered under the Home Office’s Prevent counter-terrorism strategy, which has mooted blocking “terrorist” websites on the government estate and “encouraging” private ISPs to block the same list voluntarily.
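For illustration only, the sketch below shows the shape of this kind of list-based blocking; the list contents and domain names are invented, not taken from any Home Office material. Because the check operates on whole sites rather than individual pages, over-blocking is a standard criticism of the approach.

```python
# Hypothetical sketch of list-based site blocking of the kind Prevent
# envisages ISPs adopting "voluntarily". The list and domains are invented.

BLOCKLIST = {"flagged-site.example"}  # hypothetical government-supplied list


def is_blocked(hostname: str) -> bool:
    """Block an entire domain if it appears on the list.

    The filter cannot distinguish one page on a site from another, so
    lawful content on a listed domain is blocked along with the rest.
    """
    return hostname.lower() in BLOCKLIST


print(is_blocked("flagged-site.example"))  # True
print(is_blocked("news-site.example"))     # False
```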

What links these proposals, however, is the absence of any clear understanding and definition of the problem – for example, clear evidence that terrorism really can be tackled effectively in these ways. In both cases, the assumption appears to be that terrorist material is easy to define, is in some way “circulating” and “recruiting” people to extreme and pro-violence views, and is then helping shift those people into actively violent networks.

Surely it is important to know whether recruitment happens between people, in specific kinds of real-life locations, targeting individuals with particular vulnerabilities or experiences – or whether it is in fact conducted via certain websites?

This absence of evidence, and the adoption of sweeping assumptions, is all too prevalent in Internet policy. In the case of supposed terrorist content it is particularly problematic, as the understandable desire to do something about terrorism can swiftly become a reason to support any measure, no matter how unproven.

Quite a few other Internet policies fall into this category, including the now-dying Hadopi law, the troubled Digital Economy Act, and the Australian attempts to create a national adult-content firewall. Others, like Data Retention, are under legal challenge. Yet others, like the Claire Perry and Daily Mail-inspired adult-content filters proposed for the UK, look likely to anger the public and potentially undermine their supposed objectives.

There is hope. EDRi have embarrassed the Clean IT group – and the EU for funding it. Moving straight to solutions without clearly establishing the problem to be addressed; duplicating work the Commission is already doing, for example on notice and action; and failing to take into account due process and the legal obligations created by human rights law: the Clean IT project is seemingly wasting taxpayers’ money on incompetent and dangerous proposals for the private policing of online content.

This is another signal that politicians and policy makers need to gain some scepticism and rigour when thinking about Internet policy, instead of dealing with it on the basis of rhetoric and first guesses about public harms.