Digital Privacy

Internet censorship and child protection

What’s happening?

The government is thinking about placing restrictions on access to some Internet sites because they are worried about children accessing inappropriate material. This would happen through Internet ‘filtering’, which is an effort to prevent Internet users from accessing particular websites. There are also proposals for more censorship for other reasons, including to stop access to sites related to terrorism and to try to prevent copyright infringement. This briefing is focused on the child protection issue.

The Culture Secretary Jeremy Hunt MP set out plans last year to “protect consumers from offensive and unlawful content”[1], including new proposals for Internet filtering to protect children. Since then there have been calls from Claire Perry MP and the Daily Mail, in particular, for Internet filters that block adult content and are turned on by default. The Government are due to run a consultation on this soon.

The child protection and censorship debate is mainly about what tools should be available to parents, how they work, and whether filtering should be turned on by default. It is important that restrictions on access should require an explicit decision (an ‘active choice’) by those wanting to turn filtering on – most likely parents. And the government should not force ISPs to offer ISP-level filtering. Otherwise, we are encouraging a system of censorship that will inevitably block too much for too many people.

How filtering works: a brief introduction

There are three different ways that filtering can work:

1. ‘Device-based’ filtering (tools installed on each device, such as a laptop, tablet or mobile phone, tailored to the user of that device).

2. ‘Router-based’ filtering (tools installed on the router that delivers a household’s Internet connection).

3. ‘ISP-level’ filtering (filtering managed by the Internet Service Provider as it handles Internet traffic).

Filtering can be based on either a ‘blacklist’ or a ‘whitelist’ of websites. A whitelist is a list of sites that a filtering tool allows the user to see. Whitelists tend to be small and therefore well categorised. They are better suited to younger users, but do not scale well. A blacklist is a list of sites that a filtering tool should block. Given that there are millions of websites, blacklists are typically created through some form of automated classification process, and are prone to errors.
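The trade-off between the two list types can be made concrete with a minimal sketch. The domain lists and function names below are hypothetical, illustrative examples, not real filter data or any ISP's actual implementation:

```python
# Illustrative sketch of blacklist vs whitelist filtering.
# The domains listed here are made-up examples.

BLACKLIST = {"blocked-example.com", "adult-example.net"}  # sites to deny
WHITELIST = {"kids-example.org", "school-example.edu"}    # sites to allow

def blacklist_allows(domain: str) -> bool:
    """Blacklist filtering: allow everything except listed sites."""
    return domain not in BLACKLIST

def whitelist_allows(domain: str) -> bool:
    """Whitelist filtering: deny everything except listed sites."""
    return domain in WHITELIST

# A new, unclassified site slips through a blacklist but is blocked
# by a whitelist -- the under-/over-blocking trade-off in miniature.
print(blacklist_allows("new-unknown-site.com"))  # True  (not yet classified, so allowed)
print(whitelist_allows("new-unknown-site.com"))  # False (not yet classified, so blocked)
```

The sketch shows why whitelists suit younger users but do not scale, and why blacklists depend on keeping an ever-growing list accurate.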

What ISPs are already doing

In October 2011 the four main UK ISPs agreed to a code of practice regarding adult content filters.[2] The published code of practice did not mandate specific technical measures, but stated that the ISPs would provide new customers with an opportunity to make an “Active Choice” about the use of filtering tools (which they committed to offering free of charge). An ‘active choice’ means that the customer will have to explicitly specify whether they would or would not like to have filtering on, with no default option.

Why is Internet filtering a problem?

1. It blocks the wrong content and people: Filtering tools often block the wrong content. For example, our research into mobile networks’ child protection filters found that large numbers of sites were inappropriately blocked, including a church, a political party, community sites and many personal blogs[3]. As a result, many people are prevented from accessing legitimate content, and legitimate traffic is disrupted.

It is also difficult to define what should be blocked. For example, it is hard to define the boundaries of pornography or adult sexual content, especially when considering what is appropriate for young people of different ages.

It is also very simple to evade blocking. So whilst filtering might prevent accidental access to material, it can do little to stop anyone, including children, who is determined to reach a site.

2. It will lead to an infrastructure of censorship: In encouraging ISPs to implement network-level filtering, these proposals will lead to the development of a greater infrastructure of control and surveillance. With ISP filtering, control over what is blocked, and why, rests ultimately with ISPs. This means it is less transparent. The onus is on the ISP to communicate to users what filtering is happening on their networks. ISP-filters can therefore be subject to abuse for reasons other than child protection, for example for commercial or political reasons. There is also a privacy risk because it requires the ISP to monitor user traffic in some way. Often filtering tools are supplied to ISPs by third party companies, which means that details of Internet use are potentially gathered by those companies as well.

3. It undermines international commitments to freedom of expression: If the government promotes the development and deployment of the same equipment that is used by regimes practicing political censorship, it could undermine our international commitments to freedom of expression online, made so strongly by the Foreign Secretary and Prime Minister in the past year[4].

4. It discourages active parenting and can harm children’s learning: Filtering cannot replace involved and engaged parenting – and may induce a false sense of security on the part of parents and policy makers. This issue was highlighted by Professor Tanya Byron in her reports for the UK Government.[5] Further, as children grow older their ability to make their own decisions about what information to access grows in importance too[6].

The solution: Active choice, device-based filtering

People should be able to specify whether Internet filtering is on or off, and that filtering is best done at the device level. Any Internet filtering should follow two principles:

1. The choice to have filtering should involve active, informed consent by the account subscriber.

2. The filtering should happen as ‘close’ to the person requiring the filtering as possible, preferably on the device itself.

‘Active choice’ was recommended in the previous two reviews of child safety online. And to date, the Government has been supportive of a policy of promoting active choices by parents, rather than default filtering.

Some key reading

1. “Mobile Internet censorship: what’s happening and what to do about it”, Open Rights Group / LSE Media Policy Project, May 2012: http://www.openrightsgroup.org/ourwork/reports/mobile-internet-censorship:-whats-happening-and-what-we-can-do-about-it

2. “Why device-based Internet filtering via active choice is the best solution for child-protection online”, Open Rights Group, February 2012, http://www.openrightsgroup.org/assets/files/files/pdfs/Net%20Filtering%20Brief.pdf 

3. “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression”, Frank La Rue, May 2011 (especially pages 6-16), http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf



[4] See our Zine for example

[5] See Professor Tanya Byron, 2008, Safer Children in a Digital World: The Report of the Byron Review page 81

[6] See, for example, Tim Davies, Sangeet Bhullar, and Terri Dowty, “Rethinking responses to children and young people’s online lives”, September 2011, and Jenny Thomas, Senior Child Rights Officer at Child Rights International Network (CRIN), “Firewalling child rights in the name of ‘protection’”, June 15th 2012.