Response to CMS Committee inquiry into online safety

Context

The CMS Committee's Online Safety inquiry was launched in August 2013 and focuses on three issues:

  • How best to protect minors from accessing adult content;
  • Filtering out extremist material, including images of child abuse and material intended to promote terrorism or other acts of violence;
  • Preventing abusive or threatening comments on social media.

Their deadline for written comments was 30th September. Below is our written response to the inquiry.

For more information please contact Peter Bradwell, peter@openrightsgroup.org

 

Open Rights Group response, September 2013

We welcome the attention that the Committee is devoting to looking at these important issues and appreciate the opportunity to offer our views.

The areas covered by the Inquiry have been the subject of quite intense, heated and not always helpful or constructive debate this summer. We hope that this Inquiry provides an opportunity for a calm and considered look at these difficult issues.

Open Rights Group have done a significant amount of work on how Internet filtering works over the past few years. That has included:

1. Setting up a website called Blocked.org.uk, which helps us monitor reports of overblocking on mobile phone networks’ Internet filtering services.
2. In May 2012 producing a report, published jointly with LSE Media Policy Project, looking at how mobile networks implement Internet filtering, in particular addressing issues with over-blocking.
3. A response to the Department for Education’s consultation on parental Internet controls.
4. In September 2012, co-signing a letter to the Prime Minister alongside a coalition of civil society groups including Index on Censorship, Article 19, Big Brother Watch and Consumer Focus.
5. A fact sheet summarising some relevant evidence in late 2012.
6. Co-signing a joint letter to the Culture Secretary Maria Miller MP in June 2013, along with English PEN, Index on Censorship, and Big Brother Watch, regarding her more recent ‘summits’ with Internet companies.
7. A list of 20 key questions directed at ISPs regarding their implementation of Internet filtering.

As an organisation focused on human rights and civil liberties in the digital age, we believe it is possible to reconcile the idea that the Internet can create greater opportunities for exercising the right to freedom of expression with a desire to tackle the problems and dangers young people now face online.

We do not see these as mutually exclusive aims. But we also believe that a simplistic, restrictions-based approach to child safety will: fail on its own terms, due largely to the limitations of filtering technology; ignore the wider social issues that young people face; and lead to overly restrictive content practices that reduce the usefulness of the Internet for everybody.

 

1. How best to protect minors from accessing adult content

In summary, some of our key recommendations to the Government are:

1. Help parents make decisions about what is best for their household.
2. Do not create false expectations about what technology can do, and avoid thinking of filtering and restrictions as the solution.
3. Do not mandate network-level filtering for households. Endorse “active choice”, not default-on, filtering.
4. Distinguish clearly between different types of content and the responses appropriate for each.
5. Acknowledge over- and under-blocking as a serious problem. Ensure that providers of filtering services address this. Ensure they offer transparency regarding what their filters block and why, alongside an efficient, easy-to-use process for quickly correcting mistakes.
6. Improve sexual health and relationships education and ensure children and young people have supportive and easy-to-find routes to advice and help.

We believe that the Department for Education's response to its consultation in December 2012 arrived at a reasonable position – that the Government should help parents make their own decisions about what is best for their household. It has been disappointing to see the Government somewhat turn away from this and effectively mandate default-on filtering.

 

Challenging the ‘one click to safety’ approach

In his summer speech on filtering and child protection, the Prime Minister promised ‘no more hassle of downloading filters for every device, just one click protection. One click to protect your whole home and to keep your children safe.’

We believe this is unhelpful and misleading, and runs against the advice of the previous Byron and Bailey reviews of online safety undertaken for the Government.

There is no switch that will keep children and young people safe. If we encourage parents to believe that there is, too many will assume their job is done when they press it. That is to ignore the limitations of the filtering services and the broader issues and pressures that children and young people face online.

 

Start with good evidence

Too often, those in the debate about online safety have relied on anecdotes, unsourced figures or insufficiently robust evidence. We would urge the Committee to base its findings on the best available evidence – there is plenty of good quality work, for example from the EU Kids Online project run by Professor Sonia Livingstone – and to be rigorous in questioning the evidence presented to it.

 

Avoid default-on filters and endorse active choice

There is a risk that default internet filtering will move decisions about what is appropriate for families and households further out of parents’ hands. The Government should instead promote an ‘active choice’ model that encourages parents to make their own decisions about what is appropriate and what tools to use.

We believe the available evidence (for example, from the EU Kids Online project) does not support an approach focused simply or primarily on filtering restrictions, and certainly not a default ‘on’ Internet filter.

 

Filtering services are error-prone

We know that filtering systems often, through error or overreach, lead to the blocking of legal and legitimate content. This is a byproduct of trying to categorise and filter such a massive volume of content. We would point here, for example, to our work looking at mobile Internet filtering products (noted in the introduction). We found political blogs, political campaigns, community websites, gay news sites, church groups and technology news sites, amongst others, blocked by accident by mobile networks.

Rather than debating whether this is a problem, policy makers should be asking how to take it into account. We are concerned that some vocal supporters of default filtering play down concerns about mistaken blocking, treating them as pedantic quibbles.

There are a number of damaging consequences of ignoring this problem.

First, website owners find it hard to establish whether their site is blocked by one of the filters. Most operators of blocking services do not offer an easy way to check URLs (O2 is a notable exception – see their URL checker). The problem grows when a website owner is faced with a number of blocking services across the various ISPs and service providers, and grows further if they have to consider other countries’ policies. If we pretend that over-blocking is not a problem, there will be no effort to improve matters for affected website operators.

Second, our experience of talking to affected website owners is that they find it hard to get their site unblocked when it is mistakenly categorised as ‘blockable’ by a filtering service.

Service providers operating filtering services should be asked how they have assessed the risks of over-blocking and whether they have taken steps to address this in their deployment of filtering. They should explain how they will ensure website owners are able to report mistakes and get problems fixed. And service providers should ensure that website owners are able to easily check whether their site has been categorised by the filtering system.

We would like to highlight two damaging consequences of over-blocking – on access to information, and on the economy.

 

a. Access to information

First, it would be worrying if sites blocked by accident or through overly broad categories were considered mere collateral damage of filtering services, as this would deny young people access to information at the very moment when it is most important to help them satisfy their curiosity and interests.

Filtering can lead to children, young people, and adults being denied access to legitimate and age-appropriate information and resources such as sexual health information and advice.

Filtering that covers different age ranges and/or a broadly defined set of ‘adult’ content can deny young people access to material appropriate to their development and needs. In a paper to the EU Kids Online conference in 2011, Tim Davies, Sangeet Bhullar and Terri Dowty argued that filtering can therefore restrict young people’s rights in the name of protecting them from risk – specifically “rights to freedom of expression and access to information across frontiers (Article 13, 17), rights to freedom of association (Article 14), rights to preparation for responsible life in a free society (Article 29) and rights to protection of privacy (Article 16)”. They argue that:

“…these broader rights are frequently neglected – with young people’s access to information on key topics of health, politics and sexuality limited by Internet filtering – and with a lack of critical formal and informal education supporting young people to gain the skills to live creative and responsible lives in increasingly digitally mediated societies.”

(See Tim Davies, Sangeet Bhullar, and Terri Dowty, “Rethinking responses to children and young people’s online lives”, September 2011)

 

b. An unintended economic impact

Second, to give a specific example, we heard from the owner of an online gift shop that their site had been blocked by Orange’s Safeguard system over Christmas last year. The site sold engraved cigarette lighters, so we assumed the filter had mistakenly categorised the site under its ‘tobacco’ category. Despite reporting the issue in early December, it took until January to get the problem fixed and the site removed from the block list.

A larger business might have been able to press for a resolution sooner – we see no good reason why smaller businesses should be inhibited in their efforts to reach consumers online in ways that larger businesses are not.

Building systems in which less established businesses or organisations are hampered in this way undermines one of the core benefits that the Internet offers for economic and social innovation.

 

Device based filters are preferable to network level filters

We urge the Committee to look seriously at the relative merits of network-level and device-based filters, across different contexts. We have previously written about some of the benefits and problems of each, and refer the Committee to this previous briefing.

A healthy market for parental controls is developing; everything proposed regarding filtering technology is available to parents already. Mandating network level filtering would amount to an intervention that could disrupt an emerging market for Internet access tools, whilst imposing significant costs on Internet Service Providers.

The Government’s role should be to support this variety of tools and services by working with industry to ensure these are easily available and that parents understand how to use them.

 

Filtering services block more than pornography

So far the debate has tended to focus on how filters should block ‘pornography’ or, even more specifically, ‘hard core pornography’. Yet most if not all filtering systems are set up to block a range of categories of content extending beyond adult sexual content. We urge the Committee to look at the variety of content that filtering systems block, how those categories are developed, who decides that they are ’18 rated’, and what sorts of sites those offering filtering believe should fall under those categories.

 

Different categories of material require different approaches

Too often in the recent debate, a variety of different categories of material have been confused and muddled together. That has led to a confused discussion about the appropriate way to deal with such a variety of material.

For example, child abuse images are illegal and dealt with by the Internet Watch Foundation. Other types of unlawful content may be subject to powers that can lead to content being taken down.

Other material, however, is merely considered unsuitable for people of particular ages – and it is this material that is relevant when talking about home Internet filters. Deciding what falls into this category involves more subjective judgments, which will of course vary across households and value systems.

When talking about legal material, there is no easy way to define a category of unsuitable content that young people or children should be protected from. This is especially problematic when considering that filtering can happen in a variety of settings (libraries, shops and cafes, schools, homes, or devices carried between all of them), for a variety of age groups, and for people with a variety of values and beliefs.

We urge the Committee to keep a clear head about the very different types of material at issue and how each requires a different approach when considering how to create safer experiences for young people online across contexts.

 

Ensuring that policy makers are asking the right questions

We have not been convinced that, in pushing filtering policies forward this year, policy makers have examined the most important practical questions closely enough.

As a result, we have put 20 questions to ISPs (also noted in the introduction), covering some of the most significant practical issues in the implementation of filtering services, including privacy, how choices will be framed and how mistakes will be dealt with. We will submit the answers to the Committee when we receive them. We urge the Committee to look at these questions and consider the extent to which ISPs have addressed them adequately.

 

Improve sexual health and relationships education

We support the work of academics such as Professor Andy Phippen, who in his research engages with young people in schools to talk about the pressures and problems they face in a world in which ‘digital’ and ‘real’ life are not distinct.

We support the view that these are social problems facilitated by technology, and that they cannot be solved by technological fixes. We also support the recommendation that we should endeavour to create an environment in which young people feel safe and supported in talking about the problems they face. One aspect of this is improving sexual health and relationships education. So we urge the Committee to look carefully at the context in which children and young people experience problematic material online, rather than just at the limited tools available to try to restrict access to some of it.

2. Filtering out extremist material, including images of child abuse and material intended to promote terrorism or other acts of violence

We would firstly echo our previous comments. For example, as noted above, it is important for policy makers to recognise that we are talking about a variety of different categories of material. We should not assume that it is easy to define what content to filter out for which people or age group, and we should keep illegal and merely objectionable material distinct.

Our comments above about the subjective judgments involved in deciding what content to filter are particularly relevant when looking at a broader range of content. Words like ‘violent’, ‘extreme’, ‘upsetting’ or ‘offensive’ will mean different things to people of different ages and beliefs, and this will vary across different settings.

If policy makers do push for an overly simple approach of specifying broad categories of legal but objectionable content to be filtered in particular places, it is important to remember that this does not stop these subjective judgments being made. Instead, those judgments are made by a combination of those in charge of the provision of Internet access in that context (for example, perhaps a building maintenance service), the Internet service provider they use and the filtering service that ISP uses. Each will have their own attitude towards the blockable categories and what material should fall within them. Where these systems are opaque, as they often are, it just becomes harder to understand what decisions about access have been made.

3. Preventing abusive or threatening comments on social media

We believe that an important aspect of dealing with this is to create a support system for young people with clear routes for them to report problems and find help.

Once again, it is important to distinguish between different types of behaviour, such as offensiveness, abusiveness or threats, which will require different approaches.

We also urge the Committee to look carefully at the issue of anonymity and, relatedly, pseudonymity. It is too easy to assume that tackling anonymity online is a simple solution to abusive behaviour.

In fact, people are usually not truly ‘anonymous’ when they are online. People leave all sorts of information that can identify them, and it is sometimes possible to use this information to identify somebody with varying levels of confidence – even if they post messages as an ‘anonymous’ or ‘pseudonymous’ user. For example, an ISP may try to ‘match’ an IP address with one of its subscribers. There are various legal powers that, in some circumstances, require Internet companies to disclose this data, and which permit its use in various contexts for the purposes of trying to identify a user.

Further, anonymity in fact serves many positive purposes in a variety of circumstances. For further information, please see our short introduction to anonymity online, available from our website.