
What mobile internet filtering tells us about porn blocks

There’s been plenty of coverage today of calls to do more to block access to pornography, and specifically pornography on the Internet. There’s plenty to be frustrated about in much of this coverage – for example, an inability in some cases to distinguish between child abuse images and pornography.

But for this post I’ll focus on some lessons we can draw from the Internet filtering that already takes place, mostly by default, on mobile networks.

This post does not deal with whether blocking actually does what it is supposed to – for example, preventing access or exposure to adult material, or helping improve attitudes to sex or gender. Nor is it intended to tacitly endorse blocking as a strategy – rather, it is written to report what we have observed happen when blocking is deployed. It is always worth reiterating that restricting access to pornography, in the way described below, is a different issue from tackling child abuse and access to images of it.

Whether you think that website blocking is a good idea or not, it is important to at the very least recognise that it has serious, tangible, negative consequences, especially when it is switched on by default at the network level. This post helps demonstrate what some – but by no means all – of these issues are and why they happen.

1. It’s very hard to define what you think should be blocked

Blocking is almost never just about stopping access to pornography. Mobile networks block a wide variety of content, ranging from blogs to content related to tobacco or alcohol. It is hard enough trying to define pornography, let alone what fits into categories such as ‘bombs’ or ‘esoteric content’. Orange listed both of these as blockable categories on their website until a recent update to their customer services pages. I assume that the blocking categories remain the same, but have asked if this is the case and will report back if not.

Even if blocking is restricted to subcategories like ‘violent porn’, defining what that means in practice is extremely difficult because, due to the sheer volume of sites to check, websites and content are categorised by robots, not human beings.
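To illustrate why robot categorisation misfires, here is a deliberately crude toy classifier – not any vendor’s real system, and the keyword lists are invented – that sorts pages into categories by keyword matching:

```python
# Toy illustration of automated, keyword-based content categorisation.
# The categories and keywords below are invented for the example; real
# filtering vendors' rules are not public.
CATEGORY_KEYWORDS = {
    "tobacco": ["cigarette", "lighter", "pipe"],
    "alcohol": ["beer", "wine", "whisky"],
}

def categorise(page_text: str) -> list[str]:
    """Return every category whose keywords appear in the page text."""
    text = page_text.lower()
    return sorted(cat for cat, words in CATEGORY_KEYWORDS.items()
                  if any(word in text for word in words))
```

With rules this blunt, a gift shop selling engraved lighters, or a plumbing-supplies page mentioning ‘pipe’, lands in the ‘tobacco’ category – the same kind of mistake described in the examples below.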

An added complication is the age level at which filtering is set. What sort of material should a 17-year-old have access to? What about a 13-year-old? Or an 8-year-old? All very different. Any default blocking involves decisions about what level to set content restrictions at for a given age. On mobile networks, there is no facility to allow parents to tailor filtering to suit their children’s age.

2. The ‘wrong’ things will be blocked

Through mistakes or abuse, too many sites will be blocked. This is not in question. 

We have found political blogs, technology news sites, shops, community sites, a blog about things that go on a shelf, campaign websites and churches blocked on various mobile networks. The primary website of the privacy tool Tor (meaning the HTTP version of the Tor Project website, rather than connections to the Tor network) was blocked on Vodafone, O2 and Three last year. We heard from an online gift shop that was blocked over Christmas last year. We assumed this was because they sold engraved lighters – and thus were categorised as ‘tobacco’. It took over a month to get this fixed. 

In Australia, 1,200 websites were accidentally blocked when the Australian Securities and Investments Commission tried to take down two sites it believed were behind a fraud campaign.

There are a number of reasons why this might happen, ranging from categories being too broadly defined and mistaken categorisations through to human error. As the filtering services are run by third parties, and exactly what is blocked is not known, there is also the danger that filtering will be abused for any of the reasons someone might want to restrict access to a website – from commercial rivalry through to political censorship.

It is important to recognise that although filtering is often merely inconvenient for the user subjected to it, it is much more than an inconvenience when your site is blocked: customers can’t reach your shop, readers can’t see your political commentary, nobody can share your cat photos. Whatever the content, sites stuck behind filters are cut off from everybody on the relevant network.

3. It’s hard to find out what is blocked and why

There’s no easy way of finding out if your site is one of those blocked – unless we require people to have accounts with every mobile network so they can habitually check for themselves. And if you’re not in the UK, that’s not really an option. Only O2 have a (very useful) URL checker.
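Because operators publish no block lists and (O2 aside) offer no checker, site owners are left with ad-hoc tests – fetching their own site from a SIM on each network and guessing from the response. A minimal sketch of such a heuristic, assuming invented block-page marker strings rather than any operator’s real block-page text:

```python
# Illustrative sketch only: mobile operators do not expose their block
# lists, so a site owner can only probe from each network and apply
# crude heuristics to the response. The marker strings are hypothetical.
BLOCK_PAGE_MARKERS = ("content lock", "age restricted", "parental control")

def classify_response(status: int, requested_url: str,
                      final_url: str, body: str) -> str:
    """Crudely classify an HTTP response as 'ok', 'blocked' or 'unclear'.

    Filters behave inconsistently: some return 403 Forbidden directly,
    while others redirect to a branded block page with a 200 status.
    """
    lowered = body.lower()
    if status == 403:
        return "blocked"
    if final_url != requested_url and any(m in lowered
                                          for m in BLOCK_PAGE_MARKERS):
        # Redirected to a page that reads like a filter's block page.
        return "blocked"
    if status == 200:
        return "ok"
    return "unclear"
```

The ‘unclear’ case is the point: without transparency from operators, even a careful test often cannot distinguish a filter block from an ordinary outage.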

So we don’t really know why mobile operators block some websites, how they come up with the categories that they think should be blocked, or how they decide what sites fit into those categories. They are simply not transparent enough about any of these decisions, or about who makes them.

Mobile operators all say that they act according to a code of conduct set by the Mobile Broadband Group. But this code does not itself provide any criteria for determining or defining ‘blockable’ content. It does point to a framework devised by the Independent Mobile Classification Body (IMCB).

That code states that filters are ‘set at a level that is intended to filter out content approximately equivalent to commercial content with a classification of 18.’ There is then a process of interpretation, as mobile operators derive blocking lists from the framework’s specifications. And there is an added layer of interpretation on top of that: the filtering lists are usually maintained by the external third-party providers of the filtering systems.

There is a further problem of how ‘current’ the frameworks are. The IMCB Framework to which mobile operators adhere in their filtering policies was written in 2005. The latest version of the code of practice on self-regulation was published in 2009, with the original published in 2004.

It is not clear how frequently the mobile operators, individually or collectively through the Mobile Broadband Group, review how appropriate the filtering classifications are, or more broadly the effectiveness of their filtering systems – whether mistakes are made, how prevalent they are, and how they deal with them.

4. Reporting problems and mistakes is very difficult

Mobile operators’ staff often seem uninformed about mobile Internet filtering, and thus are poorly trained to help users making complaints – whether those users are trying to report a mistaken block or to have filtering removed from their account. Furthermore, a customer’s request to have filtering removed may be framed as a request to turn on ‘adult content’ – which suggests the primary interest is adult sexual material, and ignores the breadth of content blocked under these filtering systems, noted above.

This is especially problematic for sites that are blocked. Most networks see blocking as an issue only for their own customers, and specifically only as a question of whether a customer’s account has blocking enabled. It can therefore be especially difficult to get your website delisted by the blocking service when it is blocked inappropriately. For many, this will seriously inhibit their ability to trade or share information.

5. Failure to put effort into addressing the problems

One of the biggest problems with mobile blocking has been the haphazard way in which the systems work – for example, the way networks decide what content should be blocked, and how mistakes should be addressed. Mobile networks have faced significant political and media pressure to do something about possible access to pornography on their networks, but feel very little in the way of commercial or political pressure in the other direction.

So it’s easy to see that there are few incentives for them to take seriously issues such as over-blocking, the reporting of mistakes or decisions about what is ‘blockable’. This is why it’s really important that policy makers understand that the ‘switch it off’ calls are not as simple as they sound, and that there is a tangible impact on freedom of expression.

We have been looking at this for a while now, and almost exactly a year ago published a report called ‘Mobile Internet Censorship: what’s happening and what to do about it’. We made a number of asks of mobile networks – more transparency, more choice, better means of fixing mistakes. It’s fair to say that, aside from networks being slightly easier to contact, very little has been done to fix the problems.

There are other problems with website blocking – for example, how easy it is to get around blocks, how website blocking won’t really address the access to or distribution of illegal material, how it might lull parents into a false sense of security, and how network-level blocking is a problematic technological solution, to name a few. And that’s without looking at some of the calls for age verification and registration made today.

The Government’s position on website filtering for child protection currently seems reasonably sensible – help households make their own decisions about what is appropriate for them. We have to hope that the sort of pressure exerted by calls for more blocking made today, emotively made in the wake of such a tragedy, doesn’t persuade them to take a more drastic route without considering the consequences.

At ORGCon next week Professor Andy Phippen will be giving a talk called ‘Think of the children!’, where he’ll talk about his research into young people, technology and exposure or access to ‘harmful’ material. And Child Rights International Network will be talking about the impact of blocking on children’s rights in our rapid fire talks session (they have written previously about this for ORGZine).

A full programme and tickets are available at the ORGCon website.