Blog


November 03, 2016 | Jim Killock

Age verification for porn sites is tricky, so let's try censorship

A cross bench group of MPs has tabled an amendment to block pornographic websites that fail to provide ‘age verification’ technologies.

The amendment has been tabled because MPs understand that age verification cannot be imposed upon the entire, mostly US-based, pornographic industry by the UK alone. In the USA, age verification has been seen by the courts as an infringement on the right of individuals to receive and impart information. This is unlikely to change, so use of age verification technologies will be limited at best.

However, the attempt to punish websites by blocking them is also a punishment inflicted on the visitors to those websites. Blocking them is a form of censorship: it is an attempt to restrict access to them for everyone.

When material is restricted in this way, it needs to be done for reasons that are both necessary to the goal and proportionate to the aim. To be proportionate, it has to be effective.

The goal is to protect children, although the level of harm has not been established. According to Ofcom: “More than nine in ten parents in 2015 said they mediated their child’s use of the internet in some way, with 96% of parents of 3-4s and 94% of parents of 5-15s using a combination of: regularly talking to their children about managing online risks, using technical tools, supervising their child, and using rules or restrictions.” (1)

70% of households have no children; the issue affects the 30% that do, and those households can choose to apply filters and use other strategies to keep their children safe online. These factors make the necessity and proportionality of both age verification and censorship quite difficult to establish.

It is worth remembering also that the NSPCC and others tend to accept that teenagers are likely to continue to access pornography despite these measures. They focus their concerns on 9-12 year olds coming across inappropriate material, despite a lack of evidence that there is any volume of these incidents, or that harm has resulted. While it is very important to ensure that 9-12 year olds are safe online, it seems more practical to focus attention directly on their online environment, for instance through filters and parental intervention, than attempting to make the entire UK Internet conform to standards that are acceptable for this age group.

That MPs are resorting to proposals for website blocking tells us that the age verification proposals themselves are flawed. MPs should be asking about the costs and privacy impacts, and why such a lack of thought has gone into this. Finally, they should be asking what they can do to help children through practical education and discussion of the issues surrounding pornography, which will not go away, with or without attempts to restrict access.

(1) Ofcom report on internet safety measures: Strategies of parental protection for children online, Ofcom, December 2015: http://stakeholders.ofcom.org.uk/binaries/internet/fourth_internet_safety_report.pdf p. 7



November 02, 2016 | Jim Killock

Facebook is right to sink Admiral's app

Firstcarquote aimed to stick Admiral’s famous spyglass right up your Facebook feed.

Late yesterday, on the eve of Admiral’s attempted launch of Firstcarquote, the application’s permission to use Facebook data was revoked by the social media site.

According to Admiral’s press release, their app would use “social data personality assessments, matched to real claims data, to better understand first time drivers and more accurately predict risk.” So young people could offer up their Facebook posts in the hope of getting a reduction in their car insurance.

However, their application has been found to be in breach of Facebook's Platform Policy section 3.15, which states:

Don’t use data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.

Firstcarquote’s site says:

“We were really hoping to have our sparkling new product ready for you, but there’s a hitch: we still have to sort a few final details.”

Like persuading Facebook to change their Platform Policy.

There are significant risks in allowing the financial or insurance industry to base assessments on our social media activity. We might be penalised for our posts or denied benefits and discounts because we don’t share enough or have interests that mark us out as different and somehow unreliable.  Whether intentional or not, algorithms could perpetuate social biases that are based on race, gender, religion or sexuality. Without knowing the criteria for such decisions, how can we appeal against them? Will we start self-censoring our social media out of fear that we will be judged a high risk at some point in the future?

These practices could not only change how we use platforms like Facebook but also have the potential to undermine our trust in them. It is sensible for Facebook to continue to restrict these activities, despite patents indicating that they may themselves wish to monetise Facebook data in this kind of way. 

Insurers and financial companies who are beginning to use social media data need to engage in a public discussion about the ethics of these practices, which allow a very intense examination of factors that are entirely non-financial.

Companies like Admiral also need to think about how using such fluid personal information leaves their system vulnerable to being gamed.  How hard would it be to work out what “likes” Admiral views as favourable, or unfavourable, and alter your profile accordingly? What we regard as a chilling effect could also turn out to be an incentive to cheat.

We must also recognise that these problems may confront us in the future, as a result of the forthcoming changes introduced by the General Data Protection Regulation. The government is clear that this will enter UK law regardless of Brexit, which is sensible.

The GDPR creates many new rights for people, one of which is the famous right to delete your data, and another is the right to obtain all of your information at no cost, in electronic format, called “data portability”.

Data portability creates significant risks as well as benefits. It could be very hard to stop some industries attempting to abuse the trust of individuals, asking them to share their data wholesale to obtain discounts or favourable deals, while perhaps not being completely upfront about the downsides to the consumer.

There are extra protections in the GDPR around profiling, and particularly important is the right to have information deleted if you find you have “overshared”.

Nevertheless, Admiral’s application shows a lack of understanding of the risks and responsibilities in parts of the financial industry. Indeed, Admiral appear not to have done even the basics of reading Facebook’s terms and conditions, or to have understood the capacity for their product to be gamed. If this disregard is symptomatic, it may point to a need for sector-specific privacy legislation for the financial industry, to further protect consumers from abuse through the use of inappropriate or unreliable data.

 



October 27, 2016 | Jim Killock

Now we want censorship: porn controls in the Digital Economy Bill are running out of control

The government’s proposal for age verification to access pornography is running out of control. MPs have worked out that attempts to verify adults’ ages won’t stop children from accessing other pornographic websites: so their proposed answer is to start censoring these websites.

That’s right: in order to make age verification technologies “work”, some MPs want to block completely legal content from access by every UK citizen. It would have a massive impact on the free expression of adults across the UK. The impact for sexual minorities would be particularly severe.

This only serves to illustrate the problems with the AV proposal. Age verification was always likely to be accompanied by calls to block “non-compliant” overseas websites, and also to be extended to more and more categories of “unsuitable” material.

We have to draw a line. Child protection is very important, but let’s try to place this policy in some context:

  • 70% of UK households have no children

  • Take-up of ISP filters is around 10-30% depending on the ISP, roughly in line with expectations, and filters are already restricting content in the majority of households with children (other measures may be restricting access in other cases).

  • Most adults access pornography, including a large proportion of women.

  • Less than 3% of children aged 9-12 are believed to have accessed inappropriate material

  • Pornography can and will be circulated by young people by email, portable media and private messaging systems

  • The most effective protective measure is likely to be helping young people understand and regulate their own behaviour through education, which the government refuses to make compulsory

MPs have to ask whether infringing on the right of the entire UK population to receive and impart legal material is a proportionate and effective response to the challenges they wish to address.

Censorship is an extreme response that should be reserved for the very worst, most harmful kinds of unlawful material: it impacts not just the publisher, but the reader. Yet this is supposed to be a punishment targeted at the publishers, in order to persuade the sites to “comply”.

If website blocking were rolled out to enforce AV compliance, the regulator would be forced to choose between blocking a handful of websites, which would fail to “resolve” the accessibility of pornography, and trying to censor thousands of websites, with the attendant administrative burden and increasing likelihood of errors.

You may ask: how likely is this to become law? Right now, Labour seem to consider this approach quite reasonable. If Labour did support these motions in a vote, together with a number of Conservative rebels, this amendment could easily be added to the Bill.

Another area where the Digital Economy Bill is running out of control is the measures to target services that “help” pornography publishers. The Bill tries to give duties to “ancillary services” such as card payment providers or advertising networks, to stop non-compliant publishers from making money from UK customers. However, the term is vague. Ancillary services are defined as those who:

provide[s], in the course of a business, services which enable or facilitate the making available of pornographic material or prohibited material on the internet by the [publisher]

Ancillary services could include website hosts, search engines, DNS services, web designers, hosted script libraries, furniture suppliers … this needs restriction just for the sake of some basic legal certainty.

Further problems are arising for services including Twitter, who operate on the assumption that adults can use them to circulate whatever they like, including pornography. It is unclear if or when they might be caught by the provisions. They are also potentially “ancillary providers” who could be forced to stop “supplying” their service to pornographers serving UK customers. They might therefore be forced to block adult content accounts for UK users, with or without age verification.

The underlying problem starts with the strategy to control access to widely used and legal content through legislative measures. This is not a sane way to proceed. It has led, and will lead, to further calls for control and censorship as the first steps fail. More calls to “fix” the holes follow, and the UK ends up on a ratchet of increasing control. Nothing quite works, so more fixes are needed. The measures get increasingly disproportionate.

Website blocking needs to be opposed, and kept out of the Bill.

 



October 19, 2016 | Jim Killock

Fig leaves for privacy in Age Verification

The Digital Economy Bill mandates that pornographic websites must verify the age of their customers. Are there any powers to protect user privacy?

Yesterday we published a blog detailing the lack of privacy safeguards for Age Verification systems mandated in the Digital Economy Bill. Since then, we have been offered two explanations as to why the regulator designate, the BBFC, may think that privacy can be regulated.

The first and most important claim is that Clause 15 may allow the regulation of AV services, in an open-ended and non-specific way:

15 Internet pornography: requirement to prevent access by persons under the age of 18

  1. A person must not make pornographic material available on the internet on a commercial basis to persons in the United Kingdom except in a way that secures that, at any given time, the material is not normally accessible by persons under the age of 18

  2. [snip]

  3. The age-verification regulator (see section 17) must publish guidance about—

    (a) types of arrangements for making pornographic material available that the regulator will treat as complying with subsection (1);

However, this clause seems to regulate publishers who “make pornographic material available on the internet”, and what is regulated in 15 (3) (a) is the “arrangements for making pornographic material available”. Neither provision mentions age verification systems, which are not really an “arrangement for making pornographic material available” except inasmuch as they are used by the publisher to verify age correctly.

AV systems are not “making pornography available”.

The argument, however, runs that the BBFC could, under 15 (3) (a), tell websites what kinds of AV systems, with what privacy standards, they may use.

If the BBFC sought to regulate providers of age verification systems by this means, we could expect it to face legal challenge for exceeding its powers. A court may well consider it unfair for the BBFC to impose new privacy and security requirements on AV providers or website publishers when those requirements are not spelled out in the Bill, and when the providers are already subject to separate legal regimes such as data protection and e-privacy.

This clause does not provide the BBFC with enough power to guarantee a high standard of privacy for end users, as any potential requirements are undefined. The bill should spell out what the standards are, in order to meet an ‘accordance with the law’ test for intrusions on the fundamental right to privacy.

The second fig leaf towards privacy is the draft standard for age verification technologies produced by the Digital Policy Alliance. This is being edited by the British Standards Institution, as PAS 1296. It has been touted as the means by which commercial outlets will produce a workable system.

The government may believe that PAS 1296 could, via Clause 15 (3) (a), be stipulated as a standard that Age Verification providers abide by in order to supply publishers, thereby giving a higher standard of protection than data protection law alone.

PAS 1296 provides general guidance and has no strong means of enforcement against companies that adopt it. It is a soft design guide that provides broad principles to adopt when producing these systems.

Contrast this, for instance, with the hard and fast contractual arrangements the government’s Verify system has in place with its providers, alongside firmly specified protocols. Or card payment processors, who must abide by strict terms and conditions set by the card companies, where bad actors rapidly get switched off.

The result is that PAS 1296 says little about security requirements, data protection standards, or anything else we are concerned about. It stipulates that age verification providers cannot be sued for losing your data; rather, you must sue the website owner, i.e. the porn site that contracted with the age verifier.

There are also several terminological gaffes, such as referring to PII (personally identifiable information), a US legal concept, rather than to the EU and UK’s ‘personal data’. This suggests that PAS 1296 is very much a draft, and in fact appears to have been hastily cobbled together.

However you look at it, the proposed PAS 1296 standard is very generic, lacks meaningful enforcement and is designed to tackle situations where the user has some control and choice, and can provide meaningful consent. This is not the case with this duty for pornographic publishers. Users have no choice but to use age verification to access the content, and the publishers are forced to provide such tools.

Pornography companies meanwhile have every reason to do age verification as cheaply as possible, and possibly to harvest as much user data as they can, to track and profile users, especially where that data may in future, at the flick of a switch, be used for other purposes such as advertising tracking. This combination of poor incentives has plenty of potential for disastrous consequences.

What is needed are clear, spelt-out, legally binding duties for the regulator to provide security, privacy and anonymity protections for end users. To be clear, the AV Regulator, or BBFC, does not need to be the organisation that enforces these standards. There are powers in the Bill for it to delegate the regulator’s responsibilities. But we have a very dangerous situation if these duties do not exist.



October 18, 2016 | Jim Killock

A database of the UK's porn habits. What could possibly go wrong?

The Government wants people who view pornography to show that they are over 18, via Age Verification systems. This is aimed at reducing the likelihood of children accessing inappropriate content.

To this end the Digital Economy Bill creates a regulator that will seek to ensure that adult content websites verify the age of users or face monetary penalties; in the case of overseas sites, the regulator can ask payment providers such as VISA to refuse to process UK payments for non-compliant providers.

There are obvious problems with this, which we detail elsewhere.

However, the worst risks are worth going into in some detail, not least from the perspective of the Bill Committee who want the Age Verification system to succeed.

As David Austen of the BBFC, which is likely to become the Age Verification Regulator, said:

Privacy is one of the most important things to get right in relation to this regime. As a regulator, we are not interested in identity at all. The only thing that we are interested in is age, and the only thing that a porn website should be interested in is age. The simple question that should be returned to the pornographic website or app is, “Is this person 18 or over?” The answer should be either yes or no. No other personal details are necessary.

However, the Age Verification Regulator has no duties in relation to the Age Verification systems. They will make sites verify age, or issue penalties, but they are given no duty to protect people’s privacy or security, or to defend against the cyber security risks that may emerge from the Age Verification systems themselves.

David Austen’s expectations are unfortunately entirely out of his hands.

Instead, the government appears to assume that Data Protection law will be adequate to deal with the privacy and security risks. Meanwhile, the market will provide the tools.

The market has a plethora of possible means to solve this problem. Some involve vast data trawls through Facebook and social media. Others plan to link people’s identity across web services and will provide a way to profile people’s porn viewing habits. Still others attempt to piggyback upon payment providers and risk confusing their defences against fraud. Many appear to encourage people to submit sensitive information to services that the users, and the regulator, will have little or no understanding of.

And yet with all the risks that these solutions pose, all of these solutions may be entirely data protection compliant. This is because data protection allows people to share pretty much whatever they agree to share, on the basis that they are free to make agreements with whoever they wish, by providing ‘consent’.

In other words: Data protection law is simply not designed to govern situations where the user is forced to agree to the use of highly intrusive tools against themselves.

What makes this proposal more dangerous is that the incentives for the industry are poor and lead in the wrong direction. They have no desire for large costs, but would benefit vastly from acquiring user data.

If the government wants to have Age Verification in place, it must mandate a system that increases the privacy and safety of end users, since the users will be compelled to use Age Verification tools. Also, any and all Age Verification solutions must not make Britain’s cybersecurity worse overall, e.g. by building databases of the nation’s porn-surfing habits which might later appear on Wikileaks.

The Digital Economy Bill’s impact on privacy of users should, in human rights law, be properly spelled out (“in accordance with the law”) and be designed to minimise the impacts on people (necessary and proportionate). Thus failure to provide protections places the entire system under threat of potential legal challenges.

User data in these systems will be especially sensitive, being linked to private sexual preferences and potentially impacting particularly badly on sexual minorities if it goes wrong, through data breaches or simple chilling effects. This data is regarded as particularly sensitive in law.

The Government in fact has at its hands a system called Verify, which could provide age verification in a privacy-friendly manner. The Government ought to be explaining why the high standards of its own Verify system are not being applied to Age Verification, or indeed, why the government is not prepared to use its own systems to minimise the impacts.

As with web filtering, there is no evidence that Age Verification will prevent an even slightly determined teenager from accessing pornography, nor reduce demand for it among young people. The Government appears to be looking for an easy fix to a complex social problem. The Internet has given young people unprecedented access to adult content but it’s education rather than tech solutions that are most likely to address problems arising from this. Serious questions about the efficacy and therefore proportionality of this measure remain.

However, legislating for the Age Verification problem to be “solved” without any specific regulation for any private sector operator who wants to “help” is simply to throw the privacy of the UK’s adult population to the mercy of the porn industry. With this in mind, we have drafted an amendment to introduce the duties necessary to minimise the privacy impacts, which could also reduce if not remove the free expression harms to adults.

 



October 17, 2016 | Pam Cowburn

In 'vest'ing in crime fighting technology – accountability versus privacy rights?

The Met Police have announced that body-worn cameras will be rolled out across the force. ORG's Javier Ruiz and Pam Cowburn spoke to Alex Heshmaty about the initiative when it was first announced in 2014.

What impact is wearable technology likely to have on police safety and effective crime fighting? Conversely, what's the impact on police accountability and reliability of evidence?

This initiative would further increase the scope of surveillance in the UK. Already, we have one of the highest rates of CCTV cameras by population in the world. A 2013 survey estimated that there could be up to 5.9 million surveillance cameras in the UK, one for every 11 people.

Wearable technology may be even more intrusive than CCTV, capturing up-close visuals and audio recordings which, in the case of the police, could be of victims and perpetrators involved in violent and graphic crimes.

While it's important to make policing more transparent and accountable, we need to make sure that we don't over-rely on technology to achieve this. Change must also come through wider policies and attempts to change cultural working practices.

Similarly, the effectiveness of surveillance as a crime prevention measure should not be over-stated and may not always be justified by the cost. Other more low-tech measures – such as better street lighting – may be more effective in preventing crime.

Although video recordings may provide useful evidence that can help to secure convictions, as with other kinds of evidence, they can also be misleading if presented without relevant context. If cameras are on all the time, the police are effectively filming the public on a continual basis regardless of whether they are involved in a crime. In terms of making sure the police are accountable, it is less likely that police abuses would happen in public places. But, it might be preferable to have cameras in police vehicles – where there have been accusations of abuse and where it is less likely that bystanders will be filmed – in the same way that there are cameras in police stations.

Issues may arise on how audio-visual materials are used and how long they are kept for, particularly when the police are filming members of the public not involved in criminal activity.

Arguably there may be benefits to the police wearing cameras at demonstrations. Protesters may feel that this might deter heavy handed dispersal tactics by the police or provide evidence of them if they occur. Conversely, police officers may feel that they have evidence to counter any claims of police brutality or provide evidence of provocation. But cameras would also give the police a visual record of everyone who attended a particular demonstration. How might that footage be used afterwards? Could facial recognition software be used to identify people to keep a note for future demonstrations or investigations?

Won't it just be possible to turn the camera off (in the same way as a recording can be stopped)?

If it is possible to turn a camera off, there would need to be mechanisms within the camera to keep a proper audit of when it has been switched on and off, and why.

Continual recording would mean that all of a police officer's daily activities would be recorded and they would be fully accountable for their actions. But it would also mean that many members of the public, not involved in crimes, would be captured on film and this would be an unnecessary intrusion on their privacy. In addition, there are times when police officers have to use their discretion. If they were wearing cameras, they might feel obliged to pursue minor infractions, which they might deal with differently otherwise.

Conversely, selective recording could lead to accusations that video footage is misleading, has been taken out of context, or deliberately manipulated to secure a conviction.

Does the use of such technology present any challenges to current criminal law and police practice?

The use of CCTV by public authorities is regulated under the Protection of Freedoms Act 2012 (PFA 2012). The Surveillance Camera Code of Practice pursuant to PFA 2012 provides guidance to public authorities. This guidance acknowledges that, "there may be additional standards applicable where the system has specific advanced capability... for example the use of body-worn video recorders". However, it does not give much detail about what these standards are.

The Information Commissioner's Office (ICO) has published more detailed guidance, which spells out further what these mean: 'In the picture: A data protection code of practice for surveillance cameras and personal information'. This recognises the threats to privacy:

"BWV [body-worn video] systems are likely to be more intrusive than the more "normal" CCTV style surveillance systems because of its mobility. Before you decide to procure and deploy such a system, it is important that you justify its use and consider whether or not it is proportionate, necessary and addresses a pressing social need."

It also outlines the data protection issues and offers guidance that data should be stored, "in a way that remains under your sole control, retains the quality of the original recording and is adequate for the purpose for which it was originally collected".

What are the potential human rights or privacy implications for individuals?

The police spend a lot of time talking to victims, witnesses and other members of the public, not just apprehending criminals. By wearing a camera they could essentially be continuously filming in public places and this has privacy implications for everyone in those places. The government's guidance says, "people in public places should normally be made aware whenever they are being monitored by a surveillance camera system" but it is difficult to see how this could work in practice if the camera is being worn by an officer.

Given the appetite for footage of real criminals being arrested, there are also risks of videos being leaked, hacked or shared inappropriately and this is likely to breach rights of privacy.

What measures would police need to take to ensure that their use of such technology complies with data protection laws?

The police have broad powers to hold and process data, and there are a number of data protection opt-outs available to them. If they are to record and keep video footage, they must have systems in place that store audio-visual material securely. There also need to be strict controls over who can access it. The guidance from the ICO outlines these requirements clearly. However, it is not only data protection law but also the Human Rights Act 1998 that the police must comply with.

This article was first published by Lexis®PSL IP & IT on 17 November 2014.



September 16, 2016 | Paul Sanders

A fair way to close the value gap

The music industry says that artists, labels, and songwriters are getting a raw deal from services that allow users to upload content. The beef is that user-uploaded songs, which may generate advertising revenue for the service and the uploader, compete directly with those same songs uploaded by the copyright owner. The difference in revenue between a user upload and a professionally supplied version is what the music industry means by the ‘value gap’.

And they don’t like it. As explained by record company trade body IFPI’s Frances Moore:

"The value gap is about the gross mismatch between music being enjoyed by consumers and the revenues being returned to the music community."

Copyright terms and conditions always make the uploader responsible for any copyright permission or licences, but sometimes uploaders don’t have all the rights they need. If services remove the content promptly when asked, they benefit from what is known as a 'safe harbour', and the copyright holder has no claim against them for infringement or loss of revenue.

So what does the music industry want? Frances Moore again: "The 'safe harbour' regime designed for the early days of the internet should no longer be used to exempt user upload services that distribute music online from the normal conditions of music licensing. Labels should be able to operate in a fair functioning market place, not with one hand tied behind their back when they are negotiating licences for music."

Unusually for the music industry, the IFPI position has managed to generate broad support among artists and indie labels, as well as songwriters and publishers. Over 1,300 artists have now signed a letter to EC President Juncker, which you can read online here.

The music industry is not calling for safe harbour to be abolished, rather that the qualification to benefit from it be drawn far more narrowly, now that many platforms are less file-hosting services and more media and advertising businesses. And the European campaign is mirrored by similar efforts in the US asking for changes to the DMCA safe harbour provisions.

The pushback has been quick and predictable, and based on the same set of positions that have been rehearsed over the last 20 years of Internet history. Some feel that copyright should be abolished, and artists who can play live should make money only from ticket and tee shirt sales. Some suspect the campaign is just the biggest artists and labels wanting to add even more millions to their already vast riches. Many think that the music industry should share out more equally what it already has before seeking to get stronger rights. Some think that over-zealous labels and artists are harming other creators by issuing unfair take-down notices.

Legal sophisticates will recognise some important principles wrapped up in this debate. Citizens and consumers are clearly right to demand that an industry with unfair practices is not rewarded. And in a commercial environment in which only a tiny proportion of new work achieves a return on investment, there is a balance to be found between the value of distribution and promotion to the creator, and the value of the content to the service. There is also a whole class of inadvertent, incidental, or innocent infringements where the uploader has no intent to profit to the detriment of the musicians.

But does the music industry have a point at all? The disparity in the money the music industry gets for the same consumer experience is real enough. IFPI calculated wholesale subscription revenue per user at just under $30 per year for 2015, while advertising brought in about $0.72 per user per year, albeit from a much larger user base. The advertising rates on UGC are generally much lower than on professionally supplied content; so where YouTube and other services are being used as a music jukebox, the hit to music industry revenue from this competition is very significant.

But of course there is no way to know whether there’s more money to be found. The services currently benefiting from safe harbour have every incentive to increase their own revenue. Other old copyright businesses that are moving to internet economics seem to be suffering similarly, so it might just be inherent in the way Internet media economics works. You can watch Professor Scott Galloway for an entertaining rant about this.

Internet advertising has its own set of issues quite apart from any music industry griping. We are learning that the cost of relying on advertising to support media generally is paid partly in greater intrusion into our private lives as trackers try to squeeze more value out of our daily traces, mostly with nothing like informed consent. High quality journalism is expensive. It might be weakened as more people have greater access to publishing platforms, with subsequent harm to political process and public life.

For me one of the ironies of the long standing conflict between copyright and Internet businesses is that if copyright was a tech startup innovation it would be lauded as a thing of progressive genius. It would not be organised in national silos, nor looked after by people who refuse to cooperate with each other. But there’s no more natural way, in a world of infinite replication and trackability, to incentivise and reward creators. And that is what ad-supported UGC services do, through unique digital object ids, channels, and the massively complex world of consumer tracking and advertising markets.

There is a great deal both sides can do to show they are fit for purpose. The music industry should finally deliver on the promise of digital technology and make it easy for everyone to identify and pay the creators and owners of the music, just like YouTube does with its own creators. Services need to show they deserve a safe harbour by demonstrating respect for the rights and privileges of everyone, from fair dealing student to striving artist to privacy deserving citizen. I would like to see the value gap closed by giving songwriters and musicians more say in the deals that affect their livelihoods, and by demanding more transparency from services on what they do with all of our data.

Paul Sanders is a member of ORG's Advisory Council and the cofounder of several music and technology companies. This blog is his personal view and we hope it will start a debate on the 'value gap'.



September 14, 2016 | Jim Killock

GCHQ should not push ISPs to interfere with DNS results

GCHQ have a dual and rather contradictory mandate: they are asked to get around security measures, break into systems and snoop on citizens. They are also asked to protect the UK from cyber attacks by improving security protections.

While these two goals are not automatically in conflict, they are certainly in tension, which will also raise questions of trust. Is GCHQ’s strategy intended to secure our systems, or in fact to keep them vulnerable?

Today’s announcement that GCHQ’s National Cyber Security Centre wish ISPs to manipulate DNS results to prevent access to phishing sites smacks of exactly this conflict. (The Domain Name System (DNS) is what resolves an ordinary web address like openrightsgroup.org to a unique number (IP address) that gets your web browser to the correct web server.)
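To make that mechanism concrete, here is a minimal sketch in Python (standard library only) of the kind of lookup your ISP's resolver performs on your behalf; the port number is incidental and the addresses printed will vary:

```python
import socket

# Ask the system's configured resolver (typically run by your ISP) to
# translate a human-readable name into the IP addresses of the web server.
# An ISP that "manipulates DNS results" simply returns different addresses
# (or none at all) for names it wants to block.
for family, _, _, _, sockaddr in socket.getaddrinfo(
    "openrightsgroup.org", 443, proto=socket.IPPROTO_TCP
):
    print(sockaddr[0])
```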

Their Director General, Ciaran Martin, explained in a speech that:

The great majority of cyber attacks are not terribly sophisticated. They can be defended against. And if they get through their impact can be contained. But far too many of these basic attacks are getting through. And they are doing a lot of damage

we're exploring a flagship project on scaling up DNS filtering: what better way of providing automated defences at scale than by the major private providers effectively blocking their customers from coming into contact with known malware and bad addresses?

Now it's crucial that all of these economy-wide initiatives are private sector led. The Government does not own or operate the Internet. Consumers have a choice. Any DNS filtering would have to be opt out based. So addressing privacy concerns and citizen choice is hardwired into our programme.

There are a number of problems with this approach. Privacy and logging are one; another is the collateral damage that comes from DNS blocking. Phishing tends to abuse URLs rather than whole sites, so the impact of blocking entire sites can sometimes be huge. And there are alternatives targeting specific known problems, such as Chrome’s “Safe Browsing” feature.
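To illustrate why domain-level blocking is so blunt, here is a small sketch (the URL is invented for illustration): a DNS-based filter is only ever asked for the hostname, so it cannot tell a phishing page apart from the thousands of legitimate pages that may share the same host.

```python
from urllib.parse import urlsplit

# Invented example: a single phishing page hosted on a shared domain.
phishing_url = "https://pages.example.com/~attacker/fake-bank-login"

# A DNS-level filter only ever sees the hostname being looked up...
print(urlsplit(phishing_url).hostname)   # pages.example.com

# ...so blocking that name blocks every page on the host, not just the
# offending path. URL-level lists (the approach browser "safe browsing"
# features take) can target the specific page instead.
```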

Having ISPs able to serve up “spoof” DNS results for whole websites is, perhaps coincidentally, tremendously useful when implementing censorship.

The DNS blocking approach, even if “voluntary” and a matter of choice, would potentially run up against industry initiatives to improve customers’ security by preventing the manipulation of DNS results, such as DNSSEC (among others). The aim of these projects is to prevent “spoof” DNS results, which allow intermediaries to interfere with web pages, replace adverts, or serve fake pages based on users mis-spelling domains. Such protections would have made it impossible for the Phorm model of intercepting user web traffic to work, for instance.
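For a sense of what DNSSEC adds, here is a rough sketch using the third-party dnspython library (version 2 or later; the domain queried is just an example). A client can request DNSSEC processing and check whether its validating resolver vouched for the answer; a spoofed or rewritten answer for a signed zone should fail that validation, although a stub client is still trusting the resolver it talks to.

```python
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
# Set the EDNS0 "DO" bit to request DNSSEC processing from the resolver.
resolver.use_edns(0, dns.flags.DO, 1232)

answer = resolver.resolve("example.org", "A")

# A validating recursive resolver sets the AD (Authenticated Data) flag
# when the records checked out against the zone's DNSSEC signatures.
validated = bool(answer.response.flags & dns.flags.AD)

print("addresses:", [rr.address for rr in answer])
print("validated by resolver:", validated)
```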

Even if we trust ISPs and governments not to abuse their expanding powers of censorship, we ought to be worried that GCHQ are proposing at least one security measure which undermines international efforts to improve the integrity of the Internet, and thereby also its security. Perhaps this reveals some of the weaknesses of a state-led approach to Internet security. It would also likely be redundant if clients switched to encrypted resolvers run by other parties.

For instance, GCHQ seem to be more keen on working with a handful of big players, who can make ‘major’ interventions to ‘protect’ the public. Rather than expecting the market and the endpoints to do better, and helping users to help themselves, GCHQ no doubt find it easier to work with people who can deliver change ‘at scale’.

Or to look at it another way, GCHQ’s proposed solution may not be mandatory, but could impose a certain kind of stasis on technical innovation in the UK, by retarding the adoption of better DNS security. Does GCHQ really know better than the technical bodies, such as the Internet Engineering Taskforce (IETF) and their commercial participants, who are promoting changes to DNS?

There is no doubt that GCHQ have information which would be useful for people’s security. However, precisely what their motivations are, and what their role should be, are much more open to question. For this reason, we have called for their cyber security role to be divorced from their surveillance capabilities and placed under independent management.

That aside, GCHQ’s idea of promoting tampering with DNS results may be superficially attractive in the short term, but would be a medium-term mistake.
