September 16, 2016 | Paul Sanders

A fair way to close the value gap

The music industry says that artists, labels, and songwriters are getting a raw deal from services that allow users to upload content. The beef is that user-uploaded songs, which may generate advertising revenue for the service and the uploader, compete directly with those same songs uploaded by the copyright owner. The difference in revenue between a user upload and a professionally supplied version is what the music industry means by the ‘value gap’.

And they don’t like it. As explained by record company trade body IFPI’s Frances Moore:

"The value gap is about the gross mismatch between music being enjoyed by consumers and the revenues being returned to the music community."

Copyright terms and conditions always make the uploader responsible for any copyright permission or licences, but sometimes uploaders don’t have all the rights they need. If services remove the content promptly when asked, they benefit from what is known as a 'safe harbour', and the copyright holder has no claim against them for infringement or loss of revenue.

So what does the music industry want? Frances Moore again: "The 'safe harbour' regime designed for the early days of the internet should no longer be used to exempt user upload services that distribute music online from the normal conditions of music licensing. Labels should be able to operate in a fair functioning market place, not with one hand tied behind their back when they are negotiating licences for music."

Unusually for the music industry, the IFPI position has managed to generate broad support among artists and indie labels, as well as songwriters and publishers. Over 1,300 artists have now signed a letter to EC President Juncker, which you can read online.

The music industry is not calling for safe harbour to be abolished, but rather for qualification for it to be drawn far more narrowly, now that many platforms are less file hosting services and more media and advertising businesses. The European campaign is mirrored by similar efforts in the US asking for changes to the DMCA safe harbour provisions.

The pushback has been quick and predictable, based on the same set of positions that have been rehearsed over the last 20 years of Internet history. Some feel that copyright should be abolished, and that artists who can play live should make money only from ticket and tee shirt sales. Some suspect the campaign is just the biggest artists and labels wanting to add even more millions to their already vast riches. Many think that the music industry should share out more equally what it already has before seeking stronger rights. Some think that over-zealous labels and artists are harming other creators by issuing unfair take-down notices.

Legal sophisticates will recognise some important principles wrapped up in this debate. Citizens and consumers are clearly right to demand that an industry with unfair practices is not rewarded. And in a commercial environment in which only a tiny proportion of new work achieves a return on investment, there is a balance to be found between the value of distribution and promotion to the creator, and the value of the content to the service. There is, too, a whole class of inadvertent, incidental, or innocent infringements where the uploader has no intent to profit to the detriment of the musicians.

But does the music industry have a point at all? The disparity in the money the music industry gets for the same consumer experience is real enough. IFPI calculated wholesale subscription revenue per user at just under $30 per year for 2015, while advertising brought in about $0.72 per user per year, albeit from a much larger user base. The advertising rates on UGC are generally much lower than on professionally supplied content; so where YouTube and other services are being used as a music jukebox, the hit to music industry revenue from this competition is very significant.
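The scale of that disparity is easy to check from the IFPI figures quoted above. A rough back-of-the-envelope calculation (using only the two per-user numbers in the paragraph above):

```python
# IFPI 2015 wholesale revenue per user per year, as quoted above
subscription_revenue = 30.00   # USD per subscriber per year (just under $30)
advertising_revenue = 0.72     # USD per ad-supported user per year

# How many ad-supported users does it take to match one subscriber?
ratio = subscription_revenue / advertising_revenue
print(f"One subscriber is worth roughly {ratio:.0f} ad-supported users")
```

On these figures a subscriber is worth roughly forty times an ad-supported user, which is the gap the industry is complaining about, even before the much larger ad-supported user base is taken into account.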

But of course there is no way to know whether there’s more money to be found. The services currently benefiting from safe harbour have every incentive to increase their own revenue. Other old copyright businesses that are moving to internet economics seem to be suffering similarly, so it might just be inherent in the way Internet media economics works. You can watch Professor Scott Galloway for an entertaining rant about this.

Internet advertising has its own set of issues quite apart from any music industry griping. We are learning that the cost of relying on advertising to support media generally is paid partly in greater intrusion into our private lives as trackers try to squeeze more value out of our daily traces, mostly with nothing like informed consent. High quality journalism is expensive. It might be weakened as more people have greater access to publishing platforms, with subsequent harm to political process and public life.

For me one of the ironies of the long-standing conflict between copyright and Internet businesses is that if copyright were a tech startup innovation it would be lauded as a thing of progressive genius. It would not be organised in national silos, nor looked after by people who refuse to cooperate with each other. But there’s no more natural way, in a world of infinite replication and trackability, to incentivise and reward creators. And that is what ad-supported UGC services do, through unique digital object ids, channels, and the massively complex world of consumer tracking and advertising markets.

There is a great deal both sides can do to show they are fit for purpose. The music industry should finally deliver on the promise of digital technology and make it easy for everyone to identify and pay the creators and owners of the music, just like YouTube does with its own creators. Services need to show they deserve a safe harbour by demonstrating respect for the rights and privileges of everyone, from fair dealing student to striving artist to privacy deserving citizen. I would like to see the value gap closed by giving songwriters and musicians more say in the deals that affect their livelihoods, and by demanding more transparency from services on what they do with all of our data.

Paul Sanders is a member of ORG's Advisory Council and the cofounder of several music and technology companies. This blog is his personal view and we hope it will start a debate on the 'value gap'.


September 14, 2016 | Jim Killock

GCHQ should not push ISPs to interfere with DNS results

GCHQ have a dual and rather contradictory mandate: they are asked to get around security measures, break into systems and snoop on citizens. They are also asked to protect the UK from cyber attacks by improving security protections.

While these two goals are not automatically in conflict, they are certainly in tension, which will also raise questions of trust. Is GCHQ’s strategy intended to secure our systems, or in fact to keep them vulnerable?

Today’s announcement that GCHQ’s National Cyber Security Centre wishes ISPs to manipulate DNS results to prevent access to phishing sites smacks of exactly this conflict. (The Domain Name System (DNS) is what resolves an ordinary web address to a unique number (IP address) that gets your web browser to the correct web server.)
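As a rough illustration of what a resolver does, here is a toy sketch, not the real DNS protocol: real DNS is a distributed, hierarchical system of servers, and every name and address below is made up (using reserved documentation ranges).

```python
# A toy "resolver": a lookup table mapping hostnames to IP addresses,
# standing in for the whole distributed DNS. A filtering ISP resolver
# would simply refuse to answer (or answer falsely) for some names.
TOY_DNS = {
    "www.example.org": "192.0.2.1",
    "mail.example.org": "198.51.100.7",
}

def resolve(hostname):
    """Return the IP address for a hostname, or None if unknown."""
    return TOY_DNS.get(hostname)

print(resolve("www.example.org"))  # → 192.0.2.1
```

The point of the sketch is that whoever controls this lookup step controls which server your browser ultimately connects to, which is why the proposal matters.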

Their Director General, Ciaran Martin, explained in a speech:

The great majority of cyber attacks are not terribly sophisticated. They can be defended against. And if they get through their impact can be contained. But far too many of these basic attacks are getting through. And they are doing a lot of damage

we're exploring a flagship project on scaling up DNS filtering: what better way of providing automated defences at scale than by the major private providers effectively blocking their customers from coming into contact with known malware and bad addresses?

Now it's crucial that all of these economy-wide initiatives are private sector led. The Government does not own or operate the Internet. Consumers have a choice. Any DNS filtering would have to be opt-out based. So addressing privacy concerns and citizen choice is hardwired into our programme.

There are a number of problems with this approach. Privacy and logging are one; but so is the collateral damage that comes from DNS blocking. Phishing tends to abuse URLs rather than whole sites, so the impact of blocking entire sites can sometimes be huge. And there are alternatives targeting specific known problems, such as Google’s Safe Browsing service used by Chrome.
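The granularity problem is easy to see in a toy sketch (the URLs and hostnames here are invented): a DNS-level filter only ever sees the hostname being resolved, never the URL path, so catching one phishing page means blocking every page on the same host.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known phishing URLs (made-up examples)
PHISHING_URLS = {"http://bigsite.example/~attacker/login.html"}

# A DNS filter cannot see paths, so it must block the whole host
# to catch the one bad page on it.
BLOCKED_HOSTS = {urlparse(u).hostname for u in PHISHING_URLS}

def dns_filter_blocks(hostname):
    """True if a DNS-level filter would refuse to resolve this host."""
    return hostname in BLOCKED_HOSTS

# Blocks the phishing page - and every innocent page on the same host.
print(dns_filter_blocks("bigsite.example"))
```

A URL-aware mechanism (like a browser-side blocklist) can target the single bad page; DNS blocking structurally cannot.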

Having ISPs able to serve up “spoof” DNS results for whole websites is, perhaps coincidentally, tremendously useful when implementing censorship.

The DNS blocking approach, even if “voluntary” and a matter of choice, would potentially run up against industry initiatives, such as DNSSEC (among others), that improve customers’ security precisely by preventing the manipulation of DNS results. The aim of these projects is to prevent “spoof” DNS results, which allow intermediaries to interfere with web pages, replace adverts, or serve fake pages based on users mis-spelling domains. DNSSEC would have made the Phorm model of intercepting user web traffic impossible, for instance.
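The tamper-detection idea behind DNSSEC can be sketched in a few lines. This is emphatically not real DNSSEC, which uses public-key signatures anchored in a chain of trust from the root zone; an HMAC with a shared key merely illustrates why a signed answer cannot be silently altered in transit.

```python
import hashlib
import hmac

# Toy stand-in for a zone's signing key (real DNSSEC uses key pairs).
KEY = b"zone-signing-key"

def sign_record(name, ip):
    """Sign a name-to-address answer so tampering is detectable."""
    return hmac.new(KEY, f"{name}={ip}".encode(), hashlib.sha256).hexdigest()

def verify_record(name, ip, signature):
    """Check that the answer matches the signature it arrived with."""
    return hmac.compare_digest(sign_record(name, ip), signature)

sig = sign_record("www.example.org", "192.0.2.1")
print(verify_record("www.example.org", "192.0.2.1", sig))    # genuine answer
print(verify_record("www.example.org", "203.0.113.9", sig))  # spoofed answer
</```

A validating client rejects the spoofed answer, which is exactly why an ISP serving deliberately false DNS results sits awkwardly alongside DNSSEC deployment.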

Even if we trust ISPs and governments not to abuse their extending powers of censorship, we ought to be worried that GCHQ are proposing at least one security measure which undermines international efforts to improve the integrity of the Internet, and thereby also, its security. Perhaps this reveals some of the weaknesses of a state-led approach to Internet security. It would also likely be redundant if clients switched to encrypted resolvers run by other parties.

For instance, GCHQ seems keener on working with a handful of big players who can make ‘major’ interventions to ‘protect’ the public. Rather than expecting the market and the endpoints to do better, and helping users to help themselves, GCHQ no doubt finds it easier to work with people who can deliver change ‘at scale’.

Or to look at it another way, GCHQ’s proposed solution may not be mandatory, but could impose a certain kind of stasis on technical innovation in the UK, by retarding the adoption of better DNS security. Does GCHQ really know better than the technical bodies, such as the Internet Engineering Task Force (IETF) and their commercial participants, who are promoting changes to DNS?

There is no doubt that GCHQ have information which would be useful for people’s security. However, precisely what their motivations are, and what their role should be, are much more open to question. For this reason, we have called for their cyber security role to be divorced from their surveillance capabilities and placed under independent management.

That aside, GCHQ’s idea to promote the tampering of DNS results may be superficially attractive in the short term, but would be a medium term mistake.


August 31, 2016 | Javier Ruiz

Bad news in leaked EU Copyright Directive

The leaked EU Copyright Directive ignores ordinary internet users and offers only limited reforms to support creators, researchers, teachers and librarians, while handing a sledgehammer of protectionist measures to the incumbent news, music and film industries.

Several documents have been leaked from the European Commission providing a clear picture of the proposed reforms to copyright that will be presented later in the year. The picture is quite negative as the proposals range from the timid to the openly regressive, such as the introduction of a new ancillary right for news publishers. Several key initiatives have been dropped, including changes to the current exceptions for freedom of panorama that allow taking the pictures of public art and buildings.

The documents leaked include the Impact Assessment on the Modernisation of EU Copyright Rules, prepared by EU officials; an official communication titled ‘Promoting a fair and efficient European copyright-based economy in the digital single market’ due to be published later this month; and finally the full text of the proposed new Directive on Copyright in the Single Market.

The new Directive will complement and not replace current legislation such as the Infosoc Directive, although there are some minor technical modifications. Existing directives will not be reopened for discussion, thus limiting the possibilities for reform in key areas such as Digital Rights Management. The directive applies to the European Economic Area (EEA) and will probably be relevant to the UK whatever shape Brexit eventually takes.

These leaks make the past two years of pre legislative discussions about comprehensive copyright reform feel like a waste of everyone’s time, except of course for a few industry lobbyists. The EU is about to throw away the first chance in over a decade to adapt copyright to the digital world, instead choosing classic protectionism for incumbent industries. These measures will not promote the creation of a vibrant digital industry in Europe capable of standing up to Silicon Valley - as EU policymakers want.

Below is a summary of the main contents of the leaked Directive. There are other initiatives in the wider package, including: a Regulation for online broadcasting, implementation of the Marrakesh treaty on accessibility for the visually impaired, a broad package on copyright enforcement and more provisions to promote European works.

The contents of the Directive are a mix of initiatives that include some mandatory but limited exceptions for culture and education, and measures to help improve remuneration for creators, but in the main they are openly about supporting rightsholders and European industry.

Protectionist measures

Publishers ancillary right

The most controversial reform is the creation of a completely new intellectual property right for news publishers, which lasts 20 years and will add a new layer of complexity to Internet regulation. The new right has the same scope as the rights of reproduction and making available, and is covered by the same exceptions, including criticism or review. The EU is open about aiming to support the financial sustainability of news publishers, and the right does not cover scientific or academic publishers. We are not completely clear on the situation of blogs, but the right is meant to cover only publications by a “service provider”.

This right is meant to stop internet news aggregators from simply copying a portion of a news article and diverting revenues from the original site. Similar initiatives in Germany and Spain had a disastrous effect on media access, but we will need more time to fully understand how bad this one is. The first analyses are extremely negative, as the new right seems even less constrained than previous initiatives.

User uploaded content: YouTube Law

Another major concession to industry aims to address the “value gap” created by the disparity between the number of people watching content on platforms - basically YouTube - and the revenues received. The Directive forces relevant online platforms to seek licences from rightsholders. While there may be a case for Google to share some of its profits with rightsholders, it is unclear that copyright law is the best way to do it. This law will extend beyond YouTube with unpredictable effects on Internet activities, as lawyers cotton on to the new powers given to industry.

This new power goes to the heart of Internet regulation: the (lack of) liability of intermediaries, expressed in the E-commerce Directive, that enables content to be hosted and linked around. In principle the new power covers services that go beyond providing “mere physical facilities” and perform an “act of communication to the public” by taking an active role in curating or promoting content, but this is not always clear cut.

The Directive does not include an obligation to monitor preemptively - which would contravene other laws - but it forces the implementation of technological measures to protect works, such as Google’s Content ID, with transparency obligations towards rightsholders.

Fair remuneration for authors and performers

There are some positive measures to protect creators that include transparency over online media sales, powers to renegotiate contracts and alternative dispute resolution mechanisms. Overall they seem positive albeit a bit weak, when compared with the sledgehammers given to news publishers and the music industry.

New mandatory exceptions

These exceptions are positive but in all cases limited when compared to the initial demands of libraries, educators and cultural institutions. They do not include many of the more far-reaching reforms proposed by civil society and even the European Parliament.

The call for a mandatory exception for “freedom of panorama” campaigned for by many civil society groups including ORG fell on deaf ears. The Commission has simply stated in their documents that the status quo works fine, while politely asking all countries to implement the exception.

Text and Data Mining exception

This exception allows the making of copies to perform analysis for scientific research by non-profit or public interest organisations. There is no compensation for rights holders and an explicit ban on contractual clauses overriding the exception. Technical measures to restrict access or copying are allowed but should not affect the exception.

This is a positive move, although many research organisations and libraries had been asking for a broader scope as they feared that a lot of important research may be excluded.

Online Teaching exception

The rationale for this exception is the lack of clarity on whether existing exceptions in the Infosoc and Database Directives apply to online education, particularly cross-border access. The exception covers only “educational establishments”, which must control access to the resources, and will likely exclude many online educational initiatives. The exception allows licensing schemes to take precedence over it, and this could be used to weaken the provisions.

Digital Preservation

Libraries, archives and similar cultural heritage institutions will be allowed to make necessary copies of works for preservation, but only of works in their permanent collections. The exception is only for internal copies and not for online libraries.

Supporting the digital market

A couple of fairly minor initiatives that are positive but of limited impact in the context of the once-in-twenty-years reform of copyright.

Out-of-commerce works

Libraries have been lobbying for a long time to be allowed to engage in collective licensing deals to digitise and distribute out-of-commerce works. They see this as both an extension of their mission and an opportunity to generate funds, although in principle it is framed as non-commercial recovery of the costs of mass digitisation. The exception only applies to works first published in the EU. This is not a full free-copying exception, but the option to enter extended collective licensing deals without the need to get approval from every author. There is a six-month compulsory notice period in case authors come forward and object.

Video on demand 

The directive forces member states to create a voluntary “negotiation mechanism”, with the support of an impartial body, to help parties license works for Video on demand (VoD) services.

In summary, a disappointing culmination of a two year discussion that started with high hopes of seeing Europe take bold moves to really modernise copyright. The legislative process starts now, however, and while the UK is in the EU, ORG will continue to try to influence the shape of these laws as they go through the European Parliament. We must also remember that this is all based on leaked documents and the European Commission may still make some changes.


August 23, 2016 | Javier Ruiz

Review of bulk surveillance powers gives one side of the argument

The review of bulk surveillance powers by David Anderson supports the operational case for the current practices of the agencies. The review did not look at the wider impacts of mass surveillance and any further discussions need to take both aspects into account.

David Anderson’s Review of Bulk Powers gives broad endorsement to the contents of the Investigatory Powers Bill. This should come as no surprise, given his job was to determine their utility to the government and not any wider social impacts, including on human rights. The review also had a limited remit to look at potential alternatives to bulk, and found that these may exist for some of the cases examined, but are generally too cumbersome, slower or less effective. The review makes only one recommendation for reform: the creation of a new Technical Advisory Panel of independent security cleared experts to support the Investigatory Powers Commissioner.

We are concerned that the government will try to use Anderson’s review to end the public debate on mass surveillance. This would be a mistake, as the underlying issues will not go away. Countless human rights experts and bodies have condemned mass surveillance and a large proportion of the population remains concerned about their online privacy.

The IPB will be scrutinised again by the Lords in the Autumn, and the other side of the debate needs to be included in the discussions. Focusing on the utility of bulk - the operational case - is important in establishing the necessity for such powers. This does not tell us however whether wholesale collection and analysis of data is proportionate in a democratic society. As Anderson himself repeatedly makes clear in his review, these are matters for Parliament to decide.

Anderson arrived at his conclusions after looking at the classified activities of the Security and Intelligence Agencies - MI5, MI6 and GCHQ - including several visits by him and his team. The Review also summarised the most influential reports in this area, including those by the Intelligence and Security Committee, Anderson’s previous report, and some US reports that cover similar issues. Anderson found that all available evidence supports the operational case for bulk interception, bulk datasets and bulk acquisition of communications data.

The only area where, in Anderson’s view, the case is not yet fully proven is bulk “equipment interference”, aka bulk hacking, as this is a new power. He quotes security personnel extensively, claiming that this power is required to cope with the growing threats that encryption and anonymity present to other bulk collection methods. There is little discussion of what this power may look like in practice. Documents leaked by Snowden mention GCHQ’s interference activities causing disturbance to the internet traffic of whole countries, such as Pakistan. Unfortunately, Anderson does not engage with the leaked documents and glosses over “the potential of CNE to create security vulnerabilities or leave users vulnerable to damage” (p. 108).

The review was completed in a very short period of time, rushed to avoid delaying the legislative process. Concerns have been expressed from various quarters that the lack of time might have affected the results, but it seems unlikely that more time would have changed them. The Review was asked whether there is an operational case for bulk, and given the scope and sources of evidence the answer would almost certainly have been the same. Anderson neutralises the dissonant evidence, such as the last report from the Intelligence and Security Committee, which criticised bulk hacking, or the US Privacy and Civil Liberties Oversight Board report that found there was no value in giving all phone records to the government for analysis.

The main new piece of information is the extended evidence from the security and intelligence agencies. Throughout the paper Anderson deals with criticisms of bulk by referencing new secret evidence that he has been able to see and was not available - and never will be - to critics. The argument is collapsed into a question of trust, in this case of trusting Anderson to vouch for the agencies.

Government will brandish this review’s headlines to support the Investigatory Powers Bill, but a more detailed follow up is needed. For example, the discussions on alternatives to bulk seem to point at some options, but these are quickly dismissed. It would be important for Parliament to probe these potential options in much more detail.

The recommendation for a Technical Advisory Panel can hardly be rejected, but its success will depend on how it is handled in practice. There are numerous advisory boards across government with very mixed results. Technically proficient staff should be part of the oversight and authorisation bodies, not just in advisory roles.

The review joins a veritable body of publications from security-related bodies and individuals that cross-referentially support the practices of the agencies while ostensibly asking for some reforms. Since the Snowden leaks three years ago there has been a huge improvement in the level of public debate in the UK about surveillance. It would be foolish to deny that we have a much better understanding of how these systems work, and not all of it from the Snowden leaks. At the same time we have to ask ourselves what has actually changed in practice.

Unfortunately, the most serious concerns about the agencies do not relate to whether surveillance powers are useful and not misused, but how secrecy and pervasive monitoring affect democracy and individuals’ personal development in the medium to long term. There is little that Anderson’s review can do to allay those fundamental concerns.

Anderson puts his finger briefly on these most basic aspects, quoting his own previous report to support the need for strong surveillance powers:

A perception that the authorities are powerless to act against external threats to the nation, or unable effectively to prosecute certain categories of crime (including low-level crime), can result in hopelessness, a sense of injustice and a feeling that the state has failed to perform its part of the bargain on which consensual government depends. (p. 120)

A very different conclusion could be reached. The pervasive sense of hopelessness and insecurity created by technological change, economic crises and broader geopolitical upheaval might be perceived as the failure of the state to successfully plan, regulate and intervene diplomatically. The development of an all-powerful mass surveillance state seems the wrong overcompensation mechanism, which will do little to address the underlying causes of insecurity, injustice and disenfranchisement.


July 22, 2016 | Slavka Bielikova

IPBill Committee stage latest sitting

The House of Lords debated the IPBill in Committee this Wednesday for the last time before the summer recess. The topics covered were Internet Connection Records (ICRs), the request filter and equipment interference. Here is a brief breakdown of what was said. We will be back in September with more.

Throughout the debate, the Lib Dem Lords mounted strong opposition to the Government in all three areas being discussed.

Internet Connection Records

Lord Paddick and Baroness Hamwee presented their amendments requesting removal of the ICRs, arguing they fail to meet the basic test of necessity. Lord Paddick made excellent points to justify their position.

Internet connection records do not do what the Government claim they do. … At best, internet connection records provide only details of which communications platforms have been used, most of which are based in the United States.

Lord Paddick referred to earlier statements by MI5, MI6 and GCHQ indicating that they have no explicit need for ICRs because they have other ways of securing the data they require.

Lord Strasburger seconded their reasoning and pointed to the high costs and impracticalities of keeping the records. He argued that ICRs would weaken the safety of British citizens. Strasburger said:

It is a matter of when, not if, these sensitive data get into the wrong hands.

The unique position of the UK in stubbornly requiring ICRs was questioned by Lord Oates. He pointed out that none of the Five Eyes countries, nor any western democracy, collects ICRs. According to Lord Oates, it will be a horrific experience for the public when they discover that the government insists on retaining details of every single person in the country's access to every single website.

The Government maintained their position that ICRs are necessary for combating crime but did not offer any compelling evidence, instead repeatedly stressing their desire for ICRs to future-proof the Bill.

Under the agreed amendments on ICRs, they can only be obtained by UK authorities if they are to be used to help prevent or detect crime. Lord Keen of Elie said that ICRs would "be able to be acquired only for offences that are sufficiently serious that an offender can be sentenced to at least six months’ imprisonment". The amendments were still criticised for their vague phrasing.

Request filter

The request filter was debated along similar lines. Lord Strasburger argued the filter would be too intrusive. He called it “a bulk power masquerading as an innocuous safeguard to reduce collateral intrusion.”

The Government's response to him was that the filter facilitates public authority cross-stakeholder communications; Lord Keen even called the request filter a safeguard because the authorities will only see the data they need to see (they did not provide any information on how this is technically possible).

Strasburger's concerns about misuse were also shared by the Conservative Lord Lucas:

The potential for casual misuse or misuse suborned by journalists will be considerable. On top of that is potential misuse by government.

Equipment interference

The Lords also discussed several probing amendments on thematic warrants and hacking, including training and testing warrants. The training and testing warrants became the centre of attention after it became clear that an innocent citizen could be made the subject of such a warrant.

Baroness Jones submitted her proposal to create an Investigatory Powers Commission instead of an IP Commissioner. Her reasoning is that:

this approach confuses and conflates the roles of authorisation and oversight. It is constitutionally inappropriate for those involved in decision-making to have responsibility for the oversight of those same decisions. Such conflation gives rise to a potential conflict of interest.

Mentions of the CJEU Advocate General's opinion

Regarding the opinion of the Advocate General published the morning of the debate, Lord Strasburger used it to challenge the ICRs collection. However, Lord Keen refused to comment on potential implementation of the opinion until the European Court of Justice rules in the DRIPA challenge.

The Government maintain that the existing regime for the acquisition of communications data and the proposals in the Investigatory Powers Bill are compatible with EU law, and clearly it would not be appropriate to comment further while legal proceedings are ongoing.


July 20, 2016 | Jim Killock

Is the CJEU passing the buck on data retention?

It is an increasing feature of debates about the mass retention of data that nobody wants to be the person that says yes or no. It is so clearly problematic to retain huge amounts of personal data, and in some cases to analyse it, that it is hard to see how it could ever be reconciled with the right to privacy.


However, who is prepared to stand up against these practices when police or others say it is necessary for their work?

That is the dilemma facing the Court of Justice of the European Union (CJEU) in their decision on the Davis-Watson (now just Watson) challenge to the Data Retention and Investigatory Powers Act (DRIPA). The CJEU were asked by the UK courts how EU law might restrict domestic data retention law, as the EU court had found the EU’s Data Retention Directive 2006 to be unlawful, after a challenge from Digital Rights Ireland (DRI). Open Rights Group intervened in the Watson case with Privacy International, and made oral submissions at the CJEU, thanks to the many hundreds of supporters that joined to help us challenge DRIPA in the courts.

The Advocate General’s opinion essentially appears to say that it may be possible – if difficult – to justify mass data retention, when there is literally no other means of solving serious crimes. However, he says that this decision has to be made in a national context, and is therefore up to national courts. In his Opinion, he states that the role of EU law is to set compulsory minimum guidelines for any data retention scheme: retention must relate only to metadata, not content, and any scheme must protect the “essence” of the right to privacy.

Retention schemes must relate to serious crimes, not other, less important concerns. Just as our domestic courts will have to interpret what the “essence” of the right to privacy means, they face the same problem in deciding what should be classified as a “serious” crime. There is no consistently applied definition of a serious crime across English criminal law. Should “serious” crime be interpreted to mean: offences that are indictable only (which means they can only be tried in the Crown Court), such as murder, rape and false imprisonment; offences so serious that only the National Crime Agency should investigate them, such as human trafficking, kidnap and extortion; or offences that could attract Serious Crime Prevention Orders under the Serious Crime Act 2007 in the interests of public protection, such as drug and firearms trafficking? Or will the bar be set so low as to include offences that attract a maximum six-month custodial sentence at the Magistrates’ Court, such as common assault or criminal damage under £5,000?

The Opinion makes it clear that independent authorisation of access requests is absolutely critical to safeguard any retention scheme. It also notes that this is absent from the UK’s regime, which allows police officers to make these decisions. By reiterating the original criteria that the CJEU outlined for data retention in the DRI judgment, the Advocate General makes it clear that he believes a UK court should insist on independent authorisation as part of the minimum requirements under EU law.

This approach leaves it open to bring further challenges to the proportionality of data retention. In the UK, this would require our Supreme Court, and possibly the European Court of Human Rights, to decide whether our own schemes are proportionate.

In some senses, this may be the natural balance, given the absence of more codified EU requirements and the longstanding assumption that domestic courts apply EU law directly, but it is also something of a cop-out. If the UK becomes increasingly out of step with EU norms, would it still be reasonable to say that national courts should decide these balances, when the rights of every EU citizen who engages with the UK are affected? Why should different member states, each with the same right to privacy, come to wildly different conclusions about the legitimacy of data retention? And they have: several EU countries have simply ruled data retention incompatible with domestic constitutional privacy rights.

The interesting and difficult problem with data and Internet-based services is that they very often affect free expression and privacy. Unlike the sale of many traditional goods, human rights have to be a consideration.

This problem will not go away, even if the UK leaves the ambit of the CJEU and perhaps EU law altogether. The EU’s legal framework would insist that guarantees exist. This led Max Schrems to speculate that there could be a challenge to any data protection arrangement between a Brexit UK and the EU if our current surveillance laws are still in place. The new Investigatory Powers Bill (IP Bill), which will replace DRIPA, would in his view make a nonsense of the UK’s claims to protect data and privacy.

The government may be tempted to play down or ignore these concerns, as it has done in the past, not least because the IP Bill will need to be challenged afresh.

However, this clash is not something where Theresa May or Amber Rudd are simply in control of events and can face down opponents. The courts will be forced to make judgments, sooner or later, and the EU and its legal system will be under increasing pressure to ensure that the UK has sufficient respect for the rule of law and fundamental rights as it concludes agreements with us as an external partner. The safe option is to do everything possible to comply with these judgments, so that they do not become a matter of dispute in our new relationship with the EU.

[Read more]

July 14, 2016 | Pam Cowburn

Could Boris Johnson’s appointment persuade the Lords that we need judicial authorisation?

Boris Johnson’s appointment as Foreign Secretary has become a listicle lover’s dream as every news outlet compiles its favourite diplomatic faux pas.

Is it penning a goat-shagging limerick about Turkey's President Erdogan? Claiming Barack Obama's Kenyan heritage means he has an “ancestral dislike of the British empire”? Or describing Hillary Clinton as having “a steely blue stare, like a sadistic nurse in a mental hospital”? While Eton-educated Alexander Boris de Pfeffel Johnson has been commended for taking politics to the common man, there are concerns that his gaffe-prone tendencies make him unsuited to managing the diplomacy needs of a Brexiting UK that wants to secure its place on the world stage.

However, there has been little discussion of one of Johnson's key responsibilities in his new role. Along with new Home Secretary Amber Rudd, Johnson will get to authorise surveillance warrants for the UK’s intelligence agencies.

While Rudd is still something of an unknown, the man who likes to respond to difficult questions with “blah blah fishcakes” is not known for his love of detail. Nor does he appear to have an aversion to breaking the rules – he was sacked by The Times for making up a quote, and from the Conservative front bench in 2004 when he failed to come clean about an affair. But now decisions about whether GCHQ should be permitted to hack networks or tap into fibre optic cables will fall to a man who, it is alleged, did not follow procurement procedures properly while Mayor of London. Boris will of course be supported by senior officials in making these decisions. And who knows, he may scrutinise warrants with the dedication that he showed for cricket in the days after Brexit.

Flippancy aside, this is something that both Rudd and Johnson are likely to find challenging. They will need to learn the legal interpretations of necessary and proportionate, and assess what the agencies are asking of them. They will need to swiftly understand the legal frameworks as they decide who is surveilled, and how, and assess whether the requests are justified. There is also a question of logistics: Theresa May reportedly signed off the equivalent of ten warrants a day while Home Secretary.

Such complex legal decisions should not be down to politicians who may have little or no expertise in the practicalities of surveillance and the law. Most countries insist that independent judges sign off warrants for surveillance. The UK is the only Five Eyes country (a group that includes the US, New Zealand, Australia and Canada) to allow politicians to do so. The reason is obvious. A leaked GCHQ document noted: “Senior High Court judges (they) are INDEPENDENT, non govt (sic) and not openly swayed by personal contact”.

Prime Minister Theresa May has claimed that the Investigatory Powers Bill will introduce a double lock of authorisation with Judicial Commissioners checking ministers' decisions. But the detail of the Bill means that Commissioners will be checking the process and will not have the powers to challenge surveillance decisions.

Independent judicial authorisation will do more than just ensure that surveillance decisions are necessary and proportionate. It may help the Government get the cooperation it seeks from US tech companies. In his report, A Question of Trust, the Independent Reviewer of Terrorism Legislation, David Anderson, noted: “a number of major US companies, accustomed to the FISC procedure in the US, disliked the notion of authorisation by the Secretary of State and indicated to me that they would be more comfortable about complying with a warrant if it were judicially authorised.” (p207)

The IP Bill is currently being scrutinised by the House of Lords, which can amend it to ensure that the UK has independent judicial authorisation. It's not too late to get the ‘blah blah’ details right.

[Read more]

July 13, 2016 | Javier Ruiz

Telcos threaten to pull 5G investments if EU net neutrality rules are not watered down

European telcos and big industrial conglomerates are demanding a relaxation of net neutrality rules, threatening to delay major investments in new 5G mobile technology.

A public EU consultation on the future deployment of 5G mobile technology closed yesterday. The same day, a coalition of Europe’s largest telecommunications companies and industrial conglomerates, from Vodafone to Siemens, sent the European Commission a “5G Manifesto”. The document is standard policy lobbying fare, describing the untold wonders that 5G’s low-latency hyper-connectivity will deliver, such as self-driving cars, remote healthcare, smart grids and immersive media, while asking for leadership, massive public funding and the softening of regulations.

The global roadmap and standards for 5G have been developed by the International Telecommunications Union (ITU), an intergovernmental body, in collaboration with the mobile industry. The main headline of the ITU’s IMT-2020 vision is a peak data rate of up to 20 Gbps (100 Mbps for the individual user), with data taking centre stage from calls; but behind those figures there are many complex technical changes to how data is transmitted and networks are configured.

The industry’s policy offensive is focused on the net neutrality rules that have been put forward for public consultation by BEREC, the European body of telecoms regulators that includes Ofcom.

The main argument from industry is that BEREC’s rules would hamper the development of “network slicing”, a key feature of 5G that means creating separate virtual networks on the same physical infrastructure. These sliced networks are aimed at “industry verticals”: transport, energy, health, etc. The paper does not explain why the allowances for “specialised services” in BEREC’s proposed rules would not make this possible. The lobbyists’ manifesto simply threatens that investments will be delayed unless regulators find a way to “reconcile the need for Open Internet with pragmatic rules that foster innovation.”

The mythical golden days of the Open Internet as a geek-run paradise of free expression may have passed, but we still need to keep statements like these in check. For starters, it is hard to see what could be more pragmatic and innovative than the deceptively simple technical standards that built the Internet.

The protection of particular traffic, and the development of specialised software-based networks, may not in itself be an issue. Everyone would want their self-driving car — or school bus, or street cleaning robot — to be as safe as possible. The relationship between industrial machine-to-machine traffic and human-oriented traffic may not be the critical angle either. After all, media and entertainment appear in roadmaps as just another industry that can get its own slice of the cake. Net neutrality rules would appear to leave enough flexibility for such developments.

The main problem with the vision for global mobile hyper-connectivity proposed by industry and the ITU is that it may hamper innovation by locking in the future profits of incumbent telcos and locking out citizens and SMEs from an internet-of-everything controlled by Siemens, Thales and other mega-conglomerates. A future mobile communications system driven purely by the needs of industry will also derail the social innovation required to get the European continent out of the current crisis. Freedom of expression may not thrive in the same way in such a controlled environment. For example, paragraph 18 of the BEREC text clearly states that machine-to-machine communications like smart meters are "outside the scope of the Regulation, unless they are used to circumvent this Regulation."

What the manifesto really says is that telcos are fed up with seeing connectivity become a commodity and will only invest if they can create a differentiated market and charge a premium for exclusivity. This is a natural demand, and fair play to them. But they should explain how this will not lead to a repeat of the extortionate prices for mobile communications we are only now barely starting to leave behind. Someone will have to pick up the tab. What is more depressing is seeing the European Commission once again uncritically supporting big businesses’ demands, as if that were the only kind of industrial policy possible.

The future of mobile connectivity is too important to leave to a small group of profit-seeking organisations and bureaucrats. Society needs to be part of the discussions on strategic telecommunications such as 5G, in the same way we accept that decisions on high speed rail or nuclear power need a wider input. These developments will affect our lives and will cost taxpayers billions of dollars, euros and pounds.

It is unclear whether the mobile telephony model based on top-down ITU standards, absolute government control and centralised infrastructure built by a handful of large companies can deliver the kind of ubiquitous connectivity required in the future. The Open Internet that everyone claims to protect has been a success so far precisely because it has taken a very different route, based on open standards, decentralisation and multi-stakeholder governance. Large investments will certainly be required, but the role of industry, and ideas of risk and investment rewarded by profit, may need to be questioned for projects that are too critical to fail.

[Read more]