July 08, 2019 | By Daniel Markuson, a digital privacy expert at NordVPN

European Net Neutrality is Under Attack

European internet users may not have noticed, but their net neutrality and online freedom are at risk. At least 186 Internet Service Providers (ISPs) in the EU are using Deep Packet Inspection (DPI) to read their users’ traffic. That means they get to decide how much internet freedom you get.

Network neutrality explained

Net neutrality is the idea that ISPs must ensure an equal internet connection to each and every user. Although there’s an EU law that regulates this, some ISPs discriminate against their users by filtering or charging for the content they try to access or the devices they use to connect.

Besides using DPI to examine our communications, telecom companies apply extra charges for data packages or for the specific content users connect to. This contradicts guidelines from the Body of European Regulators for Electronic Communications (BEREC), which state that all internet users must be able to use the web without restrictions.

How net neutrality is being abused

Any internet service plan that attaches conditions to the websites or services you visit requires the provider to identify your internet traffic. If your provider offers a plan where certain apps don’t consume your data allowance, that’s more than a fundamental violation of the principles behind net neutrality. It’s also a breach of your privacy, as one way to distinguish between types of traffic is by using DPI.
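As a rough illustration (not any real ISP’s implementation), the traffic classification a zero-rating plan depends on can be sketched in a few lines of Python: inspect a packet’s plaintext payload, pull out the HTTP Host header, and bill accordingly. All domain names here are hypothetical, and real DPI systems are far more sophisticated — they also fingerprint encrypted traffic, for example via the TLS SNI field.

```python
# Hypothetical zero-rated services under an imaginary "free music" plan.
ZERO_RATED = {"music.example.com", "video.example.com"}

def classify_packet(payload: bytes) -> str:
    """Classify one plaintext HTTP request by its Host header."""
    for line in payload.decode("ascii", "ignore").split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            return "zero-rated" if host in ZERO_RATED else "metered"
    return "unclassified"  # e.g. encrypted traffic with no readable header

packet = b"GET /track/42 HTTP/1.1\r\nHost: music.example.com\r\n\r\n"
print(classify_packet(packet))  # -> zero-rated
```

The point of the sketch is that the classification only works because the provider reads the contents of your traffic — which is exactly the privacy problem.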

Once an ISP deploys DPI, it can use it in many different ways: throttling connections, censoring content, and tracking users’ traffic in greater detail than before. Legal protection against these intrusive actions exists in the EU, so how can users still be at risk?

Net neutrality protection in Europe

Although net neutrality is already dead in the US, Europe is still fighting for open internet access and transparency. In the EU, the right to equal network access is protected by Article 3 of EU Regulation 2015/2120. However, the European Digital Rights (EDRi) association, which for many years has been advocating in favor of strong net neutrality, is warning against the widespread use of privacy-invasive DPI technology in the EU.

EDRi is negotiating for new European net neutrality rules even as some telecom organizations seem to be pushing for the legalization of DPI. These companies claim that net neutrality requirements reduce competition in the market and that a neutral internet could obstruct growth, flexibility, and investment in new infrastructure.

This gives some room for thought: will UK citizens be stripped of net neutrality after the country leaves the EU?

The loss of net neutrality

Without net neutrality in Europe, ISPs could limit what you can and can’t see online and how you experience the internet in general. When it’s gone, certain content and services may be completely blocked by some ISPs. They could also force some websites to pay or suffer slow traffic, which might drive many smaller online services out of business. Supporters of net neutrality fear the loss of consumer protections, privacy, and security while ISPs profit.

With the help of BEREC and EDRi, it’s easier to stand up for our rights, quality of service, fair competition, and transparency. Were it not for net neutrality, today we might not have popular streaming services like YouTube or Netflix, and we couldn’t freely listen to music via Spotify.

What can we do to fight back?

Although using virtual private network (VPN) services is a good solution to fight ISPs, you’ll only protect yourself and your family. It’s also of crucial importance to make your voice heard and let the EU know that its internet users want net neutrality to be protected and enforced. For now, you can follow and support the great efforts of EDRi and BEREC to prevent the abuse of internet freedom.



June 29, 2019 | Ed Johnson-Williams and Amy Shepherd

Online Harms: Blocking websites doesn't work - use a rights-based approach instead

Blocking websites isn't working. It's not keeping children safe and it's stopping vulnerable people from accessing information they need. It's not the right approach to take on "Online Harms".

This is the finding from our recent research into website blocking by mobile and broadband Internet providers. And yet, as part of its Internet regulation agenda, the UK Government wants to roll out even more blocking.

The Government’s Online Harms White Paper is focused on making online companies fulfil a “duty of care” to protect users from “harmful content” – two terms that remain troublingly ill-defined. [1]

The paper proposes giving a regulator various punitive measures to use against companies that fail to fulfil this duty, including powers to block websites.

If this scheme comes into effect, it could lead to widespread automated blocking of legal content for people in the UK.

The Government is accepting public feedback on their plan until Monday 1 July. Send a message to their consultation using our tool before the end of Monday!

Mobile and broadband Internet providers have been blocking websites with parental control filters for five years. But through our Blocked project – which detects incorrect website blocking – we know that systems are still blocking far too many sites and far too many types of sites by mistake. 

Thanks to website blocking, vulnerable people and under-18s are losing access to crucial information and support from websites including counselling, charity, school, and sexual health websites. Small businesses are losing customers. And website owners often don't know this is happening.

We've seen with parental control filters that blocking websites doesn't have the intended outcomes. It restricts access to legal, useful, and sometimes crucial information. It also does nothing to prevent people who are determined to get access to material on blocked websites, who often use VPNs to get around the filters. Other solutions like filters applied by a parent to a child's account on a device are more appropriate.

Unfortunately, instead of noting these problems inherent to website blocking by Internet providers and rolling back, the Government is pressing ahead with website blocking in other areas.

Blocking by Internet providers may not work for long. We are seeing a technical shift towards encrypted website address requests that will make this kind of website blocking by Internet providers much more difficult.

When I type a human-friendly web address into a web browser and hit enter, my computer asks a Domain Name System (DNS) server for that website’s computer-friendly IP address. My web browser can then use that address to load the website.

At the moment, most DNS requests are unencrypted. This allows mobile and broadband Internet providers to see which website I want to visit. If a website is on a blocklist, the system won't return the actual IP address to my computer. Instead, it will tell me that that site is blocked, or will tell my computer that the site doesn't exist. That stops me visiting the website and makes the block effective.
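The blocking logic described above can be sketched as a toy filtering resolver in Python. Every domain and address below is an invented placeholder (using the reserved documentation ranges); real ISP resolvers work on live DNS traffic at scale.

```python
# Toy model of a filtering DNS resolver: check each lookup against a
# blocklist before answering. Domains and IPs are placeholders.
BLOCKLIST = {"blocked.example.com"}
DNS_TABLE = {
    "blocked.example.com": "",
    "open.example.org": "",
}
BLOCK_PAGE_IP = ""  # where the ISP's "site blocked" page lives

def resolve(domain: str) -> str:
    if domain in BLOCKLIST:
        # Either steer the user to a block page, or pretend the
        # site doesn't exist (an NXDOMAIN response).
        return BLOCK_PAGE_IP
    return DNS_TABLE.get(domain, "NXDOMAIN")

print(resolve("open.example.org"))     # ->
print(resolve("blocked.example.com"))  # -> (block page)
```

The key observation is that this scheme only works while the resolver can see, and lie about, your lookups in the clear.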

Increasingly, though, DNS requests are being encrypted. This provides much greater security for ordinary Internet users. It also makes website blocking by Internet providers incredibly difficult. Encrypted DNS is becoming widely available through Google's Android devices, on Mozilla's Firefox web browser and through Cloudflare’s mobile application for Android and iOS. Other encrypted DNS services are also available.

Our report DNS Security - Getting it Right discusses issues around encrypted DNS in more detail.

Blocking websites may be the Government's preferred tool for dealing with social problems on the Internet, but it doesn't work – in policy terms and, increasingly, at a technical level too.

The Government must accept that website blocking by mobile and broadband Internet providers is not the answer. They should concentrate instead on a rights-based approach to Internet regulation and on educational and social approaches that address the roots of complex societal issues.

  1. See ORG's response to the Government's Online Harms White Paper:


June 28, 2019 | Pascal Crowe

Hunting for a solution? If it ain’t broke, don’t fix it

It seems apt that it was at this week’s ‘digital hustings’ for the Conservative Party leadership that Jeremy Hunt unilaterally came out in favour of online voting. Or at least, that is what elements of the press and Twitterati reported. What Hunt actually said was (slightly) more nuanced than a straightforward endorsement:

“The big innovation that we need is to introduce online voting...if we can book out holidays online, surely we can find a way that is fool proof to have online voting, and that is the way the world is going, and I think that would encourage much more participation in our democracy.”

There are four things to unpack in that statement that illustrate some core concerns about using the internet to help run elections.

1) Voting should be more like booking a holiday

Voting should not be like booking a holiday. Booking a holiday online requires you to input sensitive, traceable, personal information (like your card details, name, and address). The company must be able to identify you in order to follow up on your payment. This is not, however, desirable for an election. Whilst electoral register data is available (although the type and accessibility of the data it contains change reasonably frequently), digitising the voting process risks undermining the principle of the secret ballot. That opens the door to electoral fraud.

We should also ask ourselves if we really want private companies administering our elections. How would they be held to account? What happens if people decide they don’t trust the company? If there’s a problem with your hotel room, you can get a refund. How would you refund an election? For private companies, elections are a consumer product. Some are offering Democracy-as-iPod, with e-voting machines that come in “classic” or “premium” models. Should we have to choose between classic or premium democracy?

2) We can have a ‘fool proof’ online voting system

Existing statutory e-voting systems, most notably in Estonia, have been criticised for being insecure. The Netherlands stopped using e-voting machines in 2007 because they could be hacked within 30 seconds of entering a polling booth. Norway discontinued its i-voting trials in 2014, citing security concerns. The UK Electoral Commission has made serious criticisms of the e-counting hardware and software used in the London Assembly elections in every assessment of them it has ever done.

Put simply, the technology isn’t there. We shouldn’t turn our democracy into a user testing exercise for private companies. The risk is electoral outcomes that are decided by glitches in code or server security, rather than voters.

3) Online voting would encourage political participation

Enhancing political participation, particularly amongst people for whom the act of voting can be problematic (for example, people with disabilities), is a net positive for our democracy and a noble goal in itself. Further research about how to do this should be encouraged, and it seems likely that modern technology has a part to play.

Currently, though, there is no academic consensus about whether e-voting increases turnout; the limited number of studies makes the effect difficult to characterise. A recent Norwegian trial, however, found no increase in aggregate turnout. It also found that young people (an oft-cited target demographic for e-voting) actually preferred walking to the polling station, as it felt like a rite of passage into adulthood.

Just because your child might like playing Xbox, doesn’t mean they want to ‘play democracy’ on an Xbox.

4) “That is the way the world is going”

This statement assumes that there is a public appetite for elections to go digital.

The level of support for online elections depends on who you ask and what question you ask them (for example, conducting an *online poll* about *online elections* is likely to suggest an unrepresentative level of support, unless you do some reasonably complex sample weighting). The Electoral Commission, however, in its assessment following the 2017 general election, found strong support for the way the election had been administered. For example:

- 79% of respondents thought the election was well run (down from 91% in 2015).
- 98% of voters thought that the ballot paper was easy to complete.
- 84% of polling station voters were satisfied with the process of voting.
- 80% of postal voters were satisfied with the process of voting.
- 89% of candidates were satisfied with the administration of the election in their constituency.
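To make the earlier point about sample weighting concrete, here is a small post-stratification sketch in Python: an online poll that over-samples younger, digitally enthusiastic respondents overstates support for e-voting until each age group is re-weighted to its share of the real population. All figures are invented for illustration.

```python
# Post-stratification: re-weight each group to its population share.
# All numbers below are hypothetical.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share     = {"18-34": 0.55, "35-64": 0.35, "65+": 0.10}  # online skew
support          = {"18-34": 0.80, "35-64": 0.45, "65+": 0.20}  # backs e-voting

# Naive estimate: just average over who happened to answer the poll.
raw = sum(sample_share[g] * support[g] for g in support)

# Weighted estimate: each group counts in proportion to the population.
weighted = sum(population_share[g] * support[g] for g in support)

print(f"raw online-poll support: {raw:.1%}")       # ~61.8%
print(f"population-weighted:     {weighted:.1%}")  # ~50.5%
```

Even this toy example shows an online sample inflating apparent support by more than ten percentage points — which is why raw online polls about online voting should be read with care.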

This is not to say that elements of election administration could not be improved (for example, electoral registration is an issue). But let’s put these statistics into context. This was a national statutory election for which administrators had less than two months to prepare. If a commercial organisation had these levels of customer satisfaction after such an event, they would be cracking open the champagne.

So yes, let’s work out how to make it easier for all members of the electorate to vote. But we also need to encourage a political landscape that allows for a more equitable relationship between citizens and government. There are plenty of initiatives to encourage meaningful political engagement from a civic public: from citizens’ juries, to more localised economic and municipal models, through to proportional representation and an independent Yorkshire (although some of these are more likely to be realised than others). In the meantime, the medium of our electoral system – people, pencils, and paper – should be left well alone.

In a world of uncertainty, electoral interference, and waning confidence in our democratic institutions, electronic voting is a surefire way to add to our problems. So please Mr Hunt - if it ain’t broke, don’t fix it.


June 11, 2019 | Jim Killock

EFF and Open Rights Group Defend the Right to Publish Open Source Software to the UK Government

EFF and Open Rights Group today submitted formal comments to the British Treasury, urging restraint in applying anti-money-laundering regulations to the publication of open-source software.


May 09, 2019 | Pascal Crowe

More than money - How to tame online political ads

The Electoral Commission’s Director of Regulation, Louise Edwards, recently put out a call for new laws to regulate online political adverts. She argued that the adverts need to show clearly and directly who has paid for them. [1] Whilst knowing who has paid for online ads is important, it’s only part of the picture. The whole process of online political advertising needs to be more tightly regulated.

Political parties target ads online by using personal data to include or exclude potential voters. This drives down spending by targeting only a narrow slice of the population. In addition, automated messaging is becoming both cheaper and more sophisticated. Both of these practices will significantly reduce the amount of money needed by campaigns.

To regulate online political advertising effectively, we need to look beyond campaign spending. It’s equally crucial to have greater transparency over parties’ use of personal data. Consequently, we should be looking at organisations beyond the Electoral Commission, for example the Information Commissioner’s Office.

Transparency is critical. Both political actors that use online advertising, and the platforms that facilitate them, should be forced to come clean on their sources of personal data and how their targeting works. Public reporting can be supported, to a degree, by initiatives such as Facebook’s online ad library. The limited data that Facebook provides, however, allows shady individuals who pay for ads online to conduct ‘astroturf’ campaigns hidden behind shell companies.

Open Rights Group is concerned that narrowly targeted online political advertising is contributing to the polarisation of democratic discourse. When parties’ messaging is designed only to be seen by the people already most likely to vote for them, it becomes less about consensus and increasingly geared towards riling up supporters in order to drive them to the ballot box.

Britain’s political discourse has never been totally impartial. But rarely has it been more fractured. Properly regulating online political ads takes a first step to repairing it.




April 08, 2019 | Jim Killock and Amy Shepherd

The DCMS Online Harms Strategy must “design in” fundamental rights

After months of waiting and speculation, the Department for Digital, Culture, Media and Sport (DCMS) has finally published its White Paper on Online Harms - now appearing as a joint publication with the Home Office. The expected duty of care proposal is present, but substantive detail on what this actually means remains sparse: it would perhaps be more accurate to describe this paper as pasty green.

Read the White Paper here.

Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the ‘harm’ that social media causes. But its proposals fail to explain how harmful impacts on free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. DCMS hasn’t in the White Paper elaborated on what its proposed duty would entail. If it’s drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it’s drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can’t always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.

DCMS’s policy is underpinned by societally-positive intentions, but in its drive to make the internet “safe”, the government seems not to recognise that ultimately its proposals don’t regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall “risk”. But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.

DCMS’s approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens’ free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerge from this process as a blueprint for more widespread internet censorship.

The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to “design in safety”. As the process of consultation now begins, we call on DCMS to “design in fundamental rights”. Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.


March 19, 2019 | Jim Killock

Jeremy Wright needs to act to avert disasters from porn age checks

Age Verification for porn websites is supposed to be introduced in April 2019. Age Verification comes with significant privacy risks, and the potential for widespread censorship of legal material.

The government rejected Parliamentary attempts to include privacy powers over age verification tools, so DCMS have limited options right now. Last summer, the BBFC consulted on its draft advice to website operators, called Guidance on Age Verification Arrangements. That consultation threw up all the privacy concerns yet again. The BBFC and DCMS agreed to include a voluntary privacy certification scheme in response.

Unfortunately, there are two problems with this. Firstly, it is voluntary. It won’t apply to all operators, so consumers will sometimes benefit from the scheme, and sometimes they won’t. It is unclear why it is acceptable to government and the BBFC that some consumers should be put at greater risk by unregulated products.

There is nothing to stop an operator from leaving the voluntary scheme so it can make its data less private, more shareable, or more monetisable. It’s voluntary, after all.

Secondly, the scheme is being drawn up hastily, without public consultation. It is a very risky business for a regulator to produce a complex and pivotal security and privacy standard with a limited field of view. It is talking to vendors, but not the public who are going to be using these products. Security experts, of whom there are many who might help, are unable to engage.

This haste to create a privacy scheme seems to be due to the desire of government to commence age verification as fast as possible. That risks the privacy standard being substandard, and effectively misleading to consumers, who will assume that it provides a robust and permanent level of protection.

DCMS and Jeremy Wright could solve this right now

They need to do two things:

  1. Tell industry that government will legislate to make the Privacy Certification scheme compulsory;

  2. Announce a public consultation on BBFC's Privacy Certification scheme.

That may involve a short delay to this already delayed scheme. But that is better, surely, than risking damage to the privacy, personal lives and careers of millions of UK people regularly visiting these websites.


March 14, 2019 | Javier Ruiz

US red lines for digital trade with the UK cause alarm

The US government has published its negotiating objectives for a trade deal with the UK, which include some worrying proposals on digital trade, including a ban on the disclosure of source code and algorithms, and potential restrictions on data protection.


Trade negotiations between the US and the UK have recently received a lot of attention due to the publication of the official negotiating objectives of the US Government, which set out in sometimes candid detail the areas of interest and priorities. The US document is mainly written in coded “trade-speak”, with seemingly innocuous terms such as “procedural fairness” or “science-based” masking huge potential impacts on a wide range of areas, from farming to NHS prescriptions. The document also sets out the priorities for the US around Digital Trade with the UK, with proposals that would affect the digital rights of people in the UK.

The UK started “non-negotiating” a trade agreement with the US soon after the country voted to leave the EU in 2016. While technically not allowed to enter formal negotiations on trade until it leaves the bloc at the end of this month, the UK government has conducted five official bilateral meetings and sent several business delegations, not counting the ongoing activity of UK officials in Washington.

A public consultation last year saw many consumer and rights groups raise concerns about a potential UK-US agreement, including ORG. We are worried about the inclusion of “Digital Trade” - also misleadingly termed “E-commerce” - in negotiations, which could lead to entrenched domination by US online platforms, lower privacy protections and more restrictions in access to information.

Last month a group of 76 countries, including the US, the EU and China, announced their intentions to start negotiations on “trade-related aspects of electronic commerce” at the World Trade Organisation (WTO). Once more this has led to widespread concerns by civil society groups such as the Transatlantic Consumer Dialogue, of which ORG is a member. The proposed agenda covers non-controversial improvements, such as the use of e-signatures or fighting spam, but it includes similar proposals to those presented by the US in their digital trade objectives. These proposals will severely impact internet regulation by controlling the building blocks of digital technology: data flows, source code and algorithms.

What the US wants from the UK in digital trade

Keeping source code and algorithms confidential

The US wants to stop the UK government from “mandating the disclosure of computer source code or algorithms”. This is one of the most concerning aspects of the new digital trade agenda, already found in other recent trade agreements, and criticised by groups such as Third World Network. Restricting source code and algorithms is problematic for various reasons. In particular, the UK government has been pioneering open source software, despite some setbacks, and these clauses could be used to challenge any public procurement perceived to give preference to open source.

There are growing concerns about potential unfairness and bias in decisions made or supported by the use of algorithms, from credit to court sentencing, including the status of EU citizens after Brexit. Preventing the disclosure of algorithms would hamper efforts to develop new forms of technological transparency and accountability. The EU GDPR includes a right for individuals in certain circumstances to be informed of the logic of the systems making decisions that significantly affect them, in a potential conflict with the US digital trade proposals.

Maintaining cross-border data flows

Another objective of the US in its trade negotiations with the UK is to ensure that the UK “does not impose measures that restrict cross-border data flows and does not require the use or installation of local computing facilities”.

These demands are becoming a central feature of contemporary trade negotiations, encapsulating the key aspect of the global Digital Trade agenda: ensuring a global data flow towards the largest US-based internet giants of Silicon Valley that currently dominate the global Internet outside China and Russia.

Additionally, as we said in our response to the government consultation on the US trade deal last year, these requirements could openly clash with the EU General Data Protection Regulation (GDPR), which prohibits unrestricted data transfers. Wilbur Ross, US Commerce Secretary, has openly called GDPR an unnecessary barrier to trade. Agreeing to US demands would put the UK in a double bind that could jeopardise data flows to and from the EU.

Limiting online platform liability for third-party content

The US will also try to limit the liability of online platforms for third-party content, excluding intellectual property, with caveats allowing “non-discriminatory measures for legitimate public policy objectives or that are necessary to protect public morals”. This is one topic that receives widespread sympathy from digital rights advocates, as policymakers across Europe try to open a new debate on Internet liability protections that could see online providers being forced to increase censorship over their users. We recently heard this argument in the report on Internet regulation by the House of Lords. Leveraging trade policy to advance a progressive digital rights agenda may seem a good idea, but unfortunately the positives tend to be bundled with other worrying proposals, and trade negotiators lack the required expertise, so subtleties can be lost and mistakes made.

The wording in the US document reflects agreed exemptions in international trade rules, which have been applied on very few occasions. The exemption has been used by the US, to try to restrict online gambling from the Caribbean island of Antigua; by China, to try to control the foreign influx of ideas into the country; and by the EU, to restrict the importation of products made from seals. In most cases the claim was either unsuccessful or required modifications to the policy.

The concept of “public morals” is far from clear, and as we can see from these cases it can be applied quite broadly. It is meant to encompass human rights and environmental concerns without mentioning them explicitly, but there is no agreement on how universal such morals have to be. This shows the dangers of bringing more spheres of human activity under the umbrella of trade. The UK is preparing to regulate harms to UK-based users of social media platforms, which will impact US companies, and it is unclear whether this activity could be considered a trade barrier and consequently defended under the public morals exemption. In our view, regulating online harms should not be linked to trade negotiations but examined on its own merits.

Preventing border taxes on digital products

The US wants to ensure that digital products imported into the country (e.g., software, music, video, e-books) are not taxed at the border. Right now, digital goods are mainly classified under their physical characteristics rather than their content, so that DVDs and “laser-disks” including CDs are counted separately by UK customs and are generally exempt from customs duties, although importers need to pay VAT. This exemption may become less relevant as imports of tangible digital goods decline globally compared to those distributed electronically: DVD sales are being displaced by online streaming, and e-books are almost exclusively bought online, with Amazon accounting for almost 90% of market share in the UK.

Goods transmitted electronically are currently exempt from customs duties thanks to a WTO moratorium in place since 1998. That moratorium is being challenged by developing countries, led by India and South Africa, who argue it causes them unfair revenue losses given the massive growth of online trade in the past 20 years.

The US wants to avoid any supposed discrimination against its digital products. Given the importance of the Silicon Valley giants, many measures designed to deal with large internet companies will appear to target US businesses. We are not sure yet about the specific agenda under this item in the UK context, but it is likely that the US has in mind proposals to increase the taxation of tech firms. The US government has described EU proposals in this direction as “discriminatory”, so the UK’s own plans to tax digital services are likely to clash with US demands.

The distinction between products and services can be confusing in the digital sphere, but it is critically important in trade. In many cases, consumers do not own the music, films or e-books they “buy” online; they merely have a licence to the content governed by terms and conditions, which is closer to a service. UK consumer law has tried to deal with this confusion by creating specific protections for download purchases, called “digital content not on a tangible medium”, but it is not clear how this would affect trade categories.

What’s next?

The negotiations are advancing apace, but it is difficult to predict what will happen. As the US document shows, behind the rhetoric there are hard economic interests that could slow down the process.

The above are only the official top-level demands from the US government: US business groups are lining up to include many other issues. A recent public US government hearing in Washington on the negotiating objectives saw calls for full liberalisation of services, particularly financial services, among other issues that included access to the UK labour market for US workers. The hearing stressed that the economic relationship is important for both countries, not just the UK. The UK is the US’s largest partner in services trade and its largest buyer of digital services, and the two countries are each other’s largest foreign direct investors. The UK is one of the few countries that does more trade in services with the US than in goods.

Despite the issues raised, the publication of the US document provides some level of transparency and enables public debate. We hope that the UK government will follow suit and publish its own negotiating objectives. Unfortunately, our experience in other bilateral areas, such as surveillance, indicates that the level of public accountability of the heavily politicised US federal government is not generally matched by Whitehall’s circumspect civil service. The advisory group created by the Department for International Trade (DfIT) for discussions on trade policy around Intellectual Property is a very encouraging step. A similar space should be created by DfIT where digital trade issues can be discussed with the attention they deserve.
