Blog


February 26, 2019 | Matthew Rice

The missing piece from the DCMS report? Themselves

In all the outrage and column inches generated by DCMS' Disinformation and 'fake news' report, campaigners for political office and representatives of political parties have failed to acknowledge one of the most critical actors in personal data and political campaigning: themselves.

The Disinformation and ‘fake news’ report from the House of Commons Digital, Culture, Media and Sport (DCMS) Committee splashed onto front pages, news feeds and timelines on 18 February. And what a response it provoked. Parliamentarians are once again talking tough against the role of big tech in one of the key areas of our life: the democratic process. The custodians of the democratic tradition are vociferously calling time on “out of control” social media platforms and demanding further regulation.

In all the outrage and column inches generated by the report, however, campaigners for political office and representatives of political parties, so angrily vocal about platforms, adverts and elections, have failed to acknowledge one of the most critical actors in personal data and political campaigning: themselves.

PLATFORMS ARE JUST ONE PIECE OF THE PUZZLE

The DCMS report is damning in its critique of platforms’ cavalier approach to data protection, both during election cycles and generally. It homes in on the role of Facebook in the Cambridge Analytica scandal and condemns in scathing terms Facebook’s business focus on data as a commodity to be wielded for commercial leverage. It expresses disapproval and disappointment and paints a vivid portrait of how platforms presumptively treat our personal data as their property to use, abuse and barter away. The regulatory proposals focus sharply on creating liability for tech companies and subjecting them to a compulsory code of ethics.

Whilst these are all strong outcomes and largely welcomed, to focus on the platforms alone is to focus only on one piece of the puzzle.

The debate over the use of personal data in political campaigns goes far beyond a call on platforms to remove harmful content. Political ads might be ultimately served by online platforms, but the messages themselves and their targets are created and set by political parties, their affiliated campaigning organisations, and, increasingly, data analytic companies on behalf of the parties or campaigners.

Parts of the report comment on the responsibility of political parties and campaigners in electioneering. However, media coverage and commentary have so far limited themselves to emphasising the role of social media companies. This is disappointing. The role of the key political actors in using and misusing data is no less significant than that of the platforms, and their culpability needs to be subject to equally critical scrutiny.

POLITICAL ACTORS NEED REGULATING TOO

The first layer of responsibility that political actors have for data processing is in gathering and storing target audience data. Some audience data will inevitably come from online platforms, but it will often be accompanied by or added to data separately held by parties and campaigners. Increasingly, political parties purchase this data from data brokers, and those transactions are highly questionable, arguably to the point of breaching data protection standards.

Next, whilst spending on “digital” in election campaigning is increasing, the specificity of what “digital” means is decreasing. Is this just online ad buys? Is it data collection? Message development? A bit of both? All of that and more via third parties? At present, it's nearly impossible to tell where the money is going. Election finance regulations in Britain urgently need to be redrafted with tighter spending and reporting requirements that reflect the contemporary change in where and how advertising money is distributed.

The actors involved in political campaigns are changing too: gone are the days of straightforward political parties and designated campaign organisations. 21st century electioneering involves a shadowy web of non-registered campaigners that can have decisive influence, or at least spend decisive amounts of money, on digitally-targeted and harder to track campaigns. Transparency is sorely lacking in how these campaigners operate, at a significant risk to fair and open democratic processes.

DATA PROCESSING INCURS DEMOCRATIC DUTIES

Platform responsibility for data protection and fair election campaigning is important, but the problem doesn’t begin and end with Facebook. A bigger discussion on the responsibility of political parties when it comes to handling personal data is required.

The Data Protection Act 2018 handed political parties wide powers to process personal data if it “promotes democratic engagement” or is part of their “political activities”, including campaigning and fundraising. What this means is unclear. One obvious impact, though, is that parties will no longer have to work so hard to justify their processing of personal data. This sets a worryingly lax data protection standard.

The democratic trust deficit existing in Britain today is not corrected by riding an anti-social-media wave. It begins with admitting that powerful political parties, and their affiliates, have a core responsibility when it comes to the collection, combination and use of personal data in political campaigns.

The DCMS report offers a powerful opportunity to look at data processing in elections through a wide lens and do some vitally needed internal housekeeping. It could be used, as we intend to use it, to publicly cement the duty of parliament and its occupants to uphold data protection standards, go beyond just the letter of the law, and carefully consider whether the same system that is used to sell us shoes and holidays and 10% off vouchers can be used to facilitate open and informed democratic debates.

NEW REPORTS, NEW RULES, NEW OPPORTUNITIES FOR ACTION

Open Rights Group is just beginning to work on Data and Democracy, and we expect to find opportunities to raise the above points as the DCMS report continues to be digested. Other outputs examining this issue are also expected soon. In particular, the Information Commissioner’s Office asked for views late last year on a code of practice for the use of personal information in political campaigns. ORG responded, calling for greater clarity on the new “political” bases for processing sensitive personal data, on what is appropriate in terms of targeting political messaging, and on how electoral roll data should be used. These are topics worthy of discussion and we look forward to seeing ICO guidance in the near future.

The DCMS report is a strongly positive step towards setting new rules for political campaigning. The question we ask is - to whom are these rules going to apply? For fair campaigning and truly democratic data protection, we hope the list comprises more than just tech companies.

[Read more]


February 19, 2019 | Jim Killock

Formal Internet Censorship: BBFC pornography blocking

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

1. Administrative blocking powers

The Open Rights Group is particularly concerned that the BBFC, as the age verification regulator, has been given a general administrative power to block pornographic websites where those sites do not employ an approved age verification mechanism. We doubt that it is in a good position to judge the proportionality of blocking; it is simply not set up to make such assessments. Its expertise is in content classification, rather than free expression and fundamental rights assessments. [1]

In any case, state censorship powers should always be restrained by the need to seek an independent decision. This provides accountability and oversight of particular decisions, and allows the law to develop a picture of necessity and proportionality.

The BBFC’s blocking powers are aimed not at content but at the lack of age verification (AV) in some circumstances. They are thus a sanction rather than a protective measure. The BBFC does not seek to prevent the availability of pornography to people under 18, but rather to reduce the revenues of site operators in order to persuade them to comply with UK legislative requirements.

This automatically leads to a risk of disproportionality, as the block will be placed on legal content, reducing access for individuals who are legally entitled to view it. For instance, this could lead to some marginalised sexual communities finding content difficult to access. Minority content is harder to find by definition, thus censoring that legal content is likely to affect minorities disproportionately. It is unclear why a UK adult should be prevented from accessing legal material.

At another level, the censorship will easily appear irrational and inconsistent. An image that is blocked on a website and lacks AV could be available on Twitter or Tumblr, or available on a non-commercial site.

The appeals mechanisms for BBFC blocks are also unclear. In particular, it is not clear what happens when an independent review is completed but the appellant disagrees with the decision.

2. BBFC requests to “Ancillary Service Providers”

Once section 14 of the Digital Economy Act (DEA) 2017 is operational, the BBFC will send requests to an open-ended number of support services for pornographic sites that omit age verification.[2] The BBFC hopes that once notified these services will comply with its request to cease service. Complying with a notice could put these services in legal jeopardy, as they could be in breach of contract if they cease business with a customer without a legal basis for the decision. If these are companies based outside of the UK, no law is likely to have been broken.

Furthermore, some of the “services”, such as “supplying” a Twitter account, might apply to a company with a legal presence in the UK, but the acts (tweeting about pornography) would be lawful, including sharing pornographic images without age verification.

If a voluntary notice is acted on, however, then free expression impacts could ensue, with little or no ability for end users to ask the BBFC to cease and desist in issuing notices, as the BBFC will believe it is merely asking for voluntary measures for which it has no responsibility.

This is an unclear process and should be removed from the Digital Economy Act 2017.

Recommendations to government:

1. The BBFC’s blocking powers should be removed.

2. Remove the provisions for the BBFC to notify ASPs for voluntary measures.

Recommendations to BBFC:

1. Ask for the application of the FoI Act to the BBFC's statutory work.

[1] This report does not cover privacy concerns, but it is worth noting that privacy concerns could easily lead to a chilling effect, whereby UK residents are dissuaded from accessing legal material because of worries about being tracked or their viewing habits being leaked.

Robust privacy regulation could reduce this risk, but the government has chosen to leave age verification technologies entirely to the market and general data protection law. This leaves age verification (AV) for pornography less legally protected than card transactions and email records.

See https://www.openrightsgroup.org/about/reports/response-to-bbfc-age-verification-consultation

and https://www.openrightsgroup.org/blog/2018/the-government-is-acting-negligently-on-privacy-and-porn-av

[2] Digital Economy Act 2017 s14 http://www.legislation.gov.uk/ukpga/2017/30/section/14/enacted

[Read more]


February 15, 2019 | Jim Killock

Formal Internet Censorship: Copyright blocking injunctions

The following is an excerpt from the report UK Internet Regulation Part I: Internet Censorship in the UK Today.

Read the full report here.

Open-ended powers

Copyright-blocking injunctions have one major advantage over every other system except for defamation: they require a legal process to take place before they are imposed. This affords some accountability and ensures that necessity and proportionality are considered before restrictions are put in place.

However, as currently configured, copyright injunctions leave room for problems. We are confident that court processes will be able to resolve many of these. Further advantages of a process led by legal experts are that they are likely to want to ensure that rights of all parties are respected, and appeals processes in higher courts and the application of human rights instruments can ensure that problems are dealt with over time.

Copyright blocking injunctions are usually open-ended. There is not usually an end date, so they are a perpetual legal power. The injunction is against the ISPs. Rights-holders are allowed under the standard terms of the injunctions to add new domains or IP addresses that are in use by an infringing service without further legal review. ISPs and rights-holders do not disclose what exactly is blocked.

It has been reported that around 3,800 domains [1] are blocked by 31 injunctions, against around 179 sites or services. [2]

The government is preparing to consult on making copyright blocking an administrative process. We believe this would be likely to reduce accountability for website blocking, and to extend it in scope. At present, website blocking takes place where it is cost effective for private actors to ask for blocks. Administrative blocking would place the cost of privately-demanded blocking onto the UK taxpayer, making it harder for economic rationality to constrain blocking. Without that economic rationale, and with widening numbers of blocks, it would be harder to keep mistakes in check.

38% of observed blocks in error

Open Rights Group has compiled public information about clone websites that might be blocked, for instance the many websites that have presented full copies of the Pirate Bay website.

We ran tests on these domains to identify which were blocked on UK networks. As of 25 May 2018, we found 1,073 blocked domains. Of these, we found that 38% of the blocks had been made in error. [3]

To be clear, each block would generally have been valid when it was initially requested and put in place by the ISP, but many of these blocks were not removed once the websites ceased to infringe copyright.

The largest group of errors identified concerned websites that were no longer operational. The domains were parked or for sale, that is, flagged as not in use (151); not resolving (76); broken (63); inactive (41); or used for abusive activities such as “click-fraud” (78). [4] We also detected three or four domains that had been put to active, unrelated legitimate use [5], and several that could be infringing but did not appear to be subject to an injunction, yet were blocked in any case. [6]

That means a total of 409 out of 1075 domains were being blocked with no current legal basis, or 38%.
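
For anyone who wants to retrace the headline figure, the category counts above sum to the 409 erroneous blocks. Below is a minimal sketch of the arithmetic; the counts and the total of 1075 are as quoted in this excerpt, and the category keys are simply shorthand labels of our own.

```python
# Tally of blocked domains found to have no current legal basis,
# using the category counts quoted above (May 2018 data).
error_categories = {
    "parked_or_for_sale": 151,
    "not_resolving": 76,
    "broken": 63,
    "inactive": 41,
    "abusive_eg_click_fraud": 78,
}

total_blocked = 1075                       # blocked domains observed on UK networks
errors = sum(error_categories.values())    # 151 + 76 + 63 + 41 + 78 = 409

print(f"Erroneous blocks: {errors}")
print(f"Error rate: {errors / total_blocked:.0%}")   # ~38%
```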

These errors could occur for a number of reasons. Nearly all of the domains would have been blocked while they were in use by infringing services. Over time, however, they have fallen into disuse, and some have since been reused by other services. In some cases, the error lies with ISPs failing to remove sites after notification by rights-holders that they no longer need to be blocked. In other cases, the rights-holders have not been checking their block lists regularly enough. While only a handful of blocks have been particularly significant, it is wrong for parked websites and domains for sale to be blocked by injunction. It is also concerning that the administration of these blocks appears very lax. A near-40% error rate is not acceptable.

To be clear, there is no legal basis for a block against a domain that is no longer being used for infringement. The court injunctions allow blocks to be applied when a site is in use by an infringing service, but it is accepted by all sides that blocks must be removed when infringing uses cease.

Open Rights Group is concerned about its inability to check the existing blocks. What is or is not blocked should not be a secret, even if that is convenient for rights-holders. Without the ability to check, it is unlikely that independent and thorough checking will take place; neither the ISPs nor the rights-holders have a particular incentive to add to their costs by making thorough checks. As of the end of July 2018, most of the mistakes remained unresolved, after three months of notice and a series of meetings with ISPs to discuss the problem. By October 2018 the figure had reduced to nearer 30%, but progress in resolving these errors remains very slow. [7]

Many blocking regimes do not offer the flexibility to add on further blocks, but require rights-holders to return to court. The block lists are entirely public in many European countries.

ISPs should at a minimum publish lists of domains that they have “unblocked”. This would allow us and others to test and ensure that blocks have been removed.
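
As a rough illustration of what such testing involves, the sketch below fetches a domain and looks for signs of a block page. It is only a sketch: the block-page marker strings are invented for illustration rather than real ISP signatures, and a meaningful test has to be run from a connection on the ISP being checked, which is how the tests behind blocked.org.uk work.

```python
# A rough sketch of a block-status probe: fetch a domain over HTTP and look for
# tell-tale signs of an ISP block page. The marker strings below are invented
# for illustration; a real test needs per-ISP signatures and must be run from
# a connection on the ISP under test.
import urllib.error
import urllib.request

BLOCK_PAGE_MARKERS = [
    "blocked",        # hypothetical wording that might appear on a block page
    "court order",    # notification pages often refer to the court order
]

def probe(domain: str, timeout: float = 10.0) -> str:
    """Return a crude status for a domain as seen from the current network."""
    url = f"http://{domain}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read(20_000).decode("utf-8", errors="replace").lower()
    except urllib.error.HTTPError as err:
        return f"http error {err.code}"          # e.g. a broken site returning 404
    except Exception:
        return "not resolving or unreachable"    # DNS failure, timeout, connection reset
    if any(marker in body for marker in BLOCK_PAGE_MARKERS):
        return "possibly blocked (block-page marker found)"
    return "reachable"

if __name__ == "__main__":
    # Replace with domains from a published "unblocked" list to verify removal.
    for domain in ["example.com"]:
        print(domain, "->", probe(domain))
```

Published "unblocked" lists would give exactly the input such a probe needs to confirm that removals have actually happened.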

Poor notifications by ISPs

A further concern is that the explanations of website blocks and of how to deal with errors are very unclear. This has no doubt contributed to the large proportion of incorrect blocks.

At present, some basic information about the means to challenge the injunction at court is available. However, in most cases this is not what is really needed. Rather, a website user or owner needs information about the holder of the injunction and how to ask them to correct an error. This information is currently omitted from notification pages.

Notifications should also include links to the court judgment and any court order sent to the ISP. This would help people understand the legal basis for blocks.

Our project blocked.org.uk includes this information where available. We also generate example notification pages.

While ISPs could implement these changes without instruction from courts, they have been reluctant to improve their practice without being told. Open Rights Group’s interventions in the Cartier court cases helped persuade the courts to specify better information on notification pages, but we believe there is some way to go before they are sufficiently explanatory.

Proposal for administrative blocking

The government is considering administrative blocking of copyright-infringing domains. This poses a number of problems. The current system requires rights-holders to prioritise asking for blocks where it is cost effective to do so. This keeps censorship of websites to that which is economically efficient to require, rather than allowing the task to expand beyond levels which are deemed necessary.

As we see with the current system, administering large lists of website blocks efficiently and accurately is not an easy task. Expanding this task at the expense of the taxpayer could amount to unnecessary levels of work that are not cost efficient. It will be very hard for a government body to decide “how much” blocking to ask for, as its primary criterion will be ensuring material is legal. Unfortunately, there are very large numbers of infringing services and domains with very small or negligible market penetration.

Secondly, it makes no sense for a growing system of censorship to keep what is blocked secret from the public. Administrative systems will need to be seen to be accurate, not least because sites based overseas will need to know when and why they are blocked in the UK in order to be able to appeal and remove the block. This may be resisted by rights-holder organisations, who have so far shown no willingness to make the block lists public. Administrative blocking could be highly unaccountable and much more widespread than at present, leading to hidden, persistent and unresolvable errors.

Thirdly, combining wide-scale pornography blocking with widening copyright blocking risks making the UK a world leader in Internet censorship. Once the infrastructure is further developed, it will open the door to further calls for Internet censorship and blocking through lightweight measures. This is not an attractive policy direction.

Recommendations to government:

  1. Future legislation should specify the need for time limits to injunctions and mechanisms to ensure accuracy and easy review

  2. Open-ended, unsupervised injunction and blocking powers should not be granted

  3. Administrative blocking should be rejected

Recommendations to courts and parties to current injunction:

  1. Current injunction holders and ISPs must urgently reduce the error rates within their lists, as incorrect blocks are unlawful

  2. Courts should reflect on the current problems of accuracy in order to ensure future compliance with injunctions

  3. It should be mandatory for blocking notices to link to legal documents such as a judgment and court order

  4. It should be mandatory for blocking notices to explain who holds the injunction to block the specific URL requested

  5. Assurance should be given that there is transparency over what domains are blocked

  6. ISPs and right-holders should be required to check block lists for errors 

References

[1] https://torrentfreak.com/uks-piracy-blocklist-exceeds-3800-urls-170321/

[2] See https://wiki.451unavailable.org.uk/wiki/Main_Page and https://www.blocked.org.uk/legal-blocks for the lists of sites.

[3] https://www.blocked.org.uk/legal-blocks/errors maintains the error rates; results as of 4 June 2018 are available here: http://web.archive.org/web/20180604092443/https://www.blocked.org.uk/legal-blocks/errors.

Reports and data can be downloaded from https://www.blocked.org.uk/legal-blocks

[4] These categories are defined as follows: (i) parked or for sale: the site displays a notice explaining that the domain is for sale, or has a notice saying the domain is not configured for use; (ii) not resolving: DNS is not configured, so the URL does not direct anywhere; (iii) broken: the domain resolves but returns an error, such as a 404 or a database error; (iv) inactive: the site resolves and does not return an error, but returns a blank page or similar and does not appear to be configured for use; (v) abusive: the domain is employed in some kind of potentially unlawful or tortious behaviour other than copyright infringement.

[5] A blog and a website complaining about website blocking, for instance. These were not functional as we completed the review.

[6] See also our press release: https://www.openrightsgroup.org/press/releases/2018/nearly-40-of-court-order-blocks-are-in-error-org-finds

[7] https://www.blocked.org.uk/legal-blocks/errors. Errors on 10 October 2018 stood at 362 domains out of 1,128.

[Read more]


February 14, 2019 | Amy Shepherd

Patently unfair - Epson takedowns continue

Hiding behind the shield of its connector patents, the printer manufacturing giant Epson continues to shut down small UK businesses by requiring eBay and Amazon to take down compatible ink cartridge listings.

Platform takedown notice procedures are a personal patent guard-dog

As a verified rights-owner (VeRO) on eBay UK and by using Amazon UK’s reporting notice system, Seiko Epson Corporation (“Epson”) has free rein to remove any and all third-party cartridge listings that it wishes. It simply has to inform eBay or Amazon of the offending listing, provide its patent number and assert patent infringement. Listings are always removed, and affected sellers cannot prevent, challenge or appeal removal.

This one-sided system is fundamentally unfair. If Epson genuinely believes that its patents are being infringed it should issue court proceedings to enforce its rights. Instead, eBay and Amazon’s automatic takedown notice procedures provide the multi-million-dollar corporation with a blunt tool it can brazenly use to circumvent fair judicial process.

Targeting online sellers is a low move by Epson. The primary focus for patent enforcement should be compatible cartridge manufacturers or importers. Resellers are the least important part of the chain. However, online sellers have the disadvantage of being visible, and eBay and Amazon’s automatic and inflexible takedown policies make them by far the easiest target.

This is not an abstract concern: Epson’s ruthless patent-trolling is steadily shutting down small UK businesses, forcing them to lay off employees or close entirely. Takedown notices are damaging UK entrepreneurship, competition and independent business activity.

Absolute rights-holder protection originates in the E-Commerce Directive 2000

The ability of Epson to manipulate eBay and Amazon into acting as a personal patent guard-dog is not entirely of its own making, however. It is ultimately a symptom of a much larger issue, originating in the EU E-Commerce Directive 2000.

Aiming to give consumers certainty when conducting commercial transactions online, the E-Commerce Directive made online platforms responsible for illegal content on their site once notified of illegality. Nervous of the consequences of liability, platforms responded by implementing blanket takedown notice procedures which gave rights-holders complaining of illegal postings swift and absolute protection.

Subsequent judgments of the Court of Justice of the European Union (CJEU) and EU digital strategies have increasingly sought to make online platforms responsible for their content and activity. This has served to consolidate the platforms’ hands-off approach when it comes to rights-holders asserting infringement.

The takedown notice systems of eBay and Amazon deployed by Epson give affected sellers no opportunity to counter-notice. But if cartridge sellers can’t assert their legal right to post content against the platforms then they have no recourse against listing removal - and thus no recourse against the infringement of their right to free speech. Challenging listings removal is sellers’ only hope of pushback: small-time online enterprises lack the power and ready cash to force Epson to defend its patents in an open judicial forum.

Need for a Counter-Notice System

Policy attention on online platform regulation has largely focused on increasing responsibility for illegal or unwanted content; however, in this circumstance, platforms are taking action to remove content but this is having a significant negative effect on UK enterprises, both in terms of business viability and in terms of free speech.

Many UK businesses, especially individuals and small traders, rely on internet platforms to reach customers and conduct operations. However, the process of removal decision-making at platform level is often arbitrary and unfair. In situations of online defamation, counter-notice is available as a method of challenging speech removal: it allows individuals to put up a direct defence to the removal of their post. Although an imperfect system, this style of rebuttal opportunity could apply well in commercial contexts to protect business interests and ensure free speech rights.

For fairness and justice, a UK legislative mechanism is vitally needed whereby online sellers can counter-notice against takedown demands and assert their rights to continue trading. If Epson then doesn't dare take sellers to court for fear its patents might be invalidated - well, that's its choice.

If you have been affected by takedowns relating to Epson compatible ink cartridges and patent claims, please get in touch with us by emailing amy@openrightsgroup.org .



[Read more]


February 11, 2019 | Mike Morel

A new wave of Internet censorship may be on the horizon

2018 was a pivotal year for data protection. First the Cambridge Analytica scandal put a spotlight on Facebook’s questionable privacy practices. Then the new Data Protection Act and the General Data Protection Regulation (GDPR) forced businesses to better handle personal data.

As these events continue to develop, 2019 is shaping up to be a similarly consequential year for free speech online as new forms of digital censorship assert themselves in the UK and EU.

Of chief concern in the UK are several initiatives within the Government’s grand plan to “make Britain the safest place in the world to be online”, known as the Digital Charter. Its founding document proclaims “the same rights that people have offline must be protected online.”  That sounds a lot like Open Rights Group’s mission! What’s not to like?

Well, just as surveillance programmes created in the name of national security proved detrimental to privacy rights, new Internet regulations targeting “harmful content” risk curtailing free expression.

The Digital Charter’s remit is staggeringly broad. It addresses just about every conceivable evil on the Internet from bullying and hate speech to copyright infringement, child pornography and terrorist propaganda. With so many initiatives developing simultaneously it can be easy to get lost.

To gain clarity, Open Rights Group published a report surveying the current state of digital censorship in the UK. The report is broken up into two main sections -  formal censorship practices like copyright and pornography blocking, and informal censorship practices including ISP filtering and counter terrorism activity. The report shows how authorities, while often engaging in important work, can be prone to mistakes and unaccountable takedowns that lack independent means of redress.

Over the coming weeks we’ll post a series of excerpts from the report covering the following:

Formal censorship practices

  • Copyright blocking injunctions

  • BBFC pornography blocking

  • BBFC requests to “Ancillary Service Providers”

Informal censorship practices

  • Nominet domain suspensions

  • The Counter Terrorism Internet Referral Unit (CTIRU)

  • The Internet Watch Foundation (IWF)

  • ISP content filtering

The big picture

Take a step back from the many measures encompassed within the Digital Charter and a clear pattern emerges. When it comes to web blocking, the same rules do not apply online as offline. Many powers and practices the government employs to remove online content would be deemed unacceptable and arbitrary if they were applied to offline publications.

Part II of our report is in the works and will focus on threats to free speech within yet another branch of the Digital Charter known as the Internet Safety Strategy.

[Read more]


February 06, 2019 | Jim Killock

Duty of care: an empty concept

There is every reason to believe that the government and opposition are moving to a consensus on introducing a duty of care for social media companies to reduce harm and risk to their users. This may be backed by an Internet regulator, who might decide what kind of mitigating actions are appropriate to address the risks to users on different platforms.

This idea originated in a series of papers by Will Perrin and Lorna Woods and has been mentioned most recently in a Science and Technology Committee report and by NGOs including the children’s charity 5Rights.

A duty of care has some obvious merits: it could be based on objective, evidence-based risks and ensure that mitigations are proportionate to those risks. It could take some of the politicisation out of the current debate.

However, it also has obvious problems. For a start, it focuses on risk rather than process. It moves attention away from the fact that interventions are regulating social media users just as much as platforms. It does not by itself tell us that free expression impacts will be considered, tracked or mitigated.

Furthermore, because a duty of care model gives so little attention to process, platform decisions that have nothing to do with risky content are not necessarily subject to better decision-making, independent appeals and so on. Rather, as has happened with German regulation, processes can remain unaffected when they fall outside a duty of care.

In practice, a lot of content which is disturbing or offensive is already banned on online platforms. Much of this would not be in scope under a duty of care but it is precisely these kinds of material which users often complain about, when it is either not removed when they want it gone, or is removed incorrectly. Any model of social media regulation needs to improve these issues, but a duty of care is unlikely to touch these problems.

There are very many questions about the kinds of risk in scope, whether to individuals in general, to vulnerable groups, or to society at large, and about the evidence required to trigger action. The truth is that a duty of care, if cast sensibly and narrowly, will not satisfy many of the people who are demanding action; equally, if the threshold to act is low, then it will quickly be seen to be a mechanism for wide-scale Internet censorship.

It is also a simple fact that many decisions that platforms make about legal content which is not risky are not the business of government to regulate. This includes decisions about what legal content is promoted and why. For this reason, we believe that a better approach might be to require independent self-regulation of major platforms across all of their content decisions. This requirement could be a legislative one, but the regulator would need to be independent of government and platforms.

Independent self-regulation has not been truly tried. Instead, voluntary agreements have filled its place. We should be cautious about moving straight to government regulation of social media and social media users. The government refuses to regulate the press in this way because it doesn’t wish to be seen to be controlling print media. It is pretty curious that neither the media nor the government are spelling out the risks of state regulation of the speech of millions of British citizens.

That we are in this place is of course largely the fault of the social media platforms themselves, who have failed to understand the need and value of transparent and accountable systems to ensure they are acting properly. That, however, just demonstrates the problem: politically weak platforms who have created monopoly positions based on data silos are now being sliced and diced at the policy table for their wider errors. It’s imperative that as these government proposals progress we keep focus on the simple fact that it is end users whose speech will ultimately be regulated.

[Read more]


February 04, 2019 | Javier Ruiz

ORG calls for public participation in digital trade policy after Brexit

A key aspect of Brexit is the future of trade policy. The Government  has committed to abandon the UK’s customs union with the EU to enter into myriad independent trade deals with countries across the world. We don’t want to get into a discussion about the merits of this approach or whether it is likely to succeed, but assuming it will go ahead we believe that transparency and participation are critical requirements for the development of future trade agreements after Brexit.

ORG is interested in trade because these agreements include provisions that severely affect digital rights such as privacy and access to information. Copyright and other forms of IP have been part of trade deals for over 20 years, but countries such as the US now want to expand the scope to include a whole raft of issues into trade negotiations, including algorithmic transparency and data flows.

The UK Department for International Trade is already pre-negotiating deals with the US, Australia and New Zealand, and is engaging with interested parties in some sectors, such as IP, which is very positive. ORG is participating in some of these discussions.

Our concern, as we get closer to actual trade negotiations, is that there will be pressure to keep most of the information confidential. Historically, trade deals have been shrouded in secrecy, with the executive branch of government claiming exclusive prerogative as part of its role in maintaining international relations. In recent decades, as trade issues have expanded into many socio-economic spheres - such as digital, labour or environmental regulations - and generated vigorous debates, this lack of transparency has become unsustainable. Even as recently as the ACTA and TPP negotiations, civil society was forced to rely on leaks for information and on public media interventions for engagement. This secrecy did not stop the derailing of many trade agreements and in some cases fuelled further public concern.

We recognise that some information needs to remain confidential, but believe that very high levels of transparency and public participation are possible, and indeed necessary, in these unprecedented circumstances. The blocking of the Trade Bill by the House of Lords until the Government provides more information on how international trade deals will be struck and scrutinised after Brexit points to the need for change.

The current situation in Parliament and elsewhere demonstrates the difficulties in finding a social consensus around Brexit and the kind of trade policy that should follow from it. The limited public debate on trade deals so far has quickly led to concerns about food safety, with headlines about chlorinated chicken, and the takeover of public services, particularly the NHS. In this situation we think that transparency, including access to draft texts and positions, will be critical to maintain legitimacy.

Fortunately, things are changing elsewhere. The WTO has improved its external transparency over the years, particularly when compared with negotiations on bilateral agreements: documents are available online, and there are solid NGO relationships and public engagement activities. We hope that the UK will go even further than the WTO and become a world leader in enabling public participation in trade policy.

[Read more]


February 01, 2019 | Mike Morel

Response to IAB statement

By: Jim Killock (Open Rights Group), Johnny Ryan (Brave), Katarzyna Szymielewicz (Panoptykon Foundation), Michael Veale (University College London)

IAB:

We have taken note of media reports regarding an update to complaints made by ad-blocking browser developer Brave and Polish activist group Panoptykon Foundation to a number of European data protection authorities.

Response

In addition to complaints in Ireland and Poland, the IAB should be aware that complaints have also been made to the UK Information Commissioner in this matter, by Jim Killock, Executive Director of the Open Rights Group, and Michael Veale of University College London. 

IAB: 

As with previous submissions made by Brave et al., we believe that: (1) the complaints are fundamentally misdirected at IAB Europe or the IAB Tech Lab; and (2) they fail to demonstrate any breach of EU data protection law.

Technical standards developed by IAB Tech Lab are intended to facilitate the effective and efficient functioning of technical online advertising processes, such as real-time bidding. IAB Europe’s Transparency & Consent Framework helps companies engaged in online advertising to meet certain requirements under EU data protection and privacy law, such as informing users about how their personal data is processed. The responsibility to use technologies and do business in compliance with applicable laws lies with individual companies.

Response:

The IAB proceed on a misunderstanding of the law and the facts. The initial complaints, submitted by our legal team, Ravi Naik of ITN Solicitors, with the assistance of a leading QC, detailed widespread and significant breaches of the data protection regime. Those initial complaints from September 2018 have been built on with the further material served on 28 January 2019, Data Protection Day.

Furthermore, the IAB proceed on the basis of an overly restrictive interpretation of how a data controller is defined. Much like Google tried to avoid liability for search before the ECJ, IAB cannot seek to avoid accountability for their own system.

The facts make clear that the IAB is a liable controller. The IAB defines the structure of the OpenRTB system. Both the IAB and Google structures could – and should – be remedied to have due regard to the rights of data subjects. Whether the structure is so remedied is within the IAB's and Google’s control.

The IAB system provides for the inclusion of personal data in the bid request, some of which are very intimate indeed. Indeed, the IAB explicitly recommends the inclusion of personal data in the bid request. For example, it “strongly recommends” that ID codes that are unique to the person visiting a website should be included. [1] It even goes so far as to warn companies using its system that they will earn less money if they do not include these personal data. [2]

The IAB does this in the knowledge that it is unable to exercise any control over what happens to personal data broadcast billions of times a day by its system.  An internal IAB TechLab document from May 2018 confirms that “there is no technical way to limit the way data is used after the data is received by a vendor for decisioning/bidding” once an ad auction broadcast has been made. [3] The same document notes that “thousands of vendors” receive these data. [4]

IAB:

The Content Taxonomy Mapping document cited by the complainants does not, as Brave and Panoptykon seem to contend, demonstrate that taxonomies of data types that would qualify as special categories of personal data (and are subject to stricter protections under EU data protection law) are used by individual companies; nor can it be considered to prove or demonstrate that any companies making use of those taxonomies are doing so without complying with applicable EU data protection or other law. 

Response

Once categorised content is used by a person, the categories stick to that person and become personal data. This helps other players profile “the human using the device”, as the IAB puts it.

The example bid requests in Google’s developer documentation (Google also uses the IAB RTB standard) speak for themselves. They contain the following personal data: [5]

  • pseudonymized user IDs, that can be “matched” against for re-identification, 

  • IP address, 

  • GPS coordinates (latitude and longitude), 

  • ZIP code, 

  • machine and operating system version details, 

  • and categories (“publisher verticals”). 

The IAB’s own documentation includes an example bid request that contains the personal data of a young female, using a specific iPhone 6s, reading a particular URL, and with several IDs that allow ad auction companies to identify her. [6] The bid request also shows her GPS coordinates at that instant. (Would a woman on her own on a street at night be comfortable knowing that her GPS coordinates were being sent to random parties?)
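
To make the shape of such a broadcast concrete, here is a minimal, invented sketch of the kinds of fields a bid request can carry, mirroring the list above. The field names are simplified for readability rather than the exact OpenRTB/AdCOM attribute names, and every value is fabricated for illustration.

```python
# An invented, simplified bid request illustrating the categories of personal
# data listed above. Names are illustrative, not the exact OpenRTB/AdCOM keys.
example_bid_request = {
    "user": {
        "id": "38feb10a-ad12-4c09-9f2b-example",  # pseudonymous ID that can be "matched" for re-identification
    },
    "device": {
        "ip": "203.0.113.7",      # IP address (documentation range)
        "os": "iOS",
        "osv": "12.1",            # operating system version details
        "model": "iPhone 6s",
        "geo": {
            "lat": 51.50,         # GPS coordinates
            "lon": -0.14,
            "zip": "SW1A",        # postal / ZIP code
        },
    },
    "site": {
        "page": "https://example.com/some-article",  # the page being read
        "cat": ["IAB1"],          # content categories ("publisher verticals")
    },
}

# Under the RTB model, a structure like this is broadcast to many bidders
# for every ad slot, before any ad is chosen.
print(example_bid_request["device"]["geo"])
```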

IAB: 

The complaints are akin to attempting to hold road builders accountable for traffic infractions, such as speeding or illegal parking, that are committed by individual motorists driving on those roads. Using this analogy, the complainants’ purported finding that EU data protection law is being breached is comparable to someone pointing out that an automobile is technically capable of exceeding the speed limit, or parking in a restricted area, and adducing this fact as “evidence” that it actually does.  A technical standard may be misused to violate the law or used in a legally compliant way, just as a car may be driven faster than the speed limit or driven at or below that limit. The mere fact that misuse is possible cannot reasonably be used as evidence that it is  actually happening. And the whole purpose of the Transparency & Consent Framework is to ensure it does not.

Response:

The IAB has failed to protect people’s data, which are broadcast billions of times a day using the system that it defines and encourages its members to use. It cannot claim to be a bystander. By defining and promoting the system, it plays a role in determining the purposes and means of how that data is processed. Using the IAB’s own metaphor - which presents them as road builders or car producers who cannot be held liable for traffic infractions - it is clear that the IAB is the authority that sets the traffic rules for its private roads. It bears responsibility when those rules conflict with the law.

[1] “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018 (URL: https://github.com/InteractiveAdvertisingBureau/AdCOM/blob/master/AdCOM%20BETA%201.0.md).

[2] “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018 (URL: https://github.com/InteractiveAdvertisingBureau/AdCOM/blob/master/AdCOM%20BETA%201.0.md).

[3] “Pubvendors.json v1.0: Transparency & Consent Framework”, IAB TechLab, May 2018.

[4] Ibid.

[5] “Authorized Buyers Real-Time Bidding Proto”, Google, 23 January 2018 (URL: https://developers.google.com/authorized-buyers/rtb/realtime-bidding-guide).

[6] “AdCOM Specification v1.0, Beta Draft”, IAB TechLab, 24 July 2018 (URL: https://github.com/InteractiveAdvertisingBureau/AdCOM/blob/master/AdCOM%20BETA%201.0.md).

[Read more]