ORG response to the DCMS policy paper “Establishing a pro-innovation approach to regulating AI”

ORG response to the Government’s approach to regulating Artificial Intelligence

0. Executive Summary

Open Rights Group (ORG) is a UK-based digital campaigning organisation working to protect fundamental rights to privacy and free speech online. With over 20,000 active supporters, we are a grassroots organisation with local groups across the UK.

ORG is grateful for the proactiveness the Department for Digital, Culture, Media and Sport have shown in seeking our views on their AI Policy Paper “Establishing a pro-innovation approach to regulating AI”. We share our views by answering questions 1, 2 and 3 of the Policy Paper. In particular:

In answer to Question 1, we compare this proposal with the commitments the Government made in their response to Data: a new direction. In particular, we argue that the Government are taking an inappropriate course of action, and missing an opportunity, by introducing restrictions on the safeguards that Article 22 of the UK GDPR provides against solely automated decision-making. At the same time, we regret that the Government are not planning, with the upcoming AI White Paper, to introduce any legally binding or statutory framework to raise protections for those automated decisions that, already today, fall outside the scope of Article 22.

In answer to Question 2, we warn the Government about the outcome that a sector-specific approach to privacy regulation had in the United States, and we recommend defining contexts and writing norms with sufficient breadth and flexibility to account for technological and societal developments. Further, we challenge the overarching assumption of the AI Policy Paper, namely that regulation stifles innovation and thus needs to be targeted and non-binding. Instead, we explain why robust regulation supports innovation, and why this approach is consistent with the very concept of “growth” the Government adopted in the Deregulation Act 2015.

In answer to Question 3, we point out that the definition the Government give to the principle of fairness is rather weak compared with the OECD AI principles, upon which the cross-sectoral principles are allegedly built, in particular as it omits references to “the rule of law, human rights and democratic values”. We also address two misconceptions the Government have presented over the explainability and autonomy of AI systems.

Finally, in section 4 we address, beyond the scope of the consultation questions, the stated aim of the AI Policy Paper, namely to create a world-leading regulatory environment that promotes innovation. In particular, we point out how innovation becomes a rather hollow term unless it is weighed against the benefits or adverse impacts the use of new technologies may have. Further, we argue that leadership is shown when regulation provides an answer to the challenges we face as a society. As such, the vision this AI Policy Paper projects, that of a regulatory environment that encourages risk-takers to move fast and break things, leaving the rest of us to deal with the fallout of someone else’s recklessness and broken things, is far less appealing, and thus unlikely to become “world-leading”.

1. What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?

1.1 The approach that the Government have taken since their response to Data: a new direction appears incoherent.1 In that response, they rightly noted the importance of the protections afforded by Article 22 of the UK GDPR, and the risk to the UK’s credibility that scrapping such protections would have entailed. They also promised to address, in the upcoming AI White Paper, the many requests, including from the Information Commissioner’s Office,2 to extend the protections afforded by Article 22 to non-solely automated decisions.

1.2 Regrettably, the Government diverged from their response and restricted the scope of Article 22 in the Data Protection and Digital Information Bill, which would make it applicable only to solely automated decision-making (ADM) that involves the use of special category data. At the same time, the Government are announcing that the White Paper on AI is meant to establish a non-binding set of principles. By doing so, the Government are restricting existing statutory rights, and missing the opportunity to introduce a regime complementary to the protection of personal data, with the aim of providing safeguards against ADM that may not be solely automated, or may not use personal data, but may still have an impact on the rights and welfare of individuals.

2. Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?

Concerning the context-driven approach

2.1 The Government are right in capturing the difficulty of regulating general-purpose technologies like Artificial Intelligence. However, and as a matter of comparison, a purely sector-specific approach to digital regulation has already proven ineffective. For instance, sector-specific privacy legislation in the United States produced a patchwork of incoherent frameworks that rapidly became obsolete and failed to provide effective protection for personal data. The Government should be mindful of this lesson, and careful to designate “contexts” and write rules with sufficient breadth to ensure they are adaptable and future-proof.

Concerning the pro-growth approach

2.2 Further, the Government are showing a worrying tendency to perpetuate, here as in other policy initiatives, the misconception that loose or voluntary regulatory frameworks support innovation. On the contrary, robust ethical and legal boundaries liberate and encourage innovation: organisations benefit from clear guidelines and rules they can rely on to navigate the complex questions that innovation raises.

2.3 Indeed, a lack of regulation only risks “unleashing” innovation in ways the public does not want, trust or agree with, leading to backlash and opposition: individuals, consumers and civil society groups will always challenge the legitimacy and legality of practices that produce an adverse impact on them. Innovation legitimately enters our lives and becomes accepted only when we are confident that it does not threaten our rights and lifestyle. This is why a pro-growth regulatory regime ought to be robust and rights-based: the protection of rights promotes public trust and, in turn, encourages everyone to embrace rather than reject innovation.

2.4 Further, the Government do not seem to give due weight to other regions of the world that are introducing legally binding frameworks, or considering bans on certain uses of Artificial Intelligence. This divergence may lead businesses in the UK to adopt practices or invest in systems that fit domestic (weak) legal and ethical standards, only to find that these are restricted or outright illegal under the regulatory frameworks of foreign markets and jurisdictions. Businesses in the UK may also face the reputational damage of being associated with a country where bad-faith actors seek shelter while working on solutions that defy everyone else’s laws and social norms.

2.5 Finally, we reiterate the points that Open Rights Group made in our response to the Information Commissioner’s Office Regulatory Action Policy.3 “Business-friendly” and “pro-growth” cannot be synonyms for a regulatory approach that leaves offenders unpunished: such an approach undermines trust and exposes law-abiding businesses to unfair competition. In the words of the Government’s Statutory Guidance to the Deregulation Act 2015,4 concerning the “growth duty” established for regulatory authorities:

“1.4 Non-compliant activity or behaviour undermines protections to the detriment of consumers, employees and the environment and needs to be appropriately dealt with by regulators. It also harms the interests of legitimate businesses that are working to comply with regulatory requirements, disrupting competition and acting as a disincentive to invest in compliance.”

It is worth emphasising how the Government contradict the assumption above by characterising loose, unassertive or non-binding rules as “pro-growth” and business-friendly. Weak regulations are merely rules against which offenders cannot be held to account, to the detriment of law-abiding businesses, and the Government have yet to present their case for favouring the former over the latter.

3. Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?

Concerning the OECD principles

3.1 The Government are proposing cross-sectoral principles that “build on the OECD Principles on Artificial Intelligence and demonstrate the UK’s commitment to them”. These principles are not, in the words of the Policy Paper, “intended to create an extensive new framework of rights for individuals”. However, Regulators are expected to implement these principles in practice. As detailed below, taken together these intentions appear contradictory.

3.2 The Government are proposing the principle that regulators should “Embed considerations of fairness into AI”, with the stated aim of ensuring that “high-impact outcomes — and the data points used to reach them — should be justifiable and not arbitrary”. However, this is a rather restrictive definition of fairness, even by the standards of OECD principle 1.2 on “Human-centred values and fairness”, according to which “AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle”, including “freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights”.5

3.3 It is also worth noting that the OECD Recommendation of the Council on Artificial Intelligence is clear in stating that “the following principles are complementary and should be considered as a whole”. By cherry-picking principles and definitions to their liking, and thus diverging from these recommendations, the Government are hardly demonstrating commitment to implementing the OECD principles.

3.4 Further, the Government propose not to establish a statutory framework of rights, yet they expect Regulators to implement these principles. This approach seems unworkable: Regulators need statutory footing to either justify their regulatory activity or enforce the law. Without a legal framework that Regulators can build upon, the avenue for implementing these principles remains unclear.

Other concerns regarding the cross-sectoral principles

3.5 On a separate note, it is worth addressing two misconceptions that appear in the description of other cross-sectoral principles. Although the Government reach reasonable conclusions, wrong or controversial premises ought to be addressed upfront to prevent them from becoming an issue in the future.

3.6 Concerning the principle “Make sure that AI is appropriately transparent and explainable”, the Government argue that “Presently, the logic and decision making in AI systems cannot always be meaningfully explained in an intelligible way”. However, in their previous section on “Defining the core characteristics of AI”, the Government rightly noted that “the logic or intent behind the output of systems can often be extremely hard to explain”. Indeed, computers are deterministic, and their behaviour can always be explained: the complexity of Artificial Intelligence only means it may be hard, or economically inconvenient, to do so. This distinction is important when deciding whether we should trust a system that is “too expensive” to be meaningfully explained or monitored: the answer should likely depend on whether a bad outcome affects only the actor who deploys the AI system or third parties as well, and on who ultimately bears the consequences of that outcome.

3.7 Finally, concerning the principle “Define legal persons’ responsibility for AI governance”, the Government argue that “AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way which has not been explicitly programmed or even foreseen”. Taken as such, this premise is highly questionable: AI systems are trained on the data they are fed and the assumptions that humans give them. Further, humans will usually train these systems by giving feedback (such as rewards or punishments), which embeds the value judgements of the individuals who train the AI. In turn, automated systems rarely misbehave because of their “autonomy”; rather, they reflect the negligence or intent of the humans who trained and deployed them.

4. Concerning the aim of the AI Policy Paper, and the concept of “world-leading”

4.1 The AI Policy Paper is expressly meant to retain a “world-leading regulatory regime” and support the UK’s efforts to remain “a global AI superpower”. These aims call into question the meaning we give to words such as innovation and leadership.

4.2 Innovation, without any other connotation, merely means new things, with no indication of whether these are desirable, solve existing problems, or benefit society as a whole. By failing to take this distinction into due account, the Government will keep failing to identify the challenges their regulatory initiatives need to address. In turn, regulation that does not account for present needs and hard-learnt lessons will fail to deliver an inspiring or leading vision.

4.3 For instance, Open Rights Group’s work on the UK data protection reform has focussed on retaining the protections the UK inherited from the General Data Protection Regulation for workers, women, migrants, minorities, members of LGBT communities and everyone else.6 ORG did not seek to retain the standards of the GDPR because it is a perfect Regulation, or because of the Brussels effect. Rather, its legal standards are appealing and influential because of the answers they provide to the challenges that digital and data-driven technologies pose to our rights and lifestyle, and the vision they project for a different state of affairs.

4.4 By reference to ORG’s response to the Plan for Digital Regulation,7 “there is decisive and growing evidence that technology is either being weaponised against vulnerable individuals, or is otherwise resulting in negligent, unintended, adverse consequences for an increasing number of people. Without the pretence of drawing an exhaustive picture, personal data is constantly being exploited to discriminate against individuals upon their weaknesses, anxieties,8 opinions, or protected characteristics such as identity, race and gender.9 Digital platforms overwhelmingly rely on business models whose financial sustainability depends on polarisation and misinformation,10 thus harming social cohesion and the democratic discourse. Further, technology is leading to pervasive surveillance at work,11 in schools,12 at home13 and in public places,14 as well as against journalists and activists.15” Unsurprisingly, the Ada Lovelace Institute found that “There is consistent evidence of public support for more and better regulation” and that the public expects innovation to “be ethical, responsible and focused on public benefit”.16

4.5 These considerations remain relevant in the context of Artificial Intelligence, where automation can be used to great societal benefit as well as to replicate discrimination at scale. In turn, a rights-based framework that promotes the use of new technologies such as AI while protecting our fundamental rights and freedoms does provide a vision worth pursuing. Instead, the vision the UK Government are presenting in this AI Policy Paper is that of a regulatory environment that encourages risk-takers to move fast and break things, leaving the rest of us to deal with the fallout of someone else’s recklessness and broken things. It is rather obvious why such a vision is far less appealing, and why regulations based on this premise will hardly become “world-leading”.

1 Data: a new direction – government response to consultation. From: https://www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation#ch1

2 ICO Response to DCMS consultation “Data: a new direction”. From: https://ico.org.uk/media/about-the-ico/consultation-responses/4018588/dcms-consultation-response-20211006.pdf

3 ORG response to the ICO Regulatory Action Policy consultation. From: https://www.openrightsgroup.org/publications/org-response-to-the-ico-regulatory-action-policy-consultation/

4 Growth Duty Statutory Guidance – under Section 110(6) of the Deregulation Act 2015. From: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/603743/growth-duty-statutory-guidance.pdf

5 OECD, Recommendation of the Council on Artificial Intelligence. From: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

6 How GDPR stops discrimination and protects equalities. From: https://www.openrightsgroup.org/how-gdpr-stops-discrimination-and-protects-equalities/

7 Open Rights Group submission to the Department of Digital, Culture, Media and Sport — Plan for Digital Regulation. From: https://www.openrightsgroup.org/publications/open-rights-group-submission-to-the-department-of-digital-culture-media-and-sport-plan-for-digital-regulation/

8 Panoptykon Foundation, Algorithms of trauma: new case study shows that Facebook doesn’t give users real control over disturbing surveillance ads. From: https://en.panoptykon.org/algorithms-of-trauma

9 DataEthics, The Inherent Discrimination of Microtargeting. From: https://dataethics.eu/the-inherent-discrimination-of-microtargeting/

10 Privacy International, The UN Report on Disinformation: a role for privacy. From: https://www.privacyinternational.org/fr/node/4515

11 American Civil Liberties Union, Amazon Drivers Placed Under Robot Surveillance Microscope. From: https://www.aclu.org/news/privacy-technology/amazon-drivers-placed-under-robot-surveillance-microscope/

12 Open Knowledge Foundation, Open Knowledge Justice Programme challenges the use of algorithmic proctoring apps. From: https://blog.okfn.org/2021/02/26/open-knowledge-justice-programme-challenges-the-use-of-algorithmic-proctoring-apps/

13 DataEthics, Being Watched While Working From Home. From: https://dataethics.eu/being-watched-while-working-from-home/

14 Liberty, Five Reasons Why Facial Recognition Must Be Banned. From: https://www.libertyhumanrights.org.uk/issue/five-reasons-why-facial-recognition-must-be-banned/

15 The Guardian, Huge data leak shatters the lie that the innocent need not fear surveillance. From: https://www.theguardian.com/news/2021/jul/18/huge-data-leak-shatters-lie-innocent-need-not-fear-surveillance

16 Who cares what the public think? UK public attitudes to regulating data and data-driven technologies. From: https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/