UK Government failing on AI regulation

The UK Government have published their vision for “Establishing a pro-innovation approach to regulating AI”. The Policy Paper outlines a sector-specific approach to regulating Artificial Intelligence, with the intent of leaving this technology mostly unregulated in order to “enable a targeted and nuanced response to risk”. It also states that there are no plans to introduce a new statutory framework of rights; instead, UK regulatory authorities will be left with the task of implementing a set of non-binding cross-sectoral principles if they decide to regulate a given use of AI.

According to the Government, this approach is meant to promote innovation, and it comes right after a former DCMS Minister in charge of the UK digital strategy announced that “no business with under 500 staff will be subject to business regulation”. Meanwhile, the new Minister for Digital blamed GDPR bureaucracy for a “shortage of electricians and plumbers” during the Conservative party conference.

In light of the above, our response to the AI consultation was a useful opportunity to take stock of the UK Government’s approach to digital regulation, and to articulate how and why they are failing to deliver.

Automated decision-making and the UK Government’s broken promises

In their response to Data: a new direction, the Government promised that they would not pursue the proposal to scrap the right to a human review of solely-automated decision-making (ADM) under Article 22 of the UK GDPR. This came as a result of the many requests to do the opposite of what the Government suggested, and extend these safeguards to non-solely ADM systems instead. The Government also recognised the potential damage to “the reputation of the United Kingdom as a trustworthy jurisdiction for carrying out automated decision-making”.

However, in the Data Protection and Digital Information Bill the Government are once again proposing to scrap Article 22 for ADM systems that are not based on sensitive data: this would legitimise solely-automated decisions even when they are used against the will of the individuals who are subject to them. At the same time, the AI Policy Paper does not extend safeguards to non-solely ADM systems, but would only provide non-binding principles to be implemented, if ever, at the discretion of UK regulators.

In a nutshell, the Government are replacing legal safeguards with empty words, and bravely passing responsibility for the difficult decisions on to UK regulatory authorities.

Deregulation does not support growth and innovation

The other worrying trend that emerges from this Policy Paper is this Government’s obsession with meaningless, non-binding and non-enforceable regulation as a tool to grow the economy and unleash innovation. On the contrary, robust ethical and legal boundaries liberate and encourage innovation by providing clear guidelines that organisations can rely on to navigate the complex questions innovation raises. A robust, rights-based framework also promotes public trust and, in turn, encourages everyone to embrace rather than reject change.

However, the Government are omitting rights from their regulatory approach: in implementing the OECD AI Principles, the AI Policy Paper characterises “fairness” as aimed at ensuring that “high-impact outcomes […] should be justifiable and not arbitrary”. The shortcomings of this description become clear when compared with the OECD principle of “human-centred values and fairness”, which includes respect for “the rule of law, human rights and democratic values” such as “freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights”.

Finally, weak or non-enforceable regulation is neither pro-growth nor business-friendly. Indeed, even the UK Government’s Statutory Guidance on the Growth Duty notes that tolerating non-compliant behaviour undermines “protections to the detriment of consumers, employees and the environment”, and harms “the interests of legitimate businesses”. The Government are yet to present their case for disregarding human rights and democratic values, or for favouring law-breakers over the law-abiding.

A future we don’t want to live in

As always, the Government are touting their plans as the ultimate effort to introduce “world-leading” regulation. This remarkable display of wishful thinking does a poor job of compensating for the astounding lack of leadership this Government are showing in digital regulation.

As Open Rights Group already noted in our response to the UK Plan for Digital Regulation, technology is routinely being used to discriminate against individuals, prey on their addictions, undermine democratic discourse and expose us to indiscriminate and pervasive surveillance. Unsurprisingly, the Ada Lovelace Institute found that “There is consistent evidence of public support for more and better regulation”, and that the public expects innovation to “be ethical, responsible and focused on public benefit”.

The UK Government are ignoring this state of affairs and the complex questions it raises. Instead, they keep doubling down on rules that encourage risk-takers to move fast and break things, leaving the rest of us to deal with the fallout and clutter of someone else’s recklessness. It is rather obvious why such a proposition does not appear quite as appealing, and why this Government’s vision will hardly become “world-leading”.

Open Rights Group will be fighting tooth and nail to protect our rights in the digital age, and to promote the use of new technology for our collective benefit. Please consider subscribing to our newsletter, or joining us in our fight.
