Legal Opinion: The use of Artificial Intelligence tools in asylum cases

Open Rights Group (ORG) has commissioned this legal Opinion to explain how civil society actors can evaluate the use of Artificial Intelligence (AI) tools and systems by the Government, whether through dialogue or litigation. It is particularly concerned to prevent the deployment of AI in ways that risk violating the rights of migrants, refugees and asylum-seekers. The Government’s use of AI in the process of determining refugee status has become a focus of concern.

Part I: The need to regulate AI systems that affect individual rights

Changes in technology have frequently caused seismic shifts in society. AI is no different. However, there are unique features of AI systems which mean that they pose greater and different risks in comparison to other technological advances. The way AI systems work is often opaque (the ‘black box’ problem); they can exacerbate power imbalances; they often process vast amounts of personal data, giving rise to privacy risks and concerns about data security; they can ‘hallucinate’ (i.e. completely fabricate information); and sometimes they provide inaccurate, abusive or discriminatory outputs. It is obvious that where the rights of individuals are concerned such risks must be taken very seriously, and that the more vulnerable the individual and the more impactful the decision, the more care must be taken.

Part II: The AI Ethical Principles

There has been an intense global discourse about how to regulate AI in a way that harnesses its promised benefits, whilst also managing its inherent risks. From this has emerged a set of common ethical principles about the use of AI contained in international instruments made by the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Organisation for Economic Co-operation and Development (OECD), and the Council of Europe.

The United Kingdom is a member of UNESCO, the OECD and the Council of Europe, and is a signatory to the Council of Europe’s Framework Convention on Artificial Intelligence. Whilst each of these instruments uses slightly different language, the common principles that run through them relate to:

  • ‘Democracy’
  • ‘Fairness, equality and non-discrimination’
  • ‘Human dignity and autonomy’
  • ‘Respect for human rights’
  • ‘Privacy and data governance’
  • ‘Sustainability’
  • ‘Robustness and digital security’
  • ‘Safety and reliability’
  • ‘Transparency and explainability’
  • ‘Accountability and responsibility’
  • ‘Contestability, oversight and redress’

Each of these addresses a specific and well-known problem inherent in AI systems. We refer to them in this Opinion as ‘the AI Ethical Principles’.

While the UK Government has not directly transposed these international instruments into domestic law, it has stated its commitment to them in the AI Playbook for the UK Government (‘the UK AI Playbook’), which distils the principles to be followed by the Government when it deploys AI.

Annex A contains a table which maps the relationship between the UK AI Playbook and international instruments concerning these AI Ethical Principles.

In tandem with its adoption of the AI Ethical Principles, the UK AI Playbook contains a detailed step-by-step procedure for decisions as to the deployment, use, risk assurance and review of AI systems. These steps are also called ‘Principles’ although they more closely resemble procedural steps. The following Principles (or procedural steps) in the UK AI Playbook are particularly relevant to this Opinion and the Home Office’s use of AI:

  • Principle 1: You know what AI is and what its limitations are;
  • Principle 2: You use AI lawfully, ethically and responsibly;
  • Principle 4: You have meaningful human control at the right stage;
  • Principle 6: You use the right tools for the job;
  • Principle 7: You are open and collaborative; and
  • Principle 10: (a) You use these principles alongside your organisation’s policies and (b) have the right assurance in place.
[Principle 10 is not divided into (a) and (b) within the UK AI Playbook. We consider this a mistake: Principle 10 includes two distinct notions, and AI assurance is hugely significant and a key theme of this Opinion.]

The procedural steps contained in the Playbook are detailed and clearly expressed. They contain important prescriptive statements about what Government must do, as summarised in Annex B, and they link directly with the AI Ethical Principles. In short, the UK AI Playbook sets out the Government’s own standards which departments and public sector organisations are expected to follow in relation to their use of AI, and civil society can use the Playbook to engage with the Government. As discussed below, the Playbook will also be relevant to the interpretation and application of domestic legislation and common law principles.

Part III: Domestic legislation, legal principles and their relationship to the UK AI Playbook

In parallel with the UK AI Playbook, there is domestic legislation, and there are legal principles, which, whilst not specifically mentioning AI, must be considered whenever the Government uses AI because of their field of application. These include the Equality Act 2010, the Human Rights Act 1998, data protection rules and general public law principles. We refer to these collectively as the ‘domestic legal framework’. The UK AI Playbook will be relevant when considering this legal framework in so far as the Government is deploying AI.

The UK AI Playbook also recognises that this wider framework of laws and principles must be applied. In Principle 2 it instructs those acting on behalf of the Government to ‘…use AI lawfully, ethically and responsibly’.

There is thus a rich tapestry of controls that the law places on the Government’s use of AI and that are available for civil society when engaging with the Government, whether through dialogue or litigation. If deployed effectively, these existing legal protections, along with the AI Ethical Principles, as contained in the UK AI Playbook, can regulate the Government’s AI use to protect the welfare and rights of people who are subject to decisions shaped or supported by technology. We analyse how this might happen in a case study in Part IV.

Part IV: Case Study: The Home Office’s use of AI during the refugee status determination process

The law requires those taking decisions as to whether an asylum-seeker has refugee status (‘Decision-Makers’) to look at all material facts to determine whether an asylum-seeker has a ‘well-founded’ fear of persecution in their own country. This is a subjective test with objective elements. Decision-Makers must examine (1) the information provided by the applicant during interviews and (2) information concerning the situation in countries from which asylum-seekers have fled.

The two tools discussed in this Opinion relate to these two elements: the ACS is a generative AI tool that summarises information provided by applicants for Decision-Makers; the APS is a generative AI tool that searches country information for Decision-Makers. The important point is that both AI tools create new text for the Decision-Maker to consider rather than simply indexing or organising the existing source information (‘the Source Material’). In this way, they funnel, filter and regurgitate important facts which are material to the Decision-Maker’s legal obligations when determining refugee status. They may ‘filter out’ crucial information. The output of the ACS and APS is not shared with the asylum-seeker. In fact, we understand that asylum-seekers are not even informed that AI will be used in their applications.

We also note that, from the available information, it appears there is a significant risk of inaccuracy in the output generated by the ACS and APS: during the pilot, the ACS produced inaccurate summaries 9% of the time, and 5% of APS users were ‘not confident in tool accuracy’.

This is concerning, but additionally we note that there is no detailed information available as to how the level of accuracy was measured or evaluated in relation to the ACS; some form of objective quantitative measurement must have taken place, since the 9% metric has been published. Whether this inaccuracy arose from the Large Language Model (LLM) ‘hallucinating’, or whether it was simply an error in summarisation, is entirely unknown. The assessment of accuracy for the APS appears to have been limited to an assessment of user confidence rather than an objective measure of inaccuracy.

Our analysis of these two tools in Part IV, based on the publicly available information, leads us to conclude that either (1) in significant respects the UK AI Playbook is not being followed, or (2) there are serious concerns that this is so.

Our specific observations on compliance with the AI Playbook’s Principles, which we see as procedural steps, are summarised below; the paragraph references in brackets are to the full Opinion.

Principles 1, 4 & 10b: ‘You know what AI is and what its limitations are’, ‘You have meaningful human control at the right stage’ & ‘You have the right assurance in place’

  • There has been a failure to assess quantitatively the extent to which the APS produces inaccurate outputs. (Paras 98-99)
  • Whilst there has been some form of quantitative assessment of the accuracy of the ACS’s outputs, there is insufficient information about what accuracy means in this context, the extent of inaccuracy and what benchmarking the Government is using (i.e. what ‘good’ looks like). (Paras 100-104)
  • There is no ability to cross-reference the summarised output from the ACS (and, we assume, the APS) to the original Source Material, making it difficult for Decision-Makers to assess and verify accuracy. This is an important missing procedural safeguard. (Para 105)
  • There is no system in place to allow asylum-seekers to check the output of the ACS or APS for accuracy, since they do not know it has been used and have no opportunity to see the text generated. This is an important missing procedural safeguard. (Para 106)
  • There is a risk that the summaries produced by the ACS and APS will become part of the decision-making process, since they filter and funnel information which could be relied on by Decision-Makers at the expense of the Source Material. There is no clear guidance, that we can see, which tells Decision-Makers that they must fully consider all original Source Material. This is an important missing procedural safeguard. It also means that we cannot be satisfied that there are procedures in place to ensure meaningful human control. (Paras 107 & 111-121)
  • We are not satisfied that there are adequate technical measures and auditing processes in place to ensure that Decision-Makers consider all original Source Material. This is an important missing procedural safeguard. It also means that we cannot be satisfied that there are procedures in place to ensure meaningful human control. (Paras 107-111)
  • We are not satisfied that there has been an adequate assessment of whether the quality of decisions is negatively affected by the ACS or APS. Given that the Decision-Makers’ task is to assess relevant/material facts, our view is that quality – in this context – should measure the extent to which the ACS and APS disregarded or correctly identified and summarised that information. In other words, the Government has not properly approached AI Assurance. (Para 112)
  • It is unacceptable that the ACS and APS have been rolled out despite only limited bias testing having been undertaken. In other words, the Government has not properly approached AI Assurance. (Para 113)

Principle 6: ‘You use the right tools for the job’

  • Despite the risks associated with generative AI, there is no publicly available information that the Government considered whether an analogue or non-AI solution could achieve the aim of greater efficiency. (Paras 114-120)

Principle 7: ‘You are open and collaborative’

  • To ORG’s knowledge, the Home Office has not engaged with civil society in relation to the ACS and APS. (Para 122)
  • There is no published Data Protection Impact Assessment or Equality Impact Assessment. The ‘prompt’ used in the ACS tool is unknown (for no obvious good reason). Asylum-seekers are not told about the use of the ACS tool. Technical information such as the training data source has yet to be published. Neither the ACS nor the APS is listed in the repository for the Algorithmic Transparency Recording Standard. (Paras 123-132)
  • There is no information about the AI Assurance that has taken place since the pilot scheme. (Para 133)

As to the existing domestic legal framework, which is incorporated into Principle 2, we conclude that:

Article 3

  • Asylum-seekers will often be seeking refuge in the UK as a means of escaping torture or inhuman or degrading treatment, such that their Article 3 rights are engaged. If the Home Office adopts a practice of using generative AI tools which create a risk that Decision-Makers will consider inaccurate information and/or overlook relevant/material facts when determining people’s asylum claims, and those risks are not mitigated by appropriate safeguards, such an assessment would be unlikely to have the necessary rigour to comply with the UK’s procedural obligations under Article 3. (Paras 136-140)

Public law

  • The Home Office is under a heightened Tameside duty of enquiry as regards the accuracy and functionality of the ACS and APS. The Home Office will be at significant risk of breaching its Tameside duty if it fails to undertake adequate assessments of, and to monitor, the accuracy of the APS and the ACS; the extent to which they impact the quality of asylum decisions; the risk of bias and discrimination posed by the AI tools; and the alternatives to the AI tools for achieving the Home Office’s aims. (Paras 144-146)
  • The use of the ACS and the APS gives rise to a significant risk of process irrationality. If a Decision-Maker relies upon the ACS and APS summaries at the expense of a full examination of the Source Material, and those summaries have filtered out relevant information regarding the country of origin or the asylum-seeker’s interview, there will be a significant risk that the Decision-Maker will have failed to take relevant considerations and evidence into account when determining the asylum claim in question. (Paras 147-149)
  • Given the apparent inaccuracies in the summaries generated by the APS and ACS, there is a significant risk that decisions which rely on those summaries will be based upon, and vitiated by, material errors of fact. (Para 150)
  • As a matter of procedural fairness, we consider that asylum-seekers have a right to be informed that AI is being used in the determination of their claims and how it is being used, and to be provided with the output of the AI-generated summaries. That asylum-seekers appear not to have been so informed is likely to be unlawful. (Para 151)

Data protection

  • There will be a breach of data protection law if the ACS produces inaccurate summaries of personal data, if there is no explanation to asylum-seekers that the AI tool will be used, and/or if they are denied access to the output from the ACS to correct any errors. (Para 153)

Public Sector Equality Duty (PSED)

  • The Government has not published an Equality Impact Assessment, so we cannot be satisfied that the PSED has been met and that there are no broader equality issues. (Paras 154-156)

Regulators

  • The Independent Chief Inspector of Borders and Immigration (ICIBI) should examine the way in which the Home Office is using AI. The Government has made it plain that it expects regulators to implement AI ethical principles in their work. We are not aware of any scrutiny of the ACS or APS by regulators. (Paras 157-159)

The analysis in Part IV illustrates both why civil society has a role in holding the Government to account and how it can play that role.

Read the full legal opinion by clicking here.
