Home Office use of AI in asylum cases likely to be unlawful, legal opinion finds
The Home Office’s failure to inform asylum applicants that AI tools are being used in their assessments is likely to be unlawful, according to a legal opinion published today. It finds that the Home Office’s use of AI tools meets neither a number of legal obligations nor the standards set out in the AI Playbook for the UK Government.
The Opinion, written by Cloisters Chambers’ Robin Allen KC and Dee Masters, and Joshua Jackson of Doughty Street Chambers, opens the way to legal challenges by asylum applicants who believe that AI has been used in their assessments that determine whether or not they can be granted protection in the UK.
Legal opinion
The use of Artificial Intelligence tools by the government: A case study of the Home Office’s asylum practice.
Sara Alsherif, Migrants’ Rights Programme Manager, said:
“Determining whether someone can or cannot seek refuge in the UK is one of the most serious and life-changing decisions the government can make. There must be the utmost transparency, fairness and accuracy.
“But asylum applicants are not even being informed that opaque AI tools are being used in the assessment of their case, nor being given the opportunity to correct errors that might be made.
“We need an immediate ban on the use of these tools. There are many ways to clear the Home Office’s backlog of asylum cases and raise the standard of their decisions – these tools are not the answer.”
How the Home Office is using AI in immigration
The Government has admitted that the Home Office is using AI to summarise both asylum interview transcripts and internal policy documents.
The Asylum Case Summarisation (ACS) tool uses ChatGPT-4 to summarise asylum interview transcripts into concise summary documents.
The Asylum Policy Search (APS) tool summarises Country Policy and Information Notes (CPINs), guidance documents, and Country of Origin Information (COI) reports.
As the legal opinion notes, both of these tools “create new text for the Decision-Maker to consider rather than simply indexing or organising the existing source information”.
Lack of transparency
Asylum applicants are not being told that AI is being used in the assessment of their applications. The opinion finds that as a matter of procedural fairness, this is “likely to be unlawful”. It is also a potential breach of data protection law if the ACS tool produces inaccurate summaries of applicants’ personal data that they have no opportunity to correct.
The Home Office’s own evaluation of the ACS found 9% of AI-generated summaries were so flawed they had to be removed from the pilot. 5% of users of the APS were “not confident in tool accuracy”.
The opinion finds that: “Given the apparent inaccuracies in the summaries generated by the APS and ACS, there is a significant risk that decisions which are based on those summaries will be based upon and vitiated by material errors of fact.”
AI Playbook standards
The Opinion highlights the importance of the Government’s own detailed and careful guidelines in the AI Playbook for the UK Government. It notes the Home Office’s failure to follow the principles and procedural safeguards set out in the Playbook, in particular concerning transparency and the duty to be open and collaborative when deploying AI.
Impact on equality
The tools may not meet the Public Sector Equality Duty, which requires public authorities to consider how their policies affect people who are protected under the Equality Act. The Home Office has not published an Equality Impact Assessment for either tool, so it is not possible to know whether this duty has been met or whether there are broader equality issues.
Robin Allen KC and Dee Masters of Cloisters Chambers said:
“AI use requires great care if it is to be lawful. The public is entitled to expect the Home Office will scrupulously apply the AI Playbook for the UK Government, especially for such sensitive issues as asylum applications, and so are applicants. Our Opinion highlights the legal peril where this does not happen.”
“Technology can assist decision-making, but it cannot undermine the careful human judgment required in asylum cases. Where AI tools are used without adequate safeguards, there is a real risk that unlawful or unfair decisions may result.”
“If AI tools are influencing asylum decisions, there must be full transparency about how those systems operate and how their outputs are used. Without that transparency, it becomes extremely difficult to ensure that decisions affecting fundamental rights are lawful and fair.”
Ban AI tools in asylum decision making
Ask your MP to take a stand against the use of AI tools in asylum assessments.
Write to your MP