Digital Privacy

Saving time, risking lives: Government uses AI tools to inform asylum decisions

The automation of the hostile environment continues with the Home Office rolling out the use of AI in the asylum decision-making process.

In April 2025, the Home Office quietly published evaluations of two AI tools that are shaping how asylum decisions are made:

  • The Asylum Case Summarisation (ACS) tool, which has been piloted with asylum decision-makers, produces summaries of asylum interview transcripts.

  • The Asylum Policy Search (APS) tool is already being used by asylum decision-makers within the Home Office. This tool summarises Country Policy and Information Notes (CPINs), guidance documents, and Country of Origin Information (COI) reports.

Automating the hostile environment

Read our policy paper on AI in the asylum decision-making process.

The Home Office claims that the ACS tool saves 23 minutes per case, and the APS tool saves 37 minutes. Such time savings are clearly attractive to a government that has pledged to “clear the asylum backlog”, which at the end of September 2025 stood at over 62,000 applications waiting for an initial decision.
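As a rough back-of-the-envelope illustration of why those figures appeal to a government focused on the backlog, the sketch below simply multiplies the claimed per-case savings across the roughly 62,000 pending applications. The assumption that every claimed minute is genuinely saved on every case is ours, for illustration only; the evaluations do not establish it.

```python
# Back-of-the-envelope sketch: the scale of the claimed time savings if they
# held across the whole backlog (a big assumption -- the figures come from the
# Home Office's own pilot evaluations, not from an independent audit).

BACKLOG_CASES = 62_000           # applications awaiting an initial decision (Sept 2025)
ACS_MINUTES_SAVED_PER_CASE = 23  # Home Office claim for the summarisation tool
APS_MINUTES_SAVED_PER_CASE = 37  # Home Office claim for the policy search tool

total_minutes = BACKLOG_CASES * (ACS_MINUTES_SAVED_PER_CASE + APS_MINUTES_SAVED_PER_CASE)
total_hours = total_minutes / 60
working_days = total_hours / 7.5  # assuming a 7.5-hour working day

print(f"Claimed saving: {total_hours:,.0f} hours, roughly {working_days:,.0f} caseworker-days")
# About 62,000 hours, or some 8,270 caseworker-days -- attractive on paper, but
# it says nothing about whether the resulting decisions are accurate.
```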

The people behind these statistics, waiting for their fate to be decided, are also likely to welcome faster decision making. But saving time shouldn’t be the main driver behind decisions that could result in someone being sent back to a country where they face persecution, violence or even death.

These tools are shaping the information upon which life-changing decisions are being made. Even if we take the Home Office’s time-saving assessments at face value, we need to be reassured that these tools are used in ways that are transparent, accountable, equal and impartial. But the Home Office is not even telling people applying for asylum when AI is being used in their cases.

If the Home Office really wants to save money, it should focus on processing asylum claims accurately, to avoid costly appeals and burdens on the tribunal and court system. By focusing on the speed, rather than the accuracy, of asylum claim processing, it forces the courts to correct mistakes and delays the recognition of rightful claims. Using AI to perpetuate inaccurate and poor processes is the opposite of fair and efficient.

In summarising interviews, the ACS tool is deciding what is important, what should be included, what should be omitted and what should be simplified. The risk is that overlooking a crucial part of someone’s interview can dramatically affect the outcome of their case. LLMs may be prone to omission because, at their core, they base summaries on what appears linguistically likely to be relevant. Unless a case reviewer goes back and checks the original interview, it is the tool that has decided which information will be used to inform a decision.

LLMs like ChatGPT respond to the user’s intent as expressed in a prompt, and they tend to be most effective when those prompts are specific. The same tool can produce radically different outputs from subtle changes in its instructions. The Home Office has refused to give any information about the prompts being used to summarise cases, or about what steps are being taken to ensure that they don’t enable exclusion, bias or discrimination.
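To illustrate how much hangs on prompt wording, here is a hypothetical sketch of our own; it does not reflect the Home Office’s actual prompts, model or pipeline. It sends the same interview transcript to an off-the-shelf LLM API under two subtly different instructions.

```python
# Hypothetical sketch of prompt sensitivity using the OpenAI Python SDK.
# Neither the model nor the prompts reflect what the Home Office actually uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = open("interview_transcript.txt").read()  # placeholder input

PROMPT_A = "Summarise the key points of this asylum interview."
PROMPT_B = ("Summarise this asylum interview, focusing on inconsistencies "
            "in the applicant's account.")

def summarise(instruction: str) -> str:
    """Return a summary of the transcript under the given instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # chosen arbitrarily for the sketch
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # even with deterministic settings, the prompts diverge
    )
    return response.choices[0].message.content

# The two summaries foreground different material: PROMPT_B steers the model
# towards doubt, PROMPT_A towards neutral coverage -- same transcript,
# radically different framing of the same person's case.
print(summarise(PROMPT_A))
print(summarise(PROMPT_B))
```

Neither output is “wrong” in the model’s own terms, which is exactly why we need to see the prompts that are actually being used.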

The Home Office’s own evaluation reveals that 9% of AI-generated summaries were so flawed they had to be removed from the pilot, but it has not provided information about how these errors were distributed. Were nationality, language or the complexity of cases a factor?

Indeed, 23% of caseworkers lacked full confidence in the tools’ outputs, which means they may well spend additional time checking the information that the tools provide.

It’s difficult to believe that good faith lies behind the implementation of such tools in the asylum system, given the ever-increasing hostile narratives from the Home Office against migrants and people seeking safety in the UK. The Home Office has a track record of using controversial technologies to target migrants, and this appears to be another example.

LLMs are not neutral and can be easily tweaked to produce preferred results. The use of such tools with vulnerable people in critical situations, without safeguarding, transparency and accountability, could lead to fatal consequences. Governments around the world have tested new technologies on vulnerable populations such as people seeking safety, migrants or benefit claimants, who are less able to stand up for their rights. Once the use of these tools is normalised, they will inevitably be rolled out to other public services. It’s also worth noting that last year the government passed the Data Use and Access (DUA) Act, which removed our right not to be subject to automated decision-making (ADM) in most circumstances.

Fairness in asylum decisions doesn’t come from faster processing. It comes from adequate staffing, proper training, robust safeguards, and genuine respect for human rights.

The resources being spent on these AI systems could instead fund more caseworkers, better training, improved legal representation, and stronger oversight. The backlog exists not because summaries take too long to write, but because the system is fundamentally under-resourced and hostile by design.

But our government has a history of testing new technologies and eroding the rights of vulnerable populations who are less able to object.

At the very least, we need more transparency about when and how AI is being used. We need to know what safeguards are in place to prevent errors and who is accountable when AI gets it wrong.

Without these safeguards, vulnerable people will be harmed and these tools will be quietly rolled out across government departments.

Migrant Digital Justice