Techno-Permacrisis
14 Jan 2026 | Jim Killock
Musk’s latest venture, image generation in Grok, until Wednesday lacked any guardrails to prevent the production of child sexual abuse images and non-consensual sexual images. Within a week it had provoked an Ofcom investigation, further action from the EU Commission, and the promise of UK emergency legislation against apps that produce such images.
Failure to Regulate AI
X doing something harmful was a predictable outcome of concentrating platform power in the hands of a single wealthy individual like Musk, even if it is hard to explain his actions or what he believed he was achieving. Nevertheless, the harms of this concentration of power could be tempered by AI regulation that requires service providers to understand and mitigate risks, in a similar manner to the European AI Act, which is unfortunately not yet fully in force. Labour decided not to go ahead with an AI Bill under pressure from US AI and cloud providers, who sold them a story that light regulation would drive UK economic growth and promised billions in investment; a decision they should now revisit.
Rather than recognising Grok’s enabling of the production of illegal content as evidence of a wider need to risk-regulate and structurally constrain the AI industry, we are being asked to see it as something that needs specific, emergency legislation. Given the difficulty of applying guardrails to AI (for instance, the ease with which users can bypass them by adjusting the language of their prompts), next week’s legislation will be difficult to frame.
Emergency legislation might now be unavoidable given such serious cases involving child sexual abuse material, but proactive, risk-based regulation might have prevented this situation from arising in the first place. A larger, comprehensive bill would give time and space to think these risks through, across other issues as well, such as AI’s use in criminal justice, benefits, finance, and employment.
Without an AI Bill, AI’s most visible failures in the UK, both now and in the future, should be understood as the predictable outcome of concentrated platform power combined with a decision not to regulate AI risk at a systemic level. Meanwhile, Labour’s push to deliver AI into policing, justice, migration policy, health and benefits is running its own serious risks, without any public discussion of what this means.
A Dependency Problem
There is a wider, uncomfortable question which is not yet central to the political debate. Musk, X and Grok are not some accidental rogue element. They are the extreme edge of an alignment between far-right politics, billionaire-controlled platform infrastructure, and the US government.
Labour needs to develop a strategy that tackles the effects of that relationship in the UK; but beneath the immediate reaction to Grok, the truth is that the Government is building the power of the tech giants, rather than tackling it.
Palantir have advanced in the NHS, and are now so deeply embedded in the Ministry of Defence that major contracts proceed without competitive bids. Tony Blair’s global promotion of Larry Ellison’s Oracle raises the question of whether it might be involved in Digital ID. The Government has weakened data protection to favour private sector “innovation” over personal rights, and appointed Doug Gurr from Amazon to run the Competition and Markets Authority.
Last week’s Cybersecurity Bill debate raised stinging questions about the risks of Digital ID and the lack of any visible plan to ensure that the UK can even operate its own systems without the threat of off switches from foreign providers. It is unclear whether the UK has any meaningful plan to encourage domestic provision of IT systems that the government can control, despite the economic benefits this would bring to the UK tech sector.
Whether we look at social media or the government’s relationship with its own technology, the UK is dependent on a small number of mostly US companies that are tied to a destructive form of politics and discourse. In both cases, the core problem is lock-in: users and institutions are trapped, and regulation is forced to operate through the very platforms causing harm. Government should seek to disrupt this abusive relationship by building the alternatives, especially open source alternatives which are not controlled by a single company.
Until we start to tackle this relationship of dependence, democratic erosion and economic extraction, the UK will be doomed to a cycle of techno-permacrisis. Illegal content, online abuse or democratic manipulation will raise calls for regulation. Content regulation will bleed into surveillance and the rolling back of encryption technologies. Government will depend on the bad actors – Meta, X, TikTok – to implement content regulation, while their business models continue to rely on low-cost, high-response clickbait. Regulation on these lines will not deliver a healthy online environment.
How to Stop the Permacrisis
The missing middle path is structural: creating real exit options for users through competition measures, rather than solely attempting to police behaviour within monopolised systems.
- Interoperability
We set out the steps the UK should take in our recent report, Making Platforms Accountable and Creating Safety. In short: create a competitive market where people can move off any platform without losing who they follow and their followers (a sketch of how this already works on the Fediverse follows this list); move government announcements to Mastodon and Bluesky; and stop funding X and Meta with our time and ad money.
- Digital Sovereignty
We set out the first steps towards Digital Sovereignty in our briefing to Parliament. In short: start building up the use of open source software, so that government is not tied to a single company to maintain and modify the systems it depends on; and commission software services under the principle of ‘Public Money, Public Code’.
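To make the interoperability point concrete: follower portability is not hypothetical. On the Fediverse it already works through ActivityPub’s Move activity, which Mastodon uses for account migration. The minimal sketch below (in Python, with made-up actor URIs) shows roughly what the migration announcement looks like; receiving servers check that the new account lists the old one under alsoKnownAs before re-pointing each follow.

```python
import json

# Hypothetical actor URIs, for illustration only.
OLD_ACTOR = "https://old.example/users/alice"
NEW_ACTOR = "https://new.example/users/alice"

def build_move_activity(old_actor: str, new_actor: str) -> dict:
    """Sketch of an ActivityPub 'Move' activity announcing a migration.

    The old server delivers this to each follower's inbox. Receiving
    servers are expected to verify that the new actor lists the old one
    in its 'alsoKnownAs' property, then unfollow the old account and
    follow the new one, so the social graph moves with the person.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Move",
        "actor": old_actor,
        "object": old_actor,   # the account being moved
        "target": new_actor,   # where followers should now point
    }

if __name__ == "__main__":
    print(json.dumps(build_move_activity(OLD_ACTOR, NEW_ACTOR), indent=2))
```

The point is not the code but the design: because follows are expressed in an open protocol rather than locked inside one company’s database, exit is cheap, and platforms have to compete on quality rather than captivity.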
The answers exist. We need MPs with political courage to put them into practice. The alternative is techno-permacrisis.