Running out of road
How British tech policy hit a dead end
Last Tuesday, Science and Technology Secretary Liz Kendall mystified and horrified my Twitter timeline in equal measure when she tweeted a BBC One segment about Barnsley’s status as Britain’s first ‘Tech Town’. Announced back in February, the scheme gives Barnsley £500,000 over the next eighteen months to support AI skills training and the rollout of AI in healthcare and education.
The funding amounts to roughly £2 per resident, yet the announcement claimed the new “status will position Barnsley as the UK’s trailblazer”. In many ways, this epitomises technology policy in Britain at the moment: unambitious, scattergun, and communicated in rhetoric totally mismatched to the substance.
Despite the technology policy ecosystem producing extraordinary volumes of output, the proposals and debates I see feel increasingly stagnant. Whatever the ideological predisposition of the author, most reports seem to converge on similar sets of recommendations: compute, skills, sandboxes, prioritising specific sectors for support, data access, procurement reform, immigration reform, and copyright reform.
These proposals are a blend of sensible ideas, unlikely bets, and unambitious filler. It’s often hard to escape the feeling that these reports are the product of supply-side dynamics in the think tank world rather than any clear objective. Many of the issues holding back the tech sector now sit in other areas, such as energy or planning, but it’s also hard to write policy when the government itself doesn’t have a discernible agenda.
Back in the glory days of February 2025, Keir Starmer promised to “mainline AI into the veins” of the UK economy. The government seemed set to pass radical pro-innovation copyright reform. Investors and CEOs were queuing up to engage with ministers and announce investments. But that optimism hasn’t survived contact with reality. Copyright reform was shelved in the face of pushback from vested interests, investment announcements have vanished into thin air, and the government’s wider agenda seems designed to deter adoption.
Much of this can be explained by the fact that the government never really had a theory of technology-enabled economic growth. It has cycled through several semi-theories, but each has run into a structural constraint of some kind that ministers weren’t prepared to address. Each of these theories, however, tells us something about the policymaking process and the constraints any future government might face.
The AI doctor won’t see you now
Early Starmerism, inspired by the Tony Blair Institute’s work on ‘reimagining the state’, engaged in much enthusiastic discussion of AI-enabled public services. The snappily-named AI Opportunities Action Plan, published in January 2025, stated that “AI could be the government’s single biggest lever to deliver its five missions”. There was talk of AI recovering the hundreds of thousands of hours lost in the health service to missed appointments, and of huge efficiencies in areas like benefits processing. In March of that year, then-DSIT Secretary Peter Kyle said it was “almost certain” that AI would make it possible to reduce civil service headcount.
Better public services are a good thing, and you can construct a second-order argument for how they contribute to growth: healthier workers are more productive, and faster planning decisions unlock investment. But modest, diffuse efficiency gains are not the same thing as a macroeconomic strategy. When a private firm automates a process, the productivity gain shows up directly in output per worker and in margins. When HMRC automates an element of tax processing, the gain dissipates across millions of taxpayer-hours saved, none of which are measured, and the internal cost saving may not materialise if the staff are simply reallocated.
And therein lies the problem. Real efficiencies are contingent on complementary policy changes that the government, in all likelihood, isn’t prepared to make. When the Labour Party is polling at 16%, it’s unlikely to pick a political fight with its own base by meaningfully reducing public sector headcount. In the same way, you can’t use AI to bring down the welfare bill when the primary problem is the eligibility criteria rather than fraud. The government’s forays into reforming those criteria haven’t been promising so far.
In the end, the great efficiency agenda amounted to a handful of relatively unambitious pilot programmes (some of which have already been discontinued), and by the autumn of 2025, the government’s rhetorical energy had largely shifted elsewhere.
Pig iron production up 200%, comrades
The government has enthusiastically paraded commitments from American tech companies to build AI infrastructure in the UK, culminating in a £31 billion “Tech Prosperity Deal” announced during Trump’s state visit in September 2025. Microsoft committed £22 billion over four years, NVIDIA pledged 120,000 advanced GPUs, and OpenAI announced a UK version of its Stargate project. Starmer declared the deal would lay “the foundations for a future where together we are world leaders in the technology of tomorrow.” (On a personal note, I find this Pravda-esque style of government, which involves wheeling out contextless big numbers to signal great productivity gains, quite tedious.)
Six months later, the picture isn’t looking promising. OpenAI announced that it was pausing Stargate UK, citing the cost of energy and the regulatory environment. Government ministers are, of course, right to point out that British energy costs were known to OpenAI when it made the commitment. A number of the other headline commitments also look less impressive up close. CoreWeave’s £1 billion “investment”, announced with promises of “two new data centres”, turned out to involve renting space in facilities built in 2002 and 2015, while the government has acknowledged that a £1.9 billion commitment from NScale was “not a formal contract, rather an intention to commit capital”.
DSIT itself has admitted that it simply publishes the investment numbers companies give it and makes no effort to check them.
Even if all these announcements were real, data centre hosting is probably the lowest-margin part of the AI value chain. Britain would capture the operational roles and the business rates on the physical facilities, but the recurring revenues from every API call would ultimately flow back to the US.
A host country also needs to be able to guarantee grid connections and the ability to build – something Britain has yet to manage. The first AI Growth Zone, at Culham in Oxfordshire, was announced in February 2025; building work has yet to begin, and delivery partner proposals are still under consideration.
Even before OpenAI bailed, the government had been backing away from the data centre play as part of its retconned – sorry, evolving – theory of tech sovereignty. Back in January, Liz Kendall said in a speech that “we shouldn’t try to outrun the US or China on building the most or the biggest data centres” and should instead focus on adoption.
Something something towns
A pure focus on adoption would have a lot to recommend it, but it runs into a combination of factional Labour politics and the wider incoherence of the government’s economic agenda.
AI-related ministerial roles have been consistently held by figures from the right of the government, such as Peter Kyle and Liz Kendall. This faction, identified with former Starmer chief of staff Morgan McSweeney, saw Labour’s path to power running through towns and suburbs rather than cities.
If you’re serious about adoption driving growth, you’d target your interventions at the firms and sectors where AI adoption would generate the largest productivity gains. That means large firms (which have the data infrastructure, management capability, and scale to deploy AI effectively) and knowledge-intensive sectors (like professional services, finance, IT, and pharmaceuticals, where AI tools are most mature). Focusing narrowly on towns is likely to miss both.
Instead of working out how to ensure mid-sized professional services firms in Leeds are integrating AI into their services, we’ve seen a series of low-value labour market interventions, framed in the language of “working people”, “local communities”, and places that have been “short-changed for too long”, presumably like Barnsley, Town of Tech.
These include £5 million ‘community funds’ around the AI Growth Zones (largely in former industrial areas), a new Future of Work Unit (to “protect communities from the mistakes of past industrial change”), and regional ‘AI Adoption Hubs’ to do … presumably something. I’ve also heard stories from people working in government about how they constantly receive requests from ministers and officials for examples of how AI will benefit average wage-earners in towns.
One step forward, three steps back
Now I’m all for the average wage-earner in towns doing better, but AI is likely to bring this about only indirectly. If Britain manages to capture the economic rents from deploying AI, then we’ll be richer. We’ll be able to pay for more public goods that people in towns use, while rising aggregate demand will benefit those who produce traded goods and services.
Unfortunately, the government’s own research on AI adoption found that 80% of UK businesses have no plans to adopt AI at all. This is not because, until now, these businesses lacked a government hub to teach them how to use ChatGPT. It is more likely because the government’s wider policy agenda seems designed to disincentivise it.
For example, the government has hiked the minimum wage and employer national insurance contributions. The Low Pay Commission, whose evidence informs government policy on this issue, has found that employers often cover increases in the minimum wage by cutting investment.
As well as reducing firms’ financial buffers, the government is increasing the legal risk they face. If AI adoption means some roles change and others disappear, firms need flexibility to redeploy and, in some cases, make redundancies. The Employment Rights Bill, with measures like day-one unfair dismissal rights, risks making it meaningfully harder to restructure a workforce around AI.
Meanwhile, in March, the government declared that it no longer has a policy position on AI and copyright, having backed away from an opt-out model. A report following a government consultation concluded that there was “no consensus” on the issue – which was presumably the point of running the exercise. As well as creating yet more investment-deterring uncertainty, this sent a clear signal that the government will retreat from pro-AI positions when a politically vocal constituency objects.
We can also throw in the government’s online safety agenda, which is among the most burdensome and authoritarian in the western world. And to signal its openness to investment, Number 10 summons the bosses of tech companies so the prime minister can performatively berate them on camera.
Depressingly, I suspect that the majority of the people sat around the Cabinet table simply haven’t clocked that there is a tension between AI adoption and the government’s wider agenda (or lack thereof in some domains). More worryingly, I suspect a big chunk of policymakers simply don’t care. At a cabinet away-day last July, both Ed Miliband (the Secretary of State for Energy Security and Net Zero) and Lisa Nandy (the Secretary of State for Culture, Media, and Sport) reportedly grumbled about there being a session on AI. Meanwhile, Chancellor Rachel Reeves and Liz Kendall have both said that they don’t personally use AI, although this could be a product of an eccentric Freedom of Information ruling.
Viewed through another lens, this incuriosity is rational. If Number 10 and DSIT can’t stick to a narrative on the importance of AI for more than a few weeks or months at a time, there’s little incentive for ministers to devote much intellectual time or attention to it. This is part of why cycling between theories is so damaging: if the boss doesn’t seem to believe in any one of them all that much, why should you invest any political capital yourself?
Groundhog day
A coherent theory of how technology drives growth needs to specify a mechanism: technology does X, which causes Y, which produces Z (growth). At various points, the government has cycled through a spending-efficiency argument (public services), an input argument (attracting investment), and an output argument (towns). At no point have these joined up into anything more substantial or coherent.
The previous government, for all its faults, had something approaching a theory. It believed that R&D funding would lead to breakthrough technologies, which would then be commercialised, creating new high-tech businesses that contributed to growth.
It wasn’t supremely sophisticated, but you could at least trace a through-line in policy decisions. The Catapult network was established in 2011, modelled on Germany’s Fraunhofer institutes, to bridge the gap between university research and commercial application. Innovate UK, which funds promising technology companies, was folded into UK Research and Innovation in 2018 to align it more closely with the research councils. Meanwhile, EIS and SEIS, the tax-advantaged venture capital schemes offering 30% and 50% income tax relief respectively on early-stage investments, were expanded repeatedly.
In 2023, the previous government published the Science and Technology Framework, which outlined priority technologies and accompanying investments, and contained specific actions to be taken by departments. The Office for Science and Technology Strategy was a serious attempt to build a vehicle for implementing it. The current government’s revised Science and Technology Framework removed all of the funding commitments and specific proposals, replacing them with a list of principles. The underlying logic around strategic advantage was stripped out in favour of bromides about industrial strategy and ‘missions’.
Some parts of the old agenda made sense; other bits were profoundly misguided. In its early years, the previous government believed that innovation policy could be outsourced to universities, which produced the Alan Turing Institute disaster. Only in its sunset years did it move into institution-building mode with ARIA and the AI Security Institute. It also bears much of the blame for creating the multibillion-pound subsidy machine for predominantly middling-to-poor venture funds. And the less said about EIS the better. But there was at least an appreciation of what the machinery of government did and how it could be deployed in the service of a wider agenda.
Our current government is now attempting to revive elements of this agenda. It’s doing so partly by compelling pension funds to invest in overvalued private markets, but also with its recently unveiled £500 million Sovereign AI fund. Around half of this money has been set aside for equity stakes, and the rest for grants. SovAI also offers other support, such as fast-track visas and access to procurement.
From what I know about the first wave of companies they’ve backed, I’d be surprised if many of them were struggling to raise money from private investors. This leads me to suspect that their primary motivation for taking government money is the other value-adds. If I were a betting man, I’d guess they’re most interested in the procurement.
This is probably the hardest part of Sovereign AI’s work, largely because the organisation itself has limited control over it. You can have the best investment team in the world (and Sovereign AI does have some great people), but they can’t compel government departments to buy products or services at scale or to make advance market commitments. They can’t instruct departments to ignore the legal advice about fair procurement that produces lengthy, incumbent-friendly competition processes. As with every other ‘theory’ here, success is contingent on other policy changes.
Other legacies of the previous government continue to perform well. The AI Security Institute’s evaluation of Claude Mythos Preview was a genuinely impressive example of state capacity. It’s unfortunate and telling that the state’s response was to write a cringe-inducing letter to businesses, advising them to set better passwords along with other basic, pre-AI cybersecurity measures. The British state created a global public good in the form of AISI, but seems more interested in appropriating AISI’s prestige to look serious than in using its insights to shape policy.
Closing thoughts
One of the reasons we seem to circle back to the same sets of solutions is that the government seems capable of viewing innovation through only two lenses: as a magic force that creates jobs, or as moonshot ideas catapulted to commercial success by venture capital. The latter view is a product of over-reliance on contemporary Silicon Valley as a reference point, and of spending too much time talking to universities and VCs, who are all too keen to reinforce that perception.
Anyone looking to develop a theory of technology-enabled growth should probably widen their perspective. For example, much of the most exciting work we see today is being conducted in companies that first got very good at less experimental work. Many built strong positions in rather dull markets before using the profits to cross-subsidise innovation. Samsung went from textiles to insurance to consumer electronics, and then to chips, while Huawei’s first product was phone switches. Microsoft is building its AI position on decades of boring enterprise software profits, while Waymo and DeepMind are the beneficiaries of a Google war chest built on the sale of ads.
A theory of change that looked seriously at Britain’s disproportionately large share of small businesses, low levels of business investment, and sluggish technology adoption would likely yield more than asking what AI will do for hardworking families in towns.
Disclaimer: these are my views and my views alone. They are not the views of my employer, the people of trailblazing tech town Barnsley, any regional AI adoption hubs, or anyone else. I’m not an expert in anything, I get a lot of things wrong, and change my mind. Don’t say you weren’t warned.


