Governing AI for the public interest

UK Prime Minister Keir Starmer last month published an “AI Opportunities Action Plan” that includes a multibillion-pound government investment in the UK’s artificial intelligence capacity and £14 billion ($17.4 billion) in commitments from tech firms. The stated goal is to boost the AI computing power under public control twentyfold by 2030 and to embed AI in the public sector to improve services and reduce costs by automating tasks.
But governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies. As matters stand, public authorities usually offer technology companies lucrative unstructured deals with no conditionalities attached. They are then left scrambling to address the market failures that inevitably ensue. While AI has plenty of potential to improve lives, the current approach does not set governments up for success.
To be sure, economists disagree on what AI will mean for economic growth. In addition to warning about the harms that AI could do if it is not directed well, the Nobel laureate economist Daron Acemoglu estimates that the technology will boost productivity by only 0.07 percent per year, at most, over the next decade. By contrast, AI enthusiasts like Philippe Aghion and Erik Brynjolfsson believe that productivity growth could be up to 20 times higher (Aghion estimates 1.3 percent per year, while Brynjolfsson and his colleagues point to the potential for a one-off increase as high as 14 percent in just a few months).
Meanwhile, bullish forecasts are being pushed by groups with vested interests, raising concerns over inflated figures, a lack of transparency and a “revolving door” effect. Many of those promising the greatest benefits also stand to gain from public investments in the sector. What are we to make of the CEO of Microsoft UK being appointed as chair of the UK Department for Business and Trade’s Industrial Strategy Advisory Council?
The key to governing AI is to treat it not as a sector deserving of more or less support, but rather as a general purpose technology that can transform all sectors. Such transformations will not be value-neutral. While they could be realized in the public interest, they could also further consolidate the power of existing monopolies. Steering the technology’s development and deployment in a positive direction will require governments to foster a decentralized innovation ecosystem that serves the public good.
Policymakers must also wake up to all the ways that things can go wrong. One major risk is the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract “algorithmic attention rents” from users. Unless governed properly, today’s AI systems could follow the same path, leading to unproductive value extraction, insidious monetization and deteriorating information quality. For too long, policymakers have ignored these externalities.
Yet governments may now be tempted to choose what is cheapest in the short term: allowing tech giants to own the data. This may help established firms drive innovation, but it will also ensure that they can leverage their monopoly power in the future. The risk is particularly acute today, given that the primary bottleneck in AI development is cloud computing power, a market in which Amazon, Google and Microsoft control a combined 67 percent share.
While AI can do much good, it is no magic wand. It must be developed and deployed in the context of a well-considered public strategy. Economic freedom and political freedom are deeply intertwined, and neither is compatible with highly concentrated power. To avoid this dangerous path, the Starmer government should rethink its approach. Rather than acting primarily as a “market fixer” that intervenes later to address AI companies’ worst excesses (from deepfakes to disinformation), the state should step in early to shape the AI market.
That means not allocating billions of pounds to vaguely defined AI-related initiatives that lack clear objectives, as Starmer’s plan currently appears to do. Public funds should not be funneled into the hands of foreign hyper-scalers, as this risks diverting taxpayer money into the pockets of the world’s wealthiest corporations and ceding control over public sector data. The UK National Health Service’s deal with Palantir is a perfect example of what to avoid.
There is also a danger that if AI-led growth does not materialize as promised, the UK could be left with a larger public deficit and crucial infrastructure in foreign hands. Moreover, relying solely on AI investment to improve public services could lead to their deterioration. AI must complement, not replace, real investments in public sector capabilities.
The government should take concrete steps to ensure that AI serves the public good. For example, mandatory algorithmic audits can shed light on how AI systems are monetizing user attention. The government should also heed the lessons of past missteps, such as Google’s acquisition of the London-based startup DeepMind. As the British investor Ian Hogarth has noted, the UK government might have been better off blocking this deal to maintain an independent AI enterprise. Even now, proposals to reverse the takeover warrant consideration.
Government policy must also recognize that Big Tech already has both scale and resources, whereas small and medium-size enterprises require support to grow. Public funding should act as an “investor of first resort” to help these businesses overcome the first-mover bias and expand. Prioritizing support for homegrown entrepreneurs and initiatives over dominant foreign companies is crucial.
Finally, since AI platforms extract data from the digital commons (the internet), they are beneficiaries of a major economic windfall. It follows that a digital windfall tax should be applied to help fund open-source AI and public innovation. The UK needs to develop its own public AI infrastructure guided by a public-value framework, following the model of the EuroStack initiative in the EU.
AI should be a public good, not a corporate tollbooth. The Starmer government’s guiding objective should be to serve the public interest. That means addressing the entire supply chain, from software and computing power to chips and connectivity. The UK needs more investment in creating, organizing and federating existing assets (not necessarily replacing Big Tech’s assets entirely). Such efforts should be guided and co-financed under a consistent policy framework that aims to build a viable, competitive AI ecosystem. Only then can the UK ensure that the technology creates value for society and genuinely serves the public interest.
- Mariana Mazzucato is Professor in the Economics of Innovation and Public Value at University College London and the author, most recently, of “Mission Economy: A Moonshot Guide to Changing Capitalism” (Penguin Books, 2022).
- Tommaso Valletti, Professor of Economics at Imperial College London, is Director of the Centre for Economic Policy Research and a former chief competition economist at the European Commission.
- Copyright: Project Syndicate