AI in the Common Interest
by Gabriela Ramos & Mariana Mazzucato

Public policies and institutions must be designed to ensure that innovations improve the world; but as things stand, many technologies are being deployed in a vacuum, with advances in artificial intelligence raising one red flag after another. The era of light-touch self-regulation must end.

LONDON – The tech world generated a fresh glut of front-page news in 2022. In October, Elon Musk bought Twitter – one of the main public communications platforms used by journalists, academics, businesses and policymakers – and proceeded to lay off most of its content-moderation staff, indicating that the company would rely instead on artificial intelligence.

Then, in November, a group of Meta employees revealed that they had designed an AI program capable of beating most humans at the strategy game Diplomacy. In Shenzhen, China, officials are using “digital twins” of thousands of 5G-connected mobile devices to monitor and manage flows of people, traffic and energy consumption in real time. And with the latest iteration of the language-prediction model behind ChatGPT, many are declaring the end of the college essay.

In short, 2022 was a year in which already serious concerns about how technology is designed and used deepened into even more pressing ones. Who is in charge here? Who should be in control? Public policies and institutions must be designed to ensure that innovations improve the world, yet many technologies are currently deployed in a vacuum. We need inclusive, mission-oriented governance structures centered around a true common good. Capable governments can shape this technological revolution to serve the public interest.

Consider AI, which the Oxford English Dictionary broadly defines as “the theory and development of computer systems capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” AI can make our lives better in many ways. It can improve food production and management by making farming more efficient and improving food safety. It can help us strengthen resilience against natural disasters, design energy-efficient buildings, improve energy storage and optimize renewable energy deployment. And it can improve the accuracy of medical diagnostics when combined with doctors’ own assessments.

But with no effective rules in place, AI is also likely to create new inequalities and reinforce existing ones. One does not have to look far to find examples of AI-powered systems that reproduce unfair social biases. In one recent experiment, robots powered by a machine-learning algorithm became openly racist and sexist. Without better oversight, algorithms that are supposed to help the public sector manage welfare benefits can end up discriminating against families that are truly in need. Equally worrying, public authorities in some countries are already using AI-powered facial-recognition technology to monitor political dissent and subject citizens to mass-surveillance regimes.

Market concentration is also a major concern. AI development – and control of the underlying data – is dominated by just a few powerful players in just a few places. Between 2013 and 2021, China and the United States accounted for 80% of private AI investment worldwide. There is now a massive power imbalance between the private owners of these technologies and the rest of us.

But AI is also being boosted by massive public investment. Such funding must be governed in the common interest, not in the interest of the few. We need a digital architecture that shares the rewards of collective value creation more fairly. The era of light-touch self-regulation must end. When we allow market fundamentalism to prevail, the state and taxpayers are condemned to come to the rescue after the fact (as we saw in the 2008 financial crisis and the COVID-19 pandemic), usually at great financial cost and with long-lasting social scars. Worse yet, with AI we do not even know whether an ex post intervention will be enough. As The Economist recently pointed out, AI developers themselves are often surprised by the power of their creations.

Fortunately, we already know how to avert another laissez-faire-induced crisis. We need an “ethical by design” AI mission, underpinned by sound regulation and capable governments working to shape this technological revolution for the common good, rather than in shareholders’ interest alone. With these pillars in place, the private sector can and will join the broader effort to make technologies safer and fairer.

Effective public oversight should ensure that digitalization and AI create opportunities for public value creation. This principle is integral to UNESCO’s Recommendation on the Ethics of Artificial Intelligence, a normative framework adopted by its 193 member states in November 2021. Moreover, key players are now taking responsibility for reframing the debate, with US President Joe Biden’s administration proposing an AI Bill of Rights, and the European Union developing a holistic framework for governing AI.

Yet we also need to keep the public sector’s own uses of AI on a sound ethical footing. With AI supporting more and more decision-making, it is important to ensure that AI systems are not used in ways that undermine democracy or violate human rights.

We also need to address the lack of investment in the public sector’s own innovation and management capabilities. COVID-19 highlighted the need for more dynamic capabilities in the public sector. Without robust terms and conditions governing public-private partnerships, for example, companies can easily capture the agenda.

The problem, however, is that the outsourcing of public contracts has increasingly become an obstacle to building public-sector capacity. Governments must be able to develop AI systems without depending on the private sector for sensitive applications, so that they can maintain control over important products and ensure that ethical standards are upheld. Likewise, they must be able to support information sharing and interoperable protocols and benchmarks across departments and ministries. All of this will require public investment in government capabilities, following a mission-oriented approach.

Given that so much knowledge and experience is now concentrated in the private sector, synergies between the public and private sectors are both inevitable and desirable. Being mission-oriented is about choosing the willing – co-investing with partners that recognize the potential of government-led missions. The key is to equip the state to manage how AI systems are deployed and used, rather than always playing catch-up. To share the risks and rewards of public investment, policymakers can attach conditions to public funding. They can and should also require Big Tech to be more open and transparent.

The future of our societies is at stake. We must not only solve the problems and control the downside risks of AI, but also shape the direction of the digital transformation and technological innovation more broadly. At the start of a new year, there is no better time to start laying the groundwork for unlimited innovation in everyone’s interest.
