Rishi Sunak is scrambling to update the government's approach to regulating artificial intelligence, amid warnings that the technology poses an existential risk to humanity unless countries radically change how they allow it to be developed.
The prime minister and his officials are looking at ways to tighten the UK’s regulation of cutting-edge technology, as industry figures warn the government’s AI white paper, published just two months ago, is already out of date.
Government sources have told the Guardian the prime minister is increasingly concerned about the risks posed by AI, only weeks after his chancellor, Jeremy Hunt, said he wanted the UK to “win the race” to develop the technology.
Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator. Meanwhile Conservative and Labour MPs are calling on the prime minister to pass a separate bill that could create the UK’s first AI-focused watchdog.
A Downing Street spokesperson said: “The starting point for us is safety, and making sure the public have confidence in how AI is being used on their behalf. Everyone is well aware of the potential benefits and risks of AI. Some of this tech is moving so fast it’s unknown.”
For several months, British ministers have spoken optimistically about the opportunities AI presents for the country.
Michelle Donelan, as science, innovation and technology secretary, published a white paper in April which set out five broad principles for developing the technology, but said relatively little about how to regulate it. In her foreword to that paper, she wrote: “AI is already delivering fantastic social and economic benefits for real people.”
In recent months, however, the advances in the automated chat tool ChatGPT and the warning by Geoffrey Hinton, the “godfather of AI”, that the technology poses an existential risk to humankind, have prompted a change of tack within government.
Experts say it will soon be possible for companies to use the technology to decide whom to hire and fire, for police to use it to identify suspects, and for governments to use it to manipulate elections.
Last week, Sunak met four of the world's most senior executives in the AI industry, including Sundar Pichai, the chief executive of Google, and Sam Altman, the chief executive of OpenAI, the company behind ChatGPT. After the meeting that included Altman, Downing Street acknowledged for the first time the "existential risks" now being faced.
On Monday, British officials will join their counterparts from other G7 member countries to discuss AI’s implications for intellectual property protections and disinformation.
“There has been a marked shift in the government’s tone on this issue,” said Megan Stagman, an associate director at the government advisory firm Global Counsel. “Even since the AI white paper, there has been a dramatic shift in thinking.”
Some MPs are now pushing for an AI bill to be passed through the Commons, which could set conditions for companies that want to develop the technology in the UK. Some want to see the creation of an AI-specific regulator.
David Davis, the Tory MP and former cabinet minister, said: "The whole question of responsibility and liability has to be very tightly defined. Let's say I dismiss you from a job on the basis of an AI recommendation, am I still liable?"
He added: “We need an AI bill. The problem of who should regulate it is a tricky one but I don’t think you can hand it off to regulators for other industries.”
Lucy Powell, Labour’s spokesperson for digital, culture, media and sport, said: “The AI white paper is a sticking plaster on this huge long-term shift. Relying on overstretched regulators to manage the multiple impacts of AI may allow huge areas to fall through the gaps.”
Her colleague Darren Jones, who chairs the business select committee, wrote to Sunak this week calling on him to promote the UK as a possible host for an international AI agency, along the lines of the International Atomic Energy Agency.
Government insiders admit there has been a shift in approach, but insist they will not follow the EU’s example of regulating each use of AI in a different way. MEPs are currently scrutinising a new law that would allow for AI in some contexts but ban it in others, such as for facial recognition.
“We don’t want to regulate product-by-product,” said one. “We want to stay nimble, because the technology is changing so fast.”