The writer is a barrister and the author of ‘The Digital Republic: Taking Back Control of Technology’
Last month’s boardroom carnage at OpenAI, in which the chief executive, Sam Altman, was toppled and reinstated in less than a week, showed the company to be riven by the same question that is set to dominate politics in 2024: to what extent should increasingly capable AI systems be curtailed by government regulation?
This is ultimately a question for politicians rather than corporations. And the UK government has recently found its answer: safety first. The regulatory landmarks of 2023 were the Online Safety Act and the AI Safety Summit. “Safety” was the top priority for both. The act followed the suicide of 14-year-old Molly Russell, who had been served a torrent of online content related to self-harm. It aimed to make Britain “the safest place in the world to be online”. The summit got the regulatory ball rolling in a similar direction for AI. The Bletchley Declaration contained seven references to “safety”, and the UK’s AI Safety Institute is now actively recruiting.