January AI policy update: UK marks AI action plan milestone as deepfake crackdown hits X
The government claims 76% of its AI action plan delivered, new laws target AI-generated deepfakes, and researchers say the West is fighting the wrong AI race. This month's policy roundup covers the debates and trends shaping AI.
The UK AI action plan: one year on
The UK government has published its progress on its AI Opportunities Action Plan, a year after first issuing the report.
A new government dashboard shows 76% of the plan delivered with 24% in progress. See the UK AI minister's rundown of the commitments the government has delivered. Meanwhile Oliver Purnell, partnerships lead at i.AI, has summarised his favourites.
As part of this, the government has launched a skills boost programme to train 10 million workers in AI skills by 2030. The training aims to help make the UK the fastest-adopting AI country in the G7 and comes as new government research shows that AI adoption in the UK remains modest, with only one in six firms using AI.
New industry partners in the skills programme include the CBI, FSB, IoD and techUK, among others. Nimmi Patel, associate director of policy at techUK, has a useful summary of the programme. See also the UK government release.
New thinking: the real AI race
Lisa Klaassen, a PhD candidate at Oxford, and Broderick McDonald, a research fellow at King's College London, have a new piece out that challenges the typical AI race narrative.
Western leaders have often framed US-China competition as a winner-takes-all race for AI supremacy where the winners will be those who can train ever more powerful LLMs and build AGI. However, the pair suggest this narrative overlooks what matters.
In practice, they argue, safe deployment at scale matters more than pursuing ever-larger models. Critically, deploying AI at scale across the economy will require significant public trust, which is currently lacking.
To close this gap, Lisa and Broderick argue that the West must pivot from an innovation-only mindset to a deployment-first strategy built on public trust.
New reports: UK AI governance, a global template
The Alan Turing Institute has published a new report on the UK's approach to AI governance.
As countries worldwide seek the best way to regulate the technology, the UK has taken a distinct, sector-led approach. But could its approach serve as a blueprint for other jurisdictions? This is the question Arcangelo Leone de Castris, AI governance manager at the Institute, asks in the latest report.
To better understand what that actually looks like on the ground, the Institute's latest country profile examines the UK's model, including an analysis of the UK's aims and a detailed look at key policy initiatives.
New laws brought into force: Grok AI
The UK government said this month that a new law will be brought into force to make deepfake sexual images created by AI illegal.
This follows reports that the Grok AI chatbot on the social media platform X has been used to create and share degrading, non-consensual intimate deepfakes. While it is illegal to share intimate, non-consensual deepfakes in the UK, it had not been a criminal offence to use an AI tool to create them until this announcement.
In a win for the UK government, X said Grok would no longer allow the editing of images of real people into revealing clothing, and that generating such images would be blocked in jurisdictions where it is illegal.
New views: from research to impact
AI policy researchers want to create an impact but often need to break out of the echo chamber first. This challenge recently featured in discussions between Mila - Quebec Artificial Intelligence Institute policy fellows and members of the Global AI Policy Research Network.
Jason Tucker, associate professor of AI policy and co-chair of the research network, summarises how AI policy researchers can have an impact. He suggests foregrounding stakeholder engagement, balancing speed with rigour, building targeted AI policy literacy, and acting local while thinking global.
New book: Regulation of generative AI
All chapters of the Oxford Handbook of the Foundations and Regulation of Generative AI are now available online.
Following a long-term collaboration between Professor Philipp Hacker, Andreas Engel, Sarah Hammer and Professor Brent Mittelstadt, the new chapters examine the technical foundations of generative AI, focusing on the core technologies, the art of prompting, content detection, and explainable generative AI. They also cover how to evaluate AI models' societal effects, offer a critical analysis of AI companions and their implications for human interaction, and examine how the law does and should address AI-generated expression.
You can read Philipp's summary on LinkedIn or purchase the book online.
New briefings: AI content labelling
The House of Commons library has released a new research briefing on AI content labelling.
The 41-page report explains what AI content labelling is, how it works, and what information should be included in a label. It also outlines the rules and regulations around labelling as well as the policies of social media companies, news organisations, search engines and video game services.
Chartered PR practitioner James Boyd-Wallis is MD of the tech- and AI-focused corporate and public affairs agency Highbury Communications and co-founder of the AI policy network Appraise.
Further reading
December AI policy update: AI bill scrapped, FCA testing and transparency
AI policy update: £20bn boost, child safety laws and science strategy