TECHNOLOGY
Monday 19th January 2026

December AI policy update: AI bill scrapped, FCA testing and transparency

The UK government confirms there will be no AI bill, while the FCA launches live testing and new research reveals declining model transparency. This month's policy roundup.

A no-go: the UK AI bill

Politico’s Tom Bristow first reported that the AI bill is looking very unlikely, with the secretary of state for science, innovation and technology, Liz Kendall, all but confirming it early in December. 

So, what’s happening instead? Tom reports there are three key developments. First, ministers hope to address AI harms through the Online Safety Act. Second, and as mentioned in this update a couple of weeks ago, child sexual abuse material is being addressed through an amendment to the crime and policing bill. Third, the AI Safety Institute’s (AISI) work continues. 

Commenting on the development, Alexandru Voica notes that the UK government will now focus on outcomes over rules.

Meanwhile, Tommy Shaffer Shane describes some of the risks of rowing back on an AI bill.

New initiative: AI live testing

The FCA, the UK’s financial regulator, announced that it is working with major firms to test AI in a controlled environment to understand the technology’s potential benefits and risks.

The AI Live Testing initiative is the first of its kind in the UK financial sector, helping firms that are ready to use AI in UK financial markets. 

Jessica Rusu, chief data, information and intelligence officer at the FCA, said that “by working closely with firms and our technical partner Advai, we’re helping to make sure that AI is developed and deployed safely and responsibly in UK financial markets.”

Read a full writeup on the FCA’s new initiative online.

New interviews: the AI minister

Amy Lewin, editor at Sifted, interviewed the UK AI minister, Kanishka Narayan, on the publication’s podcast.

In the interview, Amy asks Kanishka if he thinks the UK is moving fast enough on adopting and regulating AI.

He also discusses the biggest risks the technology poses to the UK and where he thinks the government should put its resources to have the most positive impact on the tech sector.

Regulation in the UK: public attitudes

The Ada Lovelace Institute published new nationally representative polling which examines whether the UK public supports the regulation of AI, how they expect it to function, and where gaps between public expectations and policy ambition may lie.

Some key takeaways:

- There is strong public support for an independent regulator for AI, equipped with enforcement powers. 

- Meanwhile, 91% of the public feel it is important that AI systems are developed and used in ways that treat people fairly. 

- In addition, people support mechanisms such as independent standards, transparency reporting, and top-down accountability to ensure effective monitoring before and after deployment.

The Institute’s Gaia Marcus wrote an insightful post exploring the results and what they tell us about the public’s support for the regulation of AI.

Model transparency: annual index

The Center for Research on Foundation Models has published its latest Transparency Index.

The comprehensive study shows that transparency, a key principle of effective regulation (as the Ada Lovelace polling above suggests), is in decline.

The average transparency score for AI companies declined from 58/100 in 2024 to 40/100 in 2025.

The study also shows that while companies share the capabilities of their models, they do not adequately evaluate risks. Just four of 13 companies comprehensively evaluated risks prior to releasing their foundation models.

What’s more, the study shows a trend towards releasing less, not more, information. For more detail, read Kevin Klyman’s summary of the new Index.

New views: advancing the UK’s leadership on frontier AI governance

Imogen Stead, AI policy manager, and Jess Whittlestone, senior adviser, both at the Centre for Long-Term Resilience, published their thoughts on the UK’s role in global AI governance.

In the piece, Imogen and Jess argue that the UK has significant AI soft power, particularly in frontier AI safety and security, which can be used to strengthen and differentiate its voice in the international AI conversation. 

Aiming to move the policy conversation beyond either regulatory standard-setting or AI investment power, they set out three additional pathways to impact for the UK.

These include: strengthening AISI’s international engagement, advancing international alignment on risks and mitigations, and influencing the behaviour of frontier companies.

Highlights: UK AI policy

Diego D., head of AI policy at HM Treasury, has shared a few key takeaways from a busy year in AI policy.

In his post, he highlights:

- Procurement reforms so the UK government can buy British AI

- A wider entrepreneurship and innovation package to help companies scale and stay in the UK

- The launch of four new AI Growth Zones this year, alongside a new AI strategy for science, backed by £137m, to accelerate AI adoption in R&D

Last, if you don’t already follow: Rachel Adams

Rachel, the author of the excellent book The New Empire of AI, has launched a new Substack, Fragile World.

In the Substack, Rachel asks what work we need to do to translate AI's potential to solve intractable problems like disease into tangible change.

It’s an ambitious and challenging topic which, as Rachel points out, is often hindered by treating the question as a technical rather than a political problem.

For anyone interested in AI’s impact on justice and global inequality, it will become an essential space for dialogue and debate.

Chartered PR practitioner James Boyd-Wallis is vice-chair of the CIPR Public Affairs group and co-founder of The Appraise Network, for AI policy and advocacy professionals in the UK.
