The AI regulatory outlook for 2024

As the applications of AI technology grow ever more ubiquitous, governments are advancing a variety of regulatory approaches to protect consumers and businesses.

In this article, we outline the AI regulatory changes we expect to see both domestically and globally in 2024.

The UK Government remains committed to a "pro-innovation" approach to regulation

On February 6th, the UK Government published its response to the consultation on its AI White Paper. The key development is the announcement of a £100 million package to help further develop the UK's AI capabilities, with £90 million going towards nine new AI research hubs and £10 million to train regulators in AI technology. The response also sets out the government's "Pro-Innovation" approach to AI regulation, developed around three overarching principles – "Driving Growth", "Increasing Public Trust in AI", and "Strengthening the UK's position as a global leader in AI".

As discussed in our analysis of the UK AI Safety Summit of November 2023, these principles entail a hands-off regulatory approach, whereby existing UK regulators such as Ofcom, the ICO and the CMA will have to publish outlines of their sector-specific approaches to AI by 30 April 2024. In contrast to the EU's broad legislative approach (see below), it is unlikely this government will publish AI-specific legislation any time soon. We will continue to monitor the regulatory developments closely and are well placed to advise businesses looking to navigate the complex and fast-changing rules.

The state of play for AI developers and copyright holders remains unclear

The UK government has now confirmed that the Intellectual Property Office's working group on AI and copyright has not been able to agree on a code of practice, so work will continue on developing a model that balances the interests of AI developers and IP rights holders. The core issue is that AI models, including the large language models underpinning tools such as ChatGPT, are trained by scraping huge quantities of material from the internet, some of which may be protected by copyright. It remains unclear whether such activities are prohibited under English IP law, and because no IPO code has come to fruition, there is still significant uncertainty as to whether AI firms can use copyrighted work as an input to their models, and whether generated output carries adequate protections (for example, labelling) for rights holders of copyrighted work.

The response confirms that the government will soon set out further proposals on the way forward, and that this work will happen in tandem with international efforts to address the issue. It comes as the House of Lords Communications and Digital Committee publishes its own report on large language models, in which it says the government should prioritise responsible innovation and update legislation if there are shortfalls in existing copyright protection.

The EU AI Act will offer much-welcome certainty but remains to be finalised

In December, the European Parliament and Council of the EU reached a political agreement on the scope and contents of the world's first comprehensive law regulating AI. The EU AI Act will apply to both the public and private sectors, inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU.

It will concern both providers and developers of AI systems, and any deployers of "high-risk" AI systems. There will be four risk categories for AI systems, as well as rules addressing risks specific to general-purpose models. The Act sets out a methodology for assigning these risk levels, with specific obligations such as "conformity assessments" imposed on "high-risk" systems, and "unacceptable risk" systems liable to be prohibited.

The Act will offer some much-welcomed certainty, but it will be important for businesses and operators to seek regulatory advice as the EU and UK rules change over the course of this year.

We anticipate that the draft EU legislation will become law in late spring or summer 2024, followed by a staggered compliance period: provisions relating to prohibited AI systems are set to become enforceable six months after the Act is finalised, and provisions relating to so-called "General Purpose" AI will become enforceable 12 months after that date. The rest of the AI Act is expected to become enforceable in 2026.

AI Safety Summits will continue to encourage international collaboration on governance

Following the UK AI Safety Summit in November 2023, it has been announced that South Korea will host the next summit within six months, and that a third gathering will be hosted by France in a year's time.

AI regulations are set to change considerably over the next year and beyond. If you would like to discuss any of these issues, please contact a member of our team below.

Key contacts
