Wild West or window of opportunity? AI regulation and balancing innovation, intellectual property and privacy

Brett Lambe, senior associate and an experienced technology lawyer at Ashfords, considers the UK’s approach to AI regulation and how its interplay with intellectual property (IP) and data protection may impact businesses in the near future.


Ashfords is a partner for Growth Forge 2024, a business acceleration programme for ambitious tech companies.

Different approaches to AI regulation

Across the globe we are seeing a range of approaches to AI regulation emerge. The EU is currently leading with its comprehensive Artificial Intelligence Act, which categorises AI systems by risk level: unacceptable, high, limited and minimal, each attracting increasingly onerous obligations and oversight.

In contrast, the UK’s white paper outlined the government’s intent to adopt a principles-based approach, emphasising innovation while monitoring potential risks against five principles: (i) safety, (ii) transparency, (iii) fairness, (iv) accountability and (v) contestability.

The UK has chosen to address AI risks through existing legal frameworks, which apply whether or not AI is involved. This differs from the EU’s approach, which essentially creates a new layer of AI-specific requirements on top of many existing technology-neutral obligations.

Rather than planning to introduce a single AI regulatory body or comprehensive new laws, the UK offers a principles-based framework for existing regulatory authorities to follow. While the UK’s approach seeks to encourage innovation (and diverges from the EU approach quite significantly), it may create uncertainty for businesses attempting to adhere to these guidelines.

Businesses in the UK are currently expected to navigate the emerging landscape of fast-evolving technology using existing legal and regulatory frameworks not specifically tailored for AI. This regulatory approach is subject to change but currently presents unique challenges.

Intellectual property – looking for stability

Generative AI models are trained and developed using vast amounts of information and data, which can often include copyright material. Disputes are likely to arise in the absence of clear legal grounds permitting the use of such copyright material.

The key piece of UK legislation governing copyright, the Copyright, Designs and Patents Act 1988, is now more than 35 years old, and was clearly not drafted with today’s rapid advances in technology in mind.

IP-rich businesses, including software providers and streaming services whose revenues depend on licensing and monetising their IP estate, require a robust framework to monitor and prevent unlawful copying and other infringement, and tend to aggressively guard their IP rights.

One example of this is currently playing out in the UK and US courts as Getty Images (a renowned global image licensor and marketplace) pursues legal action against Stability AI (an open-source generative AI company). Getty Images alleges infringement of its IP rights in relation to Stable Diffusion, a deep learning model developed by Stability AI that automatically generates images. Getty Images claims that Stability scraped millions of images from Getty Images’ online image libraries without its consent, and unlawfully used those images to train and develop Stable Diffusion. Getty Images also claims that the output of Stable Diffusion (synthetic, AI-generated images) infringes IP rights by reproducing substantial parts of copyright works.

The outcome of Getty Images’ claim against Stability will have far-reaching implications. It is possible that developers will start “jurisdiction shopping” to train AI models in countries with more permissive legislative environments. Depending on the nuance of the Getty Images judgment, it may be that future AI models can be trained outside the UK, but the software itself can be sold and used in the UK.

Data protection – no clear view

The use of AI at scale on individuals, without their knowledge or consent, pushes at the boundaries of data protection legislation, including the UK GDPR.

In October 2023, the First-tier Tribunal (General Regulatory Chamber) allowed an appeal by Clearview AI, an American facial recognition company, against the UK Information Commissioner’s Office (ICO) decision to fine the US-based company £7.5m and issue an enforcement notice requiring Clearview to stop obtaining and using the personal data of UK residents and to delete that data from its systems.

Clearview is an incredibly powerful search engine for faces, able to detect faces in complex crowds with remarkable accuracy. A user uploads an image of a person’s face and Clearview’s AI system searches through over 20 billion images to find matches. It gathers the images from publicly available online information, although Clearview states that data will not be taken from social media accounts listed as private. The size of Clearview’s data set means that it likely held a significant amount of data relating to UK residents, obtained without their knowledge or consent.
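As a generic illustration only (Clearview’s actual implementation is not public), face-search engines of this kind typically reduce each face to a numeric embedding and match an uploaded query by similarity against an indexed collection. The embeddings and identifiers below are invented for the sketch:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query, index):
    """Return the indexed identifier whose embedding is most similar to the query."""
    return max(index, key=lambda name: cosine_similarity(query, index[name]))

# Toy index of pre-computed face embeddings (real systems hold billions
# and use approximate nearest-neighbour search rather than a full scan).
index = {
    "person_1": [0.9, 0.1, 0.2],
    "person_2": [0.1, 0.8, 0.3],
    "person_3": [0.2, 0.2, 0.9],
}

query = [0.85, 0.15, 0.25]  # embedding of an uploaded photo
print(best_match(query, index))  # person_1
```

The data protection significance is that each entry in such an index is biometric personal data about an identifiable individual, regardless of how it is encoded.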

Because it has no base in the UK and its clients are tightly restricted to law enforcement, Clearview has for now avoided the ICO’s fine. However, businesses offering similar generative AI technology are unlikely to be able to rely on this narrow exemption. It will be even more important for generative AI providers to consider their legitimate requirements for data processing, especially when it comes to biometric data such as facial recognition. Any failure to do so, such as failing to ensure that there is a lawful basis for processing the personal data in question or retaining data indefinitely, could result in a substantial fine or an enforcement notice from the ICO.

The ICO is now seeking permission to appeal the First-tier Tribunal’s decision overturning its fine, and we await further clarity on the issues arising from that appeal.

Of more general application to businesses, the Italian Data Protection Authority issued an interim emergency decision in 2023 ordering OpenAI LLC to immediately stop using ChatGPT to process the personal data of individuals in Italy.

This stemmed from allegations of insufficient transparency, inadequate age verification processes, and the absence of a proper lawful basis for the alleged large-scale collection and processing of personal data used to train the platform’s underlying algorithms.

Other AI-related risks also arise in relation to personal data and the issues of bias and fairness.

With regard to bias, AI systems are only as objective in their decision-making as the data they are trained on. If the data used to train AI algorithms is biased, the resulting decisions and outcomes can also be biased – and this can happen unwittingly or unconsciously.

Despite safeguards, bias can persist, leading to potential legal liabilities and reputational damage for businesses. Examples include an organisation using AI in recruitment that inadvertently favours certain demographics, conducting AI-based performance reviews that reflect underlying gender bias, or implementing customer service chatbots that exhibit racial bias.

Addressing this challenge requires coupling the use of AI in management decisions with robust human oversight mechanisms and regularly monitoring and auditing AI systems to identify and remove discriminatory patterns or outcomes.
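As a minimal sketch of the kind of monitoring step described above, the following applies a simple disparate-impact check to recruitment-screening outcomes grouped by a protected attribute. The data, group labels and 0.8 (“four-fifths”) threshold are illustrative assumptions only, not a compliance standard or legal advice:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, hired in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if hired else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Toy audit data: (group, was_selected) per applicant
audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(audit))        # {'A': 0.75, 'B': 0.25}
print(disparate_impact_flags(audit)) # ['B'] flagged for human review
```

A flagged group would then trigger the human oversight described above; an automated check of this kind supplements, and never replaces, human judgment about why the disparity exists.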

Fairness is a fundamental principle of both the UK GDPR and the AI white paper. AI systems that use personal data in ways that could negatively impact individuals unfairly or beyond their reasonable expectations are likely to face stiff regulatory scrutiny.

What lies ahead?

While the technological and commercial possibilities of AI are enormously exciting, the legal and regulatory frameworks are continuing to develop in the UK and globally, and some degree of caution is therefore needed.

The legal foundations associated with generative AI are still being established, with courts and lawmakers considering novel issues and establishing precedent.

As mentioned at the outset of this article, the EU’s Artificial Intelligence Act is an EU-wide regulation on AI which aims to establish a common regulatory and legal framework for AI. Like the GDPR before it, this is likely to be a wide-ranging piece of legislation which will apply not just to businesses located in the EU, but global businesses (including those in the UK, US and Asia) looking to trade in the EU. As such, although other countries may take a different approach, the AI Act is likely to prove a benchmark for other lawmakers.

However, while traditional forms of software can be readily modified to fit with changing legal requirements, the “black box” and often irreversible nature of developing and training AI models and algorithms means that potentially infringing activity is “baked in” to an AI product or service. Given the international scope of AI products, it will be very difficult, if not impossible, for AI companies to row back on infringing acts to meet the stricter requirements of a particular market.

Given the huge amounts of money, time and expertise invested in developing AI products, it is not inconceivable that some companies may simply choose to avoid particular territories (even huge markets such as the EU) if the regulatory burden is too great and would mean compromising the USP or profitability of their product in other, more permissively regulated countries.

The law is constantly evolving on the many issues thrown up by AI, and often moves more slowly than the technology it seeks to govern. This is particularly true of AI, an area of significant technical and legal complexity.

This raises uncertainty for all players in the AI market, both vendors and customers, who must continue to learn and adapt to the evolving regulatory framework. While the legal landscape takes shape, agile businesses may be able to exploit the gaps to their advantage, with the AI market resembling both “Wild West” and “window of opportunity”.

Brett Lambe is a Senior Associate and experienced technology lawyer in Ashfords’ Technology team.
