Further Wake-Up Call for Artists and Authors!

When you come across a Getty image online, it is difficult to avoid its prominent watermark and the accompanying notice that “Images may be subject to copyright”. As we find in the recent decision in Getty v Stability (Getty Images (US) Inc & Ors v Stability AI Ltd [2025] EWHC 2863 (Ch)), handed down earlier this week, using an image with the Getty mark on it may amount to trade mark infringement.

However, the copyright aspects of the case have drawn the most attention. In short, the question was whether an AI platform, Stability AI, infringed Getty’s copyright in images that Getty alleged Stability had copied and made available to members of the public, who used its AI platform to generate their own content. Some commentators have asked whether the decision is a “final blow to artists”.

The finding by the trial judge at [758] that “An AI model such as Stable Diffusion which does not store or reproduce any copyright works (and has never done so) is not an ‘infringing copy’” suggests they might be right. However, I don’t think too much can be made of the decision, for a variety of reasons.

As Nathan Smith, an IP partner at Katten Muchin Rosenman LLP, pointed out, significant uncertainty remains: “The Court’s findings on the more important questions regarding copyright infringement were constrained by jurisdictional limitations, offering little insight on whether training AI models on copyrighted works infringes intellectual property rights.” He went on to note that the scope of Getty’s claim was significantly reduced.

The main area I discuss in this post is Getty’s argument that Stability AI had unlawfully trained and developed its AI model and thereby infringed Getty’s copyright, albeit in a secondary rather than a primary fashion. Ultimately, the copyright question was whether, by making the model weights for certain versions of Stable Diffusion available for download, Stability had committed secondary copyright infringement under section 22 or 23 of the UK Copyright, Designs and Patents Act 1988.

The answer was no. The claim failed because Justice Joanna Smith concluded that an article which is an “infringing copy” must, at some point in its existence, have consisted of, contained or stored a copy of a copyright work, and Getty failed to prove that this was so. The Court relied on unchallenged expert evidence at [554] that:

“Rather than storing their training data, diffusion models learn the statistics of patterns which are associated with certain concepts found in the text labels applied to their training data, i.e. they learn a probability distribution associated with certain concepts.”

The Court’s reasoning needs to be looked at a little more closely to understand the technical issues at play. On the question of reproduction, and the vexed issue of memorisation, the Court stated at [559]:

“However, notwithstanding this evidence about memorization, it is important to be absolutely clear that Getty Images do not assert that the various versions of Stable Diffusion (or more accurately, the relevant model weights) include or comprise a reproduction of any Copyright Work and nor do they suggest that any particular Copyright Work has been prioritised in the training of the Model. There is no evidence of any Copyright Work having been “memorized” by the Model by reason of the Model having been over-exposed to that work and no evidence of any image having been derived from a Copyright Work”.
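
To make the expert evidence quoted above a little more concrete, the toy sketch below (my own illustration, bearing no resemblance to Stable Diffusion’s actual architecture; the class and method names are invented) shows the basic idea the Court accepted: a model that learns only statistics retains a small, fixed set of parameters however many works it is trained on, and never stores the works themselves.

```python
# A deliberately simplified toy model (my own illustration, not Stable
# Diffusion's actual architecture) of the point in the expert evidence at
# [554]: a system that learns summary statistics of its training data retains
# a small, fixed set of parameters, not copies of the works it was shown.

import random


class ToyStatisticalModel:
    """Learns only the running mean and variance of pixel values shown to it."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)

    def train_on_image(self, pixels):
        # Each pixel nudges two running statistics; the pixels themselves are
        # discarded as soon as this method returns.
        for value in pixels:
            self.count += 1
            delta = value - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (value - self.mean)

    @property
    def weights(self):
        variance = self.m2 / self.count if self.count else 0.0
        return {"mean": self.mean, "variance": variance}


if __name__ == "__main__":
    model = ToyStatisticalModel()

    # A pretend "training set" of 200 random 64x64 images.
    for _ in range(200):
        model.train_on_image([random.random() for _ in range(64 * 64)])

    # However large the training set, the retained state is two numbers.
    print(model.weights)
```

Real diffusion models learn a vastly larger set of parameters than this, but the principle the Court relied on is the same: what is retained is a set of numbers describing patterns, not a store of the training images.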

Looking at what was before the Court, the outcome was perhaps not entirely surprising. Once the defendants had whittled the case down to just two live issues, Getty’s task was a difficult one. The fact that it had to fall back on a secondary infringement claim says a lot.

In terms of the mechanics, whether a “substantial part” of the copyright work has been copied is a qualitative rather than a quantitative assessment. Even a very small excerpt can be substantial if it is the “heart” or the most distinctive and essential part of the original work. This means there is no tariff: no number of words, percentage or number of snippets that is safe to copy.

Notwithstanding this, the great practical difficulty that plaintiffs in these cases, Getty included, face is that the AI platforms have taken “a lot of little”: they have taken millions of tiny fragments of information from potentially millions of different works and combined them into a single reservoir of data, where each snippet resides until called upon in response to one or more prompts from a user. The multitude of individual elements can be traced back to the reservoir, for example a data centre in Nevada, but the same elements cannot be traced back to a single source or, for that matter, to a compilation of identifiable works. That will always be the problem in establishing that large language models, whether providing generative AI tools or not, have taken a substantial part of any single work or group of works.

The other problem is that the owners of AI tools can quite legitimately argue that they are not reproducing any work or group of works as such. Instead, properly analysed, what they are doing is learning from, or memorising, each work, but using only snippets of information, knowledge or understanding from each one. These snippets may, in a fashion, be combined later. The difficulty, however, is showing a causal connection between the copyright work and the alleged infringement when an author’s particular contribution has been intermingled with potentially millions of others.
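
As a loose analogy of my own (not one drawn from the judgment), once millions of individual contributions are folded into shared parameters, recovering any one of them is rather like trying to recover a single addend from an average of a million numbers:

```python
# A loose analogy of my own (not drawn from the judgment): once millions of
# individual contributions are folded into a single shared value, that value
# carries no record of what any one contributor added.

import random

# One number standing in for each "work" the system learned from.
contributions = [random.random() for _ in range(1_000_000)]

# The contributions are intermingled into a single shared parameter.
shared_parameter = sum(contributions) / len(contributions)

# The original million values cannot be recovered from the result.
print(f"Shared parameter: {shared_parameter:.6f}")
```

Establishing a causal link between one particular work and that shared value is, in evidential terms, the hurdle described above.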

Plaintiffs in countries like Australia and New Zealand who try to take on the might of the AI barons will face the same uphill battle. The problem is a simple one: the learning exercise is undertaken overseas, probably in the USA, where regulation of AI is minimal and unlikely to change under the current administration. This means that jurisdictions like Australia and New Zealand will need to find their own solutions. Those solutions will only lie at the output end, i.e. where an alleged infringing work is created by a user in that particular jurisdiction.

Absent an international treaty (again a long shot, given the current US administration’s “hands-off” attitude to AI), the domestic law of the jurisdiction in which any alleged infringing act occurs would need to be changed: either to allow a claim for copyright infringement to be brought in the home jurisdiction, or to provide for repatriation of some form of royalty or compensation to those who own the copyright works used to create the AI-generated work. Without this, authors and owners of copyright are being, and will continue to be, severely disadvantaged. The dilemma for authors, however, is that the disadvantage is shared with numerous, potentially thousands or millions of, fellow authors, who are all in the same boat.

The difficulty with a royalty-type scheme will be ensuring that it is fair, that authors are compensated to some degree for their efforts and that royalties reach the right people. It will also be critical that any scheme is properly run and operates in the interests of authors rather than of other vested interests, of which there are many. There is, however, no reason in principle why this could not be achieved.

To do so, there would need to be careful identification of the particular types of copyright works in issue, for example differentiating between musical, literary and artistic works. I suggest that sub-categories would also need to be identified: in the art field, for example, the type of painting (figurative, impressionistic and so on) and the artist or school of art identified in the prompt employed by the user of the AI platform.
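
As a purely hypothetical sketch of how such a classification might operate (every category, keyword and function name below is my own invention, not anything proposed in the judgment or in any existing scheme), a registry could map prompt terms to registered cohorts of works:

```python
# A purely hypothetical sketch of how a royalty scheme might classify an AI
# prompt against registered cohorts of works. Every category, keyword and
# name here is an assumption for illustration only.

from dataclasses import dataclass


@dataclass
class Cohort:
    work_type: str     # e.g. "artistic", "literary", "musical"
    sub_category: str  # e.g. "figurative painting", "impressionism"
    keywords: tuple    # prompt terms that point to this cohort


# A tiny, invented registry of cohorts entitled to share in a royalty pool.
REGISTRY = [
    Cohort("artistic", "impressionism", ("impressionist", "impressionism")),
    Cohort("artistic", "figurative painting", ("portrait", "figurative")),
    Cohort("literary", "crime fiction", ("detective novel", "crime thriller")),
]


def cohorts_for_prompt(prompt: str) -> list[Cohort]:
    """Return every registered cohort whose keywords appear in the prompt."""
    text = prompt.lower()
    return [c for c in REGISTRY if any(k in text for k in c.keywords)]


if __name__ == "__main__":
    prompt = "a figurative portrait in the style of the impressionist school"
    for cohort in cohorts_for_prompt(prompt):
        print(f"Royalty share owed to: {cohort.work_type} / {cohort.sub_category}")
```

A real scheme would obviously require far richer metadata and independent administration, which is the governance point made above.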

This way, the royalty could be paid to the right cohort or group of artists rather than to authors generally. The exercise will be difficult, but with the power of AI and a highly incentivised artistic community, it is clearly achievable. That is not to say that big tech will support the idea; on the contrary, it will fight it, through the courts and in the corridors of power.

In my view, it is a losing battle: fitting square pegs into round holes is seldom successful. My recommendation is to start again and find a remedy that works for the particular situation, even if that means starting from scratch and either reimagining what copyright is or creating a new sui generis right that is fit for purpose in the AI age.


Clive Elliott, Barrister

I live and work in Auckland, New Zealand. I am a frequent writer and commentator on intellectual property and information technology issues. I am a barrister and arbitrator. Before going to the Bar in 2000, I was a partner and headed the litigation team at Baldwin Shelston Waters/Baldwins. I took silk in 2013. Feel free to contact me via phone, email or social media.