Last week I had the pleasure of attending a private screening of the new feature film “Moss & Freud”, produced by leading Kiwi film producer Matthew Metcalfe. Having acted for Matthew over the years, including on this particular film, it was great to see the final product.
I loved “Moss & Freud”. The film centres on Lucian Freud’s famous portrait of English supermodel Kate Moss, “Naked Portrait 2002”. What I personally enjoyed most was the “up close” portrayal of the artistic process and how creative minds work. This leads to the topic I would like to discuss in this post and those that follow. The fascinating fact is that the painting took no less than nine months to complete; towards the end, Kate Moss was sitting for Freud seven nights a week so that it could be finished before she gave birth to her daughter, Lila.
The reason this is significant will become apparent from the discussion that follows: AI, and generative AI in particular, can create a work of art potentially as engaging, and some might say as confronting, as this unique portrait. The difference, however, is that while it took Lucian Freud nine months to finish the painting, with the right knowledge and prompts a platform like DALL·E could potentially do the same in an hour, indeed far less.
In these posts, I set out why I think that, in the AI age, we have no choice but to re-examine and repurpose the whole notion of copyright authorship.
In my view, the convergence of AI technology and copyright law presents fundamental challenges to traditional concepts of authorship, originality, and creative ownership. I will look at how AI-assisted creation is reshaping the cognitive processes underlying authorship, while simultaneously testing the boundaries of established copyright doctrine. I also put forward some suggestions as to how we can address these challenges.
Before I do so, I discuss a couple of recent cases that are attracting attention around the world.
Before looking at how the law needs to change, we first need to understand what is and is not protected at present. The recent spate of AI/copyright cases is testing the boundaries, and while this discussion is far from exhaustive, it is worth looking at a couple of them.

In Bartz v Anthropic,[i] US District Court Judge William Alsup had to determine whether Anthropic’s use of copyrighted books to train its LLM, Claude, qualified as “fair use” under Section 107 of the US Copyright Act. The judge provided an interesting summary of how these LLMs were created and how the founders of the company set out, at huge cost and effort, to build a central library of “all the books in the world” to retain “forever.” It was into this central library that the company scanned and inputted millions of books in digital form before setting about training its models. Some of the books were purchased while others were pirated, but that distinction is not discussed in this article. What is relevant is that while the authors bringing the claim alleged that the books had been copied without their authority, they did not allege that the Claude service ever provided users with an infringing copy of their works, or that Claude had ever supplied any user with an exact copy or substantial knock-off.[ii] Indeed, as the judge notes, once the books had been copied, cleaned, tokenized and compressed, and the LLM had been trained, “an LLM did not output through Claude to the public any further copies.”
If this description of the way Claude was created and operated is accurate, it is, in effect, like a lecture theatre full of the brainiest professors in the world, who have read all of the relevant literature and are at your beck and call to answer any question. As the court put it,[iii] Claude “could receive new text inputs and return new text outputs as if it were a human reading prompts and writing responses.” Conceptually, Claude is little different from a professor or any other human collaborating with an author; working with it is as close to human-to-human collaboration as one can imagine.
Before analysing the decision more closely, it is worth noting what the trial judge stated about where copyright does not extend, stressing at page 13 that:
Copyright does not extend to “method[s] of operation, concept[s], [or] principle[s]” “illustrated[ ] or embodied in [a] work.” 17 U.S.C. § 102(b); see, e.g., Nichols v. Universal Pictures Corp., 45 F.2d 119, 120–22 (2d Cir. 1930) (Judge Learned Hand) (stage properties and storytelling elements); Apple Comput., Inc. v. Microsoft Corp., 35 F.3d 1435, 1445 (9th Cir. 1994) (“user-friendly” design principles and elements); Swirsky v. Carey, 376 F.3d 841, 848 (9th Cir. 2004) (music theory principles and chord progressions).
Judge Alsup then considered the question of memory and creative elements in an AI context and said:[iv]
For centuries, we have read and re-read books. We have admired, memorized, and internalized their sweeping themes, their substantive points, and their stylistic solutions to recurring writing problems… if someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not.
The court also assessed what it saw as the underlying transformative purpose of the exercise of creating the LLM, stating:[v]
The purpose and character of using copyrighted works to train LLMs to generate new text was quintessentially transformative… More specifically, Anthropic used copies of Authors’ copyrighted works to iteratively map statistical relationships between every text-fragment and every sequence of text-fragments so that a completed LLM could receive new text inputs and return new text outputs as if it were a human reading prompts and writing responses.
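To make the court’s description a little more concrete, here is a deliberately toy illustration of what “mapping statistical relationships between text-fragments” can mean. It is my own sketch, not Anthropic’s technology: real LLMs use neural networks trained on billions of tokens, whereas this simply counts which word tends to follow which and then samples new text from those counts.

```python
# Toy sketch only: NOT how Claude or any real LLM is built. It merely shows,
# in miniature, the idea of learning statistical relationships between
# text fragments and then generating new text from them.
import random
from collections import defaultdict


def train(corpus: list[str]) -> dict[str, dict[str, int]]:
    """Count how often each token is followed by each other token."""
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        tokens = text.lower().split()        # a crude stand-in for tokenization
        for current, nxt in zip(tokens, tokens[1:]):
            counts[current][nxt] += 1        # the "statistical relationship"
    return counts


def generate(counts: dict[str, dict[str, int]], start: str, length: int = 10) -> str:
    """Produce new text by sampling from the learned frequencies."""
    token, output = start, [start]
    for _ in range(length):
        followers = counts.get(token)
        if not followers:
            break
        token = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(token)
    return " ".join(output)


if __name__ == "__main__":
    corpus = [
        "the artist painted the portrait over nine long months",
        "the model sat for the artist seven nights a week",
    ]
    model = train(corpus)
    print(generate(model, "the"))  # new text, not a verbatim copy of the corpus
```

Even at this miniature scale, the intuition behind the court’s description is visible: what is retained is a table of statistical relationships rather than the source sentences themselves, and what comes out is newly generated text rather than a copy of any work used in training.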
The Court’s approach in Bartz v Anthropic has not found favour elsewhere.[vi] In fact, in the same District Court, the Northern District of California, Judge Vince Chhabria in Kadrey v Meta Platforms[vii] took issue with Judge Alsup’s overall approach, saying that he had “focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on”[viii] and that some of his analogies were inapt.
Even though the Court found in favour of Meta on this occasion, it did so with considerable reluctance and sent a shot across the bows of the AI industry, stating:
The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.
While Bartz v Anthropic and Kadrey v Meta Platforms deal with general-purpose LLMs, there is other litigation in the music space. In June 2024, the Recording Industry Association of America (RIAA), which represents the big record labels, brought copyright infringement proceedings against two music-generation platforms, Suno[ix] and Udio.[x] The record labels allege that these platforms have used their members’ recordings, without permission, to train the AI models that create infringing musical works.
In the Suno case the allegations have been significantly escalated, with the record labels filing an amended complaint on September 19, 2025 adding new “stream-ripping” allegations. According to Billboard, the record companies now allege that Suno illegally downloaded training music from YouTube using a piracy method known as “stream-ripping,” which breaches the Digital Millennium Copyright Act by circumventing YouTube’s encryption measures. This significantly ups the ante. Suno had previously admitted that tens of millions of recordings had been used to train its model, and these must have included recordings whose rights are owned by the record labels bringing the claims. The amended piracy allegations are no doubt brought in reliance on Anthropic’s US$1.5 billion settlement with book authors,[xi] in which evidence of illegal downloading proved crucial. This may prove to be AI developers’ weak spot and the way in which authors can recover some form of compensation for their efforts.
These landmark cases will no doubt largely determine the legal contours of authorship, ownership and fair use in the evolving AI-driven world we live in.
=========================================
[i] Bartz v. Anthropic, No. C 24-05417 WHA (N.D. Cal. 2025)
[ii] Bartz v. Anthropic, at 7
[iii] Bartz v. Anthropic, at 11
[iv] Bartz v. Anthropic, at 12-13
[v] Bartz v. Anthropic, at 11
[vi] I am indebted to my colleague Caitlin Hadlee, with whom I have had a number of interesting discussions and who suggested I read the Meta case before posting, given the divergence in thinking between two judges of the same US District Court, which illustrates that the question is a difficult one and the position is far from clear.
[vii] Kadrey et al. v. Meta Platforms, Inc., No. 3:2023cv03417, Document 598 (N.D. Cal. 2025)
[viii] Kadrey v. Meta Platforms at 3
[ix] UMG Recordings, Inc. v. Suno, Inc. Case No. 1:24-cv-11611, U.S. District Court, District of Massachusetts, filed on June 24, 2024
[x] UMG Recordings, Inc. v. Uncharted Labs, Inc. Case No. TBD, U.S. District Court, Southern District of New York, filed on June 24, 2024
[xi] https://www.npr.org/2025/09/05/g-s1-87367/anthropic-authors-settlement-pirated-chatbot-training-material