Copyright: what you need to know

AI and copyright

Generative AI is a rapidly evolving technology that processes vast amounts of existing data to generate new content – including text, images, and code – based on user prompts. This raises complex questions around copyright that are still being worked out in law and policy.

This page focuses solely on UK copyright law and its relationship with AI. It does not cover data protection, ethics, or intellectual property more broadly, and it does not cover the terms and conditions of specific AI tools, which vary across global jurisdictions.  

The use of Generative AI is governed by the Copyright, Designs and Patents Act (1988) and developing UK case law. Because the law predates modern AI, there is no legislation written specifically for it – the framework is being shaped gradually through the courts. 

Under the Copyright, Designs and Patents Act (1988), copyright only protects works that are ‘original’ and created by a human author. There are two points at which copyright becomes relevant when using AI: what you put in, and what comes out.  

Authorship of AI-generated works

Under Section 9(3) of the Copyright, Designs and Patents Act (1988), for works that are ‘computer-generated,’ the author is deemed to be the person who makes the arrangements necessary for the creation of the work – for example, the person who writes the prompts. However, this only applies where the work also meets the ‘originality’ requirement.     

Text and data mining

AI companies generally rely on the text and data mining exception as a legal defence for scraping data to train their models. Section 29A of the Copyright, Designs and Patents Act (1988) permits text and data mining for non-commercial research purposes, but this is not a ‘blanket right’ to scrape any database. You must have lawful access to any material you are mining, and the exception does not override a database’s terms of use.

Case law in practice

The originality threshold

Following the Court of Appeal ruling in THJ v Sheridan (2023), copyright only protects works that are the ‘author’s own intellectual creation’. This requires the human user to have made ‘free and creative choices’. Purely automated AI output, without significant human creative intervention, is unlikely to qualify for copyright protection.

The ‘ingestion vs. storage’ distinction

The landmark UK High Court case Getty Images v Stability AI (2025) clarified an important point: AI models do not ‘store’ training images in the traditional sense. Because the trained model does not scan and copy data in the way a human would, simply using an AI tool is generally not, in itself, copyright infringement.

What is changing

The UK government is currently conducting a major review of AI and copyright policy. Following the Data (Use and Access) Act 2025, the Government laid a report and economic impact assessment before Parliament in March 2026. The report, Report on Copyright and Artificial Intelligence (March 2026), outlines the key issues and policy considerations. Further policy development and consultation are ongoing, and this is expected to inform future legislation.     

Output ownership: who owns the results?

Because copyright under Section 9(3) of the Copyright, Designs and Patents Act (1988) only protects original works, ownership of AI-generated content is not straightforward.

  • The default position: Content generated purely by an AI prompt is generally not protectable by copyright in the UK. You may not legally ‘own’ the raw text or images produced, and you may be unable to prevent others from using them.   
  • The ‘human touch’: If you significantly adapt, creatively edit, or arrange AI output into a larger original work, copyright may apply to your unique contribution. Following THJ v Sheridan (2023), the threshold is high: you must demonstrate that you made ‘free and creative choices’ that stamp the work with your personal touch.
  • Infringement risk: When you use generative AI, the legal risk of copyright infringement shifts to you as the user. AI models can ‘memorise’ specific training data, meaning an output may reproduce a ‘substantial part’ of an existing copyrighted work without you being aware of it. Under Section 16 of the Copyright, Designs and Patents Act (1988), liability rests with the person who publishes or shares that output. Two things to watch for in particular: distorted watermarks or logos in an AI-generated image are a strong indicator that the tool has reproduced a protected source; and if a text or image output looks like a near-copy of an existing work, treat it as a potential infringement before using it.
  • Keep a record of your prompts: Because this is an evolving area of law and ownership can be difficult to establish retrospectively, it is good practice to keep a record of the prompts you use. AI tools will have their own policies on this – check the guidance for whichever tool you are using. 

Input risks: protecting your IP and library licences

  • Licensing restrictions: Do not upload full-text articles or eBooks from the library into public AI tools. Most library licences strictly prohibit this, and doing so may breach the University’s legal agreements with publishers.
  • The training data risk: Many free AI tools use your inputs to train their models. Uploading unpublished research, draft chapters, or sensitive data could result in your intellectual property being absorbed into the AI’s training set, potentially jeopardising future patents or publication rights. Subscription and enterprise versions of AI tools often include settings to prevent your inputs from being used for training, but you will need to check the terms and conditions and ensure you have configured the tool correctly.
  • Kent-supported AI tools: The University provides access to tools such as ChatGPT Edu and Microsoft Copilot through institutional accounts. These services offer improved data protection compared to public versions, but they do not remove copyright or licensing restrictions. You must still not upload licensed library content, unpublished research intended for publication, or other sensitive material unless you are certain this is permitted. Always follow University guidance for the specific tool you are using.

Practical guidance

  • Use Kent-supported tools: Where appropriate, tools such as ChatGPT Edu or Microsoft Copilot via your Kent login offer better data privacy than free public versions, but standard copyright and licensing restrictions still apply.
  • Declare AI use in your research and assessments: check your School’s specific requirements and read the University of Kent AI and Academic Integrity guidance.
  • Cite and acknowledge AI use when publishing: COPE (Committee on Publication Ethics) guidance states that AI cannot be listed as an author, and publishers will have their own requirements. Read our FAQs on Academic Publisher Position for more detail.

Help

We provide advice, training and specific guidance on copyright law to support you in your work and study. If you have any questions about copyright, email copyright@kent.ac.uk.

Last updated