Data Privacy

Generative AI tools process everything you type, and many use this data to train their systems. This means anything you enter may not stay private. Understanding these risks will help you keep your personal and academic information safe.

Are you a member of staff?

If so, please access SharePoint for guidance relevant to you.

Please read Data Protection's guidance on AI use to ensure that any use is compliant.

Guidance for students

Many commonly used AI tools can store, share, or be trained on the data you input. If you enter personal information (such as names, email addresses, or opinions about people), confidential information, or other people's work into an AI model, that data is no longer under your control. It could be used in ways you did not expect, including forming part of the output generated for other people.

For these reasons, you should not enter personal information, confidential information, or other people's work into an AI tool.

Be aware that free AI tools do not have the safeguards that approved tools do. Where possible, use the University's licensed AI products: stronger contractual agreements limit how the technology provider can use your input, offering greater security.


Why This Matters

Generative AI works by analysing huge amounts of data, including your inputs. If you enter personal or sensitive information, you risk:

  • Personal data exposure – Your personal information could become public.
  • Breach of confidentiality – Sharing sensitive third-party data or research participant data without consent could constitute a serious breach of data protection law and/or confidentiality.
  • Loss of control over intellectual property – Unpublished or proprietary material could be stored and reused by AI providers.
  • Your data being reused – Many AI tools use your content to improve their models in ways you may not expect.

Example: If you enter your full name and contact details when asking a question in an AI tool, that information could later appear in a response given to someone else.

If you are doing a research project that involves human participants and you would like to use AI, please contact the Data Protection Office and visit our SharePoint page for guidance at the start of your project.

Important take-away points

  • Keep personal and sensitive data out of AI tools. Do not input information that can identify you or other people, or confidential information such as health data, passwords and bank details.
  • Protect intellectual property. Do not enter unpublished research or other people's work without their permission.
  • Check the tool’s terms and privacy settings. Understand how your data will be stored or used before entering any content.

✅ Where possible, turn off data collection.

For example:

  • ChatGPT's data controls let you stop your conversations being used to train the model. Other tools have similar settings.
  • Choose trusted tools. The University of Kent’s Microsoft Copilot Chat includes enterprise-level protections against data collection.

Tip for Safer Use

If you’re unsure whether something is safe to share, don’t share it. Treat AI tools like social media. Always ask yourself:

“Would I be happy if this appeared online?”

If the answer is no, keep it out of AI tools.

