AI Ethics

How to use AI with integrity, fairness, and transparency.

Generative AI tools can support us in many areas of our lives, including our learning, but only when used ethically. This means using AI in a way that aligns with Kent’s values.

In your studies, this means being honest and transparent, and taking full responsibility for your work.

Core Ethical Principles

1. Academic Honesty & Transparency

AI use should never be hidden. If your module allows AI use, you must be clear about when and how you’ve used it. This is essential for maintaining trust and integrity in your work.

  • Follow the module guidance on AI use.
  • If AI contributes to your work, disclose this clearly and, when required, cite it in the format noted by your lecturer. This may be a declaration sentence, or you may be required to complete a cover sheet. Always check with your module convenor or primary tutor.
  • Never present AI-generated text or ideas as entirely your own.

You can find more about AI and Academic Integrity on our page below: 

2. Human Oversight  

Whether in your studies or outside of them, AI is not a replacement for your own thinking. It can produce incorrect or fabricated content (known as "hallucinations") or misjudge your circumstances due to missing context.

Using AI responsibly means verifying its output.

  • Always review, fact-check, and evaluate AI outputs.
  • Use AI for support with tasks, and never hand over control to the AI tool. AI tools are best used as personal assistants that you guide.
  • Remember: your critical thinking, not the AI tool, must make the final decision. This decision is always your responsibility.

3. Fairness, Bias & Cultural Sensitivity

AI tools are trained on existing data. Because that data is created by people, it carries our biases, which AI tools can then perpetuate. Additionally, AI tools analyse huge amounts of data to produce outputs that reflect the "most likely" response to a question. This processing can be reductive and can perpetuate biases, because it favours the "most common" responses rather than the "most correct" ones.
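
To make this concrete, here is a deliberately simplified sketch in Python. It is a toy illustration with made-up answers, not how any particular AI tool actually works, but it shows why favouring the most common response in the data is not the same as finding the most correct one:

```python
# Toy sketch only (hypothetical data, not a real AI tool):
# picking the "most common" answer is not the same as picking
# the "most correct" one.
from collections import Counter

# Hypothetical pool of answers found in existing sources.
seen_answers = [
    "Answer A",  # widely repeated claim
    "Answer A",
    "Answer A",
    "Answer B",  # rarer, but potentially the correct one
]

def most_likely_answer(answers):
    """Return whichever answer appears most often in the data."""
    return Counter(answers).most_common(1)[0][0]

# Prints "Answer A": the popular answer wins, correct or not.
print(most_likely_answer(seen_answers))
```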

For both of these reasons, the outputs of AI tools may reflect limited perspectives. In academic study, this is even more likely to be the case: because we regularly work at the cutting edge of our subject areas, less data is available to train the AI model on our particular question, meaning a more limited range of responses may be produced.

This does not mean avoiding AI tools altogether. Ethical use involves being aware of these limitations and finding ways to mitigate them, which means questioning and challenging outputs effectively:

  • Evaluate responses for bias and a lack of inclusivity so that you do not reinforce stereotypes.
  • AI tools process all inputs in the same way, whether that is "what is the difference between an apple and an orange?" or "explain the following complex geopolitical matter". Be mindful of cultural and social implications when interpreting AI-generated content.
  • Check multiple sources, not just AI outputs, for diverse viewpoints.

David Barnes, a senior lecturer in the School of Computing, talks about how AI tools work and why we must be mindful when using them. Click the link below to watch his video.

4. Accountability & Integrity

When this guidance was first written in 2025, we could reliably point to a few key markers for identifying AI-generated content, such as images. Less than 12 months later, AI tools have advanced so quickly that these markers no longer exist.

This does not make it less risky to present AI-generated work as your own; the risk actually increases. If you use AI tools inappropriately, you become less likely to spot when they have hallucinated errors, increasing the risk of producing incorrect work (whether at university, at home, or in the workplace). This may result in serious consequences for yourself and others.

Several studies (Köbis, Doležalová & Soraperra, 2021; Pavão, 2025) have noted that individuals overestimate their ability to identify AI-generated content. Again, far from meaning we cannot or should not use AI tools, these studies indicate the need to be mindful in our use and diligent in our appraisal both of AI outputs and of our own understanding.

Even if AI assists you, the responsibility for your work remains yours. Ethical use means owning your decisions and being ready to explain them.

  • You are responsible for all submitted work.
  • Be prepared to explain how and why you used AI if asked by your lecturer.

When AI Use Becomes Misuse

Using AI can cross into unethical territory when it replaces your own work or violates academic integrity. This is the case at university, and it will very often be the case more widely.

The following behaviours put you at risk of academic misconduct in your studies and, outside your studies, pose the risks outlined above:

  • Generating and submitting content produced by AI instead of completing the work yourself.
  • Copying and pasting AI content, which can lead to errors, false references, or plagiarism.
  • Ignoring module-specific rules on AI use, such as failing to declare AI assistance when required.

Always check your module’s guidance and use our AI Declaration page to disclose AI use correctly.

Quick Ethical Check

If you have used AI tools, ask yourself the following questions before submitting work in any context:

  • “Could I explain and justify how and why I used AI in this work if asked?”
  • “Is all the material I am submitting my own, based on my own thoughts and produced under my own impetus?”
  • “Have I double-checked all AI-generated materials before using them to guide my research?”

If the answer to any of these questions is "no", you need to rethink your approach.

To learn more about how to improve your AI skills, take our course on "Developing Your AI Literacy" by clicking here.

Click below to return to the AI home page.
