AI and Academic Integrity

As a student at the University of Kent, it is important to understand how generative AI technology interacts with academic integrity. While generative AI tools can be valuable in your learning and in preparing for assessments, it is crucial to distinguish between using these tools for those purposes and generating content that you cannot claim as your own work.

Appropriate use

Generative AI tools are accelerators. It is your decision what you let them accelerate: good habits or bad ones.

We encourage you to use generative AI to enhance your critical thinking skills rather than replace them. Some ways to do this include:

✅ Engaging in discussion with AI tools as a "study buddy" (always double-checking everything they output).
✅ Helping you to identify positive and productive study habits or schedules that work for you.
✅ Searching for potential reading materials (again, always checking their relevance and refining your searches as you go).

The examples given above foreground your own critical thinking and encourage you to use AI to accelerate good learning and studying habits.

With this in mind, unless you are specifically requested to do otherwise, the content of your submitted assessments must always be your own work. Presenting a Generative AI output or another person’s work as your own is a breach of academic integrity. You are expected to engage in good academic practice that is consistent with the University's six fundamental values of academic integrity: honesty, trust, fairness, respect, responsibility, and courage.

For more comprehensive guidance on using AI in your studies, visit our Moodle module, "Generative AI: Developing your AI Literacy".

You can also visit the Kent AI Prompt Bank, our extensive collection of example prompts that demonstrate appropriate uses of AI which foreground and enhance your critical thinking skills.

The University recognises the potential for generative AI to enable academic dishonesty in assessments, and is updating its regulations on academic misconduct to clarify policy around the use of generative AI in your learning and assessment.

What You Need to Know

  • Assessment-level guidance is key

Each assignment or module may have different rules about whether AI tools can be used. Where they can, there may also be restrictions on how they can be used. Always check your assignment brief or ask your Module Convenor if you're unsure.

  • Academic integrity still applies 

Whether or not AI can be used, the final submission must demonstrate your own understanding, critical thinking, and academic judgement. Submitting content that misrepresents your contribution, whether written by another person or AI, is a breach of academic integrity.

What ‘Your Own Work’ Means When Using AI 

In some cases, you may be permitted to incorporate AI-generated content, similar to how you might incorporate ideas from books or websites, with clear attribution and academic judgment. 

In other assessments, it may be the case that no AI use is allowed at all. The key principle is honesty and transparency, guided by your specific assessment requirements.

Unless otherwise noted, you should:

  1. Not include materials generated by AI in your submissions.
  2. Not submit materials that you have written but that have been substantially altered by AI. By "substantially altered", we mean that your work should not be altered more than it would be by the spellchecking or grammar-checking software found in Microsoft Office products (i.e. a spellchecker that is not AI-enabled).

⚠️ Generative AI Declaration

Some schools may also ask you to include a short declaration in your assessment submissions to confirm whether you have used generative AI and to what extent. 

Where required, the wording and expectations for these declarations will be specified in the assignment brief.  

By following this guidance, you can use Generative AI responsibly while maintaining academic integrity and avoiding misconduct.  
