Our AI Principles

Purpose

At the University of Kent, we are committed to advancing student-centred education, world-leading research, and responsible professional practice that serve both our community and the wider public good. Artificial intelligence is increasingly shaping how we learn, work, conduct research and create knowledge. Our approach is grounded in fairness, transparency, academic rigour and social responsibility.

As a centre of world-leading research, education and innovation, we place human judgement and critical thinking at the heart of our AI use. Our community will harness evolving technology to shape the future in ways that align with our outstanding research and knowledge creation, our student-centred approach to education, and our core values.

Scope

These principles set out the University’s expectations for responsible, ethical, lawful and effective engagement with AI across education, research and professional services. They are designed to support innovation while safeguarding academic integrity, protecting the quality and credibility of our degrees and research, and ensuring that staff and students develop the knowledge, skills and critical understanding needed to use AI confidently, responsibly and in ways that contribute positively to society.

AI at Kent: Our Principles for Responsible and Effective Use of Artificial Intelligence

Read more about our ten principles of responsible AI use at Kent below.

1. AI is an assistive technology. Meaningful human oversight, professional judgement, and academic expertise remain central to learning, assessment, research, and decision-making in all areas of university activity.

2. The University is committed to supporting staff and students to develop the knowledge, skills, and confidence needed to engage with AI critically, responsibly, and effectively. Training and guidance on the use of AI, including its benefits, risks, and limitations, will equip staff and students with the skills to support their work, studies, and future careers.

3. AI should support learning, critical thinking, and skill development. Teaching materials and learning activities will be academically led and pedagogically sound, with staff retaining responsibility for their design, quality, and educational value. AI should assist and enhance academic practice and student learning, not replace academic engagement or diminish the learning experience.

4. Academic judgement about student work remains the responsibility of academic staff. AI will not be used to make decisions about marks or academic outcomes unless its use is explicitly authorised, clearly communicated, pedagogically justified, and subject to human oversight at all times.

5. Use of AI must be honest, responsible, and appropriately transparent. Individuals remain accountable for the accuracy, quality, and integrity of their work, regardless of whether AI tools are used to support it.

6. AI must be used in ways that are lawful, ethical, and aligned with the University’s values. This includes respecting data protection, privacy, intellectual property, and confidentiality requirements so that the University complies with its legal and regulatory obligations.

7. AI tools used within the University must meet appropriate standards for safety, security, reliability, and vendor assurance, particularly where personal or sensitive data is involved. Early engagement with relevant policies, guidance, and support is essential to mitigating any risks associated with the use of AI.

8. AI tools may embed bias in decision-making as a result of how they were developed. Responsible use includes conscious attention to the risk of bias so that disadvantage to any group is eliminated or mitigated. AI should only be used in ways that safeguard fairness, accountability, and accessibility.

9. The University is aware of the environmental and sustainability impacts of AI use and is committed to training staff and students in the efficient and effective use of AI to mitigate these impacts, in line with the University’s broader sustainability objectives.

10. AI technologies and practices evolve rapidly. These principles, and the guidance that supports them, will be reviewed regularly to ensure they remain appropriate and effective.

How these principles are applied in practice

These principles set the University’s overall expectations for responsible AI use. However, specific requirements about whether and how AI tools can be used may vary by context.

Guidance on AI in specific university activities may set out defined instructions, requirements and restrictions on its use. For instance, use of AI in assessment will be set out in assessment briefs for each module, and service-specific guidance on the appropriate use of AI in specific professional contexts may be defined locally. Always check your assignment brief, relevant policy, or local guidance, and speak to your Module Convenor or line manager if you are unsure.

On related policies

In some areas, the use of AI is governed by specific University policies, regulations, or external requirements. These must always be followed.

The AI Policy Group is undertaking a university-wide review of current policies in light of developments in AI tools and their potential applications. Where university-wide policies align closely with AI, they will be linked here once their review has been completed.

Guidance and FAQs on AI tools at Kent

For staff

To read more about the AI@Kent team and to receive guidance on AI for your role, please click here to visit the AI@Kent SharePoint page.

Following the announcement that Kent will be providing ChatGPT to all staff and students, we have provided some FAQs. If you are a member of staff, click here to read the project FAQs on SharePoint, or click here for the technical FAQs (also on SharePoint).


For students

To read more about AI use at Kent including responsible and ethical use, please visit our "Developing Your AI Literacy" module on Moodle.

If you are a student, click here to read the student FAQs via the "Developing Your AI Literacy" module on Moodle.

Practical tips for using AI the right way

Explore our guidance pages for more information and advice, taking the following as your starting point.

- Use AI ethically and responsibly
- Follow guidance provided by your module convenor
- Do not submit AI-generated work as your own
- Do not share personal data with AI tools

Where can I learn more?