AI in Higher Education

AI in the classroom

Generative AI (Gen AI) is transforming higher education by enabling more personalised, efficient, and accessible learning, while also raising complex issues of accuracy, ethics, and academic integrity. Recent UK government and university policies emphasise that responsible adoption requires rethinking assessment design and governance to balance innovation with human judgement, fairness, and academic values.

AI Lesson Download

An Introduction to AI in the Classroom [new 2025]

This lesson introduces students to the key opportunities, risks, and ethical considerations of using generative AI in higher education. Through reading, writing, and discussion tasks, students learn how to apply AI responsibly, evaluate its limitations, and understand institutional policies such as the AI Traffic Light System for academic integrity. Level: B1/B2/C1.

AI Student Checklist [new 2025]

This Student AI Checklist helps learners use generative AI safely, ethically, and within university guidelines. It is important because it promotes academic integrity, protects personal data, and ensures students remain responsible for their own understanding and original work. Level: B1/B2/C1.

Generative AI In Higher Education: Opportunities, Risks and Assessment Design

By C. Wilson (2025)

RESEARCH: This overview was created by analysing current guidance and evidence from the UK Government alongside policies and practice papers from leading UK universities: Glasgow, Leeds, Reading, Sussex, Manchester, Edinburgh, King’s College London, Leicester, Liverpool and Newcastle. Synthesising these sources enabled an in-depth, UK-focused analysis of generative AI in higher education, covering opportunities, risks, academic integrity, assessment design, and student use.

1. What is Generative AI?

Generative artificial intelligence (Gen AI) in education refers to tools such as ChatGPT, Google Gemini, Microsoft Copilot, DeepSeek, Grammarly and Midjourney that create new content, including text, images, code, and simulations, to enhance and personalise teaching and learning. These technologies can support automated feedback, lesson design, adaptive tutoring, and the creation of realistic scenarios. However, they also carry risks such as spreading misinformation, raising ethical concerns, and weakening critical engagement when used without careful evaluation (1-5).

AI: umbrella term for all forms of artificial intelligence.

Generative AI: a specific type of AI designed to produce new data, not just process or recognise it.

2. Applications of Generative AI in Education

Generative AI supports personalised learning, teaching, assessment, accessibility, and productivity by creating content, providing tutoring, analysing data, and simplifying complex tasks (1,4,5).

  • Personalised learning: Adapts pace and content to individual student needs.
  • Content creation: Produces lesson plans, quizzes, emails, teaching materials, and multimedia resources.
  • Virtual tutoring: Provides one-to-one explanations, answers questions, and supports disadvantaged learners.
  • Simulations and scenarios: Creates realistic environments and case studies for applied learning.
  • Assessment and feedback: Assists with grading, formative feedback, and performance tracking.
  • Accessibility: Simplifies language, translates texts, generates alt text, and supports inclusive learning.
  • Text manipulation: Summarises, paraphrases, or reformats written material.
  • Idea generation: Proposes structures, frameworks, and topics for essays or reports.
  • Data analysis and visualisation: Produces graphs, infographics, and visual summaries.
  • Productivity support: Streamlines administrative tasks and improves efficiency.

3. Limitations and Risks of Generative AI in Education

Generative AI provides opportunities such as personalised learning and content creation, but it also poses risks including misinformation, bias, and privacy concerns, which require responsible and critical use (2-5).

Reliability and Accuracy

AI often produces errors or misleading outputs because it cannot reliably distinguish truth from falsehood. In some cases it fabricates information, known as hallucinations, including false references that appear credible but do not exist. Performance can also decline over time through a process called algorithmic drift: accuracy and reliability decrease as the system repeats and recycles the same information, reinforcing existing patterns and errors rather than adapting to new or changing data (4,5).

Bias and Fairness

AI systems are trained on vast datasets that may include stereotypes, cultural biases, or discriminatory patterns, which are often reproduced and even amplified in their outputs. There is a lack of transparency in how these systems generate responses, often described as the black box problem, which limits accountability and clear insight into decision-making processes. AI generally presents information in a confident and authoritative manner, which can encourage users to accept flawed or misleading content without applying sufficient critical evaluation (2,4).

Academic Integrity

By submitting AI-produced work that does not reflect their own understanding, students risk committing plagiarism or academic dishonesty and graduating without genuine expertise or professional competence. Moreover, such excessive reliance contradicts university pedagogical principles, as it weakens critical thinking, problem-solving, and independent learning skills (6,7).

Data Privacy and Compliance

AI systems often process sensitive personal or research data, raising risks of breaches, surveillance, or misuse. Involving third-party vendors, which are external organisations that store, manage, or analyse data on behalf of an institution, introduces additional risks of interception, unauthorised access, or exposure outside the institution’s direct control. Many AI systems are trained on copyrighted material without permission, which raises questions about intellectual property, unclear ownership of outputs, and potential legal disputes. Furthermore, accessibility and equality requirements must be addressed to ensure full compliance with the General Data Protection Regulation (GDPR) and the Data Protection Act (DPA) (3,5).

Ethical and Social Concerns

AI outputs may conflict with institutional values or misalign with ethical standards. In particular, overuse of AI, where automated systems are relied upon more than human judgement, can restrict academic freedom and discourage independent perspectives. This dependence may also reduce creativity and undermine professional expertise by replacing original thought and specialist knowledge with formulaic outputs. Staff–student relationships may also decline if AI is seen as replacing human roles, reducing trust, mentorship, and personal interaction. As a result, students may feel less supported or valued, while staff may experience a loss of professional identity and authority, ultimately weakening the sense of academic community (3,8).

Business and Reputational Risk

Failure to implement responsible AI innovation may compromise a university’s reputation and diminish its competitive position, while inadequate governance or misuse can erode institutional trust and credibility. Universities that adopt inconsistent or poorly defined AI policies risk exposure to privacy breaches, intellectual property disputes, and heightened public scrutiny (9).

4. Designing AI-Resilient Assessments

Assessments most vulnerable to AI are formulaic, decontextualised, and focused on reproducing surface-level knowledge. These are precisely the tasks that AI is designed to perform. The aim is not to create assessments that AI cannot complete, since this threshold continually shifts, but to emphasise human judgement, lived experience, contextual understanding, ethical reasoning, and creativity (10,11,12).

AI-resilient assessment design combines selective invigilation, live components, contextualised and process-based tasks, collaboration, ethical judgement, experiential learning, and guided AI use to promote integrity, critical thinking, and authentic engagement (6-8).

Invigilated and On-Campus Components

  • Incorporate supervised or invigilated elements such as in-person exams, vivas, or practical demonstrations to confirm authorship and understanding.
  • Use the On-Campus Sign-Off Method or short oral defences to verify student ownership of submitted work.

Contextual and Localised Assessment Tasks

  • Design assignments that draw on module-specific readings, seminars, or case studies.
  • Require reference to local data, examples, or institutional contexts that AI systems are less able to replicate accurately.

Interlinked and Developmental Assessment

  • Create portfolios or linked assessments across a module or programme to assess coherence and sustained learning.
  • Ask students to provide annotated drafts, research notes, or reflective commentaries that demonstrate process and progression.

Higher-Order Thinking and Critical Engagement

  • Emphasise skills such as analysis, synthesis, and evaluation in both task design and marking rubrics.
  • Use criteria that reward original argumentation, integration of theory, and engagement with course materials, reducing the grade potential of generic AI outputs.

Authentic and Scenario-Based Tasks

  • Frame assignments around realistic case studies, simulations, or applied problems that include specific constraints (defined time limits, limited data sets, local examples, or scenario-based conditions).
  • Ask students to adapt content for different audiences or purposes to demonstrate audience awareness and critical reflection.
  • Include opportunities for personal insight or experience, connected to academic evidence and theoretical frameworks.

Incorporation of AI within Assessment

  • Allow or require guided AI use under a traffic light system of “Amber” or “Green” conditions, where students critically evaluate AI outputs or demonstrate effective prompting.
  • Tasks may include comparing AI and human writing, improving AI-generated drafts, or reflecting on the ethical implications of AI use.

Programme-Level and Institutional Approaches

  • Embed AI resilience across entire programmes through a balanced mix of supervised, process-based, and long-form assessments.
  • Review assessment structures regularly with educational developers to ensure alignment with emerging AI capabilities and institutional policies.
  • Provide staff development through workshops and communities of practice on authentic assessment design and AI literacy.
  • Engage students in discussions and learning activities that clarify acceptable AI use, fostering transparency and shared responsibility for academic integrity.

Module-Level Assessment Approaches & AI-Resilience (6)

  • Nested Tasks: Sequential activities build into one summative grade, with feedback at each stage. AI-resilience benefit: tracks student progress; harder to outsource because development is visible.
  • Processfolio: A collection of artefacts documenting one assignment’s development, with commentary. AI-resilience benefit: requires justification of the learning process; reveals engagement and critical reflection.
  • Multimedia/Hybrid: Assignments in authentic formats (blogs, vlogs, podcasts, posters, short films). AI-resilience benefit: motivating and employability-focused; less likely to be outsourced to AI.
  • Oral Assessments: Live tasks such as debates, pitches, proposals, simulations, or vivas. AI-resilience benefit: tests creativity, evaluation, and use of own data; difficult to answer with AI-prepared material.
  • Concept Maps/Visuals: Graphic representations of knowledge or research (e.g. concept maps, abstracts). AI-resilience benefit: AI can struggle with authentic visuals; externalises understanding of internal learning.

5. Pen & Paper Tests

Some universities are returning to handwritten, invigilated assessments to ensure authenticity and prevent student over-reliance on AI. For example:

The University of Liverpool emphasises that assessments must “never replace original thought, independent research, and the production of original work,” recommending traditional, invigilated, handwritten formats as part of a wider strategy to safeguard authenticity and uphold academic integrity (14).

Newcastle University recommends using assessment formats that “are less vulnerable to AI misuse, such as invigilated, in-person pen and paper exams, to preserve academic integrity in a world where generative AI is widely accessible” (15).

Types of Pen and Paper Tests (16)

  • Closed-book exam: Assesses recall of knowledge and understanding of core concepts. Resists AI because there is no access to devices; students rely on memory and preparation.
  • Open-book exam (on paper): Assesses application of knowledge using permitted notes. Resists AI because work is still handwritten and AI tools are inaccessible in the exam room.
  • Short-answer test: Assesses definitions, key terms, and quick factual recall. Resists AI because it is time-limited and handwritten; students must rely on their own knowledge.
  • Essay questions: Assess critical thinking, argument building, and synthesis. Resist AI because they are handwritten under invigilation; students cannot copy from AI.
  • Problem-solving tasks: Assess application of formulas, calculations, and logical reasoning. Resist AI because they require step-by-step working out by hand.
  • Data analysis: Assesses interpretation of charts, graphs, or tables. Resists AI because visual data interpretation is not easily outsourced under exam conditions.
  • Critical commentary: Assesses evaluation of a short text, image, or extract. Resists AI because students must analyse unfamiliar material independently.
  • Translation test: Assesses language skills, vocabulary, and grammar accuracy. Resists AI because it is handwritten, with no access to online translation or AI tools.
  • Diagram labelling/drawing: Assesses visual knowledge and spatial understanding. Resists AI because it requires manual recall of terminology and drawing ability.

6. The Gen AI Traffic Light System

The Gen AI Traffic Light System sets clear policy on academic integrity by defining when AI use is prohibited (red), allowed in an assistive role (amber), or required as part of the assessment (green). It ensures students use AI responsibly, supporting learning without undermining originality, fairness, or academic standards (6,9,12).

AI Traffic Light Assessment Guide
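To make the policy concrete, below is a minimal sketch, in Python, of how an institution might encode traffic light conditions per assessment and check an intended AI use against them. The red/amber/green definitions follow the scheme above; the assessment names, permitted activities, and function names are hypothetical illustrations, not any institution’s actual system.

    from enum import Enum

    class AIUse(Enum):
        """Traffic light categories for generative AI in assessment."""
        RED = "prohibited"      # no AI use permitted
        AMBER = "assistive"     # AI allowed in a supporting role only
        GREEN = "integrated"    # AI use is required as part of the task

    # Hypothetical per-assessment policy for one module.
    POLICY = {
        "in-person exam": AIUse.RED,
        "reflective essay": AIUse.AMBER,
        "AI critique task": AIUse.GREEN,
    }

    # Assistive activities an institution might permit under "amber".
    AMBER_ALLOWED = {"grammar check", "summarising notes", "structure feedback"}

    def is_permitted(assessment: str, activity: str) -> bool:
        """Return True if the activity is allowed for this assessment."""
        level = POLICY[assessment]
        if level is AIUse.RED:
            return False                      # red: all AI use prohibited
        if level is AIUse.AMBER:
            return activity in AMBER_ALLOWED  # amber: assistive uses only
        return True                           # green: AI use is expected

    print(is_permitted("reflective essay", "grammar check"))       # True
    print(is_permitted("reflective essay", "drafting the essay"))  # False
    print(is_permitted("in-person exam", "grammar check"))         # False

Publishing a table like POLICY alongside each module handbook, and requiring students to check it before using any tool, is one way to make the traffic light conditions auditable rather than implicit.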

7. Microsoft Copilot (Institutional Use)

Microsoft Copilot, integrated within the university’s Microsoft 365 suite, provides students with a secure, institutionally supported AI assistant that enhances productivity while protecting data privacy and academic integrity (5,7,9). It provides:

  • Data security and privacy: When accessed through a university Microsoft 365 licence, Copilot complies with GDPR and UK data protection standards, meaning student work and personal data are not harvested to train external AI models.
  • Integration with learning tools: Copilot works directly within Word, PowerPoint, Excel, Teams, and Outlook, streamlining study tasks like summarising lecture notes, generating revision questions, or creating presentation drafts.
  • Equity of access: Universities that provide Copilot give all students the same secure toolset, reducing unfair advantages that come with subscription-based commercial AIs.
  • Academic integrity support: Copilot can be positioned as an assistive tool (grammar, structure, summarising) rather than a content generator, aligning with institutional traffic-light policies (amber use).
  • Efficiency and productivity: It helps students manage workload by automating admin tasks (e.g. formatting references, cleaning transcripts, organising notes) so they can focus on higher-order thinking and original analysis.
  • N.B. Copilot is not available in China.

8. The Dos and Don’ts of using AI at university

This infographic outlines key dos and don’ts for using AI responsibly in academic work. Presenting the information visually makes it easier to understand, compare, and remember essential points on transparency, critical thinking, and ethical AI use.

What you can and cannot do with AI at University

9. Referencing AI

Citing AI in assignments ensures transparency and academic integrity by showing how and when these tools contributed to your work. It also allows tutors to evaluate your independent learning while recognising the role of AI in supporting, but not replacing, your academic skills (5).

Gen AI is evolving rapidly and there is not yet consensus on how to acknowledge and reference it. As a minimum, an acknowledgement should include the following:

In-text Citation (Harvard Style)

In-text Citation: (Microsoft, 2025)

Reference list: Microsoft (2025) Copilot (GPT-4) [Large language model], accessed 28 September 2025. Available at: https://copilot.microsoft.com

Image: The visual representation of digital transformation was generated using Microsoft Copilot (Microsoft, 2025).

Acknowledgement: I acknowledge the use of Microsoft (2025) Copilot (GPT-4) (https://copilot.microsoft.com) to generate draft ideas and provide feedback on grammar and style.
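Because the required elements (publisher, tool, model, year, access date, URL) are fixed, they can be templated. Below is a minimal sketch, in Python, of a formatter that assembles the Harvard-style in-text citation and reference list entry from those elements; the field and function names are hypothetical, and local referencing guidance should always take precedence.

    from dataclasses import dataclass

    @dataclass
    class AITool:
        """Details needed to reference a generative AI tool (Harvard style)."""
        publisher: str   # e.g. "Microsoft"
        name: str        # e.g. "Copilot"
        model: str       # e.g. "GPT-4"
        year: int
        url: str
        accessed: str    # e.g. "28 September 2025"

    def in_text(tool: AITool) -> str:
        """In-text citation, e.g. (Microsoft, 2025)."""
        return f"({tool.publisher}, {tool.year})"

    def reference_entry(tool: AITool) -> str:
        """Reference list entry matching the pattern shown above."""
        return (
            f"{tool.publisher} ({tool.year}) {tool.name} ({tool.model}) "
            f"[Large language model], accessed {tool.accessed}. "
            f"Available at: {tool.url}"
        )

    copilot = AITool("Microsoft", "Copilot", "GPT-4", 2025,
                     "https://copilot.microsoft.com", "28 September 2025")
    print(in_text(copilot))          # (Microsoft, 2025)
    print(reference_entry(copilot))  # Microsoft (2025) Copilot (GPT-4) ...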

10. AI in English Language Courses

So, where does this leave English language courses? Ultimately, the decision rests on the pedagogical principles underpinning the course. What are the intended learning outcomes? For instance, if the goal is to raise a student’s IELTS score from 5.5 to 6.5, the emphasis may need to fall, to some extent, on the mechanical aspects of language development. Because EAP is heavily skills-based, this raises pressing questions: to what extent should students be permitted to use AI, and how can tutors regulate its use? The challenges are significant, and policies must remain both clear and flexible in response to an ever-evolving AI environment.

Specific questions highlight the complexity: Is it acceptable to use AI for translating academic texts, generating outlines, or employing paraphrasing and referencing tools? To what degree should students rely on assistive AI such as Grammarly or MS Word’s built-in checkers? To what extent may they ‘polish’ their work before it compromises authorship? Most challenging of all, how can tutors reliably determine what proportion of the work is genuinely the student’s own, especially when AI can now be trained to reproduce the linguistic errors of a B2-level learner?

Possible solutions include:

  • Reintroduction of Traditional Methods: Reinstate pen-and-paper classroom activities such as defining terminology, paraphrasing, summarising, note-taking, and vocabulary development. These practices not only reinforce core linguistic skills but also enable the creation of a Tutor Portfolio, wherein handwritten work can serve as verifiable evidence in cases of suspected AI-related academic misconduct.
  • Defining Acceptable AI Use: Establish clear parameters for the permissible use of generative AI within language learning contexts. For example, AI tools may be utilised for surface-level correction, such as grammar, vocabulary, and punctuation, but must not be employed for generating original content or completing substantive academic tasks.
  • Assessing Student Comprehension: Integrate assessment methods that directly evaluate a student’s understanding of subject matter through oral formats, including vivas, oral defences, and structured in-person verification procedures such as the On-Campus Sign-Off Method, in which students draft, edit, or finalise their assignments under supervised conditions.
  • Institutional AI Policy Framework: Develop and implement a comprehensive AI policy at institutional level, incorporating an “AI Traffic Light” system that clearly delineates acceptable and prohibited uses. All students should be required to sign an AI usage declaration at the beginning of their course to ensure informed compliance.
  • Data Protection and Privacy Compliance: Promote awareness of data security and ensure full adherence to GDPR and UK data protection regulations. Student work, instructional materials, and personal data must not be uploaded to AI platforms without explicit consent and appropriate safeguards.
  • AI Literacy and Risk Awareness: Embed AI training modules within the curriculum to educate students on the limitations, ethical considerations, and potential risks associated with over-reliance on AI technologies. These sessions should provide practical strategies for mitigating misuse and fostering responsible engagement.

11. Conclusion

Generative AI is reshaping higher education by introducing new possibilities for personalising learning, streamlining teaching, and supporting accessibility, while simultaneously creating fresh challenges around accuracy, ethics, and academic integrity. The analysis of UK Government guidance and policies from leading universities demonstrates that responsible integration requires more than simply regulating tool use; it demands a shift in assessment design, with greater emphasis on human judgement, lived experience, and critical engagement. Frameworks such as the AI Traffic Light System provide clarity on when and how AI may be used, while the adoption of secure tools like Microsoft Copilot highlights how institutions can support innovation without compromising data privacy or fairness. Moving forward, universities must strike a balance between embracing AI’s opportunities and protecting the values that underpin academic study.

12. References

  1. Department for Education. Generative artificial intelligence (AI) in education [Internet]. GOV.UK; 2023 [cited 2025 Sep 26]. Available from: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
  2. University of Glasgow. Artificial intelligence in learning: important limitations and problems [Internet]. 2024 [cited 2025 Sep 26]. Available from: https://www.gla.ac.uk/myglasgow/sld/ai/students/#ai%3Aimportantlimitations%2Cimportantproblems
  3. UK Parliament. Artificial intelligence: education and impacts on children and young people [Internet]. Parliamentary Office of Science and Technology (POST); 2023 [cited 2025 Sep 26]. Available from: https://post.parliament.uk/artificial-intelligence-education-and-impacts-on-children-and-young-people/
  4. University of Reading. Generative AI and university study: limitations [Internet]. LibGuides; 2024 [cited 2025 Sep 26]. Available from: https://libguides.reading.ac.uk/generative-AI-and-university-study/limitations
  5. University of Edinburgh. Introducing AI in our learning technology [Internet]. Information Services; 2024 [cited 2025 Sep 26]. Available from: https://information-services.ed.ac.uk/learning-technology/more-about-learning-technology/introducing-ai-in-our-learning-technology-1
  6. King’s College London. Authentic assessment: approaches to assessment in the age of AI [Internet]. 2024 [cited 2025 Sep 26]. Available from: https://www.kcl.ac.uk/about/strategy/learning-and-teaching/ai-guidance/approaches-to-assessment/authentic-assessment
  7. University of Sussex. Developing writing assignments [Internet]. Staff guidance; 2024 [cited 2025 Sep 26]. Available from: https://staff.sussex.ac.uk/teaching/enhancement/support/assessment-design/developing-writing-assignments
  8. Wheeler S. Designing AI resilient assessment [Internet]. University of Manchester; 2024 [cited 2025 Sep 26]. Available from: https://personalpages.manchester.ac.uk/staff/stephen.wheeler/blog/0024_designing_ai_relilient_assessment.htm
  9. University of Leeds. GenAI quick checklist [Internet]. Generative AI at Leeds; 2024 [cited 2025 Sep 26]. Available from: https://generative-ai.leeds.ac.uk/ai-and-assessments/gen-ai-quick-checklist/
  10. University of Edinburgh. Using generative AI: guidance for students [Internet]. Information Services; 2024 [cited 2025 Sep 26]. Available from: https://information-services.ed.ac.uk/computing/communication-and-collaboration/elm/generative-ai-guidance-for-students/using-generative
  11. Biesta G. The beautiful risk of education. Boulder: Paradigm Publishers; 2013.
  12. Freire P. Pedagogy of the oppressed. New York: Herder and Herder; 1970.
  13. University of Leicester. Academic quality and standards [Internet]. [cited 2025 Sep 26]. Available from: https://le.ac.uk/policies/quality
  14. University of Liverpool. Guidance on the Use of Generative AI (Teach, Learn & Assess) [Internet]. 2024 [cited 2025 Sep 28]. Available from: https://www.liverpool.ac.uk/media/livacuk/centre-for-innovation-in-education/digital-education/generative-ai-teach-learn-assess/guidance-on-the-use-of-generative-ai.pdf
  15. Newcastle University. AI in Assessment [Internet]. Newcastle: Newcastle University; 2023 [cited 2025 Sep 28]. Available from: https://www.ncl.ac.uk/learning-and-teaching/effective-practice/ai/ai-in-assessment/
  16. Moorhouse BL. Generative AI tools and assessment: Guidelines of the HE sector. Computers and Education: Artificial Intelligence [Internet]. 2023;5:100179. Available from: https://www.sciencedirect.com/science/article/pii/S2666557323000290
