Learning Goal: Identify which assessments are most
vulnerable to inappropriate AI use and understand why.
1. Understanding Risk
2 min read
Every assessment type has a different vulnerability to inappropriate AI
use. This isn’t about blame — it’s about being
realistic.
Think about it this way: a 2,000-word essay assignment is much easier
for a student to hand to ChatGPT than a 15-minute live presentation is.
A take-home exam without supervision creates different risks than an
in-person exam does. A group project where each person’s
contribution is visible is lower risk than an individual written
assignment.
The 2026 Assessment Redesign Framework provides a risk assessment table
that categorises assessment types. This isn’t meant to say,
‘Never use essays!’ Rather, it says:
‘Know which tasks create which risks, and if you’re using
high-risk formats, build in safeguards.’
Those safeguards are what we’ll call
mitigation strategies: design choices that make
learning visible and make shortcuts much harder.
Low Risk
e.g., oral presentations, in-person exams, supervised problem sets
Medium Risk
e.g., quizzes, creative work, lab reports, research papers
High Risk
e.g., essays, unsupervised exams, take-home written assignments
2. The Risk Table
1.5 min read
Here’s the core of the risk assessment framework, simplified into
the key categories:
High Risk
AI can easily complete the core task
Essays & written assignments
Unsupervised or remote exams
Take-home written assessments
Medium Risk
AI can assist significantly, but barriers remain
Online quizzes
Research papers
Lab reports
Creative work (varies by discipline)
Low Risk
Difficult for AI to fake authentically
Hand-written problem sets
Group projects with individual accountability
Oral presentations & vivas
In-person supervised exams
This doesn’t mean ‘don’t use high-risk formats.’
Rather, it means:
if you’re using them, you need to add structural elements that
make learning visible.
3. Mitigation in Action
1.5 min read
Let’s look at a real example from the framework.
High-Risk Assessment: Essay Assignment
Before (High Risk)
‘Write a 2,000-word essay on the environmental impact of fast
fashion. Due week 10.’
AI can generate a high-quality, well-structured essay on this topic
easily. A student could submit AI output with minimal edits.
→
After (Mitigated)
Week 5: Submit a proposal outlining your argument
Week 8: Submit an annotated draft with notes on
how your thinking has evolved
Week 10: Submit final essay + written reflection
on feedback received
Week 11: 15-minute oral defence where you answer
questions about your choices
Why this works: The task now makes learning visible.
You can see how the student thought through the problem. You know they
understand it because they can defend it. An AI can’t do all of
this authentically.
The mitigation strategies used here are:
Staged drafts — the proposal and annotated draft show the
evolution of the student's thinking
Written reflection — the student responds to
feedback received on their work
Oral defence — the student must defend their
argument face-to-face
✎
Activity: Risk Assessment
5 min
You’ll now classify some assessments by risk level. This is a
learning tool — there’s no grade. The goal is to help
you recognise which of your own assessments might benefit from
redesign.
Scenario 1 of 4
Nursing Programme
Students complete a 2-hour online, unsupervised exam with
open-book access. Questions include scenario-based case
studies where students diagnose a patient and recommend
treatment.
What risk level would you assign?
Low Risk
Medium Risk
High Risk
Framework says: High Risk
Unsupervised + open-book + requirement to generate written
answers = multiple vulnerabilities. A student could use AI to
help diagnose cases. There’s no real-time human
oversight.
Mitigation options:
Add randomised case details unique to each student
Require students to show their reasoning in writing, not
just final answers
Follow the exam with an oral viva-style Q&A to verify
understanding
One lecturer redesigned this by adding a 15-minute video call
where students explained their top 2 diagnoses. This makes
understanding visible and harder to fake.
Scenario 2 of 4
Architecture Programme
Students work in groups to design a commercial building. They
submit a 3,000-word report, architectural drawings, and a
presentation.
What risk level would you assign?
Low Risk
Medium Risk
High Risk
Framework says: Medium Risk
The report is high-risk (easily AI-generated), but the
drawings and presentation create barriers. An AI can help with
narrative, but the design work is harder to fake.
Mitigation options:
Add a presentation where students explain design decisions
(oral defence)
Include a peer/client feedback step where students respond
to questions
One lecturer added regular checkpoints (sketches in week 3,
draft report in week 6, final in week 10). This made the
design process visible, not just the final product.
Scenario 3 of 4
Literature Programme
Students submit a close reading essay (1,500 words) analysing
textual techniques in an unseen poem or novel excerpt. They
have 2 hours, in person, with no access to external resources.
What risk level would you assign?
Low Risk
Medium Risk
High Risk
Framework says: Low Risk
In-person + unseen text + time constraint + no external
resources = high barriers to AI assistance. The student must
think and write in real-time.
Note: This is secure, but consider whether it
assesses what you want. If your learning outcome is
‘apply literary analysis strategies,’ a timed
essay might work well. If it’s ‘synthesise
multiple texts,’ a take-home assignment may fit better
— in which case you would build in mitigation strategies
to keep it authentic.
Scenario 4 of 4
Business Programme
Students work in groups to develop a business plan for a real
startup. They include market research, financial projections,
marketing strategy, and present findings to actual business
advisors.
What risk level would you assign?
Low Risk
Medium Risk
High Risk
Framework says: Medium–Low Risk
This is an authentic, real-world problem with external
accountability. AI can help generate content, but the
requirement to engage with real stakeholders and defend
decisions makes it harder to fake.
Why it works well:
Students present to real business people (motivation for
genuine work)
Group work with individual accountability visible
Oral component (harder for AI to fully substitute)
Real-world context (not a generic scenario AI can
pattern-match)
Reflect: Of these four assessments, which feels
most aligned with what you want your students to be able to do? Not
which is easiest to mark, but which actually develops the capability
you care about? If you see a high-risk format you love, don’t
abandon it. Instead, think: ‘What mitigation strategies could
I add to make this more robust?’
✓
Knowledge Check: Your Own Assessment
Think of ONE assessment you teach right now and classify its risk
level.
Low Risk
Medium Risk
High Risk
Good work
That’s exactly the kind of honest assessment that helps you
decide what to do next. In Module 3, we’ll look at design
strategies that could address this vulnerability. If you’re
thinking, ‘I love this assessment but it’s
high-risk,’ that’s perfect. You’ll learn how to
add safeguards while keeping what works.
You’ve now thought about risk. If you saw ‘high
risk’ for some of your assessments, that’s useful
information — not a verdict that you need to change everything,
but a signal that redesign could strengthen both learning and
integrity.
Next, we’ll look at two key frameworks for redesign: the
two-track approach and five practical strategies for making learning
visible.