Module 1 of 5

Why Assessment Redesign?

7 minutes · 3 sections + 1 activity + 1 knowledge check
Learning Goal: Understand why detection tools fail and why structural redesign matters.
1. The Detection Problem

2 min read

AI detection tools promise to identify AI-generated content in student submissions. They feel like a straightforward solution.

But here’s the problem: they don’t work reliably.

Detection tools produce false positives (flagging human writing as AI-generated) and false negatives (missing AI-generated content). More importantly, there are serious equity concerns. Research shows these tools disproportionately flag multilingual writers and students whose writing doesn’t match dominant linguistic norms.

Even when detection tools work as intended, they still miss the deeper issue: a student who submits AI-generated work might be caught, but the assessment itself hasn't changed. The task still invites the shortcut; detection only determines whether a given student got away with taking it.

Echoing the assertions of Corbin, Dawson, Liu, and others, the 2026 Assessment Redesign Framework makes a bold claim: detection is not the answer. Redesign is.

Rather than trying to police student behaviour, we should redesign assessment so that learning is visible and shortcuts become more difficult.

False Positives

Human-written work incorrectly flagged as AI-generated, causing unwarranted misconduct investigations.

False Negatives

AI-generated content that passes undetected, giving a false sense of security.

Equity Concerns

Tools disproportionately flag multilingual writers and non-dominant writing styles.

Doesn’t Address the Root Problem

Detection polices behaviour but doesn’t change the task itself. The shortcut remains available.

2. Two Approaches

2 min read

There are two fundamentally different ways to respond to GenAI in assessment.

Approach 1: Discursive Control

This means creating rules about what students can and can’t do. For example: ‘Don’t use ChatGPT for this assignment’, or ‘Any AI use must be disclosed in a signed statement’.

These rules communicate clear expectations. But here’s the catch: rules rely on compliance. Students remain free to follow or ignore them, and detection software can’t reliably catch the ones who ignore them.

Approach 2: Structural Redesign

This means changing the actual mechanics of the assessment task itself. For example:

Instead of: ‘Write a 2,000-word essay analysing a research topic’ (easily AI-generated)

Redesign to: ‘Submit a proposal (week 5) → annotated draft (week 8) → final essay (week 10) + one 15-minute oral defence where you explain your key decisions and respond to challenging questions’

In this redesigned task, learning is visible at every stage. The student must show how their thinking evolved and articulate their reasoning in real time, face to face. AI can’t show up to an oral defence.

This is structural redesign: the task itself now makes it harder for students to submit work that isn’t genuinely their own.

Discursive Control             Structural Redesign
Rules & warnings               Changed task mechanics
‘Don’t use ChatGPT’            Proposal → draft → oral defence
Student compliance required    Learning made visible
Difficult to enforce           Built into the task itself
Feels like surveillance        Feels like good pedagogy
3. Why This Matters

1 min read

The shift from detection to redesign has a profound implication: assessment redesign is not primarily about fighting cheating. It’s about improving learning.

When assessments are redesigned to make learning visible — through staged tasks, oral interactions, reflective elements, and authentic problems — something shifts. Students can’t just submit a final product; they must show their thinking. That visibility strengthens the validity of the assessment AND makes it harder to fake understanding.

This framework rests on five core principles that guide this kind of redesign:

Validity
Fairness & Equity
Transparency
Accessibility & Inclusion
Alignment

Throughout this course, you’ll return to these principles as you redesign your own assessments.

Activity: The Scenario

3 min

Scenario: Your institution has decided to disable AI detection tools. In a faculty meeting, your colleague suggests three responses:

A

‘We should add a strict rule: students who use AI will face misconduct proceedings. We’ll monitor submissions carefully.’

B

‘We should require students to disclose any AI use in a signed statement.’

C

‘We should redesign our assessments. Instead of one final essay, we’ll have students submit a proposal, an annotated draft, and a reflection on feedback before the final submission. We’ll also add a brief oral component where they explain their key choices.’

Select an option to see feedback. There’s no wrong answer — this is about exploring different approaches.

💬

Found something interesting? Feel free to pause and discuss this scenario with your module team before continuing.

Knowledge Check

Based on what we’ve discussed, what’s the key limitation of using AI detection tools as a primary strategy for academic integrity in the age of GenAI?

A

They are too expensive for institutions to afford.

B

They produce both false positives and false negatives and cannot reliably determine authorship or intent. Additionally, they don’t address the root issue: the task itself might still be one that encourages shortcuts.

C

They slow down the marking process.

D

Students are too clever to be caught by them.

Now that you understand why detection alone isn’t the answer, the next step is to figure out where to start. Which of your assessments are most vulnerable to inappropriate AI use?

That’s what we’ll explore in Module 2.

Next: Assess Your Risk →