Module 3 of 5

Understand Your Options

12 minutes · 4 sections + 1 activity + 1 knowledge check
Learning Goal: Know the two-track approach and key structural redesign strategies.
1. The Two-Track Approach

3 min read

Here’s a key insight from the framework: you don’t have to choose between ‘all AI’ and ‘no AI.’ Instead, think about a two-track approach where your programme includes both secured (AI-restricted) and open (AI-integrated) assessments.

Track 1: Secured / AI-Restricted

These are assessments where students must demonstrate knowledge or capability independently, without AI support. Examples: supervised exams and skills demonstrations.

Why Track 1 matters: Some capabilities truly must be demonstrated without AI. A surgeon needs to know human anatomy independently. A teacher needs to understand pedagogical theories without a search engine. Not everything can be outsourced to AI.

Track 2: Open / AI-Integrated

These are assessments where students are encouraged to use AI ethically and critically, reflecting authentic professional practice. Examples: complex projects and analysis tasks.

Why Track 2 matters: In most fields, the future workplace will involve AI. Students need to learn how to use it responsibly, critically, and ethically. An assessment where students apply AI tools and reflect on the results develops genuine capability.

The balance: A coherent programme includes both tracks. Some high-stakes assessments are Track 1 (secure the foundational knowledge). Other assessments are Track 2 (develop AI fluency). Students experience a learning progression where they first master core material, then learn to apply AI thoughtfully.

This isn’t either/or. It’s both/and.

Track 1: Secured
AI Use: Not permitted
Focus: Independent capability
Supervision: Supervised
Examples: Exams, skills demos
Student sees: 'Show you know this'
Purpose: Assessment of learning

Track 2: Open
AI Use: Encouraged & expected
Focus: Critical AI engagement
Supervision: Less supervised
Examples: Complex projects, analysis
Student sees: 'Use AI thoughtfully'
Purpose: Assessment for learning
2. Five Design Strategies

3 min read

Whether you’re designing Track 1 or Track 2 assessments, the framework suggests five practical strategies that make learning visible and make shortcuts much harder:

  • Multi-Stage Tasks: break work into stages to show the evolution of thinking.
  • Oral Components: add presentations, defences, or discussions to make thinking audible.
  • Reflective Elements: ask students to think aloud about their process and reasoning.
  • Collaborative Work: include group projects with clear individual accountability.
  • Authentic Problems: design tasks around genuine challenges, not artificial scenarios.

1. Multi-Stage Tasks

Instead of one submission, break the work into stages. For example: a problem statement in week 2, a design draft in week 5, and a final report in week 8.

Why it works: You can see the evolution of learning. It's much harder to fake understanding if you have to show your thinking at multiple points.

2. Oral Components

Add presentations, vivas, Q&A sessions, or discussions. For example: a ten-minute presentation of design decisions followed by Q&A, or a lab report defended in a short viva.

Why it works: Real-time human interaction. AI can’t show up to defend ideas. You learn a lot about understanding from how students respond to tough questions.

3. Reflective Elements

Ask students to think aloud about their process. For example: 'Explain how your literature review shaped your research question', or 'Describe three decisions you made in this design and why.'

Why it works: Reflection reveals understanding. A student who can't articulate their reasoning probably hasn't done it themselves. It also signals: 'I care about your thinking process, not just your final answer.'

4. Collaborative Work

Include group projects with clear individual accountability. For example: a group project with individual contribution statements and peer evaluation.

Why it works: Collaboration is harder to fake. When people actually work together, you see negotiation, disagreement, synthesis. Plus, collaboration mirrors real-world professional practice.

5. Authentic, Real-World Problems

Design tasks around genuine challenges, not artificial scenarios. For example: a real client brief from a local non-profit rather than a hypothetical case study.

Why it works: Real-world problems have constraints and complexities that resist template answers. They require contextual judgement. They’re harder for AI to pattern-match because the situation is novel and specific.

3. These Strategies Work Together

1.5 min read

Here’s the thing: these aren’t isolated tricks. They work best together.

Engineering Design Project

Imagine a design project that combines strategies: staged submissions (problem statement → design draft → final report), documented team roles, a short oral defence of design decisions, and an individual reflection. By the time a student submits this, you have multiple forms of evidence of genuine understanding. That's much more robust than a single essay.

Literature Seminar

Or take a seminar where students develop an interpretation across staged drafts, defend it in peer-led discussion, and reflect on how the discussion changed their argument from draft to final. Again, multiple forms of evidence. Learning is visible.

4. What About Class Size?

1.5 min read

You might think: ‘That sounds great, but I have over 100 students in my class. How do I add all this?’

The framework directly addresses this. You don’t have to mark everything the same way.

For early stages (multi-stage tasks): mark pass/fail or against a checklist rather than with full rubrics.

For collaborative work: use peer evaluation forms to share the assessment workload.

For oral components: use group presentations or structured class discussion rather than individual vivas.

For reflective elements: keep prompts structured and brief so they're quick to assess.

The point: You don’t redesign everything simultaneously. Start with your highest-risk assessments. Start with strategies that feel most manageable for your context. Iterate.

A rough effort-versus-impact comparison:

  • Multi-stage tasks: medium effort, high impact
  • Oral components: medium effort, high impact
  • Reflective elements: low effort, medium impact
  • Peer evaluation: medium effort, medium impact
  • Authentic problems: high effort, high impact

Activity: Strategy Explorer

5 min

Below are five design strategies. For each one, we’ve provided a description, discipline-specific examples, and guidance on when it works best. Click on any strategy to see more detail, then think about which might work for your teaching.

Multi-Stage Tasks

What it looks like

A timeline of submissions where students show their thinking at multiple points, rather than submitting one final product.

Why it works

You can see the evolution of learning. It’s much harder to fake understanding if you have to show your thinking at every stage.

Discipline examples

  • Engineering: Problem statement (week 2) → Design draft (week 5) → Final report (week 8)
  • Nursing: Case analysis plan (week 1) → Evidence review (week 4) → Final care plan (week 6)
  • Business: Market research summary (week 3) → Strategy proposal (week 6) → Final presentation (week 9)

Marking tips

Early stages can be pass/fail or checklist-based. Don’t grade everything with full rubrics.

Works at scale. Early stages can be quick to assess.

Oral Components

What it looks like

Different oral formats: presentations, vivas, Q&A sessions, seminar discussions, or recorded video explanations.

Why it works

Real-time human interaction. AI can’t show up to defend ideas. You learn a lot about understanding from how students respond to tough questions.

Discipline examples

  • Architecture: Presentation of design decisions (10 minutes) + Q&A (5 minutes)
  • Literature: Seminar discussion defending interpretation (peer-led)
  • Chemistry: Lab report + viva explaining methodology and results

Marking tips

Group presentations reduce workload. Class discussion can be structured so it’s not one-on-one.

At scale, consider group presentations or structured class discussions rather than individual vivas.

Reflective Elements

What it looks like

Structured reflection prompts, process journals, or brief written explanations of key decisions.

Why it works

Reflection reveals understanding and metacognition. A student who can’t articulate their reasoning probably hasn’t done it themselves. It also signals: ‘I care about your thinking process, not just your final answer.’

Discipline examples

  • Psychology: ‘Explain how your literature review shaped your research question’
  • Visual Arts: ‘Describe three decisions you made in this design and why’
  • Philosophy: ‘How did the seminar discussion change your argument from draft to final?’

Marking tips

Use structured prompts (not open essays). Keep these relatively brief.

Very manageable at scale. Structured prompts are quick to assess.

Collaborative Work

What it looks like

Different collaboration models: group projects with peer evaluation, collaborative problem-solving with documented roles, or peer feedback sessions.

Why it works

Collaboration is authentic and harder to fake. When people actually work together, you see negotiation, disagreement, synthesis. Plus, collaboration mirrors real-world professional practice.

Discipline examples

  • Business: Group marketing campaign with individual contribution statements + peer evaluation
  • Engineering: Team design project with role documentation + individual reflection
  • Education: Collaborative lesson plan with peer feedback on each person’s section

Marking tips

Use peer evaluation forms to share the assessment workload. Structured contributions reduce fairness concerns.

Scales well. Peer evaluation reduces staff marking.

Authentic Problems

What it looks like

Different authentic contexts: real client briefs, current events analysis, community partnerships, or discipline-specific professional scenarios.

Why it works

Real-world problems have constraints and complexities that resist template answers. They require contextual judgement. They’re harder for AI to pattern-match because the situation is novel and specific.

Discipline examples

  • Marketing: Real client brief from local non-profit (not hypothetical case study)
  • Social Work: Community partnership project addressing actual local needs
  • Engineering: Design that solves a real problem (water access, energy efficiency, etc.)
  • Teaching: Develop resources for actual learners/classes, not imaginary ones

Marking tips

Rubrics should assess application of knowledge to real constraints, not generic criteria.

Can be done at scale if you have multiple real projects/clients.

Reflect: Of these five strategies, which ONE would be most feasible to add to one of your assessments this term? What would need to happen to make that possible?
💬 Worth discussing? If a particular strategy sparked ideas, consider sharing it with your module team before continuing.

Knowledge Check

Scenario: A lecturer teaches a large second-year course (100 students) on Irish history. The current assessment is a single 2,500-word essay due at the end of term. The lecturer wants to keep this core assessment but is concerned about AI-generated submissions.

The lecturer is considering three approaches. Which most closely aligns with the framework we’ve discussed, and why?

A. Add a strict rule prohibiting AI use and rely on detection software.

B. Redesign as a Track 1 (secured) assessment with multi-stage tasks: proposal (week 4) → annotated draft (week 7) → final essay (week 10). Keep it individual work, no AI.

C. Redesign as a Track 2 (AI-integrated) assessment where students use AI to find sources but must reflect critically on what the AI recommended versus what scholars actually say.

Select an option to see feedback. None is ‘perfect’ — this is about applying what you’ve learned.

You now understand why detection alone doesn’t work (Module 1), which assessments are most vulnerable (Module 2), and what frameworks and strategies are available (Module 3).

The big question remains: How do I apply this to my own assessment? That’s what Module 4 is about. You’ll redesign one real assessment using what you’ve learned.

Next: Redesign One Assessment →