The Two-Track Approach
Here’s a key insight from the framework: you don’t have to choose between ‘all AI’ and ‘no AI.’ Instead, think about a two-track approach where your programme includes both secured (AI-restricted) and open (AI-integrated) assessments.
Track 1: Secured / AI-Restricted
These are assessments where students must demonstrate knowledge or capability independently, without AI support. Examples:
- In-person exams (no resources, supervised)
- Clinical skills demonstrations (real-time, observed)
- Foundational knowledge assessments (verifying core concepts students must know)
Why Track 1 matters: Some capabilities truly must be demonstrated without AI. A surgeon needs to know human anatomy independently. A teacher needs to understand pedagogical theories without a search engine. Not everything can be outsourced to AI.
Track 2: Open / AI-Integrated
These are assessments where students are encouraged to use AI ethically and critically, reflecting authentic professional practice. Examples:
- Analysis and critique of AI outputs (students use AI, then evaluate what it got right/wrong)
- Complex problem-solving in realistic contexts (students might use AI as a thinking partner)
- Projects that mirror real-world work (where professionals actually use AI tools)
Why Track 2 matters: In most fields, the future workplace will involve AI. Students need to learn how to use it responsibly, critically, and ethically. An assessment where students apply AI tools and reflect on the results develops genuine capability.
The balance: A coherent programme includes both tracks. Some high-stakes assessments are Track 1 (secure the foundational knowledge). Other assessments are Track 2 (develop AI fluency). Students experience a learning progression where they first master core material, then learn to apply AI thoughtfully.
This isn’t either/or. It’s both/and.
Five Design Strategies
Whether you’re designing Track 1 or Track 2 assessments, the framework suggests five practical strategies that make learning visible and make shortcuts much harder:
Multi-Stage Tasks
Break work into stages to show the evolution of thinking.
Oral Components
Add presentations, defences, or discussions to make thinking audible.
Reflective Elements
Ask students to think aloud about their process and reasoning.
Collaborative Work
Include group projects with clear individual accountability.
Authentic Problems
Design tasks around genuine challenges, not artificial scenarios.
1. Multi-Stage Tasks
Instead of one submission, break the work into stages. For example:
- Week 5: Submit a proposal or plan (shows initial thinking)
- Week 8: Submit an annotated draft or progress report (shows refinement)
- Week 10: Submit final work + reflection on how thinking changed
Why it works: You can see the evolution of learning. It’s much harder to fake understanding when you have to show your thinking at multiple points.
2. Oral Components
Add presentations, vivas, Q&A sessions, or discussions. For example:
- 10-minute presentation of key findings (shows understanding)
- 15-minute defence where students answer challenging questions (hard to fake without real understanding)
- Seminar discussion where students explain and critique each other’s work
Why it works: Real-time human interaction. AI can’t show up to defend ideas. You learn a lot about understanding from how students respond to tough questions.
3. Reflective Elements
Ask students to think aloud about their process. For example:
- ‘What was your thinking when you made this choice?’
- ‘What feedback surprised you? How did it change your approach?’
- ‘If you used AI, where did you use it and why? What limitations did you notice?’
- ‘What would you do differently if you had more time?’
Why it works: Reflection reveals understanding. A student who can’t articulate their reasoning probably hasn’t done the thinking themselves. It also signals: ‘I care about your thinking process, not just your final answer.’
4. Collaborative Work
Include group projects with clear individual accountability. For example:
- Group project with peer evaluations of individual contributions
- Collaborative problem-solving where each person’s role is documented
- Peer feedback sessions where students comment on each other’s work
Why it works: Collaboration is harder to fake. When people actually work together, you see negotiation, disagreement, synthesis. Plus, collaboration mirrors real-world professional practice.
5. Authentic, Real-World Problems
Design tasks around genuine challenges, not artificial scenarios. For example:
- Real client projects (non-profit needing marketing strategy)
- Current events analysis (applying concepts to what’s happening now)
- Community partnerships (students solving problems that matter to local stakeholders)
- Discipline-specific professional scenarios (what would a real engineer/nurse/architect actually do?)
Why it works: Real-world problems have constraints and complexities that resist template answers. They require contextual judgement. They’re harder for AI to pattern-match because the situation is novel and specific.
These Strategies Work Together
Here’s the thing: these aren’t isolated tricks. They work best together.
Engineering Design Project
- Multi-stage task: Proposal (week 3) → Design sketches (week 6) → Final report (week 10)
- Oral component: Presentation + Q&A defending design decisions
- Reflective element: Written reflection on trade-offs: ‘Why this material? Why this shape? What would you change?’
- Collaborative work: Group project with individual accountability through peer evaluation
- Authentic problem: Real building or product design, not a hypothetical scenario
By the time a student submits this, you have multiple forms of evidence of genuine understanding. That’s much more robust than a single essay.
Literature Seminar
- Multi-stage: Reading notes (week 2) → Close reading draft (week 6) → Final essay (week 10)
- Oral: Seminar discussion where students defend interpretations
- Reflective: ‘How did the class discussion change your reading?’
- Collaborative: Peer feedback on drafts
- Authentic: Analysing current texts and their cultural impact, not just canonical works
Again, multiple forms of evidence. Learning is visible.
What About Class Size?
You might think: ‘That sounds great, but I have over 100 students in my class. How do I add all this?’
The framework directly addresses this. You don’t have to mark everything the same way.
For early stages (multi-stage tasks):
- Pass/fail marking instead of detailed criteria
- Checklists (‘Did they submit? Is it clear?’) instead of rubrics
- Sampling: grade a subset, spot-check others
For collaborative work:
- Peer feedback does some of the evaluation work (students assess each other)
- Structured peer evaluation forms can be quick to complete
For oral components:
- Group presentations (5 students at a time) instead of individual vivas
- Class discussion instead of individual Q&A
- Student recordings (video submission of explanation) instead of live oral
For reflective elements:
- Structured prompts (‘In 100 words, explain one key decision’) instead of open essays
- Rubric focused on presence of reflection, not depth (easier to assess at scale)
The point: You don’t redesign everything simultaneously. Start with your highest-risk assessments. Start with strategies that feel most manageable for your context. Iterate.
Activity: Strategy Explorer
Below are five design strategies. For each one, we’ve provided a description, discipline-specific examples, and guidance on when it works best. Click on any strategy to see more detail, then think about which might work for your teaching.
Multi-Stage Tasks
Break assessment into multiple submissions over time.
What it looks like
A timeline of submissions where students show their thinking at multiple points, rather than submitting one final product.
Why it works
You can see the evolution of learning. It’s much harder to fake understanding if you have to show your thinking at every stage.
Discipline examples
- Engineering: Problem statement (week 2) → Design draft (week 5) → Final report (week 8)
- Nursing: Case analysis plan (week 1) → Evidence review (week 4) → Final care plan (week 6)
- Business: Market research summary (week 3) → Strategy proposal (week 6) → Final presentation (week 9)
Marking tips
Early stages can be pass/fail or checklist-based. Don’t grade everything with full rubrics.
Works at scale. Early stages can be quick to assess.
Oral Components
Add presentations, defences, or discussions to make thinking audible.
What it looks like
Different oral formats: presentations, vivas, Q&A sessions, seminar discussions, or recorded video explanations.
Why it works
Real-time human interaction. AI can’t show up to defend ideas. You learn a lot about understanding from how students respond to tough questions.
Discipline examples
- Architecture: Presentation of design decisions (10 minutes) + Q&A (5 minutes)
- Literature: Seminar discussion defending interpretation (peer-led)
- Chemistry: Lab report + viva explaining methodology and results
Marking tips
Group presentations reduce workload. Class discussion can be structured so it’s not one-on-one.
At scale, consider group presentations or structured class discussions rather than individual vivas.
Reflective Elements
Ask students to think aloud about their process and reasoning.
What it looks like
Structured reflection prompts, process journals, or brief written explanations of key decisions.
Why it works
Reflection reveals understanding and metacognition. A student who can’t articulate their reasoning probably hasn’t done the thinking themselves. It also signals: ‘I care about your thinking process, not just your final answer.’
Discipline examples
- Psychology: ‘Explain how your literature review shaped your research question’
- Visual Arts: ‘Describe three decisions you made in this design and why’
- Philosophy: ‘How did the seminar discussion change your argument from draft to final?’
Marking tips
Use structured prompts (not open essays). Keep these relatively brief.
Very manageable at scale. Structured prompts are quick to assess.
Collaborative Work
Include group projects with mechanisms for recognising individual contributions.
What it looks like
Different collaboration models: group projects with peer evaluation, collaborative problem-solving with documented roles, or peer feedback sessions.
Why it works
Collaboration is authentic and harder to fake. When people actually work together, you see negotiation, disagreement, synthesis. Plus, collaboration mirrors real-world professional practice.
Discipline examples
- Business: Group marketing campaign with individual contribution statements + peer evaluation
- Engineering: Team design project with role documentation + individual reflection
- Education: Collaborative lesson plan with peer feedback on each person’s section
Marking tips
Use peer evaluation forms to share the assessment workload. Structured contributions reduce fairness concerns.
Scales well. Peer evaluation reduces staff marking.
Authentic Real-World Problems
Design tasks around genuine challenges, not artificial scenarios.
What it looks like
Different authentic contexts: real client briefs, current events analysis, community partnerships, or discipline-specific professional scenarios.
Why it works
Real-world problems have constraints and complexities that resist template answers. They require contextual judgement. They’re harder for AI to pattern-match because the situation is novel and specific.
Discipline examples
- Marketing: Real client brief from local non-profit (not hypothetical case study)
- Social Work: Community partnership project addressing actual local needs
- Engineering: Design that solves a real problem (water access, energy efficiency, etc.)
- Teaching: Develop resources for actual learners/classes, not imaginary ones
Marking tips
Rubrics should assess application of knowledge to real constraints, not generic criteria.
Can be done at scale if you have multiple real projects/clients.
Worth discussing? If a particular strategy sparked ideas, consider sharing it with your module team before continuing.
Knowledge Check
Scenario: A lecturer teaches a large second-year course (100 students) on Irish history. Currently, assessment is one 2,500-word essay due at the end of term. The lecturer wants to maintain this core assessment but is concerned about AI-generated submissions.
The lecturer is considering three approaches. Which most closely aligns with the framework we’ve discussed, and why?
Approach A: Add a strict rule prohibiting AI use and deploy detection software.
Approach B: Redesign as a Track 1 (secured) assessment with multi-stage tasks: proposal (week 4) → annotated draft (week 7) → final essay (week 10). Keep it individual work, no AI.
Approach C: Redesign as a Track 2 (integrated) assessment where students use AI to find sources, but must reflect critically on what AI recommended versus what actual scholars say.
Select an option to see feedback. None is ‘perfect’ — this is about applying what you’ve learned.
Approach A: Discursive Control
You may have selected this because it’s familiar (rules and detection). But remember Module 1? This is discursive control: rules that students remain free to ignore. Detection tools, meanwhile, aren’t reliable. The framework would say this approach doesn’t redesign the task itself.
Approach B: Track 1 Redesign
This aligns well with the framework. You’ve kept the assessment (essay) but added structural elements (multi-stage tasks) that make learning visible. You’re building integrity into the task itself, not relying on rules or detection. This is a strong Track 1 redesign.
Approach C: Track 2 Redesign
This is also aligned with the framework — you’re just making a different choice about which track. Track 2 suggests AI-integrated assessment where students develop critical evaluation skills. This would work well if your learning outcome includes ‘critically engage with AI tools.’ The key is that you’re clear about what you’re assessing and why AI is part of the task.
You now understand why detection alone doesn’t work (Module 1), which assessments are most vulnerable (Module 2), and what frameworks and strategies are available (Module 3).
The big question remains: How do I apply this to my own assessment? That’s what Module 4 is about. You’ll redesign one real assessment using what you’ve learned.
Next: Redesign One Assessment →