The Times Australia
Assessment in the age of AI – unis must do more than tell students what not to do

  • Written by Thomas Corbin, Research fellow, Centre for Research in Assessment and Digital Learning, Deakin University

In less than three years, artificial intelligence technology has radically changed the assessment landscape. In this time, universities have taken various approaches, from outright banning the use of generative AI, to allowing it in some circumstances, to allowing AI by default.

But some university teachers and students have reported[1] they remain confused and anxious, unsure about what counts as “appropriate use” of AI. This has been accompanied by concerns AI is facilitating a rise in cheating[2].

There is also a broader question about the value of university degrees[3] today if AI is used in student assessments.

In a new journal article[4], we examine current approaches to AI and assessment and ask: how should universities assess students in the age of AI?

Read more: Researchers created a chatbot to help teach a university law class – but the AI kept messing up[5]

Why ‘assessment validity’ matters

Universities have responded to the emergence of generative AI with various policies aimed at clarifying what is allowed and what is not.

For example, the United Kingdom’s University of Leeds set up a “traffic light[6]” framework for when AI tools can be used in assessment: red means no AI, amber allows limited use, green encourages it.

Under this scheme, a “red” light on a traditional essay tells students it must be written without any AI assistance at all. An “amber” essay might allow AI for “idea generation” but not for any of the writing itself. A “green” light permits students to use AI in any way they choose.

To help ensure students comply with these rules, many institutions, such as the University of Melbourne[7], require students to declare their use of AI in a statement attached to submitted assessments.

The aim in these and similar cases is to preserve “assessment validity[8]”. This refers to whether the assessment is measuring what we think it is measuring. Is it assessing students’ actual capabilities or learning? Or how well they use the AI? Or how much they paid to use it?

But we argue setting clear rules is not enough to maintain assessment validity.

Our paper

In a new peer-reviewed paper[9], we present a conceptual argument for how universities and schools can better approach AI in assessments.

We begin by making the distinction between two approaches to AI and assessment:

  • discursive changes: only modify the instructions or rules around an assessment. To work, they rely on students understanding and voluntarily following directions.

  • structural changes: modify the task itself. These constrain or enable behaviours by design, not by directives.

For example, telling students “you may only use AI to edit your take-home essay” is a discursive change. Changing an assessment task to include a sequence of in-class writing tasks where development is observed over time is a structural change.

Telling a student not to use AI tools when writing computer code is discursive. Holding a live, assessed conversation about the choices the student has made is structural.

A reliance on changing the rules

In our paper, we argue most university responses to date (including traffic light frameworks and student declarations) have been discursive. They have only changed the rules around what is or isn’t allowed. They haven’t modified the assessments themselves.

We suggest only structural changes can reliably protect validity in a world where AI use means rule-breaking is increasingly undetectable[10].

So we need to change the task

In the age of generative AI, if we want assessments to be valid and fair, we need structural change.

Structural change means designing assessments where validity is embedded in the task itself, not outsourced to rules or student compliance.

This won’t look the same in every discipline and it won’t be easy. In some cases, it may require assessing students in very different ways from the past. But we can’t avoid the challenge by just telling students what to do and hoping for the best.

If assessment is to retain its function as a meaningful claim about student capability, it must be rethought at the level of design.

References

  1. ^ have reported (www.tandfonline.com)
  2. ^ rise in cheating (www.theguardian.com)
  3. ^ value of university degrees (www.theguardian.com)
  4. ^ new journal article (www.tandfonline.com)
  5. ^ Researchers created a chatbot to help teach a university law class – but the AI kept messing up (theconversation.com)
  6. ^ traffic light (generative-ai.leeds.ac.uk)
  7. ^ University of Melbourne (students.unimelb.edu.au)
  8. ^ assessment validity (www.tandfonline.com)
  9. ^ new peer-reviewed paper (www.tandfonline.com)
  10. ^ increasingly undetectable (arxiv.org)

Read more https://theconversation.com/assessment-in-the-age-of-ai-unis-must-do-more-than-tell-students-what-not-to-do-257469
