Quality assurance processes will become increasingly important in maintaining trust in academic standards, writes Eve Alcock, highlighting the importance of consistent approaches by QA agencies as well as the opportunities AI tools give them to catch problems before they impact the student experience.

As artificial intelligence (AI) continues to revolutionise the educational landscape, providers must adapt their practices to harness its potential and mitigate its challenges. This requires new approaches to academic integrity, assessment and learning strategies, skills and knowledge development, and quality assurance processes.

How does AI impact student assessment?

The education system relies on the assumption that an assessment output is evidence that learning has taken place. But generative AI tools mean that these outputs can be produced with no learning process at all. And research increasingly suggests that AI tools can produce outputs that not only fool assessors but are often marked higher than those produced by students.

It is therefore important to reflect on how and why we assess students. This involves considering the balance of formative and summative assessments and the relationships between knowledge transfer and skills acquisition, as well as whether the output or learning process should be the focus of assessment.

We must also recognise that relying on AI tools too early may undermine students' engagement with the foundational skills they need to master their subject. Delaying the introduction of these tools, or at least thinking critically about when and how to use them, may help students to eventually use them in ways that are more conducive to effective learning.

But, even if students use generative AI tools critically and ethically, it is possible that, over time, AI could increase the proportion of upper-class degrees awarded and undermine trust in the value of awards. Quality assurance processes will therefore become increasingly important in maintaining trust in academic standards.

How should QA agencies adapt to AI in higher education?

In the long term, quality assurance organisations must ask whether the integration of generative AI into learning and assessment processes necessitates a wholesale review of academic standards. If students score higher marks when assisted by AI, over time, we may need to recalibrate those standards to encompass increased expectations of student achievement.

Quality agencies will also need to encourage providers to adopt broadly consistent approaches to AI, to avoid significant discrepancies in graduate skills between an ‘AI-integrated’ graduate whose university embraced AI in learning and teaching and an ‘analogue’ graduate whose university took a more conservative approach.

How then can standards of awards be maintained – or even recalibrated – when we are no longer comparing like with like? Can the external examiner system, which underpins the comparability of standards across higher education sectors by aligning the value of awards at different providers, withstand such fundamentally divergent approaches to the assessment of learning? Will an external examiner from a zero-tolerance institution, say, be able to compensate for these differences when judging the worth of work produced by students at a more AI-permissive institution?

Here, quality agencies need to walk a fine line. On the one side, institutions fiercely guard their academic autonomy – just as individual academics defend their academic judgements. Nobody would take kindly to the external imposition of an inflexible set of rules across all providers.

But, on the other side of that line, there remains a need to maintain public, governmental and employer confidence in the standards which underpin the value of a degree – without creating a multi-tiered system of institutions based on their tolerance of the use of AI in their students' work.

At QAA, we believe this line is best navigated by negotiating and agreeing with our stakeholders a set of sector-owned principles to ensure the maintenance of these shared standards, an approach we also employ in formulating key reference points such as subject benchmark statements, qualifications frameworks and the UK Quality Code.

When AI meets QA

AI poses a challenge to quality assurance, but it may also offer opportunities by making QA processes more efficient. These powerful tools may allow for greater consistency in the ways in which we aggregate, present and compare data on student satisfaction, engagement, retention, progression, attainment and outcomes.
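To make this concrete, here is a minimal sketch, in Python with pandas, of how provider-level indicators might be aggregated and compared on a consistent basis. The provider names, metrics and figures are all invented for illustration; this is not a description of any actual QAA tooling.

```python
import pandas as pd

# Hypothetical provider-level indicators (invented figures for illustration).
records = [
    {"provider": "A", "satisfaction": 0.81, "continuation": 0.93, "completion": 0.88},
    {"provider": "B", "satisfaction": 0.74, "continuation": 0.89, "completion": 0.85},
    {"provider": "C", "satisfaction": 0.86, "continuation": 0.95, "completion": 0.91},
]
df = pd.DataFrame(records).set_index("provider")

# Standardise each indicator (z-scores) so providers are compared on a
# consistent scale rather than on raw percentages.
z = (df - df.mean()) / df.std()

# A simple composite: the mean z-score across indicators, sorted so that
# outliers at either end are easy to spot.
print(z.assign(composite=z.mean(axis=1)).sort_values("composite"))
```

The point of the standardisation step is consistency: each indicator contributes on the same scale, so comparisons do not silently privilege whichever metric happens to have the widest raw range.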

With better data and evidence, and faster, less burdensome ways of analysing it, AI could support predictive modelling, helping to identify potential risks to the student learning experience or student outcomes before they materialise. This would combat the current problems with time-lagged outcome data and, in the long term, save time and resources by catching issues before they begin to impact the student experience.
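As an illustration of the kind of predictive modelling described above, the sketch below trains a simple logistic regression on synthetic engagement indicators to flag students at risk of non-continuation before outcome data would reveal a problem. The features, the data-generating assumptions and the review threshold are all hypothetical; a real system would need validated indicators and careful governance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic leading indicators: e.g. attendance rate, VLE logins per week,
# assessment submission rate (all invented for illustration).
n = 1000
X = rng.uniform(0, 1, size=(n, 3))

# Synthetic ground truth: lower overall engagement raises non-continuation risk.
risk = 1 / (1 + np.exp(4 * (X.sum(axis=1) - 1.5)))
y = rng.uniform(size=n) < risk  # True = did not continue

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag students whose predicted risk exceeds an (assumed) review threshold,
# so support can be offered before time-lagged outcome data shows a problem.
flagged = model.predict_proba(X_test)[:, 1] > 0.5
print(f"Flagged {flagged.sum()} of {len(flagged)} students for early review")
```

The value here is timing: leading indicators are available in-year, whereas continuation and attainment data typically arrive a cycle or more later.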

But we must at the same time remain aware that, if technology-enhanced QA practices grow too far and too fast, this could in itself undermine confidence in the judgements they inform.

Approaches that foreground human engagement – grounded in academic expertise and peer review – remain core tenets of quality assurance. AI might facilitate data analyses that provide a quantitative basis for quality assurance processes, but the qualitative activities involved in key monitoring and review work, such as interpreting data within each provider’s context, understanding how providers operate, and engaging with staff and students, remain better suited to human oversight.

Huge challenges remain. Yet by addressing the impacts of these technologies on the assessment of students and their engagement with learning, as well as on the assurance of their own quality and standards, providers can position themselves at the forefront of an AI-driven future, ensuring that students receive a high-quality education that prepares them for the challenges and opportunities of this century.

Note: This article is a follow-up to the author’s contribution to a session on ‘QA and technology: challenges from AI’ at the 2024 European Quality Assurance Forum.