10 March 2026 | Story Queen Sigebedu | Photo Supplied
Prof Mike Perkins, Head of the Centre for Research and Innovation at British University Vietnam, shared insights on the role of Generative AI in higher education during his visit to the University of the Free State. Prof Perkins holds a PhD in Management from the University of York and is recognised for his work on integrating GenAI into teaching, learning, and assessment practices.

As generative artificial intelligence (AI) tools such as ChatGPT continue to rapidly reshape how learners write, conduct research, and engage with knowledge, institutions should avoid reacting to these developments with fear or rigid enforcement. Instead, educators should rethink assessment approaches in ways that are practical, ethical, and forward-looking.

This was some of the advice offered by Prof Mike Perkins, an international expert in generative AI and assessment practices, during a public lecture titled ‘Assessment After ChatGPT: A Practical Integrity Framework for Schools and Universities’ recently hosted at the University of the Free State (UFS).

The lecture attracted academics, educators, and students, with participants joining both in person and online.

Rethinking assessment in the age of generative AI

Prof Perkins reminded his audience that, while AI technologies present new opportunities for teaching and learning, they also raise pressing questions about academic integrity and the future of assessment.

To illustrate the complexities of AI use among students, he presented the example of a hypothetical student named Linh. Linh initially used AI tools to improve her academic performance, but over time became increasingly reliant on them. The example highlighted how easily students can become dependent on generative AI when clear guidelines and responsible practices are not in place.

Prof Perkins noted that higher education institutions across the world are grappling with similar challenges related to the integration of generative AI. Among the key issues identified were:

- a lack of professional development for educators in AI literacy;
- limited technological infrastructure;
- unclear strategies for AI integration in teaching and learning; and
- the need for critical reflection on the role of generative AI in education.

He emphasised that these challenges are not unique to any single institution but rather reflect a global shift in how knowledge is produced and assessed.

Importantly, he cautioned against relying on AI-detection tools as the primary mechanism for addressing academic misconduct.

“AI text detectors exist, but they do not work well enough to make decisions about academic integrity, especially when students use methods to hide the origin of the text,” he explained.

Introducing the AI Assessment Scale

As a practical response to these challenges, Prof Perkins introduced the AI Assessment Scale, a five-level framework designed to guide the responsible integration of generative AI into teaching, learning, and assessment practices.

Level 1: No AI

At this level, assessments are completed entirely without AI assistance, usually within a controlled environment. Students rely solely on their own knowledge, understanding, and skills.

Level 2: AI Planning

AI may be used for pre-task activities such as brainstorming, outlining ideas, or conducting initial research. However, students are expected to develop and refine these ideas independently.

Level 3: AI Collaboration

Students may use AI tools to support the completion of a task, including generating ideas, drafting content, or receiving feedback. Learners are required to critically evaluate and modify AI-generated outputs to demonstrate their own understanding.

Level 4: Full AI Use

AI may be used to complete various elements of the task, with students directing AI tools to achieve assessment goals. At this level, engagement with AI tools may be a required component of the assessment.

Level 5: AI Exploration

This level encourages creative and innovative uses of AI to enhance problem-solving, generate novel insights, or develop solutions to complex challenges. Students and educators may collaborate to design assessments that explore unique AI applications within a specific field of study.

Balancing opportunity and academic integrity

In concluding his lecture, Prof Perkins emphasised the importance of taking a proactive approach to emerging technologies in education.

He said generative AI presents opportunities for enhanced learning as well as risks to academic integrity. Institutions therefore need to rethink traditional approaches to assessment and adopt frameworks that support responsible and transparent AI use.

He suggested that tools such as the AI Assessment Scale could help educators navigate this transition and develop more effective assessment strategies in the era of generative AI.

Academic perspectives from UFS

Responding to the lecture, Prof Johann Rossouw, a lecturer in the UFS Department of Philosophy and Classics, reflected on the implications of generative AI within disciplines such as philosophy. He noted that in modules where critical reading, deep reflection, and analytical thinking are central, AI tools may have limited usefulness.

His remarks highlighted the need for discipline-specific approaches to AI integration, recognising that the role of generative AI may vary across different academic fields.

The lecture formed part of the University of the Free State’s ongoing commitment to engaging critically with emerging technologies and exploring innovative approaches to teaching, learning, and academic integrity in higher education.

