All the chat's about AI, but humans rule

Opinion: Artificial Intelligence tools used for writing present an opportunity to reimagine education assessment, says Associate Professor Alex Sims.

Associate Professor Alex Sims in a computer lab. Photo: Elise Manahan

ChatGPT is creating angst in tertiary education and it seems it’s difficult for some university lecturers, who are used to doing things a certain way, to embrace the rapid change in this technology. But many of us teaching in universities also relish the opportunity such tools enable.

The software has attracted considerable media attention this year for its ability to answer questions, provide advice on almost any topic in fluent, well-written English, write computer code and perform various other tasks.

The chatbot, launched in November 2022, has been tested using a broad range of exam questions, including law, medical and business exams. It passed those exams.

Some of the answers provided by ChatGPT have rendered experts speechless. Yet the uncanny answers were pure luck. ChatGPT does not know whether an answer is correct; it simply predicts plausible text based on its massive training dataset. Therefore, many answers are not 100 percent accurate and some are spectacularly wrong. A human is needed to determine the accuracy of answers.

The reaction of universities to ChatGPT and other similar artificial intelligence (AI) tools has been mixed, falling into three main types: prevention, banning and embracing.

To prevent their use, some universities are falling back on in-person exams featuring old-fashioned pen and paper. However, tests and exams have never been ideal assessment methods. They don’t indicate whether a person can work well in a team or present and communicate information orally, and they disadvantage those with debilitating exam anxiety. To accommodate these limitations, many courses have reduced the percentage of course marks given for tests and exams.

In addition, preventing the use of ChatGPT would work only if all of a course’s assessments were for in-person work. To ensure no student could use ChatGPT would require increasing the percentage of marks for old-school tests and exams, a retrograde step.

Second, some tertiary providers have explored banning ChatGPT and other AI tools, and promoting the use of AI detection tools. These detection tools are not 100 percent accurate and can be worked around. My concern is that students will spend more time attempting to circumvent the system than learning the content.

AI tools are not a replacement for human expertise, but are tools that can augment and enhance it.

Associate Professor Alex Sims, Business School University of Auckland

Both banning and preventing the use of AI tools for all or most assessments are counterproductive. People will not, for the foreseeable future, be in competition with AI. Instead, they will be competing with people who are adept at using such tools. People unable to use AI tools may become unemployable in many professional settings because they will be too inefficient and slow.

The key to successfully integrating AI into education lies in understanding that AI tools are not a replacement for human expertise but are tools that can augment and enhance it.

Universities need to teach students how to use these tools effectively. A concerted effort is necessary to provide training and guidance on how to use AI to enhance students’ learning and prepare them for the workforce.

We have adapted to new tools in the past. For example, the fears that electronic spreadsheets would put accountants out of work did not materialise.

Now AI tools are creating a much-needed opportunity to reimagine the role of education in the 21st century. So where does this leave us with the vexed question of assessment? How do we assess students’ knowledge? For most university courses, some element of in-person evaluation, whether written, oral or both, is necessary. The remaining assessments require rethinking, and what works for one discipline or course may not work for others.

One idea is that instead of providing a question to which the student writes an answer (the traditional approach), both the question and answer could be given. The students could critique both the question and answer and explain what they think is correct and/or incorrect and why.

Alternatively, a student could be assessed on the nature and quality of the prompts they give an AI tool. This will increase the time required for marking, but it will build students’ skills in using the tools and provide a good way of assessing their knowledge of the subject matter at hand.

As with most technology, the challenge is not the technology itself but rather our human emotions, experience and reaction to it.

Alex Sims is an associate professor in the Department of Commercial Law, University of Auckland Business School and an associate at the UCL Centre for Blockchain Technologies.

This article from UniNews April 2023 is adapted from one first published on Newsroom in February as ‘ChatGPT and the future of university assessments’.

Dr Alex Sims was a panellist for 'Embracing the Revolution: Chat GPT and its impact on business communication'. The event was run by the Auckland Business School and the Auckland Business Chamber. 4 April, 4.30pm – 7.30pm, Sir Owen G Glenn Building