Can academic integrity prevail when AI is so good?

In Taking Issue, a regular opinion piece in Ingenio, three experts answer a burning question but only have 350 words to do so. And this time ChatGPT gets a turn too.

In 350 words or fewer, discuss whether academic integrity can prevail when AI is so good.

HAZIM NAMIK
Invigilation is key

For academic integrity to prevail, all that’s needed is invigilated assessment, wherever possible. Before setting recent tests, I, like many other academics, jumped into the world of generative artificial intelligence and challenged ChatGPT with questions from courses I teach (engineering physics, hardware programming, and control systems). It gave average responses to the physics and control-systems problems.

It made conceptual and arithmetic mistakes in its answers and justifications, much as an average student would. Because you’d expect a student to make similar mistakes, detecting AI-generated code is more difficult than detecting AI’s use in essay-style questions.

In creating a recent test, I used ChatGPT myself, asking it to generate some computer code. I then set a question in which students, under invigilation and without computers, had to analyse what was wrong with the code and why it might not work in the real world. Lecturer 1, ChatGPT 0. The bottom line is that students have to understand the code to know why it may not work.

For sure, AI has an impressive ability to write code, even for real hardware, such as a robot with a certain configuration. It can produce working code in the desired programming language, with provisions to account for the physical properties of the hardware. At postgraduate and senior undergraduate level, I’d encourage its use, with students then testing its application. But first-year students need to know fundamental principles, such as how to write and debug their code. If they have no clue, they’ll be found out.

It’s 2023 and we also need ‘authentic assessment’. Students must be able to use computers for tests, just as they do in the real world. An ideal might be to hold exams on campus in computer labs for suitable subjects, with only certain programs loaded.

There are other innovative approaches we can adopt, but ChatGPT may always be one step ahead of us. AI tools will improve with each iteration, but so will AI detection tools. This is just the beginning.

Academic integrity will prevail if we are creative in our assessment methods. Invigilated tests and exams have a crucial role in protecting the reputation of academic institutions. They are not assessment tools of a bygone era. As teachers, we just have to think outside the box.

Hazim Namik is a professional teaching fellow in Mechanical and Mechatronics Engineering.

ALEX SIMS

Let’s embrace AI in education and work

There is no single definition of academic integrity. At the University, it is equated with “integrity and honesty”. The University aims to develop its students’ intellectual independence as well as to maintain the reputation and quality of its qualifications.

ChatGPT and other AI tools have transformed the game. Students can use AI to provide fluently written answers to test, exam and assignment questions. The concern is that AI-generated answers often pass without students needing much, or indeed any, knowledge of the subject matter.

But any proposal to ban ChatGPT and other AI tools, and to use AI detection tools to catch “cheating”, is both doomed to fail and counterproductive. Doomed to fail, because AI detection tools are not 100 percent accurate; counterproductive, because students who don’t know how to use AI will be disadvantaged in the workplace. Companies are already using AI to create material for their clients, saving time and money. Knowing how to use AI is an employable skill.

Instead of entering a futile arms race between AI tools and AI detection tools, we need to embrace both the future and the past. For some assessments, students should be encouraged to use AI tools, but the questions must change so we can better test students’ knowledge through the semester. One advantage may be that AI’s ability to produce fluently written English lets students concentrate on demonstrating their knowledge through other means of assessment, instead of the current practice, which can privilege students who write well over those whose technical knowledge is better than their writing.

Students may also need to pay attention to what ChatGPT outputs to ensure it reflects their own values, especially as ChatGPT has been shown to demonstrate bias.

It is possible to construct ChatGPT-proof assessments, but it may require a radical rethink. Assessments vary between courses and subjects. ChatGPT’s arrival involves academics unlearning decades of practice and experience. As such, our traditional notions of academic integrity must also change.

We can look to the past, with a twist. We could return to traditional pen-and-paper tests or exams (using Crowdmark to mark them online), with a requirement that students pass the test to pass the course. That way, we can maintain the quality of the University’s qualifications by guaranteeing students’ knowledge of the subject matter. And so, integrity prevails.

Alex Sims is an associate professor in the Department of Commercial Law, Auckland Business School.

ANDREW WITHY

It’s all about your values

Academic integrity is not a battle between cheats and enforcers, or a technological arms race between students and teachers.

It’s ultimately an intellectual code valuing honesty, responsibility, accountability and professionalism. Students, tutors, lecturers, researchers and anyone else participating in a shared intellectual community must have some mutually agreed and respected norms of intellectual behaviour.

Without this common behavioural code, global academic trust would collapse, and shared academic knowledge would devolve into a proprietary corporate, guild or apprenticeship model. That is, it could be the end of universities and public education. This would radically change society’s fabric and endanger our democratic values.

Certainly, AI enables those who do not accept current academic integrity standards to cheat more efficiently and with less risk. The problem is not the technology but the values of these individuals. Ultimately, they value the acquisition of a credential more than the work required to master knowledge. How they choose to cheat is merely a matter of convenience and risk.

AI is currently convenient and low-risk: it removes the need for assistance from another human and can be used in total secrecy. AI can also feel to students a lot like current best practice, since they already research on the internet while writing on a device with built-in spelling and grammar assistance. This means that how we understand, describe, teach, model, practise and assess our research and learning will have to change, so staff and students can better see where and why they are crossing the line. Some current best practices may even become unacceptable.

The shift in our communal practices will be bumpy and uncomfortable, and mistakes will be made. But we will find a new equilibrium between trust and quality assurance.

There’s no denying AI text generators are becoming essential in many jobs. At some point soon, the associated legal and ethical issues may create unacceptable risks for businesses, and so commercial norms and practices will become established. Hopefully, these standards will not be too far from those academic integrity norms required for a vibrant research and educational community to thrive.

That way, academic integrity must prevail.

Dr Andrew Withy is a professional teaching fellow in Philosophy, Faculty of Arts.

An image of a robot writing an essay, created by artificial intelligence to illustrate these opinion pieces.

THE ANSWER BELOW WAS CREATED BY CHATGPT

Note: ChatGPT's answer runs as it arrived. It is unedited and therefore may contain errors, repetition and US spellings.

Academic integrity refers to the ethical principles of honesty and truthfulness that are crucial in the academic world. With the rise of artificial intelligence, there is a growing concern that it may pose a threat to academic integrity. However, it is unlikely that AI will completely eliminate academic integrity.

While AI can automate many tasks and make them faster and more efficient, it cannot replace human judgment and ethics. AI can only work within the parameters it is programmed with and can't make independent ethical decisions. For example, AI can assist with tasks such as checking plagiarism, but it cannot determine if the source material was appropriately cited or if the author had the right to use it.

Moreover, academic institutions have a responsibility to instill academic integrity in their students and faculty. This includes educating them on ethical behavior and academic practices, as well as setting expectations and consequences for any violations of academic integrity. The responsibility to uphold academic integrity ultimately falls on the individual, not the technology they use.

Additionally, AI-assisted research or writing can still be subject to bias and manipulation. This may lead to unethical or even fraudulent practices, undermining the credibility of the research and the academic institution. To prevent this, it is crucial to have proper oversight and regulation of AI in academic settings.

In conclusion, while AI may change the way academic work is done, it is unlikely to replace the need for academic integrity. Human judgment, ethical decision-making, and proper oversight and regulation will still be crucial in ensuring that academic work remains credible and trustworthy.

In the end, academic integrity will always be essential in maintaining the credibility of the academic world and ensuring the trust of the public in academic research and education. While AI may provide new tools and opportunities, it cannot replace the importance of human ethics and values in the academic world.

ChatGPT is an AI program from OpenAI, launched as a prototype on November 30, 2022. The ‘Chat’ references chatbot and the ‘GPT’ stands for generative pre-trained transformer.

These articles are opinion and may not be representative of Waipapa Taumata Rau, University of Auckland. 

This Taking Issue column first appeared in the Autumn 2023 edition of Ingenio magazine.