Why ChatGPT matters more than you think

For the first time, we have an entity that is clearly both artificial and apparently intelligent – something approaching true AI, writes Paul Geertsema.

Dr Paul Geertsema
Dr Paul Geertsema is a senior lecturer in finance at the University of Auckland Business School and a founding member of the Artificial Intelligence Researchers Association, where he serves on the board.

OPINION: If you’ve been following the news, you have likely heard of ChatGPT by now. You probably think you know what it’s all about, but there’s more to this artificial intelligence tool than most people are aware of.

This may be partly because coverage of the topic has mostly centred on ChatGPT’s ability to help students cheat.

In domains where the right answer is essentially factual or based on reasoned opinion rather than rigorous calculation (think law vs engineering), ChatGPT now often performs at the level of a first-year undergraduate.

We can illustrate the power of the AI tool with an example. Consider the following sentence: “I am clever and much good at sport and want to at Harvard do study.”

Perhaps ChatGPT can help?

When prompted to, “rewrite the following as a short paragraph in the style of a Harvard application essay” and reminded to “condense the writing to two sentences”, it generates the following: “I am a skilled athlete and an intellectually capable individual who is eager to study at Harvard. With a strong commitment to academic excellence and a passion for sports, I am confident that Harvard is the ideal institution to help me achieve my goals.”

Close to perfect. So, does “writing well” still matter?

I would argue that it does, but not for the same reasons it once did. The ability to generate good writing is now less important; the ability to tell good writing from bad is much more important.

This means that it’s not a waste of time to teach students writing and other skills, because they will still need those skills to be able to make effective use of ChatGPT and similar AI tools.

And students will have to make effective use of ChatGPT if they want to survive in the workforce. Those who effectively use ChatGPT and related tools will simply outcompete those who don’t.

No matter how strong or fit you are, you cannot outperform someone using a digger if you are working with a shovel.

So not only should universities not ban ChatGPT (such a ban, in any case, is not enforceable), they should encourage students to learn how to use it. Fortunately, there appears to be growing support for this view within academia.

As a field, AI has been around for perhaps six decades and has been able to solve industrial strength problems reliably since the mid-2000s.

But in all that time, if you wanted to solve problem X, you had to train an AI model specifically to solve problem X.

By contrast, ChatGPT is able to do things it was not specifically trained to do, and do them well.

For the first time, we have an entity that is clearly both artificial and apparently intelligent – something approaching true AI.

It’s no surprise that ChatGPT can generate text since this is what it was trained to do.
In fact, earlier AI approaches could generate text that sounded vaguely Shakespearean when trained on the collected works of Shakespeare.

What we (AI researchers) did not expect from GPT was a model that could also draw up purchase and sale contracts, argue the finer points of deconstructionism, act as a tutor on any topic, pretend to be Genghis Khan or Barbie, write computer code to solve a specified problem, grade student essays, generate original jokes (and explain them), translate between any two languages, and much more.

In short, what we have with ChatGPT and similar emerging large language models is the birth cry of Artificial General Intelligence or AGI.

We have an AI tool that can understand and follow instructions, conduct a conversation, explain itself and adapt to a person’s responses. It’s hard to overstate how significant this is.

It’s becoming clear that AI, particularly the kind of technology underpinning ChatGPT, is the “next big thing”. The only secret worth keeping – that general-purpose AI is even possible – is now out in the open, and the research and development money will start to flow, first in the billions and then in the tens and hundreds of billions of dollars, year after year.

When a particular problem is on the receiving end of that kind of money, you get results, and we are likely to see a rapid succession of AI systems, each a bit more capable than the one before.

Whatever tasks AI models such as ChatGPT can’t do well at the moment, thousands of the smartest people in the world are working on fixing that, funded by some of the wealthiest companies in the world, which see this as a matter of corporate survival.

Speaking personally, I’m grateful to be alive during this time in history.

This is not the first technology revolution humanity has experienced, and I think we’ll be able to muddle through as we have done before.

Hopefully, you’re now curious enough to have another look at ChatGPT. Nobody can tell you what talking to an AI is like. You have to try it for yourself.

This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau University of Auckland.

This article was first published on Stuff.

Media contact

Sophie Boladeras | Media adviser
M: 022 4600 388
E: sophie.boladeras@auckland.ac.nz