Feature: Artificial intelligence offers powerful new opportunities, but also unprecedented challenges. Owen Poland talks to University of Auckland AI experts about where the technology may lead us.

Professor Gill Dobbie
Professor Gill Dobbie and a team of researchers in the Machine Learning Group are working to eliminate bias and the so-called ‘hallucinations’ from which large language models such as ChatGPT suffer. Photo: Elise Manahan

The prospect of a neurosurgeon being guided by artificial intelligence to perform brain surgery is thought-provoking, to say the least. However, the precision that AI could bring to diagnosing complex conditions and guiding intricate procedures may be closer to reality than many care to think.

There’s no doubt that AI has rapidly begun to touch many walks of life, so much so that it became the Collins Dictionary word of the year in 2023. For Auckland Bioengineering Institute research fellow Dr Hamid Abbasi, the technology led him into a brave new world of medical research.

“AI is poised to revolutionise every aspect of current medical practice and patient care, equipped with robust capabilities that significantly enhance outcomes,” he says.

For Hamid, the revolution is the development of an advanced neuro-navigation tool called Neurofanos. The name means ‘lantern’, or ‘to bring to light’, and the tool is designed to analyse complex data in real time and provide neurosurgeons with high-resolution images while they conduct high-risk tasks like brain tumour resection.

“AI improves precision and helps surgeons see through the ‘unseen’ to visualise and preserve critical structures at every moment of surgery,” he says. “Knowing those factors is going to help reduce accidental harm and then improve patient care and outcomes.”

Having won the University of Auckland’s 2022 Velocity $100k Challenge, the transdisciplinary Neurofanos team of biomedical engineers, Auckland City Hospital neurosurgeons and researchers at the Mātai Medical Research Institute was subsequently granted $1 million from the MBIE Endeavour Smart Ideas fund.

“That funding has helped us to find our feet and start walking,” says Hamid, but “we need to run”. Additional financial support will be required to get the concept into operating theatres as soon as possible.

“Our talented team is fully equipped with all the necessary expertise and resources and is set for a robust launch,” he says. “But we need more local funding to ensure that the IP stays within the country.”

Beyond Neurofanos, Hamid believes that AI will play an increasing role in what’s known as personalised or precision medicine for illnesses like cancer, where advanced algorithms can explore long-range connectivity in data that humans struggle to comprehend.

“AI is able to bring it forward, analyse it in a blink and say, ‘Look, this patient needs this specific type of care. This is the right time to start the treatment’.”

Hamid Abbasi portrait
Dr Hamid Abbasi is working on an AI tool to help neurosurgeons make better decisions (pictured with surgical training model). Photo: Chris Loufte

AI is poised to revolutionise every aspect of current medical practice and patient care.

Dr Hamid Abbasi, Auckland Bioengineering Institute

Medical advances

In the Faculty of Engineering, Dr Reza Shahamiri uses his deep learning engineering expertise to design software platforms that leverage AI technologies to make healthcare services more accessible and provide healthcare professionals with better digital tools to help patients and their families.

For more than a decade, Reza has been developing automatic speech-recognition technologies, similar to Apple’s Siri, which can hear and comprehend the otherwise unintelligible speech of those with speech impediments.

“When we enable computers to hear the atypical speech, the impact of this technology is way beyond having software like Siri that can understand them. We can build automated speech therapy systems to help patients find their voice.”

Reza Shahamiri
Dr Reza Shahamiri says AI could relieve pressure on the health system. Photo: Chris Loufte

Another project involves analysing speech patterns to automatically identify memory lapses and detect the early signs of dementia, prompting an early referral to a specialist.

“With dementia, there is no known cure. The best way to deal with it is to detect it as early as possible so that the progression can be slowed down and managed.”
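What might such speech-pattern analysis look at? As a purely illustrative sketch (in Python, with a made-up hesitation-word list and toy transcript, not Reza’s actual system), a screening tool could start by turning transcribed speech into a few simple markers, such as vocabulary diversity and the rate of hesitation words, which a classifier could then weigh up.

```python
# Illustrative only: simple linguistic markers that a dementia-screening
# tool might compute from transcribed speech. The word list, transcript and
# interpretation are hypothetical, not clinically validated.
import re

HESITATIONS = {"um", "uh", "er", "hmm"}

def speech_markers(transcript: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    unique_words = set(words)
    return {
        "word_count": len(words),
        # A falling type-token ratio can point to a shrinking active vocabulary.
        "type_token_ratio": len(unique_words) / max(len(words), 1),
        # Frequent hesitations can signal word-finding difficulty.
        "hesitation_rate": sum(w in HESITATIONS for w in words) / max(len(words), 1),
    }

sample = "Well um I went to the um the place, the uh shop, to get the thing"
print(speech_markers(sample))
```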

Designing an AI platform to help with the early identification of autistic children is another of Reza’s long-term projects, and his Autism AI platform has now collected behavioural data from around 12,000 people worldwide.

“It’s very important that we identify autistic children as early as possible to ensure that the effectiveness of support plans is maximised.”

Reinforcing the power of AI, he points to Google’s recent conversational AI diagnostics tool AMIE, which was tested alongside primary care physicians in a blind trial to evaluate patient needs.

“Surprisingly, the AI tool was around 15 percent more accurate in diagnosing patients – and AI asked better questions and communicated more effectively with patients,” says Reza. “Once thoroughly tested and deployed, such technologies could have a huge impact in releasing pressure on our healthcare sector.”

And while openly shared AI research risks being misused, he says its collaborative nature has fostered the development of powerful tools.

“These AI advancements have significantly boosted our capabilities and opened doors to unimaginable achievements. They are making us smarter and more capable, likely leading to massive productivity improvements.”

Aiding humanity

As the leader of the Strong AI Lab (SAIL), which sits within a wider centre in the University focused on intelligence research, Professor Michael Witbrock believes that current limitations around AI will be removed very quickly.

“This is here to stay, and it’s going to get better very rapidly because there’s a huge need for better problem solving for humanity and there’s, frankly, a huge amount of money to be made in serving that need.”

What’s more, he says New Zealand has a history of “looking change in the face”, with strong advantages to doing so.

“New Zealand probably can do a better job, more rapidly, of integrating AI effectively into the way we do things.”

It’s going to get better very rapidly because there’s a huge need for better problem solving for humanity.

Professor Michael Witbrock, Strong AI Lab, Faculty of Science, University of Auckland

Given the country’s productivity crisis and our limited population, the creation of an ‘intelligent ecosystem’, where AI helps with organisational intelligence that enhances human intelligence, is a focus for Michael and his team.

“Imagine if we had 5 million shareholders in the country and then 115 million AI systems filling all the gaps. We could be the most productive country in the world.”

From a business perspective, he says AI could be an economy-wide accelerator that helps entrepreneurs with legal and accounting advice, and product design.

“All of these things can be done to some degree at the moment with AI systems, and increasingly well quite rapidly.”

Michael Witbrock portrait
Professor Michael Witbrock says AI could boost our productivity. Photo: Elise Manahan

On the teaching front, Michael helped design a new Master of Artificial Intelligence programme at the University, which provides an opportunity to study the field in depth – and potentially attract local and international talent.

“There’s a tremendous need for people to increase the amount that they know about AI, and also a tremendous need for people to have mentorship in doing things with AI.”

As co-founder and chair of the global AI for Good Foundation, Michael says there’s a large gap between what can be done and what needs to be done to meet the Sustainable Development Goals. He says more technological capacity could be deployed “in the service of humanity”.

The deployment of large-scale robotics to deliver agricultural surpluses to people in need is one of many possibilities, and he’s called on alumni who have the time and capability to support the cause.

Michael’s view on where AI could ultimately take us: “There’s a real opportunity for New Zealand to lead in AI for humanity.”

We want to add in more semantic information so that [large language models such as ChatGPT] understand the context in which the writing is happening.

Professor Gill Dobbie, School of Computer Science, University of Auckland

Ensuring a level playing field

Behind the scenes, Professor Gill Dobbie and a large team of researchers in the School of Computer Science’s Machine Learning Group are working to eliminate bias and the so-called ‘hallucinations’ from which large language models such as ChatGPT suffer.

“They don’t actually understand what they’re writing about. We want to add in more semantic information so that they understand the context in which the writing is happening.”

The aim is to develop deep-learning neural networks that work more like the human brain and produce richer results. Gill says the University is pushing the boundaries, “so other countries can learn from us”.

She is involved in a variety of projects in the AI field, including the use of algorithms to predict severe acute pancreatitis.

“With our machine-learning model, clinicians will be able to input some routinely collected data into the model and get a result based on others in the population.”
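To give a flavour of how such a tool works (a minimal sketch with invented feature names and data, not the team’s actual pancreatitis model), a risk predictor of this kind can be built by training a classifier on routine measurements from past patients and then asking it to score a new one.

```python
# Minimal sketch: predicting the risk of severe acute pancreatitis from a few
# routinely collected measurements with a logistic-regression classifier.
# Feature names and values are illustrative, not real clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: rows are past patients, columns are routine
# observations (age, heart rate, white cell count, creatinine); labels mark
# whether the episode progressed to severe disease.
X_train = np.array([
    [45, 92, 14.2, 1.1],
    [63, 110, 18.5, 1.9],
    [37, 80, 9.8, 0.8],
    [71, 105, 16.0, 2.3],
    [52, 88, 11.4, 1.0],
    [68, 118, 20.1, 2.8],
])
y_train = np.array([0, 1, 0, 1, 0, 1])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# A clinician enters one new patient's routine values and gets a risk estimate
# based on how similar patients in the population fared.
new_patient = np.array([[58, 101, 15.3, 1.6]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of severe disease: {risk:.2f}")
```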

Having previously been involved in predicting the peaks and troughs of Covid cases, she’s now helping to predict the number of flu cases, which fluctuates from season to season and impacts demand on hospital beds and staff.

“If we can predict, say, ten days in advance, then that will allow hospitals to manage their resources better.”
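The forecasting idea can be sketched in the same hedged spirit (with a synthetic case-count series, not the real hospital data): regress each day’s count on the counts observed over the previous fortnight, with the training target set ten days ahead.

```python
# Illustrative only: forecasting flu case counts ten days ahead by regressing
# on lagged daily counts. The series below is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
days = np.arange(120)
cases = 50 + 30 * np.sin(2 * np.pi * days / 60) + rng.normal(0, 5, size=days.size)

LAGS, HORIZON = 14, 10  # use the last 14 days to predict 10 days ahead

# Build (lag-window, future-value) training pairs from the historical series.
X, y = [], []
for t in range(LAGS, len(cases) - HORIZON):
    X.append(cases[t - LAGS:t])
    y.append(cases[t + HORIZON])
X, y = np.array(X), np.array(y)

model = Ridge().fit(X, y)

# Forecast the count expected 10 days from "today" using the latest window,
# giving hospitals lead time to plan beds and staffing.
latest_window = cases[-LAGS:].reshape(1, -1)
print(f"Predicted cases in {HORIZON} days: {model.predict(latest_window)[0]:.0f}")
```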

Another piece of research is an analysis of data to identify diabetes-related dementia, which is prevalent in Māori and Pacific communities. “If you can control the diabetes, then that may have an effect on their memory,” she says.

As one of the founding chairs of the Artificial Intelligence Researchers Association, which has around 300 members, Gill says the group is a way of pulling everyone together, including those from Crown Research Institutes (CRIs).

“That’s important, because a lot of the practical research is going on in the CRIs and it’s useful for academics to link with that practical research.”

While there’s a lot of fear about the potential harm that can come from AI, she says it’s not a valid reason to stop the research. “New Zealand could benefit a lot from AI, and we just have to work out how we can best benefit.”

Daniel Wilson portrait
Dr Daniel Wilson says we must ensure AI is equitable. Photo: Chris Loufte

You don’t want companies simply vacuuming up more Māori information, because that data has its roots in people and intentions.

Dr Daniel Wilson, School of Computer Science, Waipapa Taumata Rau, University of Auckland

Another School of Computer Science lecturer supporting the AI masters programme is Dr Daniel Wilson (Ngāpuhi, Ngāti Pikiao), whose special interest is in AI ethics and Māori algorithmic sovereignty.

“I’m hopeful that through raising awareness about potential pitfalls, we can create equitable AI,” says Daniel. “We could help lead the way, particularly with respect to Indigenous communities. I think that’s a hopeful thing.”

One of the underlying problems is that some algorithms don’t have representative information, which leads them to discriminate against certain populations. A case in point was the Amazon CV filtering system, which eliminated female job applicants because it was trained on historical data that favoured men.
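A toy demonstration of that mechanism (hypothetical data, not Amazon’s system): train a classifier on past hiring decisions that favoured one group at the same qualification level, and it will reproduce the disparity for two otherwise identical candidates.

```python
# Toy demonstration with synthetic data: a model trained on historically
# biased hiring decisions simply learns and repeats that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
score = rng.normal(70, 10, n)          # merit score, equally distributed
# Historical decisions: group A was hired far more often at the same score.
hired = (score + 15 * (group == 0) + rng.normal(0, 5, n)) > 75

model = LogisticRegression().fit(np.column_stack([group, score]), hired)

# Two candidates identical in every respect except group membership:
candidate_a = [[0, 72.0]]
candidate_b = [[1, 72.0]]
print("P(hired | group A):", round(model.predict_proba(candidate_a)[0, 1], 2))
print("P(hired | group B):", round(model.predict_proba(candidate_b)[0, 1], 2))
```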

Likewise, Daniel says, “You’re bound to have lots of examples of English texts but relatively fewer texts in te reo Māori.” On the flip side, however, “You don’t want companies simply vacuuming up more Māori information, because that data has its roots in people and intentions.”

Another problem is that the performance metrics of algorithms tend to be one-dimensional. “So you might be missing out on lots of different dimensions related to Māori health and well-being, or hauora. That includes the spiritual components, the cultural components, and the social components, for instance.”

With that in mind, Daniel is part of a team involved in a tikanga technology project, which he says needs to engage with communities to iron out issues from the beginning.

“The idea is to create algorithms that Māori will benefit from as well. It’s about putting forward some principles to help guide thought on how to make these systems safer and more culturally appropriate.”

Daniel says the Centre of Machine Learning for Social Good, launched in 2023, will consult with communities about their needs. As well as undertaking health and social projects, the centre is also looking into issues like pest control.

“In order for the Centre of Machine Learning for Social Good to have real traction and change, and to survive outside of research funding rounds, it needs this real motivation to come from the community.”

Ethics of use

Auckland Business School senior lecturer Dr Benjamin Liu is ringing alarm bells about the rise in the use of artificial general intelligence (AGI, a type of AI that can perform cognitive tasks as well as, or better than, humans), and a widening divide he sees between what it can do and people’s understanding of it.

“I can see the tremendous benefits that AGI can bring to society, but I also see the tremendous dangers and I just don’t think we are prepared for that.”

Benjamin Liu portrait
Dr Benjamin Liu says lawmakers will always be several steps behind the commercial world. Photo: Elise Manahan

A major concern is the increased power the technology has given to governments like those in Russia and China in terms of surveillance and its potential for political manipulation and misinformation.

“Governments can use it to enhance power and increase control over every aspect of society,” he says.

Including in war. In recent months, it has been claimed Israel used an AI-powered database known as Lavender to assist in the identification of human targets.

Benjamin believes there should be a proper legal and regulatory framework for the safe development of AI, but he says that lawmakers don’t understand the technology and will always be several steps behind the commercial world as it evolves.

“I don’t think anything we do here in New Zealand is going to have any substantial impact on the future of AI, and I don’t see how governments internationally can come together to do something really meaningful about it.”

But from a research perspective, he says that ChatGPT “is really the best tool that I have ever encountered”, primarily because of its ability to sift through sometimes voluminous texts and journals to quickly summarise a particular legal issue.

He’s also a huge believer in using AI to learn faster, and strongly encourages students to use OpenAI’s free version of ChatGPT, which he has also used to design his own financial law tutor.

“This kind of AI is going to revolutionise a lot of teaching – not just, of course, in law but also in accounting, maths and many other areas.”

Where he draws the line is in the use of tools like ChatGPT in examinations. “That’s where we test the students’ own ability to perform original thinking or reasoning to solve a problem.”

Data sovereignty is another key issue for academics, and Gehan Gunasekara, associate professor in commercial law at the University of Auckland Business School, says it’s incredibly complicated from a legal and technological standpoint.

“It’s an arms race, which I’m not confident that the lawmakers and policymakers will win, because technology is forging ahead so quickly, unless we come up with a radical new way of regulation itself,” says Gehan.

While encryption techniques could theoretically be used to give people control over the use of data, locally owned companies could still be subject to overseas data requests because of political considerations.

Gehan Gunasekara portrait
Associate Professor Gehan Gunasekara says ultimately he's not taking a doomsday view of the technology. Photo: Chris Loufte

It’s an arms race, which I’m not confident that the lawmakers and policymakers will win because technology is forging ahead so quickly.

Associate Professor Gehan Gunasekara, University of Auckland Business School

“Look what happened to Kim Dotcom. Data localisation, in my opinion, is not a solution, because even a local New Zealand cloud provider is ultimately subject to these overseas requests.”

One issue he believes needs more scrutiny is the government’s ‘cloud first’ policy, which enables foreign companies to provide cheap services but also allows them to aggregate and commercially exploit New Zealand data.

“We don’t know how they are mining it to train their algorithms, and if they are, as is likely, who’s going to get the value from that? Well, it’s not going to be the New Zealand government.”

As a former chair of the Privacy Foundation New Zealand, and now the convenor of its surveillance working group, Gehan says the Privacy Act applies if AI is fed with personal information. However, there’s less control over outputs – like job assessments – where he says the Privacy Act falls down.

He’s also sceptical about a recommendation from the Privacy Commissioner that businesses should conduct a ‘human review’ of AI use, especially when thousands of people might apply for one job.

“The nature of the digital age and technology is that human review is possible sometimes, but sometimes it’s not going to be possible or practical.”

Nevertheless, Gehan is confident that machines won’t be allowed to take over.
“I’m not taking a doomsday view,” he says. “Humans will come up with a new way to deal with these technologies as we’ve always done. So, I’m ultimately optimistic.”

Yun Sing Koh portrait
Professor Yun Sing Koh, programme director for the Master of AI. Photo: William Chea

BIG DEMAND FOR MASTER OF AI PROGRAMME

“It’s been surreal,” admits Professor Yun Sing Koh, when talking about demand for the University of Auckland’s new Master of AI programme.

Despite little promotion, the programme, which launched in March, was oversubscribed, with 33 students now enrolled.

AI is a hot topic, says Programme Director Yun Sing, and the masters has attracted a diverse group of students drawn from both the faculties of Engineering and Science (computer science), and from industry.

The masters is designed to meet the needs of both developers and researchers who want to improve the capability of AI systems.

The course covers the fundamentals of AI, how to build AI systems and manage AI projects, she explains, but importantly, students learn about the ethical, philosophical and societal implications of the technology.

“This is a transformative programme that allows us to train the next generation to better prepare ourselves in the face of AI being everywhere,” says Yun Sing.

“It’s about understanding what AI means to us, not on a superficial level, but on a deeper level; to understand how AI technology, for one, is a tool but also how we could use this tool in more facets and with an understanding of how we use it responsibly and ethically.”

This article first appeared in the Autumn 2024 edition of Ingenio.