Now’s our chance to lead on AI regulation
5 May 2025
Opinion: The use of AI by governments demands transparency because it supercharges state power and automates decisions that can deeply affect our lives, says Benjamin Liu.

I’ve been teaching and researching artificial intelligence law for over a decade, so when ChatGPT came along in 2022, I was thrilled. Finally, an AI tool that could help me draft, explain, and brainstorm without making me want to throw my laptop out of the window. I use it daily in my academic work, but as useful as it has been for me, it has also brought new challenges, including a rise in exam cheating.
AI is already reshaping everyday life in New Zealand. Our farmers use AI to detect parasites in sheep. Airlines analyse thousands of in-flight meal photos to see what passengers actually eat. Supermarkets are turning to AI-powered cameras to catch shoplifters. It’s useful, it’s here, and it’s only growing. So should we regulate AI? The short answer: yes, but not in the way many may think.
Globally, governments are wrestling with this question. In the United States, the federal government has mostly kept a hands-off approach. A 2022 bill that aimed to make companies audit their algorithms died quietly. California tried to pass a law to test AI models for dangerous behaviours, such as autonomously conducting cyberattacks or enabling the creation of weapons of mass destruction. Governor Gavin Newsom vetoed it, worried it would scare off tech innovation.
Meanwhile, the European Union has gone the other way. The EU AI Act, which came into effect in 2024, classifies AI systems into risk categories and imposes strict rules on “high-risk” applications. But critics say it’s another General Data Protection Regulation-style overreach – with lots of paperwork, confusing definitions, and more red tape for innovators than actual protection for the public.
New Zealand has taken a more measured path. In 2024, the Government proposed a “light-touch, proportionate, and risk-based” approach. That means using existing laws, such as the Privacy Act 2020 and the Human Rights Act 1993, to deal with problems rather than rushing to create a whole new AI law. This is smart. Many AI risks are speculative. When real harms happen – for example, discrimination or privacy breaches – existing legal tools often work just fine.
However, although AI in the private sector can be largely managed through existing frameworks, the use of AI by governments is a different beast entirely. That’s because AI supercharges state power. It enables mass surveillance, predictive policing, and automated decisions that can deeply affect people’s lives. The public may not have fully realised that AI is becoming deeply embedded in public services. For example, Inland Revenue already uses more than 30 AI applications, from document scanning to fraud detection. Our public laws were written with human officials in mind. They weren’t designed for algorithms making calls about benefits, visas, or sentencing. If governments adopt AI without proper checks, we risk sleepwalking into a future where opaque software, not people, makes critical decisions about us.
This isn’t about banning government use of AI. On the contrary, AI can make the public sector more efficient and responsive, but that use must come with full transparency. Specifically, every government agency should publicly disclose which AI systems it uses, what they are used for, and the data (except for personal or sensitive data) and algorithms behind them.
If an agency relies on a private vendor, it should require full transparency as a condition of purchase. No black-box tools. No trade-secret excuses.
Need convincing? Let’s revisit two cautionary tales. First, consider the 2016 Wisconsin case in which a judge used a risk-assessment algorithm to help sentence a defendant. The algorithm was proprietary; the defendant couldn’t examine how it worked. The appeal was dismissed, sparking outrage. Sentencing should never be based on software no one can question.
Second, in New Zealand, ACC came under fire in 2017 for secretly using a predictive tool to triage claims. The goal was efficiency, but the secrecy bred distrust. If ACC had simply explained what the tool did and why, the public might have been on board.
Transparency builds trust. It gives people a chance to understand and debate how AI is used in their name. It makes sure algorithms aren’t being deployed in ways that are biased, unfair, or just plain wrong. So yes, regulate AI, but start where it matters most: when governments use it.
Let the private sector innovate (with a close eye on harmful applications), but for the public sector, transparency must be the default, not the exception. New Zealand has a chance to lead here. Other jurisdictions, including the UK and New York State, have taken tentative steps towards transparency in government AI use, but none have gone all in. We could be the first to require full, proactive disclosure across all agencies. Done right, such a regime would show the world that democratic oversight and technological innovation don’t have to be enemies. In fact, they can make each other stronger.
Dr Benjamin Liu is a senior lecturer at the University of Auckland Business School.
This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau University of Auckland.
This article was first published on Newsroom, Now’s our chance to lead on AI regulation, 5 May 2025.
Media contact
Margo White | Research communications editor
Mob 021 926 408
Email margo.white@auckland.ac.nz