How NZ can lead the way on human-AI ‘co-thinking’

Opinion: New Zealand’s new AI strategy urges organisations to adopt existing tools, but the real opportunity isn’t faster uptake; it is learning how to think with AI, say Guy Bate and Rod McNaughton.

New Zealand’s first AI strategy, launched in July, sounds confident and pragmatic: adopt proven tools, apply them to local problems, move quickly, and lift productivity. That is sensible, but adoption is not the same as assimilation. The real test is whether we can think with AI in ways that strengthen human judgment rather than hollow it out. Until we build that capability, uptake risks outrunning understanding.

Even perfect regulation cannot compensate for weak human-AI reasoning. The distinctive opportunity for a small, agile country such as New Zealand is not to chase chips or train frontier models, but to design the layer that sits between tools and decision-making: the routines, habits, and skills that make machine outputs useful for human purposes. As a nation, we must build the infrastructure to support our collective use of AI.

AI will not simply replace tasks; it will reshape how work gets done. The World Economic Forum’s Future of Jobs Report highlights rising demand for creative thinking, resilience, flexible problem framing, and lifelong learning.

These are not peripheral attributes. They are exactly what enable people to test claims, notice anomalies, and add the context that AI models miss. New Zealand’s strategy speaks to uptake, regulation, and trust, yet it is largely silent on how these capabilities, the very human skills needed to work with AI, will be cultivated at scale.

There are three modes of failure that show why this gap matters.

  1. Automation capture. Teams accept outputs they do not understand because the system is so fluent and fast.
  2. Deskilling by delegation. Repeated reliance on AI model suggestions erodes an organisation’s confidence in its own judgment, precisely the judgment needed in high-stakes situations.
  3. The theatre of compliance. Boxes get ticked, risk registers swell, yet there’s little or no improvement in the quality of decisions made. 

Each type of failure is a symptom of what we call ‘shallow adoption’. Our recommended remedy is deliberate co-thinking.

Co-thinking is more than clever prompting. It is a disciplined way for people and AI to interact: frame the problem, surface assumptions, use AI to generate alternatives, interrogate evidence, decide, and record why.

In classrooms, this can take the form of visible reasoning, where students show how dialogue with an AI model changed their view. In boardrooms, directors can adopt decision logs, capture dissent, and stress-test AI outputs before governance choices are locked in. On the frontline, it can mean pausing to weigh context, equity, and harm before AI-supported decisions are made.
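
By way of illustration only, a decision log can be as simple as a structured record of the co-thinking steps above. The sketch below, in Python, is hypothetical: the field names and the example entry are invented for this article and are not prescribed by the strategy or any existing system.

  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  # Hypothetical sketch of one entry in a co-thinking decision log.
  # Field names mirror the routine described above; none of this is
  # drawn from the NZ strategy or any existing standard.
  @dataclass
  class DecisionLogEntry:
      framing: str                 # how the problem was framed
      assumptions: list[str]       # assumptions surfaced up front
      ai_alternatives: list[str]   # options the AI model generated
      evidence_checks: list[str]   # how claims and sources were tested
      dissent: list[str]           # disagreement recorded, not smoothed over
      decision: str                # the choice actually made
      rationale: str               # why, noting where AI helped or hindered
      decided_at: datetime = field(
          default_factory=lambda: datetime.now(timezone.utc)
      )

  # An invented example of an auditable, AI-assisted choice.
  entry = DecisionLogEntry(
      framing="Should the service pilot extend to rural sites?",
      assumptions=["Connectivity is adequate", "Staff have training time"],
      ai_alternatives=["Extend now", "Stagger by region", "Delay a quarter"],
      evidence_checks=["Checked connectivity claims against ministry data"],
      dissent=["Clinical lead doubts the training-time assumption"],
      decision="Stagger by region",
      rationale="The model surfaced the staggered option; human review "
                "caught the training-time risk it missed.",
  )
  print(entry.decision, "|", entry.rationale)

The point is not the code but the discipline it encodes: every AI-assisted decision leaves a trail someone else can follow.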

Switzerland offers a credible start. Its national AI white paper proposes human-AI co-thinking as a form of literacy. Younger learners engage through role-play and storytelling; older learners practise reframing questions, checking sources, and documenting how their view shifted through the exchange with AI. The goal is not to replace judgment but to strengthen it. New Zealand can adapt this approach to our own context, including bicultural commitments and sector innovation needs.

If we are serious about thinking with AI, we should measure what improves reasoning, not just what increases use. One signal is the quality of decisions made: whether choices become more robust, options more diverse, and trade-offs clearer when AI is part of the process. Another is time to insight: how quickly teams move from raw information to sound explanations, rather than from information to fluent answers. A third is auditability: whether someone else can follow how a conclusion was reached and see where the model helped or hindered.

These metrics shift attention from ‘How many systems are in use?’ to ‘Are our decisions actually improving?’ The impact is concrete: can a loan decision be explained and justified? Is an AI-assisted medical diagnosis accurate and traceable to a responsible party?

New Zealand is unlikely to build new boundary-pushing AI models at scale. We should focus on our potential strength: how AI can best be integrated at a national level.

We can, for instance, create schools where teachers are trained to supervise AI-assisted inquiry, clinics where model suggestions are systematically compared with human judgment, and procurement systems that require a reasoning plan rather than generic assurance statements. The point is to create conditions in which AI adoption is not merely widespread but turned to national advantage.

To do this, co-thinking with AI should be treated as infrastructure and funded in the same spirit as digital connectivity. Standards can establish simple, sector-specific routines for human-AI decisions, for example, an agreed review protocol for employment screening, or a dissent log for high-impact service choices.

Pilot projects could bring together educators, public servants, and firms to trial these routines in real settings, with independent evaluation. Shared assets such as open libraries of prompts, decision logs, and teaching materials can help lessons spread quickly.

Education, business, and government would all have roles in developing this infrastructure. Universities and training providers could embed AI literacy across disciplines, teaching visible reasoning as a habit rather than a bolt-on module.

Firms could redesign workflows so AI augments scrutiny rather than evades it, with incentives that reward explanation and challenge. The government could fund applied research, micro-credentials, and procurement rules that require co-thinking practices wherever AI is used in public services.

The prevailing narrative is that we should adopt tools to lift productivity. The deeper insight is this: productivity gains depend on better collective reasoning. Tools accelerate, but judgment orients. Countries that master the orientation layer will make better decisions faster, avoid costly errors, and build public trust because their choices remain legible and contestable.

New Zealand can lead here. Not by being first to deploy every system, but by being first to show how a whole country learns to think with AI. That will make adoption safer, innovation more meaningful, and our advantages as a small, adaptive nation count where they matter most.

Guy Bate is a Professional Teaching Fellow in Management and International Business, Business School

Rod McNaughton is a Professor of Management and International Business, Business School

This article reflects the opinions of the authors and not necessarily the views of Waipapa Taumata Rau, University of Auckland.

This article was first published on Newsroom, How NZ can lead the way on human-AI ‘co-thinking’, 11 September 2025.
