AI clash raises important questions for New Zealand

The Pentagon’s fight with Anthropic has prompted questions for New Zealand, where dedicated AI legislation is lacking, write Alex Sims and Dulani Jayasuriya.


A dispute between the Pentagon and Anthropic, maker of the AI assistant Claude, is raising questions for countries like New Zealand, where AI is governed through a light-touch framework rather than a standalone AI act.

Anthropic recently rejected a Pentagon deal, saying it would not allow its large language model, Claude, to be used for domestic mass surveillance or autonomous weapons systems.

The US Department of Defence retaliated, formally declaring Anthropic a supply-chain risk and demanding that businesses cut ties with the AI firm.

In the midst of this, Anthropic released a revised version of its responsible scaling policy, which is the voluntary framework the company uses to mitigate “catastrophic risks” from AI systems.

The original policy had included a commitment not to train an AI system unless the company could guarantee in advance that all safety measures were adequate. The AI firm said it would pause development if internal safety thresholds were exceeded: a hard brake of sorts.

But the updated responsible scaling policy dropped the promise not to release AI models without guaranteeing proper risk mitigation in advance. Instead, it created a framework based on public commitments, regular risk reporting, and, in some cases, independent review.

Dr Dulani Jayasuriya

Anthropic was transparent about why it made this change, saying an “anti-regulatory political climate” made its earlier safety promises unworkable. If one company pauses while competitors continue building, it argued, the field ends up led by less safety-focused developers.

US Defence Secretary Pete Hegseth was central to his government’s stand-off with Anthropic over allowable use of its Claude AI system by the Pentagon.

Anthropic filed two federal lawsuits, arguing the Pentagon violated the company's First Amendment rights and exceeded the scope of supply-chain risk law, which was designed for foreign adversaries, not domestic companies that disagree with government contract terms.

In other words, Anthropic held the line on weapons and surveillance, but also adjusted how it describes and governs broader safety commitments, citing competition and political constraints.

Why this is our problem too

New Zealand has not passed dedicated AI legislation.

Our AI strategy prefers a light-touch, principles-based approach. It relies on existing laws covering privacy, consumer protection and human rights, rather than creating a single, comprehensive AI act.

On the one hand, this light-touch logic is reasonable. Some projections estimate large AI-supported productivity gains for New Zealand by 2038, reported in the tens of billions of dollars, though estimates vary widely.

On the other hand, the Anthropic saga shows the potential cost of this approach. When even the most safety-focused AI companies face commercial and geopolitical pressure to adjust their governance frameworks, smaller markets like New Zealand must ask whether relying on the voluntary policies of US companies is a sufficient long-term strategy.

Professor Alex Sims

If Anthropic's guardrails are just public goals, not hard rules, what are New Zealand firms signing up to when they use Claude in their operations?

This becomes especially practical in financial services, where AI already affects day-to-day decisions and compliance obligations. AI is now embedded in lending decisions, fraud detection, credit scoring, anti-money laundering compliance and customer services across New Zealand.

When a major AI provider's governance framework evolves, as with Anthropic, the downstream question for banks is whether their third-party risk processes are robust enough to keep pace.

The Financial Markets Authority's third-party risk guidance is designed to address this, but governance infrastructure also needs to keep pace with the technology.

Explainability is also a real problem. Regulators increasingly want to understand why an AI system made a decision about a mortgage or flagged a transaction. But if the AI's underlying governance is shifting under commercial and geopolitical pressure, that explainability becomes harder to guarantee, both technically and ethically.

The bigger picture

The Anthropic stand-off is a stress test. If powerful customers can pressure AI firms to revisit self-imposed limits, this raises big questions about the reliability of voluntary frameworks under commercial and geopolitical pressure, and about what backstops exist when incentives shift.

New Zealand's light-touch regime may be a sensible starting point. We are asking the right questions, but the Anthropic episode adds urgency.

We need to ensure our regulatory frameworks are sufficiently robust to stand on their own, regardless of what an AI company decides.

This article reflects the opinion of the authors and not necessarily the views of Waipapa Taumata Rau, University of Auckland.

It was first published by Stuff.

Media contact:

Sophie Boladeras, media adviser
M: 022 4600 388
E: sophie.boladeras@auckland.ac.nz