How our lawmakers should respond to AI-related copyright issues

Opinion: If the Government isn’t proactive in implementing its AI policy it will be vulnerable to being built on corporate interests and unnecessary deregulation, says Joshua Yuvaraj.


In September the technology company Anthropic agreed to pay US$1.5 billion to a group of authors to settle a lawsuit about its use of their books to train its AI model without consent or compensation.

Although that settlement is now up in the air – US District Court Judge William Alsup wanted more information about how it would work, and how authors wouldn’t be pressured into accepting the terms – it’s a vivid illustration of the difficulties AI technologies are causing lawmakers around the world.

Judges are being forced to apply copyright laws to technologies that weren’t around when they were made: is training an AI using copyright-protected works permitted? Should AI-generated songs, books, and videos be protected under copyright law? And who is liable when those AI-generated works copy pre-existing styles without attribution or consent, as OpenAI did when it enabled users to generate images in the style of the world-renowned animation powerhouse Studio Ghibli?

Copyright law is just one example. AI issues are affecting all areas of law, in all countries including Aotearoa New Zealand. For example, who is liable for deepfake AI pornography under New Zealand law when the technology companies involved often operate as borderless services?

How can privacy law protect the personal information of New Zealanders entered into ChatGPT, Claude and other chatbot services? How can children be protected when chatbots are permitted to engage in conversations with them that involve inappropriate, and potentially life-threatening, behaviour?

New Zealand lawmakers have three options in response to these AI-related issues: AI-positivity, AI-negativity, and AI-neutrality.

AI-positivity extols the virtues of AI for New Zealand’s population and economy. It aims to facilitate access to New Zealand data to train AI models, and encourages AI companies to bring operations here with a light-touch regulatory approach – in a similar way to the Republic of Ireland’s reinvention as a low-tax jurisdiction for technology companies such as Apple.

AI-negativity views AI with caution. It emphasises the dangers of AI services to New Zealanders, from the misuse of personal information and biases in algorithmic decision-making applied by government agencies and corporations, to disrespect for te ao Māori, tikanga Māori and mātauranga Māori (for example, by indiscriminately training AI models on te reo Māori literature without acknowledging it is a taonga). AI-negativity emphasises regulation, and the need to control AI even at the expense of productivity and efficiency.

AI-neutrality, meanwhile, takes a wait-and-see approach. It considers our existing laws to be sufficient to protect New Zealanders. It neither validates AI nor over-emphasises its risks. AI is another technological innovation with benefits and risks, which must be used within our regulatory framework. Should changes be necessary, they will be made. However, AI-neutrality is about minimising changes to avoid waves of law reform in different directions that can create uncertainty in the population.

None of these approaches is necessarily the ‘right way’ to regulate AI. But the Government must be proactive about the approach it wants to take, either across the board or in relation to different areas of the economy. The Government’s controversial AI policy is a start, but much more is needed.

The Government must seek out expertise in New Zealand in computer science, economics, psychology, law, the humanities, engineering and other disciplines. Doing so gives the Government the best chance of developing AI policy that is balanced, engages its potential and risks, and protects the rights of all New Zealanders, especially the most vulnerable. Failing to do so leaves the Government vulnerable to AI policy built on corporate interests, unnecessary deregulation and a population at risk of harm in service of overseas technology companies.

Flashy billion-dollar settlements seem alluring. But even more alluring is balanced, effective, culturally sensitive AI regulation that maximises what AI can bring while protecting New Zealanders against its considerable risks.

Dr Joshua Yuvaraj is a Senior Lecturer and Co-Director of the New Zealand Centre for Intellectual Property in the Faculty of Law.

This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau University of Auckland.

This article was first published on Newsroom, Government’s AI strategy only the start, 17 September, 2025. 

Media contact

Margo White | Research communications editor
Mob: 021 926 408
Email: margo.white@auckland.ac.nz