Generative AI tools
University-approved generative AI tools, applications, services and systems for research, along with the policies and approval process for using generative AI tools.
University-approved generative AI tools
Generative AI tools are increasingly used by researchers at various stages of the research process. Researchers need to review the University's generative AI usage standard, particularly regarding data classification, before selecting a tool and applying it to their research.
For an up-to-date list of approved AI tools, see Getting started with AI.
Policies
The University has clear expectations for the use of artificial intelligence (AI) in research, as do research funders, data providers, and publishers. Rather than creating standalone policies, the University is integrating AI requirements into existing frameworks. This ensures AI use is reflected in our standard considerations of privacy, ethics, security and integrity.
University policies and guidelines
Privacy
- Privacy policy
- Privacy Impact Assessment (PIA)
Complete a PIA before collecting or using personal information with generative AI tools.
Ethics
- Ethical Guidelines
These will take effect in April 2026.
Security
Integrity
- Research Integrity Policy
- Authorship and Publishing Guidelines
- Doctoral policies and guidelines
Includes the editing and proofreading of theses. Proposed guidance for the use of generative AI in doctoral research is currently pending University endorsement (March 2026).
Data management
- Research Data Management Policy
- Data classification standard
- Generative Artificial Intelligence Usage Standard
IP
Other policies and guidelines
- Royal Society Te Apārangi
- Health New Zealand
University approval of applications, services and systems (tools)
Why is this required?
The University is strengthening its data protection processes and needs to manage research software sustainably. To protect staff, students, research participants, and the reputation of the University, all digital applications, services, and systems will undergo a security assessment by Digital Services to ensure compliance with these policies:
- IT Security Policy
- Privacy Policy
- Research Data Management Policy including the Data classification standard
Generative AI tools are also subject to the Generative Artificial Intelligence Usage Standard.
Completion of ethics applications and IT procurement processes relies on tools being confirmed as University-approved.
Identifying tools you intend to use
- Consider research software and tool requirements for your project and document this in your Data Management Plan.
- Check whether the tool is University-approved by searching the ResearchHub or contacting your Digital Services Business Relationship Manager or the Centre for eResearch. Previous or current use of a tool does not mean it is currently University-approved.
Seeking University approval
- Allow 3–5 weeks for the University approval process.
- Start the process by contacting your Digital Services Business Relationship Manager or the Centre for eResearch. Be prepared to state which tools are needed, their intended purpose (including the data classification involved), and why currently approved tools do not meet your needs.
- Work with Digital Services staff to compile information including a completed Privacy Impact Assessment (PIA) and security information. This will be assessed by the Digital Services Security team.
- Digital Services' approval will be granted for a fixed period, aligned with the University's data classification, and may include additional conditions of use (e.g. limited to a specific research project or prescribed settings).
Contact
ASK IT
Digital Services - Connect
Email: askit@auckland.ac.nz
Research Data Support Services
Email: researchdata@auckland.ac.nz
Digital Services
Business relationship managers