Student guidelines for using Artificial Intelligence (AI) in taught courses
Note: These Guidelines complement the University of Auckland's binding policies and are designed to support students at the Faculty of Law in using AI ethically and compliantly. Where these Guidelines and University Policies differ, the Policies take precedence. Familiarise yourself with the University Policies.
The key principle
The use of AI to generate, draft, or assist in creating content for any graded assignment is prohibited, unless your instructor explicitly permits otherwise in writing (in your course outline or on Canvas).
This prohibition applies to all generative AI systems, including but not limited to ChatGPT, Claude, Gemini, DeepSeek, Microsoft Copilot, and xAI's Grok.
Key University resources:
- Advice for students on using Generative AI in coursework
- Guidelines on permitted use of software in assessment activities
- Student Academic Conduct Statute
Why "Treat AI Like a Person"?
The Principle
The use of artificial intelligence tools is analogous to seeking assistance from another person. Just as the appropriateness of consulting another person depends on the context and assignment requirements, so does the appropriateness of using AI.
While AI is technically a "tool," it's not like a calculator or spell-checker. Here is why we ask you to think of it more like getting help from another person:
- AI performs intellectual tasks that you are being assessed on;
- The learning process requires active engagement and independent thinking;
- Professional competence requires genuine capability;
- The ethical issues parallel those you will face in professional practice.
Before using AI for any academic task, ask yourself:
"Would it be appropriate to ask another person to do this for me?"
If the answer is no, then using AI for that purpose is also inappropriate.
Use of AI and its risks
AI can be a learning tool when used appropriately outside of graded assessments.
There are risks to using Gen-AI to support your studies. It is important to:
- Think about and understand the risks of using Gen-AI;
- Critically consider and evaluate the material Gen-AI produces;
- Remember that there are differences between different Gen-AI technologies.
Accuracy
Even though it may sound convincing, the content generated by Gen-AI technologies can be:
- Non-factual
- Inaccurate
- Out-of-date
You should not use information generated by Gen-AI as your primary and/or only source. You must fact-check all Gen-AI outputs using other reliable sources.
Bias
Content produced by Gen-AI can reflect biases found within the data sources that the Gen-AI uses, including, but not limited to:
- Discrimination against marginalised groups
- Under-representation of marginalised groups
You should critically review any output generated by Gen-AI with the potential of biases in mind.
Quality
When you are expressly permitted to use Gen-AI for an assessment, remember that its output may lack originality and may not use the tone, language, or style appropriate for your assessment.
Content generated by Gen-AI may not be of a high standard, and you are ultimately responsible for the tone, language, originality, and quality of all work in your assessments.
Why these prohibitions exist: Your development as a lawyer
These restrictions are not arbitrary rules - they protect your professional development.
Consider what happens if you use AI inappropriately:
What you miss
Legal knowledge: When AI is used to save time reading, analysing, and interpreting legal materials, you do not build the knowledge on which analytical capacity and critical thinking depend. Do not finish law school without learning the law and how to think like a lawyer.
Analytical skills: When AI identifies relevant legal issues, you do not develop the ability to spot issues yourself - a skill essential for practice.
Research competence: When AI provides case summaries, you do not learn to read cases critically, evaluate precedents, or understand judicial reasoning - skills you will need throughout your practice.
Legal writing: When AI drafts your arguments, you do not develop the ability to construct clear, persuasive legal writing - the primary way lawyers communicate.
Professional judgment: When AI makes decisions for you, you do not develop the judgment needed to advise clients, assess risks, or make ethical choices.
Real consequences
Certificate of character: All confirmed breaches of academic integrity are recorded on the University's Register of Academic Misconduct. Records on the Register are declared to the Law Society in respect of any application for admission as a barrister and solicitor.
Career development: Partners and supervisors can tell when junior lawyers lack foundational skills. Over-reliance on AI now will limit your career progression.
Disclosure requirements when AI use is permitted
If your instructor permits AI use for a specific assignment, you must disclose all AI use fully and accurately. Failure to disclose permitted AI use may constitute academic misconduct under the Student Academic Conduct Statute.
What to disclose
Your disclosure should include:
1. What tool(s) you used
- Be specific: "ChatGPT-4" not just "AI"
- Include version numbers if known
2. The purpose of AI use
- What tasks did you use AI for?
- Examples: research, brainstorming, editing, citation checking
3. What prompts you provided
- Either include verbatim prompts or detailed summaries
- Show what information you gave the AI
4. How you evaluated AI outputs
- What verification steps did you take?
- What sources did you consult to check the AI's accuracy?
5. How you integrated AI into your work
- Did you use AI output directly, or as a starting point?
- What portions of your final work came from AI vs. your own analysis?
Example disclosure
Good disclosure:
"I used ChatGPT-4 to generate an initial list of potential cases related to negligence in medical malpractice. I provided the AI with the assignment question and asked it to identify relevant cases. I independently verified all case citations through LexisNexis, read each case in full from the primary source, and conducted my own analysis of their relevance and application to the assignment question. I also conducted independent research using Westlaw and identified three additional cases not mentioned by the AI. The final legal analysis, argument structure, and all writing are entirely my own work based on my reading of the primary sources."
Insufficient disclosure: "I used AI to help with research."
This is too vague and does not demonstrate your independent work or verification process.
Why disclosure matters
Academic integrity: Transparency about your work process is fundamental to honest scholarship and to your future career as a lawyer. Hiding AI use is dishonest, even when the AI use itself was permitted.
Professional development: Documenting your AI use helps you reflect critically on when AI adds value vs. when it hinders your learning.
Accountability: You remain responsible for all work you submit, regardless of AI assistance. Disclosure does not transfer responsibility - it demonstrates that you understand your obligations.
Ethical modelling: Disclosure practices in law school prepare you for professional attribution requirements in practice.
Note: Ask your instructor whether the disclosure counts towards your word limit.
Privacy and confidentiality when using AI
University-provided secure AI tools
The University provides access to generative AI tools with enhanced privacy protection when you sign in with your University account:
- Microsoft Copilot - Sign in with University credentials to see the "Protected" shield icon
- Google Gemini - Sign in with your @aucklanduni.ac.nz account
- NotebookLM - Sign in with your University Google account
Access information: Generative AI tools for students
Important: Even with University-provided tools, always check with your instructor before uploading course materials or assignment-related content.
What never to enter into AI tools
Do not enter:
- Confidential information from internships, clinical work, or employment
- Client information of any kind, including hypothetical scenarios based on real cases
- Personal information about yourself, classmates, clients, or others
- Proprietary information or trade secrets
- Copyrighted course materials without authorisation (lecture slides, case PDFs, assigned readings)
- Information protected by privacy laws or professional duties
Why this matters
Data retention: Information you enter into AI tools may be retained and used for further training or for other purposes beyond your control.
Professional consequences: Breaching confidentiality - even during law school - can:
- Violate professional ethics rules;
- Harm clients or individuals;
- Result in academic misconduct findings;
- Affect your character and fitness evaluation for bar admission.
Legal obligations: Some information is protected by law (privacy statutes, professional privilege). Sharing it with AI may violate legal obligations.
For use of other software, see: Guidelines on permitted use of software in assessment activities
Questions and support
If you are unsure, ask your instructor before using AI. It is always better to seek clarification than to assume AI use is permitted and face academic misconduct consequences.
Academic integrity support
- Complete the required Academic Integrity Course (ACADINT A01)
- Review About Academic Integrity
- Check Academic Integrity FAQ
- Contact the Academic Quality Office: academicqualityintegrity@auckland.ac.nz
Academic skills development
For support in developing legal research, writing, and analytical skills:
- Contact Learning Advisers at Te Tumu Herenga Libraries and Learning Services
- Access QuickCite for referencing guidance, including how to cite AI-generated content
Student support
If you're struggling with coursework or feeling pressure to use AI inappropriately:
- Speak with your course instructor about support options
- Contact Student Learning Services
- Access Student Health and Counselling if stress is affecting your wellbeing.
Key principles to remember
- Default to prohibited: Unless explicitly permitted in writing, AI use for graded work is not allowed
- Treat AI like a person: If you couldn't get that help from another person for an assessment, you can't get it from AI
- Your professional development comes first: Shortcuts now create gaps in your competence later
- Ethical obligations begin now: The habits you form as a law student shape your professional character
- Always disclose: When AI use is permitted, full transparency is required
- Verify everything: AI is frequently wrong about legal matters - never trust it blindly
- Protect confidentiality: Never enter sensitive information into AI tools
- Think long-term: Your legal career depends on genuine competence, not AI-assisted work
- Ask when in doubt: Clarify expectations with your instructor before using AI
- Take responsibility: You are accountable for all work you submit, regardless of what tools you used
Your education, your career, your responsibility
Your legal education is preparing you for professional responsibility. The skills you develop now - legal analysis, research, writing, and ethical judgment - form the foundation of your entire career.
AI cannot develop these skills for you. It cannot sit for the bar exam on your behalf. It cannot appear in court for you or advise your clients with the judgment and ethical responsibility that only comes from human expertise.
Use AI wisely, ethically, and honestly - or not at all.
The choice you make now about how to engage with your legal education will shape the lawyer you become.
Document Version: 1.2 | Last Updated: November 2025 | Next Review: April 2026
Disclosure: This is a working document that will be reviewed regularly to ensure our guidelines remain responsive to rapid developments in AI technology and emerging pedagogical challenges.