Cybersecurity and privacy in the age of AI agents
5 May 2026
As artificial intelligence becomes embedded in everyday tools, a new kind of risk is emerging.
AI agents are systems that can act, make decisions and interact with other systems, and they are changing how digital services are built and used. They also introduce a critical challenge: how do we make sure these systems are secure, trustworthy and safe to use?
At Waipapa Taumata Rau, University of Auckland, this challenge is already being felt. David Nandigam, a training and awareness specialist in the University’s cybersecurity team, is working with staff and students who are building AI-enabled tools.
David says if security is treated as an afterthought, the consequences can be serious.
“We’re building powerful systems, but we also need to be aware that we don’t always know how they will behave in every scenario, which is why security has to be part of the design from the beginning.”
Across the University, more people are experimenting with AI in research, coursework, and early-stage start-up or innovation projects. But in some cases, solutions are being developed without fully thinking through data protection and security.
His team has seen a pattern emerge where projects gain momentum, only to be halted when they fail to meet basic security requirements. What could have been a strong idea becomes difficult to take forward, and fixing the issues adds time and effort.
This is not only a technical issue; it can also affect the University’s credibility. In a world where trust matters, insecure systems signal risk to partners, investors and users.
What makes AI agents different is that they go beyond processing information. They can connect to systems, trigger actions and influence outcomes. That means developers need to think more carefully about how these systems behave in real-world settings.
It starts with understanding the full picture:
- what data the agent can access
- which systems it connects to
- what actions it can take
- what happens as a result
Without that visibility, risks can multiply. This is already reflected in industry forecasts, with AI agents expected to play a role in a growing share of data breaches.
David says there are three common risks the team sees and advises on:
- Prompt injection: hidden instructions embedded in content that an agent reads can cause it to behave in unintended ways. Without safeguards, this can lead to data leaks or harmful actions.
- Too much access: giving an agent broad permissions increases the chance of mistakes. Even a simple request could trigger wide-reaching actions if boundaries are not clear.
- Unverified outputs: AI-generated outputs are often treated as safe by default, but they are not. If they are fed into other systems without checks, they can introduce new vulnerabilities.
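The first of these risks can be reduced by screening content before an agent ever sees it. Below is a minimal sketch of that idea in Python; the `SUSPICIOUS_PATTERNS` list and the `screen_untrusted_content` helper are illustrative assumptions, not a named tool, and a real deployment would use far more robust detection than simple pattern matching.

```python
import re

# Hypothetical patterns that often signal injected instructions in
# retrieved content; real detection would be much more sophisticated.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your (rules|instructions)",
    r"reveal (your )?system prompt",
]

def screen_untrusted_content(text: str) -> bool:
    """Return True if the content looks safe to pass to the agent."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A check like this is only one layer; it illustrates the principle that anything an agent reads should pass through a control point first, rather than flowing straight into the model.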
Secure AI systems are built on a few simple principles. Agents need to be given clear and narrow identities. Their access should be limited to what is necessary, and their actions should be traceable.
Just as importantly, everything an agent reads should be treated as untrusted. Inputs need to be checked, filtered and controlled. These practices are not about slowing down innovation. They are what makes innovation usable in the real world.
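The principles above can be sketched in a few lines of Python. The agent names, the `PERMISSIONS` registry and the `perform_action` helper are all hypothetical examples of the idea, not part of any real system: each agent gets a narrow identity, its access is limited to an explicit allowlist, and every attempted action is logged so it is traceable.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical per-agent permission registry: each agent identity is
# allowed only the narrow set of actions it actually needs.
PERMISSIONS = {
    "timetable-bot": {"read_timetable"},
    "library-bot": {"search_catalogue", "renew_loan"},
}

def perform_action(agent_id: str, action: str) -> bool:
    """Allow an action only if it is on the agent's allowlist, and log it."""
    allowed = action in PERMISSIONS.get(agent_id, set())
    log.info("agent=%s action=%s allowed=%s", agent_id, action, allowed)
    return allowed
```

The design choice here is deny-by-default: an unknown agent, or a known agent asking for an action outside its allowlist, is refused, and the audit log records both the request and the decision.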
AI agents can act independently, but they should not operate without oversight. Introducing human decision points, especially for higher-risk actions, helps maintain accountability. It also reflects a broader approach, where technology should support people, not replace human judgement.
Building secure, privacy-aware systems from the start protects the work that has gone into them. It also makes it far more likely that ideas can move beyond the University and into real-world use.
As AI becomes more embedded in how we learn, research, and build, the ability to create systems that are both useful and trustworthy will matter just as much as the ideas themselves.
“We’re operating in a very dynamic environment where many things are still unknown,” says David. “We don’t fully understand the lifecycle of an agent, who is responsible for monitoring it, or what happens over time once it’s deployed.”
Contact
Questions? Contact the Centre for Innovation and Entrepreneurship for more information.
E: cie@auckland.ac.nz