The robots are coming, so let's preserve our people skills
12 February 2026
Commentary: People bring context, empathy, cultural understanding and ethics. As Shahper and Alex Richter point out, AI doesn’t.
If someone had told us 10 years ago that our most cherished family photos would eventually include details that never actually existed, we might have been sceptical. Yet almost every smartphone today reshapes our images, creating photos with skies that were never quite that blue, or faces subtly reconstructed from multiple shots.
We did not intentionally decide to let algorithms reinterpret our personal history. It happened incrementally as we traded the raw, grainy truth of a moment for the glossy perfection of a ‘good shot’.
Today, many of us don’t capture reality; we produce a polished, curated version of it.
A similar, more consequential shift is happening in how decisions are made. As artificial intelligence moves from a tool that helps us to a system that makes choices on our behalf, we face a creeping transfer of autonomy, a gradual erosion of human agency in areas far more critical than family photos.
Creeping autonomy
Small delegations of responsibility to AI systems can quickly turn into significant decision-making power. The tension lies between assistance and outsourcing. When an AI system suggests a course of action based on millions of data points, it becomes difficult for a human to challenge that advice, especially if we have lost the skills or confidence to make the call ourselves.
We risk an incremental loss of our ability to act independently. How long before we outsource our response to an emergency? In Aotearoa New Zealand, where flooding and natural disasters have devastating consequences for lives and infrastructure, the stakes of handing the steering wheel to an algorithm are incredibly high.
Why future thinking matters
To understand these risks, we must look beyond the present. In our research we use narratives of the future – stories about everyday life in a highly technologised world – to explore the pathways and risks we often miss when we focus only on today's models and datasets.
Instead of relying on graphs or data modelling, we describe the lived experience of a person in a tech-saturated future, which allows us to see what role technology may play and where the friction points lie. This narrative approach lets us explore ‘what if’ scenarios before we are forced to live through them. It is a practical tool for anticipating the human experience (the challenges, the dilemmas, the frustrations and dangers) at the centre of a tech-dominated future.
The conflict of values
A critical question we must ask is whose values are shaping AI decisions. AI systems inevitably reflect human values, but those values are rarely universal.
Many existing AI models are built on the dominant values of the United States, which often emphasise financial efficiency and individualism. These priorities may differ sharply from the values of Aotearoa New Zealand, where decisions are also informed by environmental stewardship, collectivist priorities, and te ao Māori perspectives.
Future narratives allow us to see these value conflicts early, and oblige us to ask what values we want to inform our future.
The human in the loop
In one of our future narratives, we speculated about a future in which an AI recommends an emergency response to a natural disaster that conflicts with human judgment.
In the case of this natural disaster, the AI might prioritise protecting the highest-value real estate to ensure economic recovery. The human manager, however, might prioritise a vulnerable neighbourhood that lacks the resources to rebuild and opt for a more expensive, community-focused response.
Both decisions are rational – but they reflect different values.
By imagining these clashes now, we learn to work through them before we face them.
Preserving the human voice
Preserving human oversight is about ensuring technology reflects and amplifies our values. Humans bring context, empathy, cultural understanding and ethics. AI doesn’t.
We still have the opportunity to shape how AI is embedded in our critical systems, but we must be deliberate about the values we encode into these tools today.
Future narratives provide a practical tool that, when grounded in clear value principles, can help protect human autonomy.
They allow us to decide what kind of society we want to be before the technology becomes so deeply embedded that we can no longer change course.
The future of AI is a human question. We should decide now what kind of humans we want to be in that future.
Shahper Richter is a senior lecturer in marketing at the University of Auckland Business School. Alexander Richter is Professor of Information Systems and Academic Programme Leader of the Executive MBA at the Wellington School of Business and Government, Victoria University of Wellington.
This article reflects the opinion of the authors and not necessarily the views of Waipapa Taumata Rau, University of Auckland.
This article was first published on Newsroom as ‘How to keep the humans in charge’, 12 February 2026.
Media contact
Margo White | Research communications editor
Mob 021 926 408
Email margo.white@auckland.ac.nz