Benjamin Liu

Senior Lecturer Benjamin Liu from the Department of Commercial Law investigates the benefits of, and issues with, AI.

The power and perils of AI

If it seems like artificial intelligence (AI) is everywhere, you are probably right. In just a few decades, AI has transformed the way we work, learn, and play.

AI describes technology that teaches machines to mimic cognitive functions we associate with human minds (such as learning and problem-solving) and to make decisions based on them. While this is undeniably useful, there are downsides.

I’m not talking about the existential threat it poses to the human species, which is a whole topic in itself. As a lawyer, I am concerned with the legal issues that arise when a company uses AI to make decisions that affect individuals like you and me.

Uber is a good example. Unlike traditional taxi fares, Uber fares are set by AI or, more accurately, by machine-learning algorithms. For each ride, the fare takes into account not only the travel time and distance but also the demand at the relevant time and place. If you’re travelling from a wealthy area, your fare is likely to be higher than that of someone travelling from a poorer part of the city, because the computer ‘knows’ you can afford it.
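To make this concrete, here is a minimal sketch in Python of how such a fare algorithm might combine those inputs. The function, weights and multipliers are entirely hypothetical (this is not Uber’s actual model); it simply shows a fare built from distance and time, then scaled by demand and by the pickup area.

```python
# Hypothetical illustration only; not Uber's actual pricing model.

def estimate_fare(distance_km, duration_min, demand_ratio, area_income_index):
    """Estimate a ride fare from trip details and contextual signals.

    demand_ratio: riders requesting / drivers available in the area right now
    area_income_index: 1.0 = city average; higher means a wealthier pickup area
    """
    base_fare = 2.50
    per_km = 1.20
    per_min = 0.35

    # Start with the traditional time-and-distance component.
    fare = base_fare + per_km * distance_km + per_min * duration_min

    # Surge pricing: scale the fare up when demand outstrips supply.
    fare *= max(1.0, demand_ratio)

    # A learned model could also nudge prices up in areas where riders
    # have historically accepted higher fares.
    fare *= 1.0 + 0.05 * (area_income_index - 1.0)

    return round(fare, 2)


# Same trip, same demand, different pickup areas: the wealthier area pays more.
print(estimate_fare(8.0, 20, demand_ratio=1.6, area_income_index=1.4))
print(estimate_fare(8.0, 20, demand_ratio=1.6, area_income_index=0.9))
```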

While paying a few extra dollars for a ride is one thing, what if a machine were making decisions in areas of your life with serious consequences, such as credit scoring, medical care, crime prevention or even criminal sentencing?

While the benefits of AI systems are undeniable – freeing us from boring tasks, reducing costs, and boosting efficiency – automated decision-making suffers from two serious problems.

The first problem is non-transparency. Just as Google will not tell you how it ranks search results, AI system designers do not disclose what input data the AI relies on or which learning algorithms it uses. The reason is simple – these are trade secrets, and companies do not want their competitors to know them.

In the United States, a 2016 study showed that ‘risk scores’ – scores produced by a computer programme to predict the likelihood of a defendant committing a future crime – are systematically biased against black people. Because the programme’s designers would not publicly disclose the calculations, which they said were proprietary, it was impossible for defendants or the public to challenge the risk scores.

The second problem with automated decision-making goes deeper into how AI works. Today, many advanced AI applications use ‘neural networks’, a type of machine-learning algorithm loosely modelled on the structure of the human brain.

While a neural network can produce accurate results, the way it reaches them is often impractical or impossible to explain in terms of human logic. This is commonly referred to as the ‘black box’ problem.
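To see why, consider the minimal sketch below, written in Python with NumPy on a made-up loan-approval task (the features, labels and network are all hypothetical). The tiny network learns to make largely correct approval decisions, yet everything it ‘knows’ is stored as matrices of numbers; even with full access to those weights, there is no concise, human-readable account of why a particular applicant was approved or declined.

```python
# Illustration of the 'black box' problem on a made-up loan-approval task.
# The features, labels and network here are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy applicants: two features (income, existing debt), both scaled to [0, 1].
X = rng.random((200, 2))
# Hidden "true" rule the network must rediscover: approve when income comfortably exceeds debt.
y = (X[:, 0] - X[:, 1] > 0.1).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A small network: 2 inputs -> 8 hidden units -> 1 output (approval probability).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 2.0
for _ in range(5000):                      # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)               # hidden-layer activations
    p = sigmoid(h @ W2 + b2)               # predicted approval probability
    g_out = (p - y) * p * (1 - p)          # gradient at the output layer
    g_hid = (g_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer
    W2 -= lr * h.T @ g_out / len(X); b2 -= lr * g_out.mean(axis=0)
    W1 -= lr * X.T @ g_hid / len(X); b1 -= lr * g_hid.mean(axis=0)

print("accuracy:", ((p > 0.5) == y).mean())  # the decisions are largely correct...

# ...but the 'reasons' behind any one decision are just these numbers:
print(W1.round(2))
print(W2.round(2))
```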

Overseas regulators have started to regulate automated decision-making. On May 25th 2018, an EU regulation, the General Data Protection Regulation (GDPR), came into effect. One of its key features is the right to explanation. In short, if a person is subjected to automated decision-making, that person has a right to request ‘meaningful information about the logic involved.’ And individuals have the right to opt out of automated decision-making in a wide range of situations.

The GDPR will have important implications globally. To the extent that a New Zealand company controls or processes the personal information of EU residents, it will need to comply with the GDPR, even if it does not have any physical presence in the EU.

Back home, the use of AI for decision-making is still rare but expanding. The potential legal issues cannot be overlooked. Policy makers, lawyers and market participants need to start thinking about a regulatory framework for AI. For instance, should we set up an AI watchdog to ensure that AI applications are being used in a fair way? Should every person have a right to an explanation? The answer to this seems, to me at least, pretty clear. After all, if our rights are affected by AI, we should know why.