Why police surveillance can’t be carte blanche
12 March 2021
Opinion: When it comes to surveillance, police say they're damned if they do and damned if they don't. Gehan Gunasekara writes how police can build the public’s trust.
In an article in the NZ Herald on Thursday, Police Commissioner Andrew Coster argues that police are in an invidious position: they are criticised on the one hand for failing to mount surveillance on, for example, white supremacists, while on the other hand being criticised when technologies such as facial recognition software are used in their everyday policing activities. There is an element of being damned if they do and damned if they don't, he argues.
I have some sympathy for his predicament. However, surveillance ought not to be carte blanche. Nor should it be left to the police to decide the extent and types of surveillance. That would be a slippery slope. Two factors must be kept in mind. First, surveillance nowadays is entirely different in nature from surveillance, say, 50 years ago. We live in the age of data, which allows unimaginable quantities of personal information to be collected and linked in ways that make it virtually impossible to track and delete. Secondly, it is important to differentiate between types of surveillance technology.
Consider, for instance, facial recognition: several states and municipalities in the US have banned its use altogether, while others have introduced stringent requirements whenever it is deployed by law enforcement. These include measures such as a separate authorisation procedure akin to applying for a warrant. This is because facial recognition is qualitatively different from a constable noting down an observation in a notebook. Recognition software allows a biometric image to be instantly matched against existing databases, or perhaps added to them. Automated processing may then take place, with decisions potentially being made about whether someone is an offender who could be arrested.
Although such technologies may appear efficient, they contain hidden dangers. Mobile apps employing facial recognition software may be a Trojan horse, copying and directing images to the company that owns the software for its own purposes, including resale. Existing privacy rights, such as the right of individuals to access their information and check its accuracy, may be difficult to enforce. There is also evidence that the accuracy of facial recognition varies across ethnicities.
Recent reports of police routinely pressuring youngsters to allow their images to be captured through smartphone technology have not fostered a high-trust environment. Although the individuals concerned have rights under the Privacy Act to seek access to the images and any information compiled from them, it is less clear whether they can have them deleted, or how long they might be retained.
Andrew Coster says, essentially, "trust us, we know what's best". He is correct that public trust is crucial to successful policing. Here's how it might be better established. First, all high-risk surveillance technologies, such as facial recognition, ought to undergo robust Privacy Impact Assessments (PIAs). Part of such an assessment is consultation with experts, the community, and civil society groups. Although the Office of the Privacy Commissioner is one such body, a PIA must do more than obtain its approval. PIAs must also be made public, with sensitive information redacted where necessary.
Secondly, privacy by design must be incorporated into all such applications. This must consider, for instance, whether the information concerns young people or other vulnerable communities, such as the LGBTQI community. It must factor in retention periods, with automatic deletion when the information is no longer relevant. Finally, regular independent audits must examine how such technologies were used, assessing both their effectiveness and whether safeguards proved adequate. These, too, must be made public (again with redactions safeguarding privacy as well as police operational details).
Andrew Coster claims police surveillance does not deliberately target or profile specific ethnic groups. Be that as it may, the onus is on the police to demonstrate it. The steps I have outlined above may go some way towards allowing police to build and maintain the public's trust.
Gehan Gunasekara is an associate professor in commercial law at the University of Auckland Business School and chair of Privacy Foundation New Zealand.
This article reflects the opinion of the author and not necessarily the views of the University of Auckland.
Used with permission from Newsroom, "Why police surveillance can't be carte blanche", 12 March 2021.