Brian Sims
Editor

About Face: Where is the Police Use of Surveillance Technology Going?

AS SUMMER’S lease comes to an end, Parliament returns with a new Prime Minister and duly begins to consider many pressing issues, one of which is the proposed legislation designed to reform public space surveillance by the police service. In this exclusive article for Security Matters, Professor Fraser Sampson (Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner) offers a personal perspective.

Among the less dramatic events ahead of the recess in July was the quiet arrival of the Data Protection and Digital Information Bill in the House of Commons. Proposing many changes to our data protection regime, the Data Protection and Digital Information Bill will abolish the office of the Surveillance Camera Commissioner.

As two Commissioners – one for surveillance cameras, the other for biometrics – I’m as much an exhibit for as I am a witness to the Government’s recognition of the growing overlap between these areas. That they are to be split again so soon is not easily explained, but some of the key issues ahead can be readily understood.

Lending perspective

How we understand anything depends heavily on perspective. There are three vantage points from which to view the proposals for reform in police surveillance: the technological (ie what can be done), the legal (ie what must/must not be done) and the societal (ie what people will support or even tolerate being done). Each perspective raises different questions and demands different answers in arriving at an understanding. All three converge most acutely at the point of facial recognition.

Let’s examine each in turn and then review the specific issues involved when it comes to facial recognition.

Technological

Biometric surveillance capability will revolutionise the investigation and prevention of crime and the prosecution of offenders, while the way in which that technology is used could jeopardise our very model of policing.

Public space surveillance is no longer about where you put a camera: it’s about what you do with the millions of images and other biometric information captured by everyone’s cameras. When analysis needed a human, there was simply too much surveillance material out there to use, but the technology means that actors are now able to tap into an aggregated surveillance capability that’s both vast and growing.

Of all the technological developments in terms of public space surveillance, that of facial recognition is by turns the most powerful and the most sensitive.

As I mentioned at the Ada Lovelace event to launch the Ryder Review this summer, I’m often a lone voice in saying there’s a case for facial recognition technology in policing. Yes, even live facial recognition in some extreme circumstances. The attack on the New York subway at 36th St Station, Sunset Park, Brooklyn on 12 April this year is, perhaps, an example and I’ve explained precisely why elsewhere.

The same technology can be used to frustrate policing, interfere with witness relocation or disrupt covert operations. We’re now experiencing what one lawyer called “omniveillance” a decade ago. Virtually everyone has access to biometric surveillance capabilities that were once the preserve of state intelligence agencies. What’s more, they’re using them.

Legal

The first thing to note about the legal framework covering biometric surveillance is that there are many different facets to it. Matthew Ryder QC’s report summarises them and they span data protection, human rights, the common law and the concept of implied consent.

One element, namely the Surveillance Camera Code of Practice, emphasises the importance of any public space surveillance by the police being ‘legitimate’ and carried out ‘in a way that the public rightly expects and to a standard that maintains their trust and confidence.’

How do we know what the public ‘rightly expects’? Have we asked the public? What are the standards that will maintain their trust and confidence in the surveillance technology? Where are they to be found and who sets them?

We know one thing that the public will rightly expect: that the police service is able to show how it has afforded due regard to the Code of Practice, because the Code of Practice expressly states that this is a ‘legitimate expectation’.

At the moment, the burden of proof lies very much with the police and this is why I’ve just written to all chief officers in England and Wales asking for evidence of legitimacy and compliance. The Code is published by the Home Secretary, was approved by Parliament in January to cover facial recognition and is currently the only legal instrument specifically written for the police use of public space surveillance.

Along with the rest of the legal landscape, the Code is less the product of some ‘eureka’ policy moment and more a feature on the battleground of litigation by citizens and regulators asking proper and pertinent questions and receiving either an unsatisfactory answer or no answer at all.

The aforementioned Ryder Review confirms that we are still dependent on litigation to set the boundaries and corroborates the view that we still don’t understand where those boundaries reside.

Societal

The societal perspective of police surveillance is changing because the technology is changing. So too is our awareness of – and attitude towards – its use. If you’re in the business of public space surveillance, it’s important that you understand what level of public support you enjoy. If you are going to rely on the citizen’s implied consent as a basis for your activity, it’s fundamental to gauge public attitudes in the first instance.

Technology is also changing the surveillance relationship between the citizen and the state. The first police communication following an incident is often an appeal for any images that individuals may have captured on their GoPros, dashcams, shedcams and ‘Ring’ doorbells, none of which is specifically regulated.

Increasingly, the police service depends not just on biometric information about the citizen, but also from the citizen – from their private devices as well as those belonging to their businesses and employers. This has profound implications for the ‘biometric relationship’ between the citizen and the state.

If one part of the surveillance system has been winding up the citizen by issuing automated penalty notices to the wrong vehicle owner or misusing their Automatic Number Plate Recognition data, the citizen may be less inclined to help when we need their privately captured and unregulated information.

We should look after this surveillance relationship very carefully indeed because we’re going to need each other.

Facial recognition

All three perspectives – technological, legal and societal – are brought into sharp relief in the areas of facial recognition and Artificial Intelligence. While the technology has raced ahead, early police experimentation with facial recognition generated some poor statistics and even poorer stories, in turn creating a somewhat negative image and eroding public trust.

Users of facial recognition are now faced with statistics about algorithmic bias and unreliability from four years ago which, in technological terms, is from the Pleistocene period. In terms of legality, there has been surprisingly little legislation or litigation specifically around facial recognition, thereby creating an atmosphere of uncertainty and diffidence.

At the same time, Artificial Intelligence has excited a mixture of fascination and fear. I’ve heard people say their Artificial Intelligence-based surveillance technology is simply “too complicated” to explain and that even its designers don’t really understand how it works.

Well, if you’re demonstrating that you’ve met your Public Sector Equality Duty, that you’ve avoided bias and are in no way perpetuating unlawful discrimination, that scenario simply will not do. If you’re relying on automated decision-making, that will not do either and if, like the police, you’re putting ethics at the heart of your every action, the exact same can be said.

Transparency and ‘explainability’ are touchstones of public trust and confidence. Whether it’s your technology or your company’s ethical trading history, if it’s too opaque to be understood by the citizen who’s funding it – and purported to be benefiting from it – then the problem isn’t the citizen.

Sheffield Hallam University has developed a practical accountability framework for the use of Artificial Intelligence in law enforcement. The methodology included a citizen survey across 30 different countries in which over 80% of respondents ranked the need for a universal Accountability Framework governing the police service’s use of Artificial Intelligence as being either important or extremely important.

Dependencies and vulnerabilities

Technological development in the biometrics realm has meant that our ability to prepare for, respond to and recover from critical incidents on a global level has increased beyond anything our forebears might have imagined. At the same time, though, it has created dependencies and vulnerabilities on a similar scale.

If society is to derive the most return from biometric surveillance technology, it will need a systemic approach: one focused on the integrity of both technology and practice, and on the standards of everything and everyone within it. In a systemic setting, if you infect one part, you infect all of it.

In his valedictory report as Her Majesty’s Chief Inspector of Constabulary and Fire and Rescue Services, Sir Tom Winsor states that policing needs “a material intensification of partnership with the private sector, soundly and enduringly based on trust and common interest.” That is certainly true of the police service’s use of biometric surveillance.

In a world where almost all of our police surveillance capability is in private ownership, we need to be very careful whose corporate company we keep. If our surveillance partnerships are not “soundly and enduringly based on trust and common interest” then we are in trouble, not just as a sector, but as a society.

Looking to the future, Parliament may decide to treat police surveillance as simply a data protection matter. Of course, biometric surveillance uses individuals’ personal data, but which public or private function doesn’t? That’s like saying it uses electricity. However, biometric surveillance is no more ‘just’ data protection than DNA profiling is ‘just’ chemistry or facial recognition is ‘just’ photography.

Legitimate role

The Data Protection and Digital Information Bill represents the Government’s response to the public consultation conducted last year and will bring an opportunity – perhaps a necessity – to address for the first time the many pressing questions around the legitimate role of newly intrusive technology (such as facial recognition systems) in policing.

We now have an opportunity to do something momentous. Will we lead by thoughtful and courageous planning or wait to be slowly sued into shape, either by the citizen or the regulators?

Policy is for others and legislation is for Parliament, but practically speaking I believe we need a set of clear and indefeasible principles by which the police service can be held to account for its use of surveillance technology, both transparently and auditably.

There are many different models by which to achieve this end goal, but the acid test for all of them will be whether they ensure that the technology (ie what is possible) is only being used for legitimate and authorised purposes (ie what is permissible) and also in a way that the citizen is prepared to support (ie what is acceptable).

Professor Fraser Sampson is Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner 

Professor Sampson has over 40 years’ experience of working in the criminal justice sector having served as a police officer for 19 years before becoming a solicitor specialising in policing law, conduct and governance. 

He’s an Honorary Professor and member of the Advisory Board at the Centre for Excellence in Terrorism, Resilience, Intelligence and Organised Crime research at Sheffield Hallam University where he gained a PhD in digital accountability in law enforcement. 

Professor Sampson has also worked as the national chair of the Association of Police and Crime Chief Executives and was appointed CEO and solicitor to the Police and Crime Commissioner for West Yorkshire in 2012, later being seconded as CEO to the Police, Fire and Crime Commissioner in North Yorkshire.
