Dame Cressida Dick, the Commissioner of the Metropolitan Police Service (MPS), delivered the first ever RUSI Annual Security Lecture this week, and she was keen for her audience to take away the following message:
Policing will remain an essentially human service, supported by better information and tools.
She made a clear effort to clarify terms - what do we really mean when we say ‘artificial intelligence’? - and stated that she intends the MPS’ model to be one of augmented intelligence:
a human-centred partnership model of humans and machines… as technology will never be able to replace the human police person’s empathy.
The Lecture saw the launch of the report we commissioned RUSI to carry out on the use of data analytics in policing in England and Wales. This forms part of our wider Review on the use of algorithms in human decision-making, which has a particular focus on bias. Policing is one of four sectors we have looked at in this Review, and RUSI’s recommendations will help inform the final, cross-sector recommendations we make to the Government in April.
Bias in decision-making is not new; we see bias in our everyday lives and it is reflected in data. But we must be careful not to encode these biases in the algorithms or machines we are building, and we must hold algorithmic decision-making to standards of transparency and accountability at least as high as those we apply to humans. One way to address this is to ensure AI and humans work in partnership, rather than reducing human responsibility. This can help minimise the potential flaws, while maximising the benefits, of both human and algorithmic decision-making processes.
What does the RUSI report say?
We commissioned RUSI to carry out this independent research nine months ago and it is the product of interviews and discussions with over 60 people who all have a stake in how police develop and use technology, including senior police officers, technologists, academics, legal experts, regulatory and oversight bodies and civil society organisations. It is the most comprehensive study to date on the subject and makes recommendations to the Government, regulators, police forces and software developers.
We welcome RUSI’s call for clearer national guidelines on police use of data analytics as a vital step towards giving the police the confidence to innovate legally and ethically. RUSI highlight that future guidelines should:

- Complement existing police guidance
- Establish standard processes, rather than prescriptive rules
The report insists that diverse input - from specialist lawyers, data scientists, ethicists and police officers - is needed from the outset of a technology project. Moreover, the police participants interviewed recognised that “transparency is the most important thing” in the use of analytical tools, to ensure public trust is not lost.
The report describes how police are using data analytics to manage an increased volume and complexity of digital data with significantly reduced resources, while being required to focus on the preventative aspects of their role in society. It also identifies the following as major barriers to successful implementation:
- The lack of a robust evidence base
- Poor data quality
- Insufficient skills and expertise
It concludes that the use of algorithms in policing brings significant potential opportunities, but also raises legal, ethical and human rights concerns, and that stronger safeguards are needed to ensure these issues are considered at the outset of new technology projects.
What have the CDEI been doing on policing?
Alongside RUSI’s research, we have been speaking to police officers, national policing bodies, academics, policymakers, and regulators to develop an Ethics Framework which could feed into these national guidelines. Our Framework is intended to support police project teams to think through the hard ethical questions, for example:
- Is the problem you are trying to solve best suited to a technical tool?
- Do you hold good quality data on the problem?
- How will you review the impact of the tool and manage any unintended effects?
We have been struck by the strong interest in, and momentum on, data ethics in the policing community: from the drive shown by the National Police Chiefs’ Council, to the transparent way the West Midlands Police and Crime Commissioner’s Ethics Committee operates (publishing a record of its discussions), to the exciting pockets of innovation happening in individual forces.
But interest and excitement are tempered by real concern.
We have spoken to police forces who are worried about adopting these tools because they are unsure about the legal implications and worry what their communities might think. This is concerning given the police have a responsibility to protect citizens and should seize opportunities to better use data to achieve this goal.
Bringing the public into the conversation
As Dame Cressida Dick said at the launch of the RUSI report,
It is not for the police to decide where the boundary lies between security and privacy.
This is crucial. We need to find a way to explain clearly how police are using technology now, and how they intend to use it in the future, so that we can gain a genuine understanding of what the public think, acknowledging that there will not be one commonly held view.
Positive steps have been taken to engage the public in debates about police technology. For example, the Police Foundation asked members of the public, through deliberative discussions, about their perceptions of, and expectations from, today’s police service. They found that respondents ranked the importance of ‘improving efficiency through technology and collaboration’ more highly following discussions highlighting the complexity and ‘multi-agency’ nature of many of policing’s current challenges.
In a different take on public engagement, MIT Technology Review developed an interactive tool, using the contentious COMPAS algorithm developed in the United States. The tool asks you to set the “high risk” threshold which, in turn, will help determine whether a defendant should be kept in jail or be allowed out while awaiting trial. It illustrates the point that there are multiple definitions of fairness which are defensible but cannot simultaneously be satisfied, much like the trade-offs we have been making in human decision-making throughout history.
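The impossibility the MIT tool illustrates can be shown in a few lines of code. The sketch below uses small, entirely synthetic numbers (not the real COMPAS data): risk scores are equally well calibrated for two hypothetical groups, yet because the groups have different underlying reoffending rates, applying one “high risk” threshold produces different false positive rates for each group. Equal calibration and equal error rates are both defensible definitions of fairness, but here they cannot both hold.

```python
# Synthetic illustration: calibrated risk scores + one shared threshold
# can still yield unequal error rates when base rates differ by group.

def error_rates(records, threshold):
    """Given (score, reoffended) pairs, return (false positive rate,
    false negative rate) for a 'high risk' cut-off at `threshold`."""
    fp = fn = negatives = positives = 0
    for score, reoffended in records:
        predicted_high_risk = score >= threshold
        if reoffended:
            positives += 1
            if not predicted_high_risk:
                fn += 1  # reoffender judged low risk
        else:
            negatives += 1
            if predicted_high_risk:
                fp += 1  # non-reoffender judged high risk
    return fp / negatives, fn / positives

# Two hypothetical groups. Scores are calibrated identically: in both
# groups, 80% of people scored 0.8 reoffend, and 20% of those scored 0.2.
# But group A's overall base rate is 0.5, group B's is about 0.33.
group_a = [(0.8, True)] * 40 + [(0.8, False)] * 10 \
        + [(0.2, True)] * 10 + [(0.2, False)] * 40
group_b = [(0.8, True)] * 16 + [(0.8, False)] * 4 \
        + [(0.2, True)] * 14 + [(0.2, False)] * 56

fpr_a, fnr_a = error_rates(group_a, threshold=0.5)
fpr_b, fnr_b = error_rates(group_b, threshold=0.5)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")
```

With these numbers, group A’s false positive rate is three times group B’s, even though the scores mean exactly the same thing in both groups. Moving the threshold shifts the trade-off but cannot eliminate it, which is precisely the judgement the MIT tool asks the public to make.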
But much more needs to be done. Going forward we want to work with the police and Government, building on existing momentum, to open these ethical questions up to a transparent debate with the public. Crucially, this will ensure future guidelines and national leadership are informed by a balanced, diverse public view.
We have defined data analytics as advanced algorithms used by the police to derive insights, inform operational decision-making or make predictions.
The CDEI will be publishing a report on Bias in Algorithmic Decision-Making in April which will include recommendations to Government on how we can ensure algorithms are incorporated in decision-making in a way that does not discriminate and ensures fair outcomes for all in society. The report looks at four sectors: policing, recruitment, financial services and local government.