Sustainable innovation in data-driven technology depends on building and maintaining public trust. There is clearly some way to go to build public trust in algorithms, and the obvious starting point is to ensure that algorithms are themselves trustworthy; the CDEI’s review into bias in algorithmic decision-making examined a key part of this challenge.
In the review, we considered the role that transparency can play in building a trustworthy environment and ensuring fairness. Transparency matters in all sectors, but the public sector has a particular responsibility to set an example of what good transparency in the use of algorithms looks like. The public sector makes many of the highest impact decisions affecting individuals, and we expect it to be able to justify and evidence those decisions. While an individual can opt out of a commercial service whose approach to data they do not agree with, they have no such option with essential services provided by the state.
The use of algorithms in policing highlights some of these issues. The UK’s long-standing philosophy of policing by consent emphasises the importance of public trust and legitimacy. However, we know that without sufficient care, the use of algorithms in policing can lead to outcomes that are biased against particular groups, or systematically unfair. Even where care has been taken to avoid bias, there is an important balance to be struck between algorithmic decision-making and the application of professional judgement and discretion, itself a highly valued feature of UK policing. Given these sensitivities and the context in which the police operate, it is critical that they are transparent about where and how such algorithmic tools are being used in order to maintain public trust and confidence.
The views of the public
The most prominent recent debate on algorithmic decision-making has been around the approaches taken by the UK’s four exam regulators to award exam results this summer. The issues involved were complex, but the controversy highlighted a clear and understandable nervousness about the use and consequences of algorithms. Transparency about how and why algorithms are being used, and about the checks and balances in place, is the best way for organisations to address this.
This brings its own challenges: concerns around media scrutiny and unpredictable public reaction can lead either to a reluctance to innovate or, worse, a reluctance to be fully transparent. This was highlighted in research we published earlier this year on how to drive trustworthy data sharing in the public sector. A lack of transparency can then itself lead to the public assuming the worst, and to potentially inaccurate media coverage of the organisation’s activity, and so the cycle continues.
To understand this better, the CDEI carried out two waves of polling, one in late July and another in mid-October, investigating the UK public’s awareness of, and attitudes towards, the use of algorithms in making decisions about individuals. This suggested that, prior to August’s controversy over exam results, 57% of people were aware of algorithmic systems being used to support decisions about them, with only 19% of those disagreeing in principle with the suggestion of a “fair and accurate” algorithm helping to make decisions about them. By October, we found that awareness had risen slightly (to 62%), as had disagreement in principle (to 23%). This does not suggest a step change in public attitudes, but there is clearly still a long way to go to build trust in algorithmic systems.
Next steps for improving transparency and building trust in the public sector
As a major developer, buyer and user of data-driven technology, the UK public sector has the opportunity to set an example for transparency in the use of algorithms across the public and private sectors.
The UK government already has many mechanisms in place for transparency, both on decision-making and on technology. In technology, the last few years have seen hugely increased levels of transparency, with the government’s design principles noting that “making things open makes them better”. Significant information about human-driven decision-making processes is also published, and more is available to interested parties via rights under the Freedom of Information Act and the Data Protection Act.
However, there is no equivalently consistent approach for algorithmic decision-making. The CDEI believes that the government should introduce a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Our report published last week suggests definitions for these terms.
Whilst a transparent approach is vital to building a trustworthy environment, we should not assume that greater transparency from public sector organisations will inevitably lead to greater trust in the public sector. In line with Onora O’Neill’s principle of intelligent accountability, organisations should provide intelligible information on the algorithms they are using, properly informing the public rather than risking even greater concern through the sharing of incomplete or overly complex information. To that end, we suggest the scope of information required should include the following (a hypothetical sketch of how this might be captured follows the list):
- Overall details of the decision-making process in which an algorithm/model is used
- A description of how the algorithm/model is used within this process, including how humans provide oversight of decisions and the overall operation of the decision-making process
- An overview of the algorithm/model itself and how it was developed, covering, for example: the type of machine learning technique used to generate the model, a description of the data on which it was trained, an assessment of the known limitations of the data and any steps taken to address or mitigate these
- An explanation of why the overall decision-making process was designed in this way, including impact assessments covering data protection, equalities and human rights, carried out in line with relevant legislation.
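To make this scope concrete, below is a minimal sketch of how such a transparency record might be captured in a machine-readable form. This is purely illustrative: the `TransparencyRecord` structure, its field names and the example values are our own assumptions, not a schema proposed in the report or mandated by government.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structure for a public sector algorithmic transparency record.
# Field names and layout are illustrative assumptions, not a mandated schema.

@dataclass
class ModelOverview:
    technique: str                # e.g. the type of machine learning technique used
    training_data: str            # description of the data the model was trained on
    known_limitations: List[str]  # assessed limitations of the training data
    mitigations: List[str]        # steps taken to address or mitigate those limitations

@dataclass
class TransparencyRecord:
    decision_process: str          # overall decision-making process the algorithm sits in
    algorithm_role: str            # how the algorithm/model is used within that process
    human_oversight: str           # how humans oversee decisions and the overall process
    model: ModelOverview           # overview of the model and how it was developed
    design_rationale: str          # why the decision-making process was designed this way
    impact_assessments: List[str]  # e.g. data protection, equalities, human rights

# Invented example values, for demonstration only.
record = TransparencyRecord(
    decision_process="Prioritisation of housing repair requests",
    algorithm_role="A risk score suggests a priority band; caseworkers make the final decision",
    human_oversight="Caseworkers can override any score; override rates are reviewed monthly",
    model=ModelOverview(
        technique="Gradient-boosted decision trees",
        training_data="Five years of anonymised historical repair requests",
        known_limitations=["Under-representation of rural postcodes"],
        mitigations=["Re-weighting of training samples", "Ongoing bias monitoring"],
    ),
    design_rationale="A human-in-the-loop design retains professional judgement and discretion",
    impact_assessments=["Data protection impact assessment", "Equality impact assessment"],
)

print(record.algorithm_role)
```

Publishing records along these lines, in a consistent format, is one plausible way a mandatory transparency obligation could make disclosures comparable across organisations.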
The CDEI will continue to focus on the theme of public trust and transparency. We plan to support work that the Government Digital Service has begun to explore the development of an appropriate and effective mechanism for delivering greater transparency on the use of algorithm-assisted decision-making within the public sector. In addition, we will continue to work in partnership with organisations as they develop practical governance structures to support responsible and trustworthy data innovation.