
https://rtau.blog.gov.uk/2020/12/09/public-sector-equality-duty-and-bias-in-algorithms/

Public Sector Equality Duty and bias in algorithms

Categories: Algorithms, Bias, Facial recognition technology

In our recently published review into bias in algorithmic decision-making, we explored the regulatory context in which algorithmic decisions take place, which includes equality law, human rights law, discrimination law and sector-specific regulations.

The main piece of legislation that governs issues related to equality is the Equality Act 2010, which builds on previous equality and discrimination laws. While the Equality Act’s prohibition on discrimination applies to both public and private sector organisations, the public sector has additional obligations to address equality through the Public Sector Equality Duty (PSED).

Public Sector Equality Duty 

The PSED requires all public bodies to consider how their activities, including algorithmic decision-making, affect equality.

The duty states that a public authority must, in the exercise of its functions, have due regard to the need to:

  • eliminate discrimination, harassment, victimisation and any other conduct that is prohibited by or under the Act;
  • advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it;
  • foster good relations between persons who share a relevant protected characteristic and persons who do not share it.

Protected characteristics are the nine characteristics protected by the Equality Act: age; disability; gender reassignment; marriage and civil partnership; pregnancy and maternity; race; religion or belief; sex; and sexual orientation.

It is well established that algorithmic systems can lead to biased decisions. There are many possible reasons why, but the most common is that historical inequalities become encoded in the data used to train models, and the models then perpetuate those disparities. These risks can be managed with appropriate transparency, oversight and testing; without such proactive measures, algorithms can cause harm and worsen inequality.
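As an illustration of the kind of testing this implies, the sketch below compares a model's selection rate and accuracy across groups defined by a protected characteristic. It is a minimal, hypothetical example: the group names, data and metrics are invented for illustration, and a real assessment would need a properly sampled test set and domain-appropriate fairness measures.

```python
# Minimal sketch of subgroup testing for an algorithmic decision system.
# `records` is assumed to be a held-out test set of (group, true_label, prediction)
# tuples; the groups, labels and values below are invented for illustration only.
from collections import defaultdict

def per_group_metrics(records):
    """Return the selection rate and accuracy for each protected group."""
    counts = defaultdict(lambda: {"n": 0, "selected": 0, "correct": 0})
    for group, y_true, y_pred in records:
        stats = counts[group]
        stats["n"] += 1
        stats["selected"] += int(y_pred == 1)
        stats["correct"] += int(y_pred == y_true)
    return {
        group: {
            "selection_rate": stats["selected"] / stats["n"],
            "accuracy": stats["correct"] / stats["n"],
        }
        for group, stats in counts.items()
    }

# Illustrative data only: each tuple is (group, true outcome, model prediction).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]
for group, metrics in sorted(per_group_metrics(records).items()):
    print(group, metrics)
```

A real-world assessment would go further, for example examining false positive and false negative rates by group and checking whether any observed gaps are statistically meaningful.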

Public Sector Equality Duty and the use of algorithms

One of the few legal cases to test the application of equality law to algorithmic bias concerned the use of live facial recognition technology by police forces, following concerns about violations of privacy and potential biases within the system. Facial recognition technology has frequently been criticised for performing differently for people with different skin tones: the accuracy of many systems is often higher for white men than for people of other ethnicities.

South Wales Police have trialled the use of live facial recognition in public spaces on several occasions since 2017. These trials were challenged through judicial review and found unlawful by the Court of Appeal on 11 August 2020.

One of the grounds for the successful appeal was that South Wales Police failed to adequately consider whether their trial could have a discriminatory impact, and specifically that they did not take reasonable steps to establish whether their facial recognition software contained biases related to race or sex. The court therefore found that they had not met their obligations under the PSED.

In this case there was no evidence that this specific algorithm was biased in this way; rather, South Wales Police failed to take reasonable steps to find out. This seems likely to have significant legal implications for public sector use of algorithmic decision-making more generally, as it suggests that the PSED requires organisations to take reasonable steps to detect algorithmic bias.

Implications for public sector bodies

There is increasing consensus that, under the PSED, public bodies have a legal obligation to test their algorithms for direct or indirect discrimination. If public sector organisations fail to consider these issues upfront, throughout the development and deployment process, they risk being found non-compliant with their legal responsibilities under the PSED.

In our final report, we identify the challenges that both private and public sector organisations will need to address in managing the risks of algorithmic bias, ranging from choosing between competing, and sometimes mutually incompatible, definitions of fairness, to measuring bias using demographic data, to the responsible use of bias mitigation techniques. The PSED means that public sector bodies have an even greater responsibility to consider and address bias when adopting algorithmic decision-making, and we believe these organisations will need greater support to do this well.
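To illustrate why definitions of fairness can pull in different directions, here is a hypothetical sketch comparing two common metrics, demographic parity and equal opportunity, on the same set of predictions. The records are invented purely to show the calculation; in this toy example the model looks fair on one measure and unfair on the other.

```python
# Hypothetical sketch: two fairness metrics applied to the same predictions.
# All records below are invented to show the calculation, not drawn from any real system.

def positive_rate(records, keep):
    """Share of positive predictions among records matching the `keep` filter."""
    subset = [r for r in records if keep(r)]
    return sum(1 for r in subset if r["pred"] == 1) / len(subset)

# Each record: protected group, true outcome, model prediction.
records = [
    {"group": "a", "true": 1, "pred": 1}, {"group": "a", "true": 1, "pred": 1},
    {"group": "a", "true": 0, "pred": 0}, {"group": "a", "true": 0, "pred": 0},
    {"group": "b", "true": 1, "pred": 1}, {"group": "b", "true": 1, "pred": 0},
    {"group": "b", "true": 0, "pred": 1}, {"group": "b", "true": 0, "pred": 0},
]

# Demographic parity: do both groups receive positive predictions at the same rate?
dp_gap = abs(positive_rate(records, lambda r: r["group"] == "a")
             - positive_rate(records, lambda r: r["group"] == "b"))

# Equal opportunity: among people whose true outcome is positive, do both groups
# have the same true positive rate?
eo_gap = abs(positive_rate(records, lambda r: r["group"] == "a" and r["true"] == 1)
             - positive_rate(records, lambda r: r["group"] == "b" and r["true"] == 1))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 -> looks fair on this measure
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50 -> looks unfair on this measure
```

Which measure is appropriate depends on context, which is one reason organisations need support in choosing and applying fairness definitions well.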

The PSED also extends to regulators, who, as public sector bodies, are responsible for ensuring that the industries they regulate uphold appropriate standards for testing for bias in the use of algorithms. To fulfil these obligations, regulators will need support, both through building relationships with one another and by drawing on organisations with specific expertise in these areas.

For further discussion on these topics, including our recommendations for how public bodies can be supported to carry out these responsibilities, read our recently published review into bias in algorithmic decision-making, and keep an eye out for our future work on AI assurance.
