
In a world biased against women, what role do algorithms play?

Categories: Algorithms, Bias

9 out of 10 people have some kind of bias against women

So came the news from a UN report this International Women’s Week, reminding us that with progress comes pushback, backlash and new ways to discriminate. Every International Women’s Day should be a cause for celebration, but also a reminder that the hard work continues.

We wanted to mark International Women’s Day this year by talking about bias in a world of data-driven technology and artificial intelligence, and our forthcoming publication on the issue following a year-long review by the CDEI. The growing use of these technologies has raised concerns of bias and fairness in algorithmic decision-making. Bias is nothing new (as we well know) and society has developed ways to address it - from agreed social norms, to equality legislation. But how does an increasingly data-driven world change the game?

Defining bias in the context of algorithmic decision-making is challenging. In general use, when we describe a decision as biased, what we mean is that it is not only skewed, but skewed in a way which is unfair. The decision has been informed by characteristics which are not justifiably relevant to the outcome, such as loan decisions that are more favourable to men than women with otherwise similar financial situations.
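The loan example above can be sketched as a simple disparity check. This is a minimal illustration, not a method from the CDEI review; the applicant data and the choice of metric (a basic approval-rate gap, sometimes called demographic parity difference) are entirely hypothetical.

```python
# Hypothetical sketch: quantifying a skewed decision by comparing loan
# approval rates across groups of otherwise similar applicants.
# All data below is illustrative, not drawn from any real dataset.

def approval_rate(decisions):
    """Fraction of applicants approved in a group (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

# Hypothetical outcomes for applicants with comparable financial profiles.
men = [1, 1, 1, 0, 1, 1, 0, 1]
women = [1, 0, 0, 1, 0, 1, 0, 0]

# Approval-rate gap: a simple (and deliberately crude) fairness measure.
# A large gap between comparable groups is one signal of unfair skew.
gap = approval_rate(men) - approval_rate(women)
print(f"Approval rate gap: {gap:.2f}")
```

A single metric like this cannot establish fairness on its own, but it shows how "skewed in a way which is unfair" can be made measurable once outcomes are recorded.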

Human decision-making can of course also be flawed, shaped by biases that are often unconscious. Algorithms, by contrast, only consider factors that have been explicitly included or, in the case of machine learning methods, that improve predictive accuracy. They can offer a quantified alternative to subjective human interpretation. In that sense, the use of algorithms has the potential to improve the quality of decision-making, increasing the speed and accuracy of decisions and, if designed well, reducing bias.

But issues can arise if, instead, algorithms begin to reinforce problematic biases, for example because of errors in design or because of biases in the underlying datasets. Built wrongly, an algorithm could serve to retain and reinforce previous human biases. This has happened in areas such as recruitment: in one example, an artificial intelligence system was trained on data from existing employees, where the majority of the workforce and of the “high performers” were men, and the resulting system prioritised male candidates. So when these algorithms are used to support important decisions about people’s lives - for example determining whether they are invited to a job interview - they have the potential to cause serious harm.
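The mechanism described above can be sketched in a few lines. This is a toy stand-in for a real recruitment system: the "model" is just the historical hire rate per group, and the training data is entirely synthetic, but it shows how bias in the underlying dataset flows straight through to the system's rankings.

```python
# Synthetic sketch of learned bias: a "model" trained on historically
# biased hiring outcomes reproduces that bias in its scores.
# The data and the scoring rule are illustrative assumptions only.

from collections import defaultdict

# Hypothetical training records: (gender, hired) pairs reflecting a
# workforce where most past hires were men.
history = [("M", 1)] * 70 + [("M", 0)] * 10 + [("F", 1)] * 5 + [("F", 0)] * 15

counts = defaultdict(lambda: [0, 0])  # gender -> [hired count, total count]
for gender, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

# The "learned" score is simply each group's historical hire rate, so the
# past imbalance is carried directly into how future candidates are ranked.
scores = {g: hired / total for g, (hired, total) in counts.items()}
print(scores)
```

A real machine learning system is far more complex, but the failure mode is the same: if "looking like past successful hires" correlates with gender in the training data, the model can learn gender as a proxy for success.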

The use of these tools is not only going to grow in quantity, but also in complexity, and an understanding of the issues is crucial to enable these tools to have a positive impact and actually improve our decision-making. 

The CDEI's review into bias in algorithmic decision-making

The way in which decisions are made, the potential biases to which they are subject and the impact these decisions have on individuals are all highly context dependent. Our upcoming review focuses on exploring bias in four key sectors: policing, financial services, recruitment and local government. We chose these sectors because they all involve significant decisions being made about individuals; because there is evidence of growing uptake of machine learning algorithms within them; and because there is evidence of historic bias in decision-making in each of them.

From the work we have done on the four sectors, as well as our engagement across government, civil society, academia and interested parties in other sectors, we will draw themes, issues and opportunities that go beyond these sectors.

Our review also considers the tensions and trade-offs involved in algorithmic decision-making, options for mitigating bias, the areas where we think there should be greater transparency, and the current governance and regulatory landscape. We will make recommendations to the Government on the steps that should be taken to maximise the benefits and minimise the risks of innovation in this area.

You can read more about the academic, policy and other literature relating to bias in algorithmic decision-making, commissioned by the CDEI, in our landscape summary. This illustrates the complexities of this issue and highlights both the significant potential of these technologies to challenge biased decision-making and the risks that these same technologies could exacerbate existing biases.
