Bias
Today, DSIT’s Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. The guidance identifies key considerations …
Following our review into bias in algorithmic decision-making, the CDEI has been exploring challenges around access to demographic data for detecting and mitigating bias in AI systems, and considering potential solutions to address these challenges. Today we are publishing our …
Building and using AI systems fairly can be challenging, but is hugely important if the potential benefits from better use of AI are to be achieved. Recognising this, the government's recent white paper “A pro-innovation approach to AI regulation” proposes …
This week, we announced the launch of a new programme of work around enabling responsible access to data. One of the initial workstreams in this programme is exploring the potential for novel approaches to data stewardship to support organisations to …
In our recently published review into bias in algorithmic decision-making, we explored the regulatory context in which algorithmic decisions take place, which includes equality law, human rights law, discrimination law and sector specific regulations. The main piece of legislation that …
The Race Disparity Unit (RDU) and Centre for Data Ethics and Innovation (CDEI) began a partnership in March 2019 at the start of the CDEI’s review into bias in algorithmic decision-making. The RDU is a UK government unit which collates, …
The CDEI believes that the government should introduce a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Our report published last week suggests definitions for these terms. But whilst a transparent approach is vital to building a trustworthy environment, we should not assume that greater transparency from public sector organisations will inevitably lead to greater trust in the public sector.
This report draws together the findings and recommendations from a broad range of work. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making.
Financial services companies are increasingly using complex algorithms to make decisions about loans or insurance - algorithms that look for patterns in data associated with risks of default or high insurance claims. This raises risks of bias and discrimination …
Recent reports suggest 9 out of 10 people are biased against women in some way. We wanted to mark International Women’s Day this year by talking about bias in a world of data-driven technology and artificial intelligence, and our forthcoming report on bias in algorithmic decision-making.