How AI assurance can support trustworthy AI in recruitment

Today, DSIT’s Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. The guidance identifies key considerations …

Six lessons for an AI assurance profession to learn from other domains - part three: features of effective certification schemes

Categories: Algorithms, Artificial intelligence, Assurance

We are looking at professionalisation and certification as part of our programme of work to support the vision laid out in our roadmap to an effective AI assurance ecosystem. As discussed in part one, it will be helpful to learn …

Six lessons for an AI assurance profession to learn from other domains - part two: conditions for effective certification

Categories: Algorithms, Artificial intelligence, Assurance

Lesson two: Broad community building is crucial

Community building that emphasises skills, communication, and diversity is crucial for ensuring that certification is reliable and accountable. Other sectors, like cybersecurity and healthcare, as well as cross-sector communities organised around ESG and …

Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while also allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, …

Improving responsible access to demographic data to address bias

Following our review into bias in algorithmic decision-making, the CDEI has been exploring challenges around access to demographic data for detecting and mitigating bias in AI systems, and considering potential solutions to address these challenges. Today we are publishing our …

Fairness Innovation Challenge: Call for Use Cases

Building and using AI systems fairly can be challenging, but is hugely important if the potential benefits from better use of AI are to be achieved. Recognising this, the government's recent white paper “A pro-innovation approach to AI regulation” proposes …

Developing the Algorithmic Transparency Standard in the open

Today the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) are sharing an updated version of the Algorithmic Transparency Standard on GitHub. Sharing the updated Standard on GitHub will allow interested stakeholders to …

Piloting the UK algorithmic transparency standard

Today, the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) have published the first reports from our transparency standard pilots. The public sector algorithmic transparency standard is one of the world’s first initiatives of …