Artificial intelligence

How AI assurance can support trustworthy AI in recruitment

Today, DSIT’s Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. The guidance identifies key considerations …

Privacy Attacks in Federated Learning

This post is part of a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts published to date on the …

Championing responsible innovation: reflections from the CDEI Advisory Board

The Centre for Data Ethics and Innovation leads the Government’s work to enable trustworthy innovation using data and artificial intelligence. At the CDEI, we help organisations across the public and private sectors to innovate, by developing tools to give organisations …

Six lessons for an AI assurance profession to learn from other domains - part three: features of effective certification schemes

Categories: Algorithms, Artificial intelligence, Assurance

We are looking at professionalisation and certification as part of our programme of work to support the vision laid out in our roadmap to an effective AI assurance ecosystem. As discussed in part one, it will be helpful to learn …

Six lessons for an AI assurance profession to learn from other domains - part two: conditions for effective certification

Categories: Algorithms, Artificial intelligence, Assurance

Lesson two: Broad community building is crucial. Community building that emphasises skills, communication, and diversity is crucial for ensuring that certification is reliable and accountable. Other sectors, like cybersecurity and healthcare, as well as cross-sector communities organised around ESG and …

Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while also allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, …

Working with the ICO to encourage the adoption of PETs

Categories: Algorithms, Artificial intelligence, Data, Ethical innovation

Last year, the CDEI launched a responsible data access programme to address the challenges organisations face in accessing the data they need in a responsible way. A key component of this programme is our work to encourage the adoption of Privacy-Enhancing Technologies …

Improving responsible access to demographic data to address bias

Following our review into bias in algorithmic decision-making, the CDEI has been exploring challenges around access to demographic data for detecting and mitigating bias in AI systems, and considering potential solutions to address these challenges. Today we are publishing our …

Fairness Innovation Challenge: Call for Use Cases

Building and using AI systems fairly can be challenging, but is hugely important if the potential benefits from better use of AI are to be achieved. Recognising this, the government's recent white paper “A pro-innovation approach to AI regulation” proposes …