
How the ‘Introduction to AI assurance’ guide is supporting government’s innovative approach to AI regulation

Categories: Assurance

DSIT’s Responsible Technology Adoption (RTA) Unit is pleased to publish its Introduction to AI assurance. The guidance is an accessible introduction that aims to help organisations better understand how AI assurance techniques can be used to ensure the safe …

Privacy Attacks in Federated Learning

This post is part of a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts published to date on the …

The UK-US Blog Series on Privacy-Preserving Federated Learning: Introduction

This post is the first in a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Advances in machine learning and AI, fuelled by large-scale data availability …

Championing responsible innovation: reflections from the CDEI Advisory Board


The Centre for Data Ethics and Innovation leads the Government’s work to enable trustworthy innovation using data and artificial intelligence. At the CDEI, we help organisations across the public and private sectors to innovate, by developing tools to give organisations …

Six lessons for an AI assurance profession to learn from other domains - part three: features of effective certification schemes

Categories: Algorithms, Artificial intelligence, Assurance

We are looking at professionalisation and certification as part of our programme of work to support the vision laid out in our roadmap to an effective AI assurance ecosystem. As discussed in part one, it will be helpful to learn …

Six lessons for an AI assurance profession to learn from other domains - part two: conditions for effective certification

Categories: Algorithms, Artificial intelligence, Assurance

Lesson two: broad community building is crucial. Community building that emphasises skills, communication, and diversity is essential for ensuring that certification is reliable and accountable. Other sectors, like cybersecurity and healthcare, as well as cross-sector communities organised around ESG and …

Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, …

Working with the ICO to encourage the adoption of PETs

Categories: Algorithms, Artificial intelligence, Data, Ethical innovation

Last year, the CDEI launched a responsible data access programme to address the challenges organisations face in accessing the data they need in a responsible way. A key component of this programme is our work to encourage the adoption of Privacy-Enhancing Technologies …

Improving responsible access to demographic data to address bias

Categories: Algorithms, Artificial intelligence, Bias, Data, Demographic data, Intermediaries, Trust

Following our review into bias in algorithmic decision-making, the CDEI has been exploring challenges around access to demographic data for detecting and mitigating bias in AI systems, and considering potential solutions. Today we are publishing our …