Privacy-Preserving Federated Learning: Understanding the Costs and Benefits

Categories: Data, Data collection, Data-driven technology, Data-sharing

Privacy Enhancing Technologies (PETs) could enable organisations to collaboratively use sensitive data in a privacy-preserving manner and, in doing so, create new opportunities to harness the power of data for research and development of trustworthy innovation. However, research DSIT commissioned …

How the ‘Introduction to AI assurance’ guide is supporting government’s innovative approach to AI regulation

Categories: Assurance

DSIT’s Responsible Technology Adoption (RTA) Unit is pleased to publish its Introduction to AI assurance. This guide is an accessible introduction that aims to help organisations better understand how AI assurance techniques can be used to ensure the safe …

Privacy Attacks in Federated Learning

This post is part of a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts published to date on the …

The UK-US Blog Series on Privacy-Preserving Federated Learning: Introduction

This post is the first in a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Advances in machine learning and AI, fuelled by large-scale data availability …

Championing responsible innovation: reflections from the CDEI Advisory Board

[Image: text on a beige background reads “Reflections from the outgoing CDEI Advisory Board”]

The Centre for Data Ethics and Innovation leads the Government’s work to enable trustworthy innovation using data and artificial intelligence. At the CDEI, we help organisations across the public and private sectors to innovate, by developing tools to give organisations …

Six lessons for an AI assurance profession to learn from other domains - part three: features of effective certification schemes

Categories: Algorithms, Artificial intelligence, Assurance

We are looking at professionalisation and certification as part of our programme of work to support the vision laid out in our roadmap to an effective AI assurance ecosystem. As discussed in part one, it will be helpful to learn …

Six lessons for an AI assurance profession to learn from other domains - part two: conditions for effective certification

Categories: Algorithms, Artificial intelligence, Assurance

Lesson two: Broad community building is crucial

Community building that emphasises skills, communication, and diversity is crucial for ensuring that certification is reliable and accountable. Other sectors, like cybersecurity and healthcare, as well as cross-sector communities organised around ESG and …

Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while also allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, …

Working with the ICO to encourage the adoption of PETs

Categories: Algorithms, Artificial intelligence, Data, Ethical innovation

Last year, the CDEI launched a responsible data access programme to address the challenges organisations face in accessing the data they need responsibly. A key component of this programme is our work to encourage adoption of Privacy-Enhancing Technologies …