
Responsible innovation

Introducing the Model for Responsible Innovation

Categories: Algorithms, Responsible innovation, Trustworthy innovation

Last week, the Responsible Technology Adoption Unit (RTA) published our Model for Responsible Innovation, a practical framework for addressing the ethical risks associated with developing and deploying projects that use AI or data-driven technology. The Model sets out a vision …

How AI assurance can support trustworthy AI in recruitment


Today, DSIT’s Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. The guidance identifies key considerations …

The UK-US Blog Series on Privacy-Preserving Federated Learning: Introduction

This post is the first in a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Advances in machine learning and AI, fuelled by large-scale data availability …

Championing responsible innovation: reflections from the CDEI Advisory Board


The Centre for Data Ethics and Innovation leads the Government’s work to enable trustworthy innovation using data and artificial intelligence. At the CDEI, we help organisations across the public and private sectors to innovate by developing tools to give organisations …

Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while also allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, …

Fairness Innovation Challenge: Call for Use Cases

Building and using AI systems fairly can be challenging, but is hugely important if the potential benefits from better use of AI are to be achieved. Recognising this, the government's recent white paper “A pro-innovation approach to AI regulation” proposes …

Driving responsible innovation in self-driving vehicles

Self-driving vehicles have the potential to radically transform the UK’s roads. But to enable their benefits and achieve the government’s ambition to ‘make the UK the best place in the world to deploy connected and automated vehicles’, developers and manufacturers …

Exploring the role of data intermediaries in supporting responsible data sharing

Categories: Data, Intermediaries, Responsible innovation

At the Centre for Data Ethics and Innovation (CDEI), we aim to drive trustworthy innovation in data and data-driven technologies, including AI. Exploring, developing, and promoting mechanisms for supporting responsible data access and sharing across the economy is therefore a …