Protecting Trained Models in Privacy-Preserving Federated Learning
![](https://rtau.blog.gov.uk/wp-content/uploads/sites/236/2024/07/Protecting-Trained-Models-in-Privacy-Preserving-Federated-Learning-620x349.png)
Find out how you can protect individuals' data after it has been used to train models, through output privacy.
This post is part of a series on privacy-preserving federated learning. The series is a collaboration between the Responsible Technology Adoption Unit (RTA) and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts …