Protecting Trained Models in Privacy-Preserving Federated Learning
![](https://rtau.blog.gov.uk/wp-content/uploads/sites/236/2024/07/Protecting-Trained-Models-in-Privacy-Preserving-Federated-Learning-620x349.png)
Find out how output privacy can protect individuals' data after it has been used to train models.
This post is part of a series on privacy-preserving federated learning. The series is a collaboration between the Responsible Technology Adoption Unit (RTA) and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts …
In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our previous post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we …