This post is part of a series on privacy-preserving federated learning. The series is a collaboration between the Responsible Technology Adoption Unit (RTA) and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts …
Hear from some of the winners of the UK-US PETs Prize Challenges on scalability challenges and solutions in privacy-preserving federated learning.
Featuring: Dr. Mat Weldon (ONS), Dr. Michael Fenton (Trūata), Dr. Xiaowei Huang (University of Liverpool), and Dr. Yi Dong (University of Liverpool). In this post, we talk with Dr. Xiaowei Huang and Dr. Yi Dong (University of …
Find out how you can use output privacy to protect individuals' data after it has been used to train models.
In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our previous post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we …
This post is part of a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Learn more and read all the posts published to date on the …
This post is the first in a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Advances in machine learning and AI, fuelled by large-scale data availability …