Protecting Trained Models in Privacy-Preserving Federated Learning
Find out how you can protect individuals' data, after it has been used to train models, through output privacy.
This post is the first in a series on privacy-preserving federated learning. The series is a collaboration between CDEI and the US National Institute of Standards and Technology (NIST). Advances in machine learning and AI, fuelled by large-scale data availability …
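Output privacy is often achieved with differential privacy: before a client's trained model (or model update) leaves the device, it is clipped and perturbed with random noise so that no single individual's data can be confidently inferred from it. As a rough, hedged illustration of the idea (not code from this series; the function and parameter names are our own), a federated client might release a noised update like this:

```python
import random

def privatize_update(update, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Illustrative output-privacy sketch: clip a model update's L2 norm,
    then add Gaussian noise scaled to the clipping bound before release.
    (Parameter names and scaling are assumptions for illustration only.)"""
    rng = rng or random.Random(0)
    norm = sum(w * w for w in update) ** 0.5
    # Scale the update down so its L2 norm is at most clip_norm.
    factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [w * factor for w in update]
    # Add independent Gaussian noise to each coordinate.
    return [w + rng.gauss(0.0, noise_scale * clip_norm) for w in clipped]

# A client privatizes its local update before sending it to the server.
noisy_update = privatize_update([0.8, -2.4, 1.1])
```

Clipping bounds how much any one client can influence the released values, and the added noise masks the remaining individual contribution; later posts in the series cover how to calibrate the noise to a formal privacy guarantee.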