Protecting Trained Models in Privacy-Preserving Federated Learning
Find out how you can protect individuals' data after it has been used in training models, through output privacy.
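The post itself goes into the detail, but as a flavour of what output privacy can involve: one common approach in privacy-preserving federated learning is to clip each participant's model update and add calibrated noise before averaging, so that the released model leaks less about any individual's data. Below is a minimal illustrative sketch in Python, assuming a simple federated-averaging setup; the function name and the clip_norm and noise_multiplier parameters are illustrative choices, not taken from the post, and a real deployment would calibrate the noise to a formal privacy budget.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Average client model updates with clipping and Gaussian noise.

    Clipping bounds how much any one client can shift the aggregate;
    the Gaussian noise, scaled to that bound, is what makes the
    released average a privacy-protected "output".
    """
    rng = rng if rng is not None else np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale the update down so its L2 norm is at most clip_norm.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is calibrated to the clipping bound,
    # which caps the sensitivity of the sum to any single client.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Hypothetical usage: three clients each contribute an update vector.
updates = [np.array([0.2, -0.5, 1.3]),
           np.array([0.1, 0.4, -0.2]),
           np.array([2.0, 0.0, 0.7])]
print(dp_federated_average(updates))
```

The trade-off is the usual privacy/utility one: a tighter clipping bound and a larger noise multiplier give stronger protection for each participant's contribution, at the cost of a noisier aggregate model.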
In this blog, we highlight how, as proposed in the UK government’s National Data Strategy, the CDEI is increasingly working in partnership with public sector bodies and industry on live projects, and building out its capability to help the government …
Since the Centre for Data Ethics and Innovation (CDEI) was established two years ago, engaging with the public has been a core component of our work. For the CDEI to advise on best practice in responsible innovation, we need a …
In our recently published review into bias in algorithmic decision-making, we explored the regulatory context in which algorithmic decisions take place, which includes equality law, human rights law, discrimination law and sector-specific regulations. The main piece of legislation that …
The Race Disparity Unit (RDU) and Centre for Data Ethics and Innovation (CDEI) began a partnership in March 2019 at the start of the CDEI’s review into bias in algorithmic decision-making. The RDU is a UK government unit which collates, …
The CDEI believes that the government should introduce a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Our report published last week suggests definitions for these terms. But whilst a transparent approach is vital to building a trustworthy environment, we should not assume that greater transparency from public sector organisations will inevitably lead to greater trust in the public sector.
This report draws together the findings and recommendations from a broad range of work. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making.
Today we have published an update report on our work with the Behavioural Insights Team (BIT) and Doteveryone, which follows on from our review of online targeting. Since March, we have been working to understand how, by changing how technology …
Almost all (13 of 16) of this month's entries were related to healthcare, with the majority of those specifically looking at use-cases in hospitals. Given that the UK faces an ongoing public health crisis and is entering a second wave of coronavirus infections, it is not surprising that these use-cases are the most prevalent at this time.
The number of brand-new use-cases we are seeing each month has declined since we began compiling the COVID-19 repository, although we are continuing to find further examples of the existing entries we have been tracking, indicating that existing use-cases are being adopted more widely.
The primary purpose of the majority of use-cases has been to support the local response and mitigate the effects of lockdown. However, we are starting to see examples of use-cases designed to build future resilience and aid the recovery; these have been particularly prominent in the transport sector. For example, the Commonplace Mapping Tool allows users to highlight pinch points across Glasgow City Centre where measures such as pavement widening and new cycle lanes could be introduced to help people maintain physical distancing.