Today, DSIT’s Responsible Technology Adoption Unit (RTA) is pleased to publish our guidance on Responsible AI in recruitment. This guidance aims to help organisations responsibly procure and deploy AI systems for use in recruitment processes. It identifies key considerations and assurance mechanisms that can be used to ensure AI systems are aligned with the UK’s proposed regulatory principles.
AI in recruitment
AI-enabled systems are becoming increasingly embedded across the recruitment and hiring lifecycle. These systems offer a range of potential benefits for organisations, as they can:
- improve the efficiency of applicant screening and interviews
- improve the quality and diversity of applicants
- improve the applicant experience by offering chatbot support
- improve job searching by exposing candidates to more tailored adverts
- streamline salary negotiations
- produce better and more scalable recruitment insights
While these technologies create a range of benefits, they also pose novel risks, including:
- risks of bias and discrimination
- lack of scientific validity
- concerns about legal compliance with UK legislation
- digital exclusion
Governance measures and safeguards are needed to maximise the benefits of these technologies, while mitigating potential risks and harms. This can be achieved through the introduction of AI assurance.
AI assurance for recruitment
AI assurance involves processes to measure, evaluate and communicate whether an AI system is trustworthy and does what it says on the tin. In 2022, HR and recruitment was identified as a key sector of interest for the RTA due to the distinct challenges it faced from increased AI adoption, for example the risks posed by the technology around fairness and human rights. The RTA engaged extensively with organisations in the recruitment sector to understand their familiarity and engagement with AI assurance. This research, published in our Industry temperature check: barriers and enablers to AI assurance report, identified several barriers to the adoption of AI assurance, including:
- lack of knowledge and skills
- lack of internal/external demand
- lack of awareness of available assurance mechanisms
Prior to this, the RTA co-authored the Data driven tools in recruitment guidance with the Recruitment and Employment Confederation. Alongside the publication of the Industry temperature check, there have been two major developments in the policy landscape since the original guidance was launched:
- The UK has developed its AI governance framework, published in the A pro-innovation approach to AI regulation white paper.
- The RTA published the Introduction to AI assurance, which demonstrates how assurance mechanisms can be used to operationalise and implement the high-level governance framework in practice.
To reflect these developments in the policy landscape, we are publishing updated guidance on responsible AI in recruitment. This work aims to support organisations procuring and deploying AI systems for recruitment by identifying how assurance mechanisms can be used across the procurement and deployment lifecycle to ensure alignment with the UK’s proposed regulatory principles.
This guidance outlines key considerations that organisations should think about at each stage of the procurement and deployment lifecycle: pre-procurement, during procurement, pre-deployment and live operation. It then identifies assurance mechanisms that can be used to address these considerations.
What’s next?
Organisations seeking to procure and deploy AI-enabled systems for recruitment should integrate relevant considerations outlined in this guidance into existing business processes.
The RTA will work with industry bodies to ensure adoption of this guidance and monitor uptake of AI assurance techniques. We’d like to hear from organisations with ideas or opportunities for future collaboration, or who have insights or resources to share. You can get in touch at ai-assurance@dsit.gov.uk.