https://rtau.blog.gov.uk/2021/04/16/user-needs-for-ai-assurance/

User needs for AI assurance

This is the second in a series of three blogs on AI assurance, which explore the key concepts and practical challenges for developing an AI assurance ecosystem. The first blog focused on the need for AI assurance and the CDEI’s plans to develop an AI assurance roadmap. This blog considers different user needs for assurance and the potential tensions arising from conflicting interests. 

User roles and their needs

There are multiple users of algorithmic assurance, with different needs that reflect their role in developing, procuring, using or regulating AI systems. To work out what the overall ecosystem needs to look like, it will be important to think clearly about what those needs are, and how they differ across groups. 

We have found it helpful to group user roles as follows:

  • Government policymakers who want to encourage responsible adoption of AI systems need an ecosystem of assurance tools to have confidence that AI will be deployed responsibly, in compliance with laws and regulations, and in a way that does not unduly hinder economic growth.
  • Regulators who see increasing use of AI systems in their scope need to encourage, test and confirm that AI systems are compliant with their regulations.
  • Executives deciding whether to develop, buy or deploy AI systems need (1) to ensure that the AI systems they buy and deploy meet minimum standards; (2) to assess and manage the Environmental, Social and Corporate Governance (ESG) risks of AI systems, and in some cases decide whether or not to adopt them at all; and (3) to communicate their compliance and risk management to regulators, boards, frontline users and affected individuals.
  • Developers who build AI systems for internal or external customers need (1) to ensure their development is both compliant and responsible, and (2) to communicate their AI system’s compliance to regulators, executives, frontline users and affected individuals.
  • Frontline users who use AI systems and their outputs to support their activities need (1) to know enough about an AI system’s compliance to have confidence using it and (2) to communicate the AI system’s compliance to affected individuals. 
  • Individuals affected by AI systems, for example a citizen, consumer or employee, need to understand enough about the use of AI systems in decisions affecting them to exercise their rights.

Of course, this is a simplified model, and single organisations will often fulfil multiple roles. For example, government has a role as policymaker, but also major roles as an executive buying and deploying technology, as a developer, and as a frontline user.

The need for cross-user alignment

Where different users' interests, goals and responsibilities diverge, tensions may arise between their different assurance needs. 

To resolve tensions between user interests, clarify standards and ensure complementary regulatory regimes, an assurance ecosystem must be built on a consensus on how the roles, responsibilities and interests of different assurance users should be aligned.

Our assurance roadmap will focus not only on the interaction between methods of assurance within an ecosystem, but also on the alignment of users within an overall model of assurance. The roadmap will need to translate between developer, executive and regulator needs for assurance and clarify the relationships between different kinds of standards. 

Clarification is required both to align different users around commonly accepted standards (e.g. standards for compliance audit) and to improve regulatory coordination. For example, the use of AI systems can cut across the purview of multiple sector-based regulators, resulting in ambiguity over which regulator has ultimate responsibility.

Potential tensions

Beyond supporting the varying needs of different users, an effective assurance ecosystem will need to handle and, where possible, resolve the differences in objectives, information and incentives that emerge between the users identified above.

Below are two examples highlighting where tensions might arise between different stakeholders in an AI assurance ecosystem: 

  1. Trade-offs between risk minimisation and encouraging innovation: Governments, regulators, developers, and executives face the risks and benefits of these technologies differently, and often have incentives that are in tension. Resolving this calls for a balanced approach that considers risks and benefits across the ecosystem. The ideal arrangement is one where assurance gives both developers and purchasers of AI systems greater confidence, enabling wider, safer adoption.
  2. Accepting responsibility for good AI decisions: This primarily involves executives and developers. Developers and executives may have the same goal - for example to have fair and safe outcomes - but each would like the other party to ultimately be accountable. Developers may say that they are creating a tool and it’s up to executives to use it correctly. Executives may say they are using a tool and it’s up to developers to make it work well. In practice, both may be responsible, but regulators have not yet provided the guidance necessary to resolve this tension. This tension is particularly pertinent in the AI context where executives often procure AI technology rather than building tools in-house.

In the third blog in the series, we focus on the need for clarification and user alignment, by exploring different types of assurance in more detail and considering the role of standards in an assurance ecosystem. 

Open call  

The CDEI is engaging with stakeholders in the public sector, industry, and academia to build insights into assurance tools and the assurance needs of those who rely on AI. We would like to hear from individuals and organisations who are developing or adopting AI systems, as well as those developing assurance tools or working on similar issues in AI assurance. By engaging with a range of stakeholders, we hope to draw out common insights and identify areas where clarity and consensus around AI assurance have the potential to bring about significant benefits. Please get in touch with us at ai.assurance@cdei.gov.uk or via the comments section below.
