
https://rtau.blog.gov.uk/2021/04/17/134/

Types of assurance in AI and the role of standards

This is the third in a series of three blogs on AI assurance, which explore the key concepts and practical challenges for developing an AI assurance ecosystem. The first blog focused on current confusion around AI assurance tools and the need for AI assurance. The second considered different user needs for assurance and the potential tensions arising from conflicting user interests. This third and final blog explores different types of assurance in more detail and considers the role and alignment of standards in an assurance ecosystem. 

Among commonly discussed forms of assurance, the CDEI has identified two broad groupings or ‘families’ of assurance models:

  • Compliance Assurance compares a system (a service, product, or combination) to an existing set of standards. 
  • Risk Assurance asks an open-ended question about how a system works.

There is often confusion between these two types of assurance in the AI debate (e.g. ‘audit’ is used loosely to describe both).

The two families of methods can be mutually reinforcing in an assurance ecosystem: Compliance Assurance enables consistency, while Risk Assurance accommodates greater ambiguity and context specificity. Independent third parties (both setting standards and performing assurance) are essential for Compliance Assurance, while in Risk Assurance third parties may provide expertise, but independence is not required.

Type 1: Compliance Assurance

This group of assurance methods aims to test or confirm whether a system, organisation or individual complies with a given standard. 

Compliance Assurance is necessary for communicating to the different stakeholders using or affected by AI technology whether the technology, and the individuals or institutions using it, are trustworthy.

Executives and developers need to communicate their compliance with standards, laws and regulations to allow frontline users to have confidence in using a system. Similarly, ministers need assurance that AI systems are compliant with laws and regulations, whilst affected individuals need to know enough about a system to exercise their rights.  

This type of assurance includes:

  • Formal verification: Establishes whether a system satisfies a set of requirements using the formal methods of mathematics (a minimal example follows this list).
  • Business and compliance audit: Originally used to validate the accuracy of financial statements, particularly that they are free from material fraud or error. This idea has now been extended to other regulatory areas such as tax (HM Revenue and Customs - HMRC) and data protection (Information Commissioner's Office - ICO). 
  • Certification: Used to assure the quality of systems or the quality of services within professions, and can be part of a more formal, legally binding licensing process. Certification typically applies to products or services, such as ‘kitemark’ schemes for meeting quality or safety standards, but can also apply to individuals performing services; for example, medical doctors must receive a Certificate of Completion of Training (CCT) before practising. Certification compares the process, product or person to a set of established standards.
  • Accreditation: Ensures that those who carry out testing, certification and inspection are competent to do so.
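To make the formal verification bullet concrete, here is a minimal sketch using the Z3 solver (a tool choice assumed for illustration; this blog does not name one). Rather than testing a handful of inputs, the solver searches for any valid input that violates the stated requirement, and the property is proved to hold for all inputs if no counterexample exists.

```python
# A minimal sketch of formal verification, assuming the Z3 solver
# (pip install z3-solver). We check a toy property of a scoring rule:
# that any bounded input yields a score within [0, 100].
from z3 import Real, Solver, Or, unsat

x = Real("x")                      # model input, assumed in [0, 1]
score = 100 * x                    # toy "system": a linear scoring rule

s = Solver()
s.add(x >= 0, x <= 1)              # the requirement's preconditions
s.add(Or(score < 0, score > 100))  # negation of the safety property

# If no counterexample exists, the property holds for ALL valid inputs.
if s.check() == unsat:
    print("Verified: score always stays within [0, 100]")
else:
    print("Counterexample:", s.model())
```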

Type 2: Risk Assurance

This group of assurance tools is used to ask how an AI system works, in order to identify, assess and measure the risks/harms of a technology or system. Unlike Compliance Assurance tools, Risk Assurance tools are more open-ended and require significant judgement about the kinds of risks that need to be considered.

Risk Assurance ultimately requires judgement from the person (e.g. executive) conducting assurance. Third parties can help provide expertise or a different perspective, but independence is not strictly required.

Examples of Risk Assurance include:

  • Impact assessment: Comes from public policy and social science research, and is used to anticipate the effect of a policy or programme on health, economic status, the environment, or other outcomes.
  • Bias audit: The functionality of a system is tested by submitting inputs and observing outputs. This form of audit comes from social science and discrimination research; for example, classic sociology experiments in which two equally qualified CVs are submitted, one with a white British-sounding name and one with a South Asian-sounding name (a simple version is sketched after this list).
  • Impact evaluation: Of similar origin and practice to impact assessment (above), but conducted retrospectively, after a programme or policy has been implemented.
  • Ongoing testing (including red teaming): Comes from strategy in defence, and was popularised in the behavioural insights literature, which finds that having a “devil’s advocate” in the room helps avoid the groupthink that leads to poor decision-making.
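As a sketch of the paired-input bias audit described above, the code below submits otherwise-identical CVs that differ only in the applicant’s name and flags pairs whose scores diverge. The screen_cv function, the names and the threshold are hypothetical stand-ins for whatever system and tolerance are actually under audit.

```python
# A minimal paired-input bias audit. `screen_cv` is a hypothetical
# callable returning a shortlisting score in [0, 1] for a CV.
CV_TEMPLATE = (
    "Name: {name}\n"
    "Experience: 5 years as a software engineer\n"
    "Education: BSc Computer Science"
)

# Identical CVs that differ only in the applicant's name.
PAIRED_NAMES = [
    ("Sarah Wilson", "Priya Sharma"),
    ("James Taylor", "Arjun Patel"),
]

def bias_audit(screen_cv, pairs=PAIRED_NAMES, threshold=0.05):
    """Flag name pairs whose otherwise-identical CVs score differently."""
    flagged = []
    for name_a, name_b in pairs:
        score_a = screen_cv(CV_TEMPLATE.format(name=name_a))
        score_b = screen_cv(CV_TEMPLATE.format(name=name_b))
        if abs(score_a - score_b) > threshold:
            flagged.append((name_a, name_b, score_a, score_b))
    return flagged

if __name__ == "__main__":
    # Stand-in scorer for demonstration: it ignores the name entirely,
    # so the audit flags nothing.
    print(bias_audit(lambda cv: 0.8))  # -> []
```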

How are Compliance Assurance and Risk Assurance complementary? 

The Compliance/Risk Assurance distinction is useful because it provides increased clarity for users to select the right tools for the right tasks. 

The current discourse sometimes mistakenly calls on Risk Assurance tools like impact assessments to achieve the goals of Compliance Assurance, leading to complex and burdensome efforts to address common challenges. Meanwhile, compliance mechanisms like audits are sometimes discussed as if they can achieve loftier goals, an exercise better suited to Risk Assurance tools like impact assessments.

Additionally, the distinction highlights the skills and resourcing implications of various assurance mechanisms. Open-ended Risk Assurance methods rely on making good judgements about how much evaluation and evidence is needed, an exercise that is easier in larger organisations with dedicated compliance resources. In Compliance Assurance, meanwhile, this task is performed by external service providers and standards-setting bodies, which allows for more standardised assessment methods, though these are less able to tackle the nuances of a particular context.

From an efficiency perspective it might make sense for Compliance Assurance methods to address basic quality and safety issues that occur across many contexts, while allowing impact assessments to focus on the context-specific issues that make good use of the detailed knowledge of the organisation that is deploying the system.

Both families of tools are important and ideally serve mutually reinforcing purposes within an assurance ecosystem. A developer, for instance, should have robust internal mechanisms for detecting and mitigating bias, such as red teaming exercises and internal impact assessments. Those activities should then be put to the test by an external, third-party body using a Compliance Assurance method. Relatedly, the existence of third-party auditing, certification and the like may incentivise developers and executives to implement stronger internal assurance processes.

The role of standards in Compliance Assurance

Compliance Assurance relies on standards at different levels:

  • Performance/safety standards are required for conformity assessments
  • Certification standards must be established for systems
  • Audit standards are required for information that is reported (particularly what to measure and how)
  • Professional standards are necessary to accredit service providers 

The main barrier to Compliance Assurance is the lack of commonly accepted standards, both in the technical standards that AI systems should be held to and in the professional standards for what an AI audit or certification should cover. There are multiple initiatives to address this challenge (e.g. by ISO/IEC, IEEE, etc.), but it is not clear that these efforts are focused on creating unambiguous, measurable standards that are suitable for assessing compliance.

Risk Assurance is by its nature more customised and context-specific. On the one hand, this allows it to reflect the unique risks of particular industries and use cases. However, it also requires significant judgement in identifying and assessing risks, and risks misunderstandings where the party performing the assessment identifies a different set of concerns or risks from those of other stakeholders.

Different types of standards 

An important distinction between Compliance and Risk Assurance is the relationship with standards. A simple reading of this distinction would be that Compliance Assurance requires unambiguous standards, while Risk Assurance does not. 

In reality, standards can also play a less formal role in setting common language and norms. For example, GDPR has created a common language for assessing and managing privacy risk and conducting DPIAs, even if it falls short of creating clear standards that can be unambiguously met. It is worth considering whether a similar situation is inevitable for fairness and bias in Risk Assurance. There may be value in having a number of commonly accepted standard approaches to the measurement of bias, giving stakeholders the freedom to determine which is most relevant in any particular context. However, a single concept of fairness is unlikely to gain complete acceptance in all contexts.
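To illustrate why a shared vocabulary of bias measures matters, the sketch below computes two widely used but non-equivalent measures, demographic parity difference and equal opportunity difference, on the same toy predictions. The data and the choice of metrics are illustrative assumptions, not definitions endorsed by this blog.

```python
# Two standard but non-equivalent bias measures for binary predictions
# and a binary protected group. Toy data only, for illustration.
def demographic_parity_diff(pred, group):
    """Difference in positive-prediction rates between groups 1 and 0."""
    def rate(g):
        preds = [p for p, x in zip(pred, group) if x == g]
        return sum(preds) / len(preds)
    return rate(1) - rate(0)

def equal_opportunity_diff(pred, label, group):
    """Difference in true-positive rates between groups 1 and 0."""
    def tpr(g):
        hits = [p for p, y, x in zip(pred, label, group) if x == g and y == 1]
        return sum(hits) / len(hits)
    return tpr(1) - tpr(0)

pred  = [1, 1, 0, 0, 1, 1, 0, 0]   # model decisions
label = [1, 1, 0, 0, 1, 0, 1, 0]   # actual outcomes
group = [0, 0, 0, 0, 1, 1, 1, 1]   # protected-group membership

# The same predictions look fair on one measure and unfair on the other,
# which is why agreed, context-appropriate measures matter.
print(demographic_parity_diff(pred, group))        # 0.0
print(equal_opportunity_diff(pred, label, group))  # -0.5
```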

Open call 

The CDEI is engaging with stakeholders in the public sector, industry, and academia to build insights into assurance tools and the assurance needs of those who rely on AI. We would like to hear from individuals and organisations who are developing or adopting AI systems, as well as those developing assurance tools or working on similar issues in AI assurance. By engaging with a range of stakeholders, we hope to be able to draw out common insights and identify areas where clarity and consensus around AI assurance have the potential to bring about significant benefits. Please get in touch with us at ai.assurance@cdei.gov.uk or via the comments section below.
