Six lessons for an AI assurance profession to learn from other domains - part one: how can certification support trustworthy AI?

The UK government's recently published approach to AI regulation sets out a proportionate and adaptable framework that manages risk and enhances trust while also allowing innovation to flourish. The framework also highlights the critical role of tools for trustworthy AI, including assurance techniques and technical standards, in enabling the responsible adoption of AI. The CDEI’s AI assurance programme champions the use and development of these tools to support an AI assurance ecosystem that is ethical, trustworthy, and effective. As part of this, the CDEI is beginning work to promote dialogue on potential paths to an accountable AI assurance profession.

Certification holds promise as one of a wider set of tools for trustworthy AI. In building an accountable AI assurance profession, assurance providers (both organisations and the individual professionals within them) could be certified to evidence their expertise, and therefore their trustworthiness. As the AI assurance ecosystem and certification develop, we will need to draw on experience from other sectors, comparing common features of certification schemes and learning lessons from more mature models.

Early in 2023, we spoke to experts across a broad range of sectors to understand what does and does not work in other certification models. We sought views reflecting the varied subject matter and unique challenges of different domains, including cybersecurity, aerospace, sustainability, nuclear safety, bioethics, and medical devices. We covered topics such as comparative maturity, conditions for success, and failure modes.

Main takeaways

  • Context is key: drivers like regulation and market forces, and governance elements like assurance standards and techniques, will influence the role of certification and how it matures over time.
  • Broad community building is crucial for reliable, accountable certification.
  • In a changing environment, balance between flexibility and robustness is essential.
  • Effective certification schemes manage this balance by being adaptable; they are also transparent and interoperable.
  • To be effective, certification schemes must take account of a broad range of stakeholder views and incentives.
  • Continual monitoring and evaluation can manage complexity.

We will cover these six “lessons learned” in three separate blog posts. This first post looks at the governance context for certification. The second post builds on this, exploring the enabling conditions for effective certification schemes, including the role of community and how to balance robustness and flexibility. The third post looks at the similarities between effective certification schemes, including specific common characteristics, the range of stakeholders and incentives, and how continual monitoring and evaluation can manage complexity.

Across all six lessons, it is important to keep in mind that the maturation of certification schemes is a continuous process. In other sectors, such as sustainability and cybersecurity, certification has continued to mature over time, with existing schemes being adjusted, new ones introduced, and others discontinued.

Lesson one: Context is key

Certification is one of many governance tools, so it is important to consider it within its broader context. The wider governance landscape, including principles, standards, and conformity assessment techniques, must develop before certification can be effective. Across the range of sectors and schemes we considered, certification was consistently one of the final governance elements to mature.

However, that is not to say we should passively wait and watch these developments. By starting the dialogue about certification early, we have more time to explore with diverse stakeholders which model will work best for an effective future AI assurance profession. Starting now also means that dialogue takes place in parallel with the development of the wider landscape, rather than afterwards, helping to ensure alignment between emerging assurance techniques and technical standards on the one hand, and the most promising certification model on the other.

The development and adoption of certification may be driven by a combination of factors. In some sectors, like aerospace, nuclear safety, food safety, and medical devices, “hard requirements” are imposed by top-down regulation, creating a need for certification schemes that organisations and individuals can use to demonstrate compliance with rules. However, other factors can drive the development of certification, either as an alternative to or alongside regulation. In particular, market forces can encourage certification: differentiation and brand recognition create competitive advantages, giving organisations an incentive to pursue voluntary certification that demonstrates compliance with good practice, norms, or standards and builds consumer trust.

There are some important questions about what role voluntary certification schemes should play in both the short and long term. Certification evaluates whether something meets a certain standard; however, many standards for AI are still being developed and agreed upon. For example, ISO/IEC 42001 (AI Management System) is in the approval stage, ISO/IEC 42005 (AI Impact Assessment) is in the committee stage, and ISO/IEC AWI TS 6254 (objectives and approaches for explainability of ML models and AI systems) is in the pre-draft stage. As such, for the time being, “soft” voluntary certification schemes may not be sufficiently developed to establish and communicate trust.

In the longer term, clearer requirements might help make certification more effective. In fields like aerospace and nuclear power, where safety considerations have driven top-down rules, accidents are extremely rare. However, safety is just one aspect of AI that needs to be assured, and other aspects might be better addressed through different types of certification. For instance, voluntary certification focused on principles like fairness and explainability could work alongside top-down rules addressing safety and robustness. Top-down regulatory rules on specific principles could also co-exist with voluntary schemes that focus on the same principles but go beyond the regulatory baseline.

The broader context for certification will continue to emerge and develop over time. However, in the immediate term, we should consider and seek consensus on whether encouraging voluntary certification now can help create and mature effective schemes for future use, taking an iterative approach to certification that is aligned with the UK government's adaptable approach to AI regulation.
