The need for effective AI assurance

Data-driven technologies, such as artificial intelligence (AI), have the potential to bring about significant benefits for our economy and society. However, they also introduce risks that need to be managed. 

As these technologies are more widely adopted, there is an increasing need for a range of actors, including regulators, developers, executives, and frontline users, to check that these tools are functioning as expected, in a way that is compliant with standards (including regulation), and to demonstrate this to others. However, these actors often have limited information, or lack the appropriate specialist knowledge, to ensure that AI systems are trustworthy. To address this information gap, an effective AI assurance ecosystem is required. 

Assurance as a service originated in the accounting profession, but has since been adapted to cover many other areas, such as cyber security and quality management. In these areas, mature ecosystems of assurance products and services enable people to understand whether systems are trustworthy. These products and services include process and technical standards, repeatable audits, certification schemes, and advisory and training services. 

Such an ecosystem is emerging for AI, with a range of companies starting to offer assurance services. A number of possible assurance techniques have been proposed and regulators are beginning to set out how AI might be assured (for example the ICO’s Auditing Framework for AI). However, this ecosystem is currently fragmented, and there have been several calls for better coordination, including from the Committee on Standards in Public Life. Our recently published review into bias in algorithmic decision-making also pointed to the need for an ecosystem of industry standards and professional services to help organisations address algorithmic bias in the UK and beyond. 

Going forward, an effective AI assurance ecosystem will play an enabling role, unlocking both the economic and social benefits of these technologies by coordinating AI assurance tools and user responsibilities. Given this crucial role in the development and deployment of AI, AI assurance is likely to become a significant economic activity in its own right, and it is an area in which the UK, with its particular strengths in legal and professional services, has the potential to excel. 

What do we mean by AI assurance?

Assurance covers a number of governance mechanisms that enable third parties to develop trust in a system or organisation: confidence that it is compliant and that its risks are being managed. In the AI context, assurance tools and services are needed to provide trustworthy information about how a product performs on issues such as fairness, safety or reliability, and, where appropriate, to ensure compliance with relevant standards.

For comparison, consider an example from a more mature assurance ecosystem: a supermarket selling a branded ready meal to a customer. The supermarket and the customer need to be able to trust that what they're selling and buying meets a set of commonly agreed standards - in this case, food safety and nutrition standards. At a minimum, they need to trust that the product is safe to eat, and that both the supermarket and the customer have clarity about what ingredients have gone into making it. For this assurance to be reliable, supermarkets and manufacturers in turn need to rely on standards and assurance further up the supply chain, such as agricultural standards and quality management in manufacturing. 

There are some similarities in the context of AI. If an employer were to use an off-the-shelf AI tool when recruiting new staff, both the employer and applicants should be able to trust that this AI tool is safe, and may need to understand how it has been developed (including which dataset(s) it has been trained on). To enable meaningful trust in AI, comprehensive assurance is required throughout the supply chain and use of AI systems. 
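To make this concrete, the sketch below illustrates the kind of structured information an assurance process might want to capture about such an off-the-shelf recruitment tool: its intended use, the provenance of its training data, and headline evaluation results. This is an assumption-laden example loosely inspired by model-card and datasheet practice, not a prescribed format; all product, field and dataset names are hypothetical.

```python
# Illustrative only: a minimal record of assurance-relevant information about a
# hypothetical off-the-shelf AI recruitment tool. The fields are assumptions,
# not a prescribed or official format.
from dataclasses import dataclass, field


@dataclass
class AssuranceRecord:
    system_name: str                      # vendor's product name (hypothetical)
    intended_use: str                     # the task the tool was built for
    training_datasets: list[str]          # provenance of the training data
    evaluation_metrics: dict[str, float]  # e.g. accuracy, selection-rate gaps
    known_limitations: list[str] = field(default_factory=list)


record = AssuranceRecord(
    system_name="ExampleRecruitAI",
    intended_use="Shortlisting applicants for interview",
    training_datasets=["historic-applications-2015-2019"],
    evaluation_metrics={"accuracy": 0.87, "selection_rate_gap": 0.04},
    known_limitations=["Not validated for roles outside the UK"],
)
print(record)
```

Even a simple record like this gives an employer, an auditor or a regulator something concrete against which claims about the tool can be checked.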

Existing models of assurance

We have reviewed existing models of assurance from various fields such as finance, health and safety, and construction, asking how these models might transfer effectively to the AI context. These models include:

- Audit: External processes of review or examination. Audit is used in many other contexts, but in the AI context we are seeing the word used inconsistently to describe two types of activity that are quite different in practice.
  - Business and compliance audit: Originally used to validate the accuracy of financial statements, particularly that they are free from fraud or error. This idea has since been extended to other regulatory areas such as tax (HM Revenue and Customs - HMRC) and data protection (Information Commissioner's Office - ICO).
  - Bias audit: The functionality of a system is tested by submitting inputs and observing outputs. This form of audit comes from social science and discrimination research; for example, submitting two CVs with equivalent qualifications, one with a white British-sounding name and one with a South Asian-sounding name, and comparing the outcomes (a minimal code sketch of this idea follows this list).
- Certification: A process where an independent body attests that a product, service, organisation or individual has been tested against, and met, objective standards of quality or performance. Certification typically applies to products or services, such as food, pharmaceuticals and electrical equipment, but can also apply to individuals or organisations.
- Accreditation: Ensures that those who carry out testing, certification and inspection are competent to do so. Accreditation is often performed by a regulator or independent accreditation body. For example, the United Kingdom Accreditation Service (UKAS) accredits asbestos testing labs in accordance with World Health Organisation (WHO) standards. Accreditation often applies to bodies or people that issue certifications, but can also apply to systems themselves, particularly in the security context.
- Impact assessment: Comes from public policy and social science research, and is used to anticipate the effect of a policy or programme on environmental, equality, data protection, or other outcomes. Impact assessments are often carried out by the same organisation running the programme, though potentially with the help of external advisors. In some contexts these impact assessments are made mandatory by regulation.
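As referenced above, here is a minimal sketch of the paired-testing idea behind a bias audit: submit matched inputs that differ only in a proxy for a protected characteristic (here, the applicant's name) and compare the system's outputs. The shortlisting function `score_cv` is a hypothetical stand-in for whatever opaque model or service is being audited; the names, fields and decision rule are invented for illustration.

```python
# Sketch of a paired-testing bias audit. Everything here is hypothetical:
# `score_cv` stands in for the system under audit.

def score_cv(cv: dict) -> bool:
    """Placeholder for the system under audit: True means 'shortlisted'."""
    # An arbitrary deterministic rule standing in for an opaque model.
    return (cv["experience_years"] * 3 + len(cv["name"])) % 4 < 2


def paired_bias_audit(base_cvs: list[dict], name_a: str, name_b: str) -> float:
    """Difference in shortlisting rates when only the name is varied."""
    shortlisted_a = shortlisted_b = 0
    for cv in base_cvs:
        shortlisted_a += score_cv({**cv, "name": name_a})  # group A variant
        shortlisted_b += score_cv({**cv, "name": name_b})  # group B variant
    return (shortlisted_a - shortlisted_b) / len(base_cvs)


base_cvs = [{"experience_years": y, "degree": d}
            for y in (2, 5, 10) for d in ("BSc", "MSc")]
gap = paired_bias_audit(base_cvs, "Oliver Smith", "Priya Patel")
print(f"Shortlisting rate gap between name groups: {gap:+.2f}")
```

In a real audit the stand-in function would be replaced by calls to the live system, and the observed rate gap would be assessed against an agreed threshold or statistical test rather than simply printed.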

An AI assurance ecosystem will require the coordination of these and other tools appropriately applied to the AI context. 

Our approach

In 2020, we launched a programme of work on AI assurance, which aims to assess how assurance approaches used in other sectors could be applied to AI, the current maturity and adoption of these assurance tools in addressing compliance and ethical risks in AI, and the role of standards in supporting this. As part of this programme, we have worked with a team of researchers at UCL to investigate the audit and assurance of AI systems in industry. Building on their recent survey of AI auditing approaches, the UCL team have carried out four pilots to expand the evidence base around the use of AI systems in industry and to help fill the gap in empirically informed discussion of standards and regulation in AI assurance.

Through our work, we aim to build a common understanding of the requirements for AI assurance. There are currently conflicting ideas about what different assurance requirements mean in practice, which has led to confusion about what constitutes the responsible use of AI. Our work on AI assurance is complementary to the Open Data Institute's (ODI) work exploring the role of data assurance approaches, like audit and certification, in assessing, building and demonstrating trust in data and data practices.

The main output of this work will be an AI assurance roadmap that sets out our view of the current ecosystem, as well as how it should develop to enable organisations to innovate with confidence, while minimising the risks. In doing so, we hope to help industry, regulators, standards bodies, and government, think through their own roles in this emerging ecosystem. 

Our work in this area supports the government's ambition to unleash the power of data and data-driven technologies, including AI, for the benefit of our society and economy. The government’s upcoming National AI Strategy will focus on the need for ethical, safe and trustworthy development and deployment of AI; building an effective AI assurance ecosystem will be a key part of this. Moreover, in its recently published National Data Strategy, the government set out an ambition to secure a pro-growth and trusted data regime; in order to realise this, we need to ensure that the public have confidence and trust in how data is being used, and how data-driven systems operate.

Developing an AI assurance roadmap

To address gaps in the assurance landscape, we are developing a roadmap which will: 

(a) Lay out the set of activities needed to build a mature assurance ecosystem. 

(b) Identify the roles and responsibilities of different stakeholders across these activities.

By laying out this set of activities and identifying roles and responsibilities, the roadmap seeks to address crucial gaps in the assurance landscape by:

- Building common understanding around the different types of AI assurance and how they contribute to responsible innovation.
- Translating between developer, executive and regulator needs for assurance, particularly on fairness.
- Clarifying the relationship between different kinds of standards.

The CDEI is engaging with stakeholders across the public sector, industry, and academia, to ensure that the roadmap takes into account a wide range of views. 

We would like to hear from individuals and organisations who are developing or adopting AI systems, as well as those developing assurance tools or working on similar issues in AI assurance. Please get in touch with us at ai.assurance@cdei.gov.uk or via the comments section below. 

This is the first in a series of three blogs on AI assurance. The second blog will consider different user needs for AI assurance and the potential tensions which arise between conflicting user interests. Meanwhile, the third blog will explore different types of assurance in more detail and consider the role of standards in an assurance ecosystem.

About the CDEI

The CDEI was set up by the government in 2018 to advise on the governance of AI and data-driven technology. We are led by an independent Board of experts from across industry, civil society, academia and government. Publications from the CDEI do not represent government policy or advice.

https://cdei.blog.gov.uk/2021/04/15/the-need-for-effective-ai-assurance/
