
https://rtau.blog.gov.uk/2022/06/15/enabling-the-responsible-use-of-ai-in-defence/

Enabling the responsible use of AI in defence

Categories: Artificial intelligence, Data, Trustworthy innovation

The Ministry of Defence (MoD) has today published the UK’s Defence AI Strategy, outlining how the UK will prioritise research, development and experimentation to revolutionise the capabilities of its Armed Forces. Additionally, the MoD has published a policy on the ‘Ambitious, safe and responsible’ use of AI, which includes a set of ethical principles that we developed in partnership with the MoD to guide how the military and defence sector adopt AI. This blog sets out how we have been working with the MoD to ensure ethical considerations are at the heart of the UK’s use of AI for military purposes.

Our partnership with the MoD forms part of our work to enable trustworthy data-driven innovation in the public sector, aligned with Mission 3 of the National Data Strategy.

Background on AI ethics and defence

Emerging technology is increasingly central to defence strategy. The Integrated Review of Security, Defence, Development and Foreign Policy placed “securing strategic advantage through science and technology” at the core of its strategic framework. In a world of increasing nation-state competition, competing approaches to the development and use of technology are becoming expressions of geopolitical ideology.

The use of data-driven technologies in defence can facilitate better decision-making at pace, enhance the organisation of large and complex operations, and enable humans to be removed from dangerous roles through autonomy. However, the use of AI presents ethical challenges, many of which are heightened when applied to the high stakes defence context. These include, but are not limited to: the relative unpredictability of AI, responsibility gaps when delegating to autonomous systems, and potential reductions of human control. There is also a concern that AI could enable weapons to operate with no human involvement.

Given the potential risks associated with the use of AI in defence, it’s essential that it is developed and used responsibly. As a world leader in AI, the UK is well placed to influence norms and standards around how AI should be used for military purposes.

Partnering with the MoD

Over the last 18 months, the Centre for Data Ethics and Innovation (CDEI) has partnered with the MoD to develop approaches to ensuring that AI is used in defence in an ethical and responsible manner.

It was crucial to hear from the widest possible range of stakeholders, including those critical of defence, to ensure that the ethical approach we developed was truly representative. We therefore began by assisting with a widespread consultation of experts in AI, defence and military ethics, engaging with over 100 stakeholders from across the AI ethics and defence landscapes. As part of this, we hosted a series of workshops using a futures methodology. In these sessions, experts from academia, industry and defence co-created potential future scenarios for the use of AI in the military, and explored what the most effective government responses might be.

Our research indicated that an ethical framework for the use of AI in defence needed to build on existing military doctrine, international laws around armed conflict and accountability mechanisms. It also suggested that any framework would need to be embedded in all aspects of AI-related activity, covering the entire lifecycle of systems, from procurement to development to deployment.

Following our initial research, we identified three key workstreams: 

  1. The development of ethical principles for defence to set out the MoD’s position on what responsible AI looks like in a military context and guide its approach to AI across the full range of use-cases, from the back office to the battlefield.
  2. The creation of an ethics advisory panel for the MoD, to provide external advice and scrutiny of its position.
  3. Comprehensive implementation of the ethical principles across all AI development and deployment in the MoD.

These workstreams form the core of the MoD’s wider approach to the safe and responsible use of AI, incorporating its efforts on safety, testing and evaluation.

Ethical principles to guide AI use in defence

The ethical principles, developed with the MoD, are designed to: 

  • Provide direction to defence around the responsible development and use of AI.
  • Form the core of the UK’s efforts to create shared norms for AI in defence on the international stage. 
  • Define the characteristics expected of AI and AI-enabled systems, which teams across defence will be expected to uphold.

The principles strengthen the UK’s role as a global leader in the responsible use of AI in defence. The framework aims to harmonise with the approaches taken by allies and inform international standard-setting discussions. A summary of the principles can be found below. They can also be read in full in Annex A of the MoD’s paper on ensuring safe and responsible AI. 

Summary of ethical principles

Human centricity: The impact of AI-enabled systems on humans must be assessed and considered, accounting for the full range of effects, both positive and negative, across the entire system lifecycle.

Responsibility: Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.

Understanding: AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design.

Bias and harm mitigation: Those responsible for AI-enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed.

Reliability: AI-enabled systems must be demonstrably safe, reliable, robust and secure, with these qualities regularly monitored, audited and evaluated.

Establishing an ethics advisory panel

A key part of ensuring that defence maintains a responsible approach to AI is subjecting its ongoing policy to suitable scrutiny. Knowing this, we helped the MoD to create an ethics advisory panel to advise on responsible AI use and provide constructive challenge. The panel scrutinises the MoD’s ongoing approach to responsible AI; however, it is advisory only and has no formal decision-making powers. To find out more about the panel, including its membership, see Annex B of the MoD’s paper on ensuring safe and responsible AI.

Putting principles into practice 

Developing principles, policy and governance on the responsible use of AI in defence is not enough to ensure ethical outcomes. To be effective, these approaches must be implemented across the whole of the organisation. We will continue to work with the MoD to support the practical application of the principles across the defence portfolio.

If you would like to speak to us about this project, please get in touch at cdei@cdei.gov.uk.

Sharing and comments


1 comment

  1. Comment by William (Will) Kennedy-Long

    The MOD Defence AI Playbook, published in January 2024, states (abridged): “LLMs are a new technology, and the risks are not well understood. Our commitment to be ambitious, but also safe and responsible, means we must tread carefully. There are, for example, obvious information security risks in using cloud-hosted LLMs; potential intellectual property risks; and ethical or reputational risks of using LLMs for some applications.”

    LMs and LLMs are most certainly NOT a new technology!

    Check the history:

    1. Early Foundations, 1950s-1980s: Early foundational work established basic concepts in computational linguistics and rule-based systems, setting the groundwork for the development of language models (LMs).
    2. Statistical Models and NLP, 1990s: The shift to statistical models improved the accuracy of language models by leveraging probabilistic approaches to handle linguistic variability and ambiguity.
    3. Deep Learning and Word Embeddings, 2010s: The integration of deep learning and word embeddings allowed language models to capture semantic relationships and contextual meanings, leading to more nuanced text generation.
    4. Attention Mechanisms and Transformers, 2017 to the present: The introduction of attention mechanisms and transformer architectures dramatically increased the efficiency and effectiveness of LMs, enabling the training of larger and more powerful models.
    5. Large-Scale LLMs, 2020: The emergence of large-scale language models like OpenAI's GPT-3 showcased the potential of LLMs to perform a wide range of natural language tasks with remarkable proficiency and minimal task-specific training.
    6. The Future: Who knows!