
How is AI Being Introduced Safely in Defense?

July 31, 2024
AI | Defense

We have regularly written about the benefits that artificial intelligence (AI) will bring to defense and how it will provide a battlefield advantage for US and allied forces. But we understand that there are also concerns about AI in defense, and it is key for both industry and defense organizations to address these concerns and mitigate the risks associated with deploying AI in combat. In this blog, we explore how the risks of the military use of AI are being addressed and the initiatives underway to ensure the safe deployment of such technologies.

As with any technology that is new and potentially revolutionary, there’s always a concern about the negative impacts it could have on people and wider societies. That’s no different for AI, and there is an understandable concern surrounding this technology as it develops and its use becomes more widespread.

There are myriad ethical and legal dilemmas associated with AI, and we may only be scratching the surface of these as the technology matures. For example, AI has been shown to deliver biased results across demographic groups, such as medical diagnoses that perform worse for women and people of color (source: MIT), as well as racial discrimination in facial recognition technology (source: Harvard University). Generative AI and so-called “deep fakes” have also aided disinformation campaigns and state propaganda.

Concerns around AI are not new. Even before the generative AI boom of 2023, there were concerns about what the technology could make possible, including unintended consequences.

This was particularly true of machines and technology that could be deployed by the military, where increasing autonomy could eventually give rise to so-called ‘killer robots’: systems free of human control that could ‘think’, select targets, and determine lethal outcomes using algorithms rather than a ‘human in the loop’.

Despite AI now becoming a part of everyday life, the concerns over the military use of AI have not subsided.


What are the concerns about AI in defense?

The rise of large weaponized drones during the Global War on Terror in the early 2000s, including the MQ-9 Reaper, coincided with an increasing discourse over the use of autonomous systems in warfare.

While drones such as the MQ-9 were human-in-the-loop machines, many organizations began to voice concerns over the potential rise of “killer robots” – or more formally Lethal Autonomous Weapon Systems (LAWS) – that would no longer require a human operator to employ a weapon system. Critics argue that this would increase the risk that civilians and other non-combatants could be targeted by machines, which would violate international law.

While this scenario may have been limited by technology a decade or so ago, the rapid development of AI and autonomous systems in recent years means that these concerns around LAWS, and the potential deployment of such systems, are much closer to reality.

In 2023, a US Air Force Colonel responsible for testing AI told the Royal Aeronautical Society that during a simulated test, an AI pilot conducting a suppression of enemy air defenses (SEAD) mission ultimately turned on its operator because the operator was “interfering in its higher mission” of destroying surface-to-air missiles. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator,” said the Colonel.

Despite these deadly possibilities, the US has not prohibited the development or deployment of LAWS (although it currently does not field any), especially as there are worries that a potential foe such as China could deploy such systems to gain a battlefield advantage over allied forces. LAWS could also be a critical force multiplier in contested environments against a peer threat, which is another reason that the US and others are reluctant to ban them outright.

The DoD’s approach to AI weapons – DoD Directive 3000.09

Last year, the US Department of Defense updated DoD Directive 3000.09, Autonomy in Weapon Systems, to address the “dramatic advances in technology happening all around us” and to ensure the US remains “the global leader of not only developing and deploying new systems, but also safety”, said Deputy Secretary of Defense Dr. Kathleen Hicks.

This directive has been established to minimize the probability and consequences of failure in autonomous and semi-autonomous weapon systems, which could lead to unintended engagements similar to what was described by the USAF Colonel.

The requirements as set out in the directive state that:

  • Any autonomous or semi-autonomous weapon system must allow commanders and operators to exercise an appropriate level of human judgment when using force.
  • Laws of war, treaties and rules of engagement must be adhered to when authorizing the use of, directing the use of, or operating an autonomous or semi-autonomous weapon system.
  • A system must demonstrate that it has the performance, capability, reliability and effectiveness to work in real-world situations.
  • Any system incorporating AI capabilities must be designed, developed and deployed in line with the DoD AI Ethical Principles and the DoD Responsible AI (RAI) Strategy and Implementation Pathway.

Last year, the US also launched the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” at The Hague, which now has 54 endorsing states. The declaration consists of “non-legally binding guidelines” that outline best practices for responsible military use of AI.

These include ensuring that AI systems are auditable, have well-defined use cases, and can detect and avoid unintended behaviors, and that any “high-consequence” application undergoes senior-level review.

NATO’s Approach to AI

Similar to the Political Declaration led by the US, NATO members have also committed to ‘Principles of Responsible Use’ as a key component of the alliance’s AI strategy.

This ensures that any AI application that is developed, and that has the potential to be deployed, will adhere to six principles:

  • Lawfulness – AI will be developed and used in accordance with national and international law, including humanitarian and human rights law.
  • Responsibility and Accountability – ‘clear human responsibility’ shall apply to ensure accountability.
  • Explainability and Traceability – AI must be understandable and transparent, including the use of review methodologies, sources and procedures.
  • Reliability – AI must have explicit and well-defined use cases, and testing and assurance within the use cases will be carried out throughout the lifecycle to ensure safety and security.
  • Governability – AI applications must be used for their intended purpose, and there must be the ability to deactivate or disengage systems when they demonstrate unintended behaviors.
  • Bias Mitigation – Steps must be taken to minimize unintended bias in the development and use of AI applications and data sets.

There is no doubt that AI in defense will bring significant advantages for military forces; however, the ethical, legal and operational challenges cannot be ignored, and risks remain.

The concerns surrounding AI for military applications, especially the development and deployment of LAWS, the so-called “killer robots” able to employ weapon systems and operate without a human, have driven the requirement for more robust frameworks and principles to guide the responsible deployment of AI.

Through organizations including NATO and international agreements, the defense sector can be a force for good in the deployment of AI and ensure that it enhances security, rather than compromising it. This is a delicate balance that defense has to manage closely, and with the rapid development of AI, there will be ongoing work to ensure that current guidelines and principles remain appropriate and operationally relevant.

Let's Talk

Have questions or want to learn more? Contact our team and let’s talk!
