Global Leaders Strive for Responsible Military AI: What Does It Entail?

17 September 2024 | Zaker Adham

Last week, approximately 2,000 government officials and experts from around the globe convened at the REAIM (Responsible Artificial Intelligence in the Military Domain) summit in Seoul, South Korea. This was the second event of its kind, following the inaugural summit in the Netherlands in February 2023.

During this year's summit, 61 countries endorsed a "Blueprint for Action" to govern the development and use of artificial intelligence (AI) in military contexts. However, 30 countries, including China, attended the summit but did not endorse the blueprint.

The blueprint represents a modest but meaningful step forward. Yet there is still no consensus on what responsible AI use actually means or how to implement it effectively in military operations.

AI in Military Contexts

The use of AI in military operations has surged in recent years, particularly in the Russia-Ukraine and Israel-Palestine conflicts. Israel has deployed AI-enabled systems such as "Gospel" and "Lavender" to support critical military decisions, including the selection of locations and individuals to target. These systems draw on vast amounts of data, including addresses, phone numbers, and chat group memberships.

The "Lavender" system drew attention earlier this year due to concerns about its effectiveness and legality, particularly regarding its training data and target classification. Both Russia and Ukraine also leverage AI for military decision-making, using sources like satellite imagery, social media content, and drone surveillance.

AI's ability to analyze data rapidly provides military officials with tactical advantages, enabling quicker decision-making during conflicts. However, the misuse of AI systems poses significant risks, including potential harm to civilians.

Defining Responsible AI in the Military

There is no universal agreement on what "responsible" AI entails. Some researchers argue that the technology itself should be fair and unbiased, while others emphasize responsible practices in AI design, development, and use. The blueprint endorsed at the Seoul summit aligns with the latter view, advocating for compliance with national and international laws and emphasizing human judgment and control in AI deployment.

Steps for Responsible Military AI

Summit discussions focused on practical steps governments can take to ensure responsible AI use in the military. Suggestions included regional agreements on AI regulation and drawing lessons from how the international community has managed past global challenges such as nuclear non-proliferation and environmental protection.

Eighteen months after the first summit, governments and other stakeholders have begun implementing risk-mitigation processes and toolkits for military AI. The blueprint outlines several concrete steps, including:

•  Conducting universal legal reviews for AI-enabled military capabilities

•  Promoting dialogue on responsible AI at national, regional, and international levels

•  Ensuring human involvement in AI development and deployment

Despite this progress, a universal understanding of responsible military AI remains elusive. The pressure is now on the next summit, whose date has not yet been announced, to address these issues. In the meantime, the Netherlands has established an expert body to promote a consistent global approach to military AI.

While AI offers significant benefits, it is crucial to mitigate the risks, especially in the military domain.