
Global powers are grappling with ‘responsible’ use of military AI. What would that look like?

  • Written by Zena Assaad, Senior Lecturer, School of Engineering, Australian National University

Last week, some 2,000 government officials and experts from around the world met for the REAIM[1] (Responsible Artificial Intelligence in the Military Domain) summit in Seoul, South Korea. This was the second event of its kind, with the first one held in the Netherlands in February 2023.

During this year’s summit, 61 countries endorsed a “Blueprint for Action”[2] for governing the development and use of artificial intelligence (AI) by the military.

However, 30 countries that sent government representatives to the summit, including China[3], did not endorse the blueprint.

The blueprint is an important, if modest, development. But there is still a gap in the understanding of what constitutes responsible use of AI and how this translates into concrete actions in the military domain.

How is AI currently used in military contexts?

Military use of AI has increased over the last few years, notably in the Russia-Ukraine and Israel-Palestine conflicts.

Israel has used AI-enabled systems known as “Gospel” and “Lavender” to help it make key military decisions, such as which locations and people to target with bombs. The systems use large amounts of data[4], including people’s addresses, phone numbers and membership of chat groups[5].

The “Lavender” system in particular made headlines earlier this year when critics questioned its efficacy and legality. There was particular concern around its training data and how it classified targets[6].

Both Russia and Ukraine also use AI to support military decision making[7]. Satellite imagery, social media content and drone surveillance are just some of the many information sources which generate copious volumes of data.

AI can analyse this data much more quickly than humans could. The results are incorporated into existing “kill chains” – the process of locating, tracking, targeting and engaging targets[8].
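
As a purely conceptual illustration (no real military system is modelled here; the data structures, names and thresholds below are invented for this sketch), the snippet shows the kind of multi-source filtering and ranking AI can perform in the earlier stages of such a chain, producing a prioritised picture for human analysts rather than a decision.

```python
from dataclasses import dataclass

# Conceptual sketch only: invented names, data and thresholds, no real system.
# It illustrates how AI can fuse and rank large volumes of multi-source data
# far faster than human analysts could, while leaving decisions to people.

@dataclass
class Observation:
    source: str        # e.g. "satellite", "drone", "open-source"
    location: str
    confidence: float  # model confidence between 0.0 and 1.0

def fuse_and_rank(feeds: list[Observation], threshold: float = 0.8) -> list[Observation]:
    """Filter out low-confidence observations and rank the rest for human review."""
    credible = [obs for obs in feeds if obs.confidence >= threshold]
    return sorted(credible, key=lambda obs: obs.confidence, reverse=True)

# Hypothetical observations from three different sources, presented to an analyst.
feeds = [
    Observation("satellite", "sector A", 0.92),
    Observation("drone", "sector B", 0.75),
    Observation("open-source", "sector C", 0.88),
]
for obs in fuse_and_rank(feeds):
    print(f"{obs.source}: {obs.location} (confidence {obs.confidence:.2f})")
```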

It means military officials can make faster decisions during active armed conflict, providing tactical advantages. However, misuse of these AI systems can also cause harm.

Civil society and non-governmental organisations such as the International Committee of the Red Cross have warned about the risks. For example, algorithmic bias can exacerbate the risk to civilians[9] during active warfare.
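
To make that risk concrete, the toy calculation below (entirely synthetic numbers and invented group labels, drawn from no real data or system) shows how a classifier with a higher false-positive rate for one group wrongly flags many more people in that group, even when both groups are the same size.

```python
# Toy illustration of how a skewed false-positive rate compounds at scale.
# All numbers and labels are synthetic; no real system or dataset is modelled.

def wrongly_flagged(population: int, false_positive_rate: float) -> int:
    """People incorrectly flagged, given a group's size and the model's error rate."""
    return round(population * false_positive_rate)

# Same population size, different error rates because of biased training data.
group_a = wrongly_flagged(population=100_000, false_positive_rate=0.01)  # 1% errors
group_b = wrongly_flagged(population=100_000, false_positive_rate=0.05)  # 5% errors

print(f"Group A wrongly flagged: {group_a}")  # 1000
print(f"Group B wrongly flagged: {group_b}")  # 5000
```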

What is responsible AI in the military domain?

There is no consensus on what constitutes “responsible” AI.

Some researchers argue the technology itself can be responsible[10]. In this case, “responsible” would mean having built-in fairness and freedom from bias.

Other studies refer to the practices around AI[11] – such as design, development and use – being responsible. This would mean practices that are lawful, traceable, reliable and focused on mitigating bias.

The blueprint endorsed at the recent summit in Seoul aligns with the latter interpretation. It advocates that anyone using AI in the military must comply with relevant national and international laws.

More than 2,000 government officials and experts attended the REAIM summit in Seoul to discuss the responsible use of AI in the military domain. Jeon Heon-Kyun/EPA[12]

It also highlights the importance of human roles[13] in the development, deployment and use of AI in the military domain. This includes ensuring human judgement and control over the use of force are responsibly and safely managed.

This is an important distinction, because many narratives around AI falsely imply an absence of human involvement and responsibility.

What can governments do to use military AI responsibly?

Discussions at the summit focused heavily on concrete steps governments can take to support responsible use of military AI.

As military use of AI is increasing right now, we need interim steps to manage it. One suggestion was to strike regional agreements on AI regulation, rather than waiting for the slower process of reaching a global, universal consensus.

To improve global cooperation on military AI, we could also heed lessons from previous global challenges – such as nuclear non-proliferation, saving the ozone layer and keeping outer space and Antarctica demilitarised.

In the 18 months since the inaugural summit, governments and other responsible parties have begun putting in place risk-mitigation processes and toolkits for military AI[14].

The blueprint reflects the progress since then, and the ideas discussed at the summit. It proposes a number of tangible steps[15], which include:

  • universal legal reviews for AI-enabled military capabilities
  • promoting dialogue on developing measures to ensure responsible AI in the military domain at the national, regional and international levels
  • maintaining appropriate human involvement in the development, deployment and use of AI in the military domain.

However, progress is slow because we still don’t have a universal understanding of what responsible military AI actually looks like.

The need to cut through these issues is now putting pressure on the next summit (which has not yet been announced). The Netherlands has also set up an expert body[16] to further a globally consistent approach to military AI.

Humanity can benefit from AI tools. But we urgently need to ensure the risks they pose don’t proliferate, especially in the military domain.

References

  1. ^ REAIM (www.reaim2024.kr)
  2. ^ 61 countries endorsed a “Blueprint for Action” (www.reuters.com)
  3. ^ including China (www.reuters.com)
  4. ^ use large amounts of data (inkstickmedia.com)
  5. ^ addresses, phone numbers and membership of chat groups (www.972mag.com)
  6. ^ how it classified targets (www.972mag.com)
  7. ^ use AI to support military decision making (www.cnas.org)
  8. ^ the process of locating, tracking, targeting and engaging targets (mitchellaerospacepower.org)
  9. ^ exacerbate the risk to civilians (blogs.icrc.org)
  10. ^ the technology itself can be responsible (link.springer.com)
  11. ^ the practices around AI (www.tandfonline.com)
  12. ^ Jeon Heon-Kyun/EPA (photos.aap.com.au)
  13. ^ the importance of human roles (thedefensepost.com)
  14. ^ toolkits for military AI (cset.georgetown.edu)
  15. ^ It proposes a number of tangible steps (thereadable.co)
  16. ^ set up an expert body (hcss.nl)

Read more https://theconversation.com/global-powers-are-grappling-with-responsible-use-of-military-ai-what-would-that-look-like-237216
