
When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'

  • Written by Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3[1] in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents[2] where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary emergency responder vehicle in the US. NBC / YouTube[3]

The decision-making processes of “self-driving” cars are often opaque and unpredictable[4] (even to their manufacturers), so it can be hard to determine who should be held accountable for incidents such as these. However, the growing field of “explainable AI” may help provide some answers.

Read more: Who (or what) is behind the wheel? The regulatory challenges of driverless cars[5]

Who is responsible when self-driving cars crash?

While self-driving cars are new, they are still machines made and sold by manufacturers. When they cause harm, we should ask whether the manufacturer (or software developer) has met their safety responsibilities.

Modern negligence law comes from the famous case of Donoghue v Stevenson[6], where a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to directly predict or control the behaviour of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems like self-driving cars may not be able to foresee and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, audits and monitoring practices are not good enough, they should be held accountable.

How much risk management is enough?

The difficult question will be “How much care and how much risk management is enough?” In complex software, it is impossible to test for every bug[7] in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators and technical standards bodies have experience in setting standards of care and responsibility for risky but useful activities.

Standards could be very exacting, like the European Union’s draft AI regulation[8], which requires risks to be reduced “as far as possible” without regard to cost. Or they may be more like Australian negligence law, which permits less stringent management for less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by AI opacity

Once we have a clear standard for risks, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems must also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for such lawsuits to be effective, courts will need to understand in detail the processes and technical parameters of the AI systems.

Manufacturers often prefer not to reveal such details for commercial reasons. But courts already have procedures to balance commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge may arise when AI systems themselves are opaque “black boxes[9]”. For example, Tesla’s autopilot functionality relies on “deep neural networks[10]”, a popular type of AI system in which even the developers can never be entirely sure how or why a given output is produced.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new[11] wave[12] of computer science and humanities scholars[13]: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example[14], an AI system mistakenly classifies a picture of a husky as a wolf. An “explainable AI” method reveals that the system focused on the snow in the background of the image, rather than on the animal in the foreground.

Explainable AI in action: an AI system incorrectly classifies the husky on the left as a ‘wolf’; at right, we see this is because the system was focusing on the snow in the background of the image. Ribeiro, Singh & Guestrin[15]
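
The technique behind this example, known as LIME (“local interpretable model-agnostic explanations”, from the Ribeiro, Singh & Guestrin paper cited above), is available as an open-source Python library. The sketch below shows roughly how such a post-hoc explanation is produced. It uses a random placeholder classifier and a synthetic image, since the real model under scrutiny would belong to the manufacturer; only the library calls are genuine.

  import numpy as np
  from lime import lime_image

  # Placeholder classifier: returns random "husky"/"wolf" probabilities.
  # In a real dispute, this would be the manufacturer's actual model.
  rng = np.random.default_rng(0)

  def classifier_fn(images):
      # images: a batch of perturbed copies of the input, shape (n, H, W, 3)
      probs = rng.random((len(images), 2))
      return probs / probs.sum(axis=1, keepdims=True)

  image = rng.random((224, 224, 3))  # stand-in for the husky photo

  # LIME hides different regions of the image and fits a simple surrogate
  # model to see which regions drove the classifier's prediction.
  explainer = lime_image.LimeImageExplainer()
  explanation = explainer.explain_instance(
      image, classifier_fn, top_labels=1, num_samples=200
  )

  # Extract the regions that pushed the classifier towards its top label.
  # In the genuine husky/wolf case, these turned out to be the snowy
  # background rather than the animal itself.
  label = explanation.top_labels[0]
  highlighted, mask = explanation.get_image_and_mask(
      label, positive_only=True, num_features=5, hide_rest=False
  )

Notably, an explanation like this probes the system from the outside rather than opening the black box itself, which is precisely the kind of evidence an expert witness could generate with only limited access to a manufacturer’s system.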

How this might be used in a lawsuit will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the injured party is given to the AI system.

The Trivago case

Our new research[16] analysing an important recent Australian court case provides an encouraging glimpse of what this could look like.

In April 2022, the Federal Court penalised global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, after a case brought by competition watchdog the ACCC[17]. A critical question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set up rules for evidence discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was not consistent with Trivago’s claim of giving customers the “best price”.
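
As a toy illustration of what such evidence can look like (this is not the expert’s actual analysis, and all data and field names here are hypothetical), the sketch below shows a black-box behavioural test that needs only logged inputs and outputs: across a sample of searches, how often was the top-ranked offer actually the cheapest one available?

  import pandas as pd

  # Hypothetical log: each row is one hotel offer shown for one search.
  results = pd.DataFrame({
      "query_id":  [1, 1, 1, 2, 2, 2],
      "rank":      [1, 2, 3, 1, 2, 3],
      "price_aud": [210, 185, 199, 150, 150, 160],
  })

  # Cheapest price available within each search.
  cheapest = results.groupby("query_id")["price_aud"].transform("min")

  # Was the top-ranked offer also the cheapest one shown?
  top_offers = results[results["rank"] == 1]
  top_is_cheapest = top_offers["price_aud"] == cheapest[top_offers.index]

  print(f"Top-ranked offer was cheapest in "
        f"{top_is_cheapest.mean():.0%} of sampled searches")

A systematic gap between the top-ranked offer and the cheapest one, repeated across thousands of logged searches, is the kind of pattern that can contradict a “best price” claim without any access to the algorithm’s internals.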

This shows how technical experts and lawyers together can overcome AI opacity in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline things in the future, such as requiring AI companies to adequately document their systems.

The road ahead

Vehicles with various degrees of automation[18] are becoming more common, and fully autonomous taxis and buses are being tested both in Australia[19] and overseas[20].

Keeping our roads as safe as possible will require close collaboration between AI and legal experts; regulators, manufacturers, insurers and users will all have roles to play.

Read more: 'Self-driving' cars are still a long way off. Here are three reasons why[21]

References

  1. ^ hit by a Tesla Model 3 (www.9news.com.au)
  2. ^ series of accidents (www.skynettoday.com)
  3. ^ NBC / YouTube (www.youtube.com)
  4. ^ opaque and unpredictable (journals.sagepub.com)
  5. ^ Who (or what) is behind the wheel? The regulatory challenges of driverless cars (theconversation.com)
  6. ^ Donoghue v Stevenson (legalheritage.sclqld.org.au)
  7. ^ impossible to test for every bug (jolt.law.harvard.edu)
  8. ^ draft AI regulation (op.europa.eu)
  9. ^ black boxes (doi.org)
  10. ^ deep neural networks (www.louisbouchard.ai)
  11. ^ new (facctconference.org)
  12. ^ wave (eaamo.org)
  13. ^ scholars (www.aies-conference.com)
  14. ^ a classic example (dl.acm.org)
  15. ^ Ribeiro, Singh & Guestrin (dx.doi.org)
  16. ^ new research (aaronsnoswell.github.io)
  17. ^ competition watchdog the ACCC (www.accc.gov.au)
  18. ^ various degrees of automation (theconversation.com)
  19. ^ in Australia (news.redland.qld.gov.au)
  20. ^ overseas (electrek.co)
  21. ^ 'Self-driving' cars are still a long way off. Here are three reasons why (theconversation.com)

Read more: https://theconversation.com/when-self-driving-cars-crash-whos-responsible-courts-and-insurers-need-to-know-whats-inside-the-black-box-180334
