AI is creating fake legal cases and making its way into real courtrooms, with disastrous results

  • Written by Michael Legg, Professor of Law, UNSW Sydney & Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession

We’ve seen explicit deepfake images of celebrities[1] created by artificial intelligence (AI). AI has also played a hand in creating music[2], powering driverless race cars[3] and spreading misinformation[4], among other things.

It’s hardly surprising, then, that AI also has a strong impact on our legal systems.

It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore highly concerning that fake law, invented by AI, is being used in legal disputes.

Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

Read more: Lawyers are rapidly embracing AI: here's how to avoid an ethical disaster[5]

How do fake laws come about?

There is little doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented[6] litigants) have been caught out by artificial intelligence.

Generative AI tools, like ChatGPT, can provide incorrect information. Shutterstock[7]

AI models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination[8]”.

In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

But if AI hallucinates or creates inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.

It’s happening already

The best-known generative AI “fake case” is the 2023 US case Mata v Avianca[9], in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Read more: AI is everywhere – including countless applications you've likely never heard of[10]

Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact-check them (he did not). His lawyer included the cases[11] in a brief filed with the US Federal Court.

Fake cases have also surfaced in recent matters in Canada[12] and the United Kingdom[13].

If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.

Michael Cohen’s lawyer was caught up in a court case involving fake AI case law. Sarah Yenesel/EPA

What’s being done about it?

Around the world, legal regulators and courts have responded in various ways.

Several US state bars and courts have issued guidance, opinions or orders on generative AI use, ranging from responsible adoption to an outright ban.

Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the NSW Bar Association has a generative AI guide[14] for barristers. The Law Society of NSW[15] and the Law Institute of Victoria[16] have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the public, will have some understanding of generative AI and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.
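
Part of that checking even lends itself to automation. As a minimal sketch, the short Python program below scans a draft for citation-like strings and flags any that cannot be verified; the lookup_citation helper is a hypothetical stand-in for a query to an authoritative legal database, not a real API.

import re

# Rough pattern for US-style reporter citations, e.g. "11 F.4th 1180".
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][A-Za-z0-9. ]{1,20}?\s+\d+\b")

def lookup_citation(citation: str) -> bool:
    # Hypothetical stand-in: should return True only if the citation
    # resolves in an authoritative legal database. Wire this up to a
    # real research service; it is not an existing library call.
    raise NotImplementedError

def unverified_citations(draft: str) -> list[str]:
    # Return every citation-like string that could not be verified.
    return [c for c in CITATION_PATTERN.findall(draft)
            if not lookup_citation(c)]

# Usage: anything this returns needs human checking before filing.
# suspect = unverified_citations(open("brief.txt").read())

A filter like this can only narrow the problem: whatever it flags, and ultimately everything filed with a court, still needs a lawyer to confirm against the primary source.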

Read more: Do you trust AI to write the news? It already is – and not without issues[17]

In Australia, courts should adopt practice notes or rules that set out expectations when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.

References

  1. ^ celebrities (www.nytimes.com)
  2. ^ creating music (theconversation.com)
  3. ^ driverless race cars (theconversation.com)
  4. ^ misinformation (theconversation.com)
  5. ^ Lawyers are rapidly embracing AI: here's how to avoid an ethical disaster (theconversation.com)
  6. ^ self-represented (reason.com)
  7. ^ Shutterstock (www.shutterstock.com)
  8. ^ hallucination (www.csiro.au)
  9. ^ Mata v Avianca (law.justia.com)
  10. ^ AI is everywhere – including countless applications you've likely never heard of (theconversation.com)
  11. ^ included the cases (www.reuters.com)
  12. ^ Canada (www.cbc.ca)
  13. ^ the United Kingdom (www.legalfutures.co.uk)
  14. ^ generative AI guide (inbrief.nswbar.asn.au)
  15. ^ Law Society of NSW (lsj.com.au)
  16. ^ Law Institute of Victoria (www.liv.asn.au)
  17. ^ Do you trust AI to write the news? It already is – and not without issues (theconversation.com)

Read more: https://theconversation.com/ai-is-creating-fake-legal-cases-and-making-its-way-into-real-courtrooms-with-disastrous-results-225080
