The Times Australia
The Times World News


Scams, deepfake porn and romance bots: advanced AI is exciting, but incredibly dangerous in criminals' hands

  • Written by Brendan Walker-Munro, Senior Research Fellow, The University of Queensland

The generative AI[1] industry will be worth about A$22 trillion by 2030[2], according to the CSIRO. These systems – of which ChatGPT is currently the best known – can[3] write essays and code, generate music and artwork, and have entire conversations. But what happens when they’re turned to illegal uses?

Last week, the streaming community was rocked by a headline[4] that links back to the misuse of generative AI. Popular Twitch streamer Atrioc posted a teary-eyed apology video after being caught viewing pornography with the faces of other women streamers superimposed on it.

The “deepfake” technology needed to Photoshop a celebrity’s head on a porn actor’s body[5] has been around for a while, but recent advances have made it much harder to detect.

And that’s the tip of the iceberg. In the wrong hands, generative AI could do untold damage. There’s a lot we stand to lose, should laws and regulation fail to keep up.

The same tools used to make deepfake porn videos can be used to fake a US president’s speech. Credit: Buzzfeed.

Read more: Text-to-audio generation is here. One of the next big AI disruptions could be in the music industry[6]

From controversy to outright crime

Last month, generative AI app Lensa came under fire[7] for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European[8].

The backlash was swift. But what’s relatively overlooked is the vast potential to use artistic generative AI in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans[9] (the methods most of us use to lock our phones).

Criminals are quickly finding new ways to use generative AI to improve the frauds they already perpetrate. The lure of generative AI in scams comes from its ability to find patterns in large amounts of data[10].

Cybersecurity has seen a rise in “bad bots”: malicious automated programs that mimic human behaviour to conduct crime[11]. Generative AI will make these even more sophisticated and difficult to detect.

Ever received a scam text[12] from the “tax office” claiming you had a refund waiting[13]? Or maybe you got a call claiming a warrant was out for your arrest[14]?

In such scams, generative AI could be used to improve the quality of the texts or emails[15], making them much more believable. For example, in recent years we’ve seen AI systems being used to[16] impersonate important figures in “voice spoofing” attacks.

Then there are romance scams[17], where criminals pose as romantic interests and ask their targets for money to help them out of financial distress. These scams are already widespread and often lucrative. Training AI on actual messages between intimate partners could help create a scam chatbot that’s indistinguishable from a human[18].

Generative AI could also allow cybercriminals to more selectively target vulnerable people. For instance, training a system on information stolen from major companies, such as in the Optus or Medibank hacks last year, could help criminals target[19] elderly people, people with disabilities, or people in financial hardship.

Further, these systems can be used to improve computer code[20], which some cybersecurity experts say will make malware and viruses easier to create and harder for antivirus software to detect[21].

The technology is here, and we aren’t prepared

Australia’s[22] and New Zealand’s[23] governments have published frameworks relating to AI, but they aren’t binding rules. Both countries’ laws relating to privacy, transparency and freedom from discrimination aren’t up to the task, as far as AI’s impact is concerned. This puts us behind the rest of the world.

The US has had a legislated National Artificial Intelligence Initiative[24] in place since 2021. And since 2019 it has been illegal in California[25] for a bot to interact with users for commerce or electoral purposes without disclosing it’s not human.

The European Union is also well on the way to enacting the world’s first AI law[26]. The AI Act bans certain types of AI programs posing “unacceptable risk” – such as those used by China’s social credit system[27] – and imposes mandatory restrictions on “high risk” systems.

Although asking ChatGPT to break the law[28] results in warnings that “planning or carrying out a serious crime can lead to severe legal consequences”, the fact is there’s no requirement for these systems to have a “moral code” programmed into them[29].

There may be no limit to what they can be asked to do, and criminals will likely figure out workarounds for any rules intended to prevent their illegal use. Governments need to work closely with the cybersecurity industry to regulate generative AI without stifling innovation, such as by requiring ethical considerations[30] for AI programs.

The Australian government should use the upcoming Privacy Act review[31] to get ahead of potential threats from generative AI to our online identities. Meanwhile, New Zealand’s Privacy, Human Rights and Ethics Framework[32] is a positive step.

We also need to be more cautious as a society about believing what we see online, and remember that humans are traditionally bad[33] at detecting fraud.

Can you spot a scam?

As criminals add generative AI tools to their arsenal, spotting scams will only get trickier. The classic tips[34] will still apply. But beyond those, we’ll learn a lot from assessing the ways in which these tools fall short.

Generative AI is bad at critical reasoning and conveying emotion[35]. It can even be tricked into giving wrong answers[36]. Knowing when and why this happens could help us develop effective methods to catch cybercriminals using AI for extortion.

There are also tools being developed to detect AI[37] outputs from tools such as ChatGPT. These could go a long way towards preventing AI-based cybercrime if they prove to be effective.

Read more: Being bombarded with delivery and post office text scams? Here's why — and what can be done[38]

References

  1. ^ generative AI (www.mckinsey.com)
  2. ^ A$22 trillion by 2030 (www.csiro.au)
  3. ^ can (www.businessinsider.com)
  4. ^ by a headline (afkgaming.com)
  5. ^ porn actor’s body (www.bbc.com)
  6. ^ Text-to-audio generation is here. One of the next big AI disruptions could be in the music industry (theconversation.com)
  7. ^ Lensa came under fire (futurism.com)
  8. ^ more European (www.wired.com)
  9. ^ fake fingerprints and facial scans (fortune.com)
  10. ^ large amounts of data (www.abc.net.au)
  11. ^ to conduct crime (www.forbes.com)
  12. ^ a scam text (theconversation.com)
  13. ^ refund waiting (www.ato.gov.au)
  14. ^ out for your arrest (www.cyber.gov.au)
  15. ^ texts or emails (www.techtarget.com)
  16. ^ used to (www.wsj.com)
  17. ^ romance scams (www.accc.gov.au)
  18. ^ from a human (thediplomat.com)
  19. ^ could help criminals target (gilescrouch.medium.com)
  20. ^ improve computer code (www.zdnet.com)
  21. ^ to detect for antivirus software (www.techtarget.com)
  22. ^ Australia’s (www.industry.gov.au)
  23. ^ New Zealand’s (www.data.govt.nz)
  24. ^ National Artificial Intelligence Initiative (www.ai.gov)
  25. ^ illegal in California (leginfo.legislature.ca.gov)
  26. ^ first AI law (artificialintelligenceact.eu)
  27. ^ social credit system (www.abc.net.au)
  28. ^ to break the law (www.bleepingcomputer.com)
  29. ^ programmed into them (www.theregister.com)
  30. ^ requiring ethical considerations (www.lexisnexis.com.au)
  31. ^ Privacy Act review (www.ag.gov.au)
  32. ^ Framework (www.data.govt.nz)
  33. ^ are traditionally bad (www.apa.org)
  34. ^ classic tips (theconversation.com)
  35. ^ critical reasoning and conveying emotion (www.angmohdan.com)
  36. ^ giving wrong answers (www.reddit.com)
  37. ^ detect AI (www.techlearning.com)
  38. ^ Being bombarded with delivery and post office text scams? Here's why — and what can be done (theconversation.com)

Read more https://theconversation.com/scams-deepfake-porn-and-romance-bots-advanced-ai-is-exciting-but-incredibly-dangerous-in-criminals-hands-199004
