The Times Australia

Deepfake, AI or real? It’s getting harder for police to protect children from sexual exploitation online

  • Written by Terry Goldsworthy, Associate Professor in Criminal Justice and Criminology, Bond University
Artificial intelligence (AI) is now an integral part of everyday life, and it is becoming ever more accessible and ubiquitous. As a result, AI advances are increasingly being exploited for criminal activity.

One significant concern is the ability AI provides to offenders to produce images[1] and videos depicting real or deepfake child sexual exploitation material.

This is particularly important here in Australia. The Cyber Security Cooperative Research Centre has identified the country as the third-largest market[2] for online sexual abuse material.

So, how is AI being used to create child sexual exploitation material? Is it becoming more common? And importantly, how do we combat this crime to better protect children?

Spreading faster and wider

In the United States, the Department of Homeland Security[3] refers to AI-created child sexual abuse material as being:

the production, through digital media, of child sexual abuse material and other wholly or partly artificial or digitally created sexualised images of children.

The agency has recognised a variety[4] of ways in which AI is used to create this material. These include generating images or videos that contain real children, and using deepfake technologies, such as de-aging[5] or misusing a person's innocent images (or audio or video), to generate offending content.

Deepfakes[6] refer to hyper-realistic multimedia content generated using AI techniques and algorithms. This means any given material could be partially or completely fake.

The Department of Homeland Security has also found guides on how to use AI to generate child sexual exploitation material on the dark web[7].

The child safety technology company Thorn[8] has also identified a range of ways AI is used in creating this material. It noted in a report[9] that AI can impede victim identification. It can also create new ways to victimise and revictimise children.

Concerningly, the ease with which the technology can be used helps generate more demand. Criminals can then share information about how to make this material (as the Department of Homeland Security found), further proliferating the abuse.

How common is it?

In 2023, an Internet Watch Foundation investigation revealed alarming statistics[10]. Within a month, a dark web forum hosted 20,254 AI-generated images. Analysts assessed that 11,108 of these images were most likely criminal. Using UK laws, they identified 2,562 that satisfied the legal requirements for child sexual exploitation material. A further 416 were criminally prohibited images.

Similarly, the Australian Centre to Counter Child Exploitation, set up in 2018, received more than 49,500 reports[11] of child sexual exploitation material in the 2023–2024 financial year, an increase of about 9,300 over the previous year.

About 90% of deepfake materials[12] online are believed to be explicit. While we don't know exactly how many involve children, the statistics above suggest many do.

[Image: a defocused computer screen with sexually explicit imagery.] Australia has recorded thousands of reports of child sexual exploitation. Shutterstock[13]

These data highlight the rapid proliferation of AI in producing realistic and damaging child sexual exploitation material that is difficult to distinguish from genuine images.

This has become a significant national concern. The issue was particularly highlighted during the COVID pandemic when there was a marked increase in the production and distribution of exploitation material.

This trend has prompted an inquiry and a subsequent submission[14] to the Parliamentary Joint Committee on Law Enforcement by the Cyber Security Cooperative Research Centre. As AI technologies become even more advanced and accessible, the issue will only get worse.

Detective Superintendent Frank Rayner from the research centre has said[15]:

the tools that people can access online to create and modify using AI are expanding and they’re becoming more sophisticated, as well. You can jump onto a web browser and enter your prompts in and do text-to-image or text-to-video and have a result in minutes.

Making policing harder

Traditional methods of identifying child sexual exploitation material, which rely on recognising known images and tracking their circulation, are inadequate[16] in the face of AI’s ability to rapidly generate new, unique content.

Moreover, the growing realism of AI-generated exploitation material is adding to the workload of the victim identification unit of the Australian Federal Police. Federal Police Commander Helen Schneider has said[17]:

it’s sometimes difficult to discern fact from fiction and therefore we can potentially waste resources looking at images that don’t actually contain real child victims. It means there are victims out there that remain in harmful situations for longer.

However, emerging strategies[18] are being developed to address these challenges.

One promising approach involves leveraging AI technology itself[19] to combat AI-generated content. Machine learning algorithms can be trained to detect subtle anomalies and patterns specific to AI-generated images, such as inconsistencies in lighting, texture or facial features the human eye might miss.

AI technology can also be used to detect exploitation material, including content[20] that was previously hidden. This is done by gathering large data sets from across the internet, which are then assessed by experts.

Collaboration is key

According to Thorn[21], any response to the use of AI in child sexual exploitation material should involve AI developers and providers, data hosting platforms, social platforms and search engines. Working together would help minimise the possibility of generative AI being further misused.

In 2024, major technology companies such as Google, Meta and Amazon came together to form an alliance to fight the use of AI for such abusive material. The chief executives of the major social media companies[22] also faced a US Senate committee on how they are preventing online child sexual exploitation and the use of AI to create these images.

The collaboration between technology companies and law enforcement is essential in the fight against the further proliferation of this material. By leveraging their technological capabilities and working together proactively, they can address this serious national concern more effectively than working on their own.

References

  1. ^ produce images (www.dhs.gov)
  2. ^ third-largest market (cybersecuritycrc.org.au)
  3. ^ Department of Homeland Security (www.dhs.gov)
  4. ^ a variety (www.sciencedirect.com)
  5. ^ de-aging (www.respeecher.com)
  6. ^ Deepfakes (asistdl.onlinelibrary.wiley.com)
  7. ^ dark web (theconversation.com)
  8. ^ Thorn (www.thorn.org)
  9. ^ report (www.nist.gov)
  10. ^ alarming statistics (www.iwf.org.uk)
  11. ^ more than 49,500 reports (www.abc.net.au)
  12. ^ 90% of deepfake materials (www.abc.net.au)
  13. ^ Shutterstock (www.shutterstock.com)
  14. ^ submission (cybersecuritycrc.org.au)
  15. ^ has said (www.abc.net.au)
  16. ^ are inadequate (www.abc.net.au)
  17. ^ said (www.abc.net.au)
  18. ^ emerging strategies (stacks.stanford.edu)
  19. ^ leveraging AI technology itself (www.thorn.org)
  20. ^ content (www.sciencedirect.com)
  21. ^ Thorn (www.thorn.org)
  22. ^ chief executives of the major social media companies (time.com)

Read more https://theconversation.com/deepfake-ai-or-real-its-getting-harder-for-police-to-protect-children-from-sexual-exploitation-online-232820
