The Times Australia

A new tool helps catch nasty comments – even when they’re disguised

  • Written by Johnny Chan, Lecturer, Business School, University of Auckland, Waipapa Taumata Rau



People determined to spread toxic messages online have taken to masking their words to bypass automated moderation filters.

A user might replace letters with numbers or symbols, for example, writing “Y0u’re st00pid” instead of “You’re stupid”.

Another tactic involves combining words, such as “IdiotFace”. Doing this masks the harmful intent from systems that look for individual toxic words.

Similarly, harmful terms can be altered with spaces or additional characters, such as “h a t e ” or “h@te”, effectively slipping through keyword-based filters.

While the intent remains harmful, traditional moderation tools often overlook such messages. This leaves users — particularly vulnerable groups — exposed to their negative impact.
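To see why keyword-based moderation fails here, consider a minimal illustration (this is a hypothetical filter, not any platform's actual system): a filter that checks comments against a fixed list of blocked words catches only exact matches, so each of the disguises above slips through.

```python
# Hypothetical keyword-based filter for illustration only.
# It flags a comment only when a blocked word appears verbatim.
TOXIC_WORDS = {"stupid", "hate", "idiot"}

def naive_filter(text: str) -> bool:
    """Return True if any blocked word appears as an exact token."""
    words = text.lower().split()
    return any(w.strip(".,!?") in TOXIC_WORDS for w in words)

print(naive_filter("You're stupid"))   # True: exact match is caught
print(naive_filter("Y0u're st00pid"))  # False: digit substitution slips through
print(naive_filter("h a t e"))         # False: spacing hides the word
print(naive_filter("IdiotFace"))       # False: the compound masks the word
```

The harmful intent is identical in all four messages, but only the undisguised one is flagged.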

To address this, we have developed a novel pre-processing technique[1] designed to help moderation tools more effectively handle the subtle complexities of hidden toxicity.

An intelligent assistant

Our tool works in conjunction with existing moderation. It acts as an intelligent assistant, preparing content for deeper and more accurate evaluation by restructuring and refining input text.

By addressing common tricks users employ to disguise harmful intent, it ensures moderation systems are more effective. The tool performs three key functions.

  1. It first simplifies the text. Irrelevant elements, such as excessive punctuation or extraneous characters, are removed to make text straightforward and ready for evaluation.

  2. It then standardises what is written. Variations in spelling, phrasing and grammar are resolved. This includes interpreting deliberate misspellings (“h8te” for “hate”).

  3. Finally, it looks for patterns. Recurring strategies such as breaking up toxic words (“I d i o t”), or embedding them within benign phrases, are identified and normalised to reveal the underlying intent.

These steps can break apart compound words like “IdiotFace” or normalise modified phrases like “Y0u’re st00pid”. This makes harmful content visible to traditional filters.
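The three steps can be sketched in code. This is a deliberately simplified illustration under our own assumptions (the substitution table, regular expressions and function names are ours, not the published technique, which handles far more cases); it shows only the general shape of simplify, standardise and pattern-normalise.

```python
import re

# Assumed character-substitution table (illustrative, not exhaustive):
# 0->o, 1->i, 3->e, 4->a, 5->s, 7->t, @->a, $->s
LEET = str.maketrans("01345" + "7@$", "oieas" + "tas")

def split_compounds(text: str) -> str:
    """Split camel-case compounds: 'IdiotFace' -> 'Idiot Face'."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)

def simplify(text: str) -> str:
    """Step 1: strip runs of excess punctuation and collapse whitespace."""
    text = re.sub(r"[!?.,]{2,}", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def standardise(text: str) -> str:
    """Step 2: lower-case and resolve common character substitutions."""
    return text.lower().translate(LEET)

def unmask_spacing(text: str) -> str:
    """Step 3: rejoin words split into single letters: 'h a t e' -> 'hate'."""
    return re.sub(r"\b(?:\w ){2,}\w\b",
                  lambda m: m.group(0).replace(" ", ""), text)

def preprocess(text: str) -> str:
    """Chain the steps, then hand the result to an existing filter."""
    return unmask_spacing(standardise(simplify(split_compounds(text))))

print(preprocess("h @ t e"))        # hate
print(preprocess("IdiotFace"))      # idiot face
print(preprocess("Y0u're stup1d"))  # you're stupid
```

After pre-processing, the disguised words are restored to forms an ordinary keyword filter already recognises; a production system would also need fuzzy matching for variants a fixed table cannot resolve.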

Importantly, our work is not about reinventing the wheel but ensuring the existing wheel functions as effectively as it should, even when faced with disguised toxic messages.

Our new tool cleans up toxic comments that have been hidden behind misspellings and extra characters. ClarkandCompany/Getty Images[2]

Catching subtle forms of toxicity

The applications of this tool extend across a wide range of online environments. For social media platforms, it enhances the ability to detect harmful messages, creating a safer space for users. This is particularly important for protecting younger audiences, who may be more vulnerable to online abuse.

By catching subtle forms of toxicity, the tool helps to prevent harmful behaviours like bullying from persisting unchecked.

Businesses can also use this technology to safeguard their online presence. Negative campaigns or covert attacks on brands often employ subtle and disguised messaging to avoid detection. By processing such content before it is moderated, the tool ensures that businesses can respond swiftly to any reputational threats.

Additionally, policymakers and organisations that monitor public discourse can benefit from this system. Hidden toxicity, particularly in polarised discussions, can undermine efforts to maintain constructive dialogue.

The tool provides a more robust way of identifying problematic content, helping to ensure debates remain respectful and productive.

Better moderation

Our tool marks an important advance in content moderation. By addressing the limitations of traditional keyword-based filters, it offers a practical solution to the persistent issue of hidden toxicity.

Importantly, it demonstrates how small but focused improvements can make a big difference in creating safer and more inclusive online environments. As digital communication continues to evolve, tools like ours will play an increasingly vital role in protecting users and fostering positive interactions.

While this research addresses the challenges of detecting hidden toxicity within text, the journey is far from over.

Future advances will likely delve deeper into the complexities of context — analysing how meaning shifts depending on conversational dynamics, cultural nuances and intent.

By building on this foundation, the next generation of content moderation systems could uncover not just what is being said but also the circumstances in which it is said, paving the way for safer and more inclusive online spaces.

References

  1. ^ novel pre-processing technique (methods-x.com)
  2. ^ ClarkandCompany/Getty Images (www.gettyimages.com.au)

Read more https://theconversation.com/unmasking-hidden-online-hate-a-new-tool-helps-catch-nasty-comments-even-when-theyre-disguised-244636
