The Times Australia

Can human moderators ever really rein in harmful online content? New research says yes

  • Written by Marian-Andrei Rizoiu, Senior Lecturer in Behavioral Data Science, University of Technology Sydney

Social media platforms have become the “digital town squares” of our time, enabling communication and the exchange of ideas on a global scale. However, the unregulated nature of these platforms has allowed the proliferation of harmful content such as misinformation, disinformation and hate speech.

Regulating the online world has proven difficult, but one promising avenue is suggested by the European Union’s Digital Services Act, passed in November 2022. This legislation mandates “trusted flaggers” to identify certain kinds of problematic content to platforms, which must then remove it within 24 hours.

Will it work, given the fast pace and complex viral dynamics of social media environments? To find out, we modelled the effect of the new rule in research[1] published in the Proceedings of the National Academy of Sciences.

Our results show this approach can indeed reduce the spread of harmful content. We also offer insights into how the rules can be implemented most effectively.

Understanding the spread of harmful content

We used a mathematical model of information spread to analyse how harmful content is disseminated through social networks.

In the model, each harmful post is treated as a “self-exciting point process[2]”. This means it draws more people into the discussion over time and generates further harmful posts, similar to a word-of-mouth process.

The intensity of a post’s self-propagation decreases over time. However, if left unchecked, its “offspring” can generate more offspring, leading to exponential growth.

Social media posts spread online through a process much like word of mouth. Robynne Hu / Unsplash[3]

The potential for harm reduction

In our study, we used two key measures to assess the effectiveness of the kind of moderation set out in the Digital Services Act: potential harm and content half-life.

A post’s potential harm represents the number of harmful offspring it generates. Content half-life denotes the amount of time required for half of all the post’s offspring to be generated.
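Under an exponential-kernel assumption (used here purely for illustration; the kernel in the study itself may have a different shape), the half-life translates directly into how quickly a post's offspring accumulate:

```python
import math

def offspring_fraction_by(t, half_life):
    """Fraction of a post's direct offspring generated within t hours,
    assuming an exponential self-excitation kernel whose decay rate is
    chosen so that the content half-life comes out as given."""
    decay = math.log(2) / half_life
    return 1.0 - math.exp(-decay * t)

# By construction, half of all offspring arrive within one half-life:
print(offspring_fraction_by(6.0, half_life=6.0))   # 0.5
```

A platform with a 6-hour half-life therefore sees three quarters of a post's offspring within 12 hours, which is why reaction time matters so much on fast-paced platforms.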

We found moderation by the rules of the Digital Services Act can effectively reduce harm, even on platforms with short content half-lives, such as X (formerly known as Twitter). While faster moderation is always more effective, we found that moderating even after 24 hours could still reduce the number of harmful offspring by up to 50%.

The role of reaction time and harm reduction

The reaction time within which moderation remains effective grows with both the content half-life and the potential harm. To put it another way, for content that is longer-lived and generates large numbers of harmful offspring, intervening even relatively late can still prevent many harmful subsequent posts.

This suggests the approach of the Digital Services Act can effectively combat harmful content, even on fast-paced platforms like X.

We also found the amount of harm reduction increases for content with greater potential harm. While apparently counterintuitive, this indicates moderation is most effective when it stops offspring from generating offspring of their own, that is, when it breaks the word-of-mouth cycle.

Making the most of moderation efforts

Prior research has shown tools based on artificial intelligence struggle to detect harmful online content. The authors of such content are aware of the detection tools and adapt their language to avoid detection.

Read more: Can ideology-detecting algorithms catch online extremism before it takes hold?[4]

The Digital Services Act moderation approach relies on manual tagging of posts by “trusted flaggers”, who will have limited time and resources.

To make the most of their limited time, flaggers should focus on content with high potential harm, for which our research shows moderation is most effective. We estimate a post's potential harm at its creation by extrapolating its expected number of offspring from previously observed discussions.

Implementing the Digital Services Act

Social media platforms already employ content moderation teams, and our research suggests the major platforms at least already have enough staff to enforce the Digital Services Act legislation. There are, however, questions about the cultural awareness of the existing staff, as some of these teams are based in different countries from the majority of the users whose posts they moderate.

The success of the legislation will lie in appointing trusted flaggers with sufficient cultural and language knowledge, developing practical reporting tools for harmful content, and ensuring timely moderation.

Our study’s framework provides policymakers with valuable guidance for designing content moderation mechanisms that prioritise efforts and reaction times effectively.

A healthier and safer digital public square

As social media platforms continue to shape public discourse, addressing the challenges posed by harmful content is crucial. Our research on the effectiveness of moderating harmful online content offers valuable insights for policymakers.

By understanding the dynamics of content spread, optimising moderation efforts, and implementing regulations like the Digital Services Act, we can strive for a healthier and safer digital public square where harmful content is mitigated, and constructive dialogue thrives.

Read more: The 'digital town square'? What does it mean when billionaires own the online spaces where we gather?[5]

Read more https://theconversation.com/can-human-moderators-ever-really-rein-in-harmful-online-content-new-research-says-yes-209882
