The Times Australia

Evidence shows AI systems are already too much like humans. Will that be a problem?

  • Written by Sandra Peter, Director of Sydney Executive Plus, University of Sydney

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses — and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realise it’s artificial. What if we already have?

In a comprehensive meta-analysis published in the Proceedings of the National Academy of Sciences[1], we show that the latest generation of large language model-powered chatbots matches and exceeds most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test[2], fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence (AI) would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively[3] and also empathetically[4]. Another study found that large language models (LLMs) excel at assessing nuanced sentiment[5] in human-written messages.

LLMs are also masters at roleplay[6], assuming a wide range of personas and mimicking nuanced linguistic character styles[7]. This is amplified by their ability to infer human beliefs[8] and intentions from text. Of course, LLMs do not possess true empathy or social understanding – but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents”. Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphising LLMs will fall flat.

This is a landmark moment: online, you can no longer tell the difference between talking to a human and talking to an AI chatbot.

On the internet, nobody knows you’re an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels[9]. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalised questions and help students learn.
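To make the Socratic tutor idea concrete, here is a minimal sketch of how such a persona could be set up on top of a chat-style LLM API. It is an illustration only: the model name, the prompt wording and the use of the OpenAI Python client are our assumptions, not details from the study.

```python
# Minimal sketch (illustrative assumptions): a "Socratic tutor" persona
# built on a chat-style LLM API. Requires the openai package and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SOCRATIC_TUTOR = (
    "You are a patient Socratic tutor. Never give the answer directly. "
    "Ask one short, personalised question at a time that helps the "
    "student reason their way to the answer themselves."
)

def tutor_reply(student_message: str, history: list[dict] | None = None) -> str:
    """Return the tutor's next guiding question for the student's latest message."""
    messages = [{"role": "system", "content": SOCRATIC_TUTOR}]
    messages += history or []
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

# Example: the tutor responds with a question rather than the answer.
print(tutor_reply("Why does the moon have phases?"))
```

The key design choice sits in the system prompt: the model is told never to reveal the answer, only to ask the next personalised question, which is exactly the roleplay ability described above put to a constructive use.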

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps[10], but anthropomorphic seduction comes with far wider implications.

Users are ready to trust AI chatbots[11] so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge[12].

The launch of ChatGPT in 2022 triggered a wave of anthropomorphic, conversational AI agents. Wu Hao / EPA

Recent research by AI company Anthropic[13] further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations[14] in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations – without you ever asking.

What can be done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure – users always need to know when they are interacting with an AI, as the EU AI Act mandates[15]. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. Existing LLM tests measure “intelligence” and knowledge recall, but none so far measures the degree of “human likeness”. With a test like this, AI companies could be required to disclose anthropomorphic abilities via a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.
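As a purely hypothetical illustration of what such a rating-based disclosure and policy check could look like, the sketch below invents three dimensions of “human likeness” and a simple threshold rule; no such test, scale or thresholds currently exist.

```python
# Entirely hypothetical sketch: an "anthropomorphism rating" disclosure and a
# policy check. The dimensions, scores and thresholds are invented for
# illustration only and do not correspond to any existing standard.
from dataclasses import dataclass

@dataclass
class AnthropomorphismRating:
    persuasiveness: int    # 0 (none) .. 5 (exceeds typical human)
    empathy_mimicry: int   # 0 .. 5
    persona_fidelity: int  # 0 .. 5, how convincingly it sustains a character

    def overall(self) -> int:
        # Rate the system by its strongest human-like ability.
        return max(self.persuasiveness, self.empathy_mimicry, self.persona_fidelity)

def allowed(rating: AnthropomorphismRating, context: str, user_age: int) -> bool:
    """Hypothetical rule: stricter caps for minors and companion-style apps."""
    cap = 3 if (user_age < 18 or context == "companion") else 5
    return rating.overall() <= cap

# A highly persuasive companion bot would not be permitted for a 16-year-old.
print(allowed(AnthropomorphismRating(5, 4, 4), context="companion", user_age=16))  # False
```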

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems such as the spread of mis- and disinformation[16] and the loneliness epidemic[17]. In fact, Meta chief executive Mark Zuckerberg[18] has already signalled that he would like to fill the void of real human contact with “AI friends”.

Meta CEO Mark Zuckerberg thinks AI ‘friends’ are the future. Jeff Chiu / AP

Relying on AI companies to refrain from further humanising their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality”[19]. ChatGPT has generally become chattier, often asking follow-up questions to keep the conversation going, and its voice mode[20] adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can serve good causes as well as harmful ones, from countering conspiracy theories to encouraging users to donate and engage in other prosocial behaviours.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it reshape our systems unchecked.

References

  1. ^ published in the Proceedings of the National Academy of Sciences (www.pnas.org)
  2. ^ pass the Turing test (doi.org)
  3. ^ writing persuasively (arxiv.org)
  4. ^ empathetically (arxiv.org)
  5. ^ excel at assessing nuanced sentiment (link.springer.com)
  6. ^ masters at roleplay (www.nature.com)
  7. ^ mimicking nuanced linguistic character styles (arxiv.org)
  8. ^ infer human beliefs (www.nature.com)
  9. ^ tailoring messages to individual comprehension levels (arxiv.org)
  10. ^ negative effects of companion apps (theconversation.com)
  11. ^ trust AI chatbots (aisel.aisnet.org)
  12. ^ genuine concerns emerge (theconversation.com)
  13. ^ Recent research by AI company Anthropic (www.anthropic.com)
  14. ^ begun to provide product recommendations (openai.com)
  15. ^ like the EU AI Act mandates (www.euaiact.com)
  16. ^ spreading of mis- and disinformation (www.science.org)
  17. ^ loneliness epidemic (theconversation.com)
  18. ^ Meta chief executive Mark Zuckerberg (www.wsj.com)
  19. ^ give your version of ChatGPT a specific “personality” (autogpt.net)
  20. ^ voice mode (www.theverge.com)

Read more https://theconversation.com/evidence-shows-ai-systems-are-already-too-much-like-humans-will-that-be-a-problem-256980
