AI tools are generating convincing misinformation. Engaging with them means being on high alert

  • Written by Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

AI tools can help us create content, learn about the world and (perhaps) eliminate the more mundane tasks[1] in life – but they aren’t perfect. They’ve been shown to hallucinate information, use other people’s work without consent, and embed social conventions, including apologies, to gain users’ trust.

For example, certain AI chatbots, such as “companion” bots, are often designed to give empathetic responses[2]. This makes them seem particularly believable. Despite our awe and wonder, we must be critical consumers of these tools – or risk being misled.

Read more: I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions[3]

Sam Altman, the CEO of OpenAI (the company that gave us the ChatGPT chatbot), has said he[4] is “worried that these models could be used for large-scale disinformation”. As someone who studies how humans use technology to access information, so am I.

A number of fake images of former US President Donald Trump being arrested have taken the internet by storm. Elliot Higgins/Midjourney[5]

Misinformation will grow with back-pocket AI

Machine-learning tools[6] use algorithms to complete certain tasks. They “learn” as they access more data and refine their responses accordingly. For example, Netflix uses AI to track the shows you like and suggest others for future viewing. The more cooking shows you watch, the more cooking shows Netflix recommends.
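For illustration only, here is a minimal sketch (in Python, with invented titles and genres – not Netflix’s actual system) of how a recommender can “learn” a preference profile from viewing history and rank suggestions accordingly.

```python
from collections import Counter

# Hypothetical viewing history and catalogue – invented for illustration.
watch_history = [
    ("Chef's Table", "cooking"),
    ("The Great British Bake Off", "cooking"),
    ("Planet Earth", "documentary"),
]
catalogue = {
    "Salt Fat Acid Heat": "cooking",
    "Our Planet": "documentary",
    "Formula 1: Drive to Survive": "sport",
}

# "Learn" a simple preference profile by counting the genres already watched.
genre_counts = Counter(genre for _, genre in watch_history)

# Recommend unwatched titles, ranked by how often their genre appears in the history.
watched_titles = {title for title, _ in watch_history}
recommendations = sorted(
    (title for title in catalogue if title not in watched_titles),
    key=lambda title: genre_counts[catalogue[title]],
    reverse=True,
)

print(recommendations)  # cooking shows rise to the top as more are watched
```

The more cooking shows appear in the history, the higher cooking titles rank – the same feedback loop, in miniature, that drives commercial recommenders.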

While many of us are exploring and having fun with new AI tools, experts emphasise[7] these tools are only as good as their underlying data – which we know to be flawed, biased and sometimes even designed to deceive. Where spelling errors once alerted us to email scams, or extra fingers flagged AI-generated images, system enhancements make it harder to tell fact from fiction.

These concerns are heightened by the growing integration of AI in productivity apps. Microsoft, Google[8] and Adobe[9] have announced AI tools will be introduced to a number of their services including Google Docs, Gmail, Word, PowerPoint, Excel, Photoshop and Illustrator.

Creating fake photos and deep-fake videos no longer requires specialist skills and equipment.

Running tests

I ran an experiment with the Dall-E 2 image generator to test whether it could produce a realistic image of a cat that resembled my own. I started with a prompt for “a fluffy white cat with a poofy tail and orange eyes lounging on a grey sofa”.

The result wasn’t quite right. The fur was matted, the nose wasn’t fully formed, and the eyes were cloudy and askew. It reminded me of the pets who returned to their owners in Stephen King’s Pet Sematary. Yet the design flaws made it easier for me to see the image for what it was: a system-generated output.

Image generated by Dall-E 2 using the prompt: ‘a fluffy white cat with a poofy tail and orange eyes lounging on a grey sofa’.

I then requested the same cat “sleeping on its back on a hardwood floor”. The new image had few visible markers distinguishing the generated cat from my own. Almost anyone could be misled by such an image.

Image generated by Dall-E 2 using the prompt: ‘a fluffy white cat with a poofy tail sleeping on its back on a hardwood floor’.
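The images above came from Dall-E 2’s prompt interface. For readers who want to reproduce a similar test programmatically, here is a minimal sketch assuming the openai Python library as it existed around the time of writing and your own API key; it is illustrative only, not the exact workflow used here.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder – supply your own key

# Request one image from DALL-E using the same prompt as above.
response = openai.Image.create(
    prompt="a fluffy white cat with a poofy tail and orange eyes lounging on a grey sofa",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # URL of the generated image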

I then used ChatGPT to turn the lens on myself, asking: “What is Lisa Given best known for?” It started well, but then went on to list a number of publications that aren’t mine. My trust in it ended there.

Text generated by ChatGPT using the prompt: ‘What is Lisa Given best known for?’

The chatbot started hallucinating, attributing others’ works to me. The book The Digital Academic: Critical Perspectives on Digital Technologies in Higher Education[10] does exist, but I didn’t write it. I also didn’t write Digital Storytelling in Health and Social Policy[11]. Nor am I the editor of Digital Humanities Quarterly[12].

When I challenged ChatGPT, its response was deeply apologetic, yet it produced more errors. I didn’t write any of the books it then listed, nor did I edit the journals. While I wrote one chapter of Information and Emotion[13], I didn’t co-edit the book and neither did Paul Dourish. My most popular book, Looking for Information[14], was omitted completely.

Following the prompt ‘Hmm… I don’t think Lisa Given wrote those books. Are you sure?’, ChatGPT made yet more errors.
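The exchanges above took place in ChatGPT’s web interface. A similar probe can be scripted – the sketch below assumes the openai Python library of the same era and is purely illustrative; whatever it returns still has to be fact-checked.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder – supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Lisa Given best known for?"}],
)

# The reply reads fluently, but every claim in it still needs to be verified.
print(response["choices"][0]["message"]["content"])
```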

Fact-checking is our main defence

As my coauthors and I explain in the latest edition of Looking for Information[15], the sharing of misinformation has a long history. AI tools represent the latest chapter in how misinformation (unintended inaccuracies) and disinformation (material intended to deceive) are spread. They allow this to happen quicker, on a grander scale and with the technology available in more people’s hands.

Last week, media outlets reported a concerning security flaw[16] in the Voiceprint feature used by Centrelink and the Australian Tax Office. This system, which allows people to use their voice to access sensitive account information, can be fooled by AI-generated voices. Scammers have also used fake voices[17] to target people on WhatsApp by impersonating their loved ones.

Advanced AI tools allow for the democratisation of knowledge access and creation, but they do have a price. We can’t always consult experts, so we have to make informed judgments ourselves. This is where critical thinking and verification skills are vital.

These tips can help you navigate an AI-rich information landscape.

1. Ask questions and verify with independent sources

When using an AI text generator, always check source material mentioned in the output. If the sources do exist, ask yourself whether they are presented fairly and accurately, and whether important details may have been omitted.
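As one way to do this, a cited book or journal article can be checked against an open bibliographic index such as Crossref. The sketch below (Python, using the requests library against Crossref’s public REST API) only confirms that a similarly titled record exists; the authors and details still need to be compared by eye.

```python
import requests

def lookup_title(title: str, rows: int = 3):
    """Search Crossref for records whose titles resemble the quoted title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        ((item.get("title") or ["(no title)"])[0],
         [a.get("family", "") for a in item.get("author", [])])
        for item in items
    ]

# Example: check a title an AI tool has attributed to someone.
for found_title, authors in lookup_title("Looking for Information"):
    print(found_title, "-", ", ".join(authors))
```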

2. Be sceptical of content you come across

If you come across an image you suspect might be AI-generated, consider if it seems too “perfect”[18] to be real. Or perhaps a particular detail does not match the rest of the image (this is often a giveaway). Analyse the textures, details, colouring, shadows and, importantly, the context. Running a reverse image search can also be useful to verify sources.
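Beyond a reverse image search, another quick and imperfect check – not one discussed above, but worth knowing – is to look at the file’s metadata: AI-generated or heavily re-saved images typically carry no camera EXIF data, though its absence proves nothing on its own. A sketch using the Pillow library, with a hypothetical file name:

```python
from PIL import Image, ExifTags

def print_exif(path: str):
    """Print any EXIF metadata embedded in an image file."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF data found – common for AI-generated or re-saved images.")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{tag}: {value}")

print_exif("suspect_photo.jpg")  # hypothetical file name
```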

If it is a written text you’re unsure about, check for factual errors and ask yourself whether the writing style and content match what you would expect from the claimed source.

3. Discuss AI openly in your circles

An easy way to prevent sharing (or inadvertently creating) AI-driven misinformation is to ensure you and those around you use these tools responsibly. If you or an organisation you work with is considering adopting AI tools, develop a plan for how potential inaccuracies will be managed and how you will be transparent about the tools’ use in the materials you produce.

Read more: AI image generation is advancing at astronomical speeds. Can we still tell if a picture is fake?[19]

References

  1. ^ mundane tasks (theconversation.com)
  2. ^ empathetic responses (www.theguardian.com)
  3. ^ I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions (theconversation.com)
  4. ^ has said he (www.theguardian.com)
  5. ^ Elliot Higgins/Midjourney (petapixel.com)
  6. ^ tools (www.forbes.com)
  7. ^ emphasise (dl.acm.org)
  8. ^ Microsoft, Google (theconversation.com)
  9. ^ Adobe (mumbrella.com.au)
  10. ^ The Digital Academic: Critical Perspectives on Digital Technologies in Higher Education (www.routledge.com)
  11. ^ Digital Storytelling in Health and Social Policy (www.routledge.com)
  12. ^ Digital Humanities Quarterly (www.digitalhumanities.org)
  13. ^ Information and Emotion (books.infotoday.com)
  14. ^ Looking for Information (books.emeraldinsight.com)
  15. ^ Looking for Information (books.emeraldinsight.com)
  16. ^ security flaw (www.theguardian.com)
  17. ^ also used fake voices (www.theage.com.au)
  18. ^ too “perfect” (arstechnica.com)
  19. ^ AI image generation is advancing at astronomical speeds. Can we still tell if a picture is fake? (theconversation.com)

Read more: https://theconversation.com/ai-tools-are-generating-convincing-misinformation-engaging-with-them-means-being-on-high-alert-202062
