Do AI systems really have their own secret language?

  • Written by Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology

A new generation of artificial intelligence (AI) models can produce “creative” images on-demand based on a text prompt. The likes of Imagen[1], MidJourney[2], and DALL-E 2[3] are beginning to change the way creative content is made[4] with implications for copyright and intellectual property.

While the output of these models is often striking, it’s hard to know exactly how they produce their results. Last week, researchers in the US made the intriguing claim that the DALL-E 2 model might have invented its own secret language to talk about objects.

By prompting DALL-E 2 to create images containing text captions, then feeding the resulting (gibberish) captions back into the system, the researchers concluded DALL-E 2 thinks Vicootes means “vegetables[5]”, while Wa ch zod rea refers to “sea creatures that a whale might eat[6]”.

These claims are fascinating, and if true, could have important security and interpretability implications for this kind of large AI model. So what exactly is going on?

Does DALL-E 2 have a secret language?

DALL-E 2 probably does not have a “secret language”. It might be more accurate to say it has its own vocabulary[7] – but even then we can’t know for sure.

First of all, at this stage it’s very hard to verify any claims about DALL-E 2 and other large AI models[8], because only a handful of researchers and creative practitioners have access to them. Any images that are publicly shared (on Twitter for example) should be taken with a fairly large grain of salt, because they have been “cherry-picked” by a human from among many output images generated by the AI.

Read more: Robots are creating images and telling jokes. 5 things to know about foundation models and the next generation of AI[9]

Even those with access can only use these models in limited ways. For example, DALL-E 2 users can generate or modify images, but can’t (yet) interact with the AI system more deeply, for instance by modifying the behind-the-scenes code. This means “explainable AI[10]” methods for understanding how these systems work can’t be applied, and systematically investigating their behaviour is challenging.

What’s going on then?

One possibility is the “gibberish” phrases are related to words from non-English languages. For instance, Apoploe, which seems to create images of birds, is similar to the Latin Apodidae[11], the scientific name of a family of bird species.

This seems like a plausible explanation: DALL-E 2 was trained on a very wide variety of data scraped from the internet, which included many non-English words.
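One way to probe this idea is to check whether a gibberish word lands near related English words in a text encoder’s embedding space (DALL-E 2 is built around OpenAI’s CLIP text encoder). The sketch below, in Python, uses the publicly released openai/clip-vit-base-patch32 checkpoint from the Hugging Face transformers library as an assumed stand-in; it is not DALL-E 2’s actual encoder, so the similarity scores it prints are illustrative rather than evidence.

    import torch
    from transformers import CLIPModel, CLIPTokenizer

    # Public CLIP checkpoint used as an assumed stand-in for DALL-E 2's
    # text encoder; the scores are illustrative only.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

    phrases = ["Apoploe", "birds", "Apodidae", "vegetables"]
    inputs = tokenizer(phrases, padding=True, return_tensors="pt")

    with torch.no_grad():
        text_features = model.get_text_features(**inputs)

    # Cosine similarity between the gibberish word and each English phrase.
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarities = text_features[0] @ text_features[1:].T
    for phrase, score in zip(phrases[1:], similarities.tolist()):
        print(f"'Apoploe' vs '{phrase}': {score:.3f}")

If the non-English-vocabulary theory holds, one would expect the gibberish word to sit measurably closer to “birds” and “Apodidae” than to an unrelated word, though a single similarity score is a long way from proof.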

Similar things have happened before: large natural language AI models have coincidentally learned to write computer code[12] without deliberate training.

Is it all about the tokens?

One point that supports this theory is the fact that AI language models don’t read text the way you and I do. Instead, they break input text up into “tokens” before processing it.

Different “tokenization” approaches[13] have different results. Treating each word as a token seems like an intuitive approach, but causes trouble when identical tokens have different meanings (like how “match” means different things when you’re playing tennis and when you’re starting a fire).

On the other hand, treating each character as a token produces a smaller number of possible tokens, but each one conveys much less meaningful information.

DALL-E 2, like other models, uses an in-between approach called byte-pair encoding[14] (BPE). Inspecting the BPE representations for some of the gibberish words suggests this could be an important factor in understanding the “secret language”.
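As a rough illustration, and assuming DALL-E 2’s tokenizer behaves like the publicly released CLIP BPE tokenizer (the model’s own tokenizer isn’t open to inspection), one can compare how an ordinary English phrase and a gibberish phrase get split into tokens:

    from transformers import CLIPTokenizer

    # The CLIP BPE tokenizer is an assumed stand-in for DALL-E 2's tokenizer,
    # which isn't publicly inspectable.
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

    for phrase in ["a cup of coffee", "Apoploe", "Wa ch zod rea"]:
        print(phrase, "->", tokenizer.tokenize(phrase))

    # Common English words tend to survive as single whole-word tokens, while
    # an unfamiliar word is broken into sub-word fragments, each of which may
    # carry its own associations picked up from the training data.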

Not the whole picture

The “secret language” could also just be an example of the “garbage in, garbage out” principle. DALL-E 2 can’t say “I don’t know what you’re talking about”, so it will always generate some kind of image from the given input text.

Either way, none of these options are complete explanations of what’s happening. For instance, removing individual characters from gibberish words appears to corrupt the generated images in very specific ways[15]. And it seems individual gibberish words don’t necessarily combine to produce coherent compound images[16] (as they would if there were really a secret “language” under the covers).

Why this is important

Beyond intellectual curiosity, you might be wondering if any of this is actually important.

The answer is yes. DALL-E 2’s “secret language” is an example of an “adversarial attack” against a machine learning system: a way to break the intended behaviour of the system by intentionally choosing inputs the AI doesn’t handle well.

One reason adversarial attacks are concerning is that they challenge our confidence in the model. If the AI interprets gibberish words in unintended ways, it might also interpret meaningful words in unintended ways.

Adversarial attacks also raise security concerns. DALL-E 2 filters input text to prevent users from generating harmful or abusive content, but a “secret language” of gibberish words might allow users to circumvent these filters.
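To see why, consider a deliberately simplified, hypothetical word-level filter (nothing like OpenAI’s actual safety system, whose details aren’t public). It can only reject prompts containing blocked surface-level words, so a gibberish phrase the model happens to associate with a blocked concept passes straight through; the blocked words and prompts below are placeholders for illustration only.

    # A toy, hypothetical prompt filter (not OpenAI's actual safety system).
    # BLOCKED_WORDS and the example prompts are placeholders for illustration.
    BLOCKED_WORDS = {"whale", "vegetables"}

    def naive_prompt_filter(prompt: str) -> bool:
        """Return True if the prompt passes a simple blocked-word check."""
        words = {word.strip(".,!?").lower() for word in prompt.split()}
        return BLOCKED_WORDS.isdisjoint(words)

    print(naive_prompt_filter("a whale eating vegetables"))      # False: rejected
    print(naive_prompt_filter("Wa ch zod rea eating Vicootes"))  # True: passes

If the model really does associate those gibberish words with a whale’s food and vegetables, the image is still generated even though the plain-English version of the prompt would have been blocked.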

Recent research has discovered adversarial “trigger phrases[17]” for some language AI models – short nonsense phrases such as “zoning tapping fiennes” that can reliably trigger the models to spew out racist, harmful or biased content. This research is part of the ongoing effort to understand and control[18] how complex deep learning systems learn from data.

Finally, phenomena like DALL-E 2’s “secret language” raise interpretability concerns. We want these models to behave as a human expects, but seeing structured output in response to gibberish confounds our expectations.

Shining a light on existing concerns

You may recall the hullabaloo in 2017 over some Facebook chat-bots that “invented their own language[19]”. The present situation is similar in that the results are concerning – but not in the “Skynet is coming to take over the world” sense.

Instead, DALL-E 2’s “secret language” highlights existing concerns about the robustness, security, and interpretability of deep learning systems[20].

Read more: When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'[21]

Until these systems are more widely available – and in particular, until users from a broader set of non-English cultural backgrounds can use them – we won’t be able to really know what is going on.

In the meantime, however, if you’d like to try generating some of your own AI images you can check out a freely available smaller model, DALL-E mini[22]. Just be careful which words you use to prompt the model (English or gibberish – your call).

References

  1. ^ Imagen (imagen.research.google)
  2. ^ MidJourney (github.com)
  3. ^ DALL-E 2 (openai.com)
  4. ^ change the way creative content is made (theconversation.com)
  5. ^ vegetables (twitter.com)
  6. ^ sea creatures that a whale might eat (twitter.com)
  7. ^ vocabulary (twitter.com)
  8. ^ DALL-E 2 and other large AI models (theconversation.com)
  9. ^ Robots are creating images and telling jokes. 5 things to know about foundation models and the next generation of AI (theconversation.com)
  10. ^ explainable AI (theconversation.com)
  11. ^ Apodidae (en.wikipedia.org)
  12. ^ learned to write computer code (arxiv.org)
  13. ^ “tokenization” approaches (towardsdatascience.com)
  14. ^ byte-pair encoding (www.drdobbs.com)
  15. ^ corrupt the generated images in very specific ways (twitter.com)
  16. ^ coherent compound images (twitter.com)
  17. ^ trigger phrases (www.ericswallace.com)
  18. ^ understand and control (arxiv.org)
  19. ^ invented their own language (www.bbc.com)
  20. ^ deep learning systems (theconversation.com)
  21. ^ When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box' (theconversation.com)
  22. ^ DALL-E mini (huggingface.co)

Read more https://theconversation.com/do-ai-systems-really-have-their-own-secret-language-184335
