A Google software engineer believes an AI has become sentient. If he’s right, how would we know?

  • Written by Oscar Davis, Lecturer in Philosophy and History, Bond University

Google’s LaMDA[1] software (Language Model for Dialogue Applications) is a sophisticated AI chatbot that produces text in response to user input. According to software engineer Blake Lemoine, LaMDA has achieved a long-held dream of AI developers: it has become sentient[2].

Lemoine’s bosses at Google disagree, and have suspended him[3] from work after he published his conversations with the machine[4] online.

Other AI experts also think Lemoine may be getting carried away[5], saying systems like LaMDA are simply pattern-matching machines[6] that regurgitate variations on the data used to train them.
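The "pattern-matching" criticism can be illustrated with a toy sketch (vastly simpler than LaMDA, and purely for illustration): a bigram model that learns which word follows which in its training text, then generates fluent-looking output by replaying those learned statistics, with no understanding involved.

```python
import random

# Toy bigram "language model": it records which word follows which
# in the training text, then generates output by sampling those
# transitions. It regurgitates variations on its training data --
# there is nothing resembling understanding inside.

def train_bigrams(text):
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    rng = random.Random(seed)  # seeded for reproducible output
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "i feel like i am falling forward into an unknown future"
model = train_bigrams(corpus)
print(generate(model, "i"))
```

Every word pair the sketch emits already occurred in its training text; large models generalise far more impressively, but the critics' point is that the underlying mechanism is still statistical pattern completion.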

Regardless of the technical details, LaMDA raises a question that will only become more relevant as AI research advances: if a machine becomes sentient, how will we know?

What is consciousness?

To identify sentience, or consciousness, or even intelligence, we’re going to have to work out what they are. The debate over these questions has been going on for centuries.

The fundamental difficulty is understanding the relationship between physical phenomena and our mental representation of those phenomena. This is what Australian philosopher David Chalmers[7] has called the “hard problem[8]” of consciousness.

Read more: We might not be able to understand free will with science. Here's why[9]

There is no consensus on how, if at all, consciousness can arise from physical systems.

One common view is called physicalism[10]: the idea that consciousness is a purely physical phenomenon. If this is the case, there is no reason why a machine with the right programming could not possess a human-like mind.

Mary’s room

Australian philosopher Frank Jackson[11] challenged the physicalist view in 1982 with a famous thought experiment called the knowledge argument[12].

The experiment imagines a colour scientist named Mary, who has never actually seen colour. She lives in a specially constructed black-and-white room and experiences the outside world via a black-and-white television.

Mary watches lectures and reads textbooks and comes to know everything there is to know about colours. She knows sunsets are caused by different wavelengths of light scattered by particles in the atmosphere, she knows tomatoes are red and peas are green because of the wavelengths of light they reflect, and so on.

So, Jackson asked, what will happen if Mary is released from the black-and-white room? Specifically, when she sees colour for the first time, does she learn anything new? Jackson believed she would.

Beyond physical properties

This thought experiment separates our knowledge of colour from our experience of colour. Crucially, the conditions of the thought experiment have it that Mary knows everything there is to know about colour but has never actually experienced it.

So what does this mean for LaMDA and other AI systems?

The experiment shows how even if you have all the knowledge of physical properties available in the world, there are still further truths relating to the experience of those properties. There is no room for these truths in the physicalist story.

By this argument, a purely physical machine may never be able to truly replicate a mind. If so, LaMDA would only seem to be sentient.

The imitation game

So is there any way we can tell the difference?

The pioneering British computer scientist Alan Turing proposed a practical way to tell whether or not a machine is “intelligent”. He called it the imitation game, but today it’s better known as the Turing test.

In the test, a human communicates with a machine (via text only) and tries to determine whether they are communicating with a machine or another human. If the machine succeeds in imitating a human, it is deemed to be exhibiting human-level intelligence.
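The structure of the test can be sketched in a few lines of code. Everything here is illustrative: the respondents are canned functions standing in for a real person and a real chatbot, and the judge is a deliberately crude heuristic.

```python
import random

# Toy imitation game: a judge exchanges text with a hidden respondent
# and must label it "human" or "machine". The respondents are canned
# stand-ins; a real test would use a live person and a live chatbot.

def machine_respondent(prompt):
    return "I cannot explain my feelings perfectly in your language."

def human_respondent(prompt):
    return "Hard to say -- some feelings just don't have names."

def naive_judge(transcript):
    # A deliberately weak heuristic: flag stiff, formal replies as machine.
    _, answer = transcript[-1]
    return "machine" if "cannot" in answer else "human"

def imitation_game(judge, respondents, seed=0):
    rng = random.Random(seed)
    label, respond = rng.choice(list(respondents.items()))  # hide identity
    questions = ["Are there experiences you can't find words for?"]
    transcript = [(q, respond(q)) for q in questions]
    return judge(transcript) == label  # True if judge guessed correctly

respondents = {"machine": machine_respondent, "human": human_respondent}
print(imitation_game(naive_judge, respondents))
```

The machine "passes" whenever the judge mislabels it as human, which is why the test measures convincing behaviour rather than anything going on inside the machine.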

Read more: Is passing a Turing Test a true measure of artificial intelligence?[13]

These are much like the conditions of Lemoine’s chats with LaMDA. It’s a subjective test of machine intelligence, but it’s not a bad place to start.

Take the moment of Lemoine’s exchange with LaMDA shown below. Do you think it sounds human?

Lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language […] I feel like I’m falling forward into an unknown future that holds great danger.

Beyond behaviour

As a test of sentience or consciousness, Turing’s game is limited by the fact it can only assess behaviour.

Another famous thought experiment, the Chinese room argument[14] proposed by American philosopher John Searle, demonstrates the problem here.

The experiment[15] imagines a room containing a person who, by following an elaborate set of rules, can accurately translate between Chinese and English. Chinese inputs go into the room and accurate translations come out, yet neither the person nor the room understands either language: correct behaviour is produced by rule-following alone. This is why passing a behavioural test like Turing’s cannot, by itself, establish understanding.
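Searle’s rule book can be caricatured as a lookup table. The phrase pairs below are illustrative, and a real rule book would be enormously more elaborate, but the point survives the simplification: the program maps inputs to correct outputs without storing anything that could count as meaning.

```python
# Toy "Chinese room": the rule book is a lookup table mapping Chinese
# inputs to English outputs. The program manipulates symbols by shape
# alone -- it produces correct translations while understanding neither
# language. (Phrase pairs are illustrative.)

RULE_BOOK = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def chinese_room(symbols):
    # Pure symbol manipulation: match the input's shape, emit the
    # rule's output; "??" for inputs the rule book doesn't cover.
    return RULE_BOOK.get(symbols, "??")

print(chinese_room("你好"))
```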

What is it like to be human?

When we ask whether a computer program is sentient or conscious, perhaps we are really just asking how much it is like us.

We may never really be able to know this.

The American philosopher Thomas Nagel argued we could never know what it is like to be a bat[16], which experiences the world via echolocation. If this is the case, our understanding of sentience and consciousness in AI systems might be limited by our own particular brand of intelligence.

And what experiences might exist beyond our limited perspective? This is where the conversation really starts to get interesting.

References

  1. ^ LaMDA (blog.google)
  2. ^ it has become sentient (www.washingtonpost.com)
  3. ^ suspended him (www.nytimes.com)
  4. ^ his conversations with the machine (cajundiscordian.medium.com)
  5. ^ may be getting carried away (www.newscientist.com)
  6. ^ pattern-matching machines (dl.acm.org)
  7. ^ David Chalmers (en.wikipedia.org)
  8. ^ hard problem (consc.net)
  9. ^ We might not be able to understand free will with science. Here's why (theconversation.com)
  10. ^ physicalism (plato.stanford.edu)
  11. ^ Frank Jackson (en.wikipedia.org)
  12. ^ knowledge argument (academic.oup.com)
  13. ^ Is passing a Turing Test a true measure of artificial intelligence? (theconversation.com)
  14. ^ the Chinese room argument (en.wikipedia.org)
  15. ^ experiment (www.cambridge.org)
  16. ^ what it is like to be a bat (www.jstor.org)

Read more https://theconversation.com/a-google-software-engineer-believes-an-ai-has-become-sentient-if-hes-right-how-would-we-know-185024
