The Times Australia
The Times World News


Can AI read our minds? Probably not, but that doesn’t mean we shouldn’t be worried

  • Written by Sam Baron, Associate Professor, Philosophy of Science, The University of Melbourne

Earlier this year, Neuralink implanted a chip[1] inside the brain of 29-year-old US man Noland Arbaugh, who is paralysed from the shoulders down. The chip has enabled Arbaugh to move a mouse pointer on a screen just by imagining it moving.

In May 2023, US researchers also announced[2] a non-invasive way to “decode” the words someone is thinking from brain scans in combination with generative AI. A similar project sparked headlines[3] about a “mind-reading AI hat”.

Can neural implants and generative AI really “read minds”? Is the day coming when computers can spit out accurate real-time transcripts of our thoughts for anyone to read?

Such technology might have some benefits – particularly for advertisers looking for new sources of customer targeting data – but it would demolish the last bastion of privacy: the seclusion of our own minds. Before we panic, though, we should stop to ask: is what neural implants and generative AI can do really “reading minds”?

The brain and the mind

As far as we know, conscious experience arises from the activity of the brain. This means any conscious mental state should have what philosophers and cognitive scientists call a “neural correlate”: a particular pattern of nerve cells (neurons) firing in the brain.

So, for each conscious mental state you can be in – whether it’s thinking about the Roman Empire, or imagining a cursor moving – there is some corresponding pattern of activity in your brain.

So, clearly, if a device can track our brain states, it should simply be able to read our minds. Right?

Well, for real-time AI-powered mind-reading to be possible, we need to be able to identify precise, one-to-one correspondences between particular conscious mental states and brain states. And this may not be possible.

Rough matches

To read a mind from brain activity, one must know precisely which brain states correspond to particular mental states. This means, for example, one needs to distinguish the brain states that correspond to seeing a red rose from the ones that correspond to smelling a red rose, or touching a red rose, or imagining a red rose, or thinking that red roses are your mother’s favourite.

One must also distinguish all of those brain states from the brain states that correspond to seeing, smelling, touching, imagining or thinking about some other thing, like a ripe lemon. And so on, for everything else you can perceive, imagine or have thoughts about.

To say this is difficult would be an understatement.

Take face perception as an example. The conscious perception of a face involves[4] all sorts of neural activity.

Read more: Nobody knows how consciousness works – but top researchers are fighting over which theories are really science[5]

But a great deal of this activity seems to relate to processes that come before or after the conscious perception of the face – things like working memory, selective attention, self-monitoring, task planning and reporting.

Winnowing out those neural processes that are solely and specifically responsible for the conscious perception of a face is a herculean task, and one that current neuroscience is not close to solving.

Even if this task were accomplished, neuroscientists would still only have found the neural correlates of a certain type of conscious experience: namely, the general experience of a face. They wouldn’t thereby have found the neural correlates of the experiences of particular faces.

So, even if astonishing advances were to happen in neuroscience, the would-be mind-reader still wouldn’t necessarily be able to tell from a brain scan whether you are seeing Barack Obama, your mother, or a face you don’t recognise.

That wouldn’t be much to write home about, as far as mind-reading is concerned.

But what about AI?

But don’t recent headlines involving neural implants and AI show that some mental states can be read – like imagining a cursor moving, or engaging in inner speech?

Not necessarily. Take the neural implants first.

Neural implants are typically designed to help a patient perform a particular task: moving a cursor on a screen, for example. To do that, they don’t have to be able to identify exactly the neural processes that are correlated with the intention to move the cursor. They just need to get an approximate fix on the neural processes that tend to go along with those intentions, some of which might actually be underpinning other, related mental acts like task-planning, memory and so on.
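
The idea of an “approximate fix” can be sketched with a toy decoder. The example below is purely illustrative and invented for this article – it is not how Neuralink’s implant works. It simulates noisy recordings from 16 hypothetical channels during two imagined movements, then classifies new recordings by nearest centroid. The decoder drives a cursor reliably without ever identifying the precise neural correlates of the intention itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two intentions ("left" / "right") each produce a
# characteristic -- but noisy and overlapping -- pattern across 16 channels.
left_proto = rng.normal(0, 1, 16)
right_proto = left_proto + rng.normal(0, 0.8, 16)  # related, not cleanly separate

def record(proto, n):
    """Simulate n noisy neural recordings around a prototype pattern."""
    return proto + rng.normal(0, 0.5, (n, 16))

# "Calibration" session: labelled recordings while the user imagines each move.
centroids = {
    "left": record(left_proto, 50).mean(axis=0),
    "right": record(right_proto, 50).mean(axis=0),
}

def decode(sample):
    """Nearest-centroid decoder: an approximate fix, not a one-to-one mapping."""
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

# Fresh recordings: good enough to steer a cursor, yet the decoder says nothing
# about which neural processes actually constitute the conscious intention.
trials = [("left", s) for s in record(left_proto, 100)] + \
         [("right", s) for s in record(right_proto, 100)]
accuracy = sum(decode(s) == label for label, s in trials) / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

The classifier succeeds because the two statistical patterns differ on average – even though each recording also reflects memory, attention and planning activity mixed into the same channels. High task performance, in other words, does not require (or demonstrate) a precise mapping from brain states to mental states.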

Thus, although the success of neural implants is certainly impressive – and future implants are likely to collect more detailed information[6] about brain activity – it doesn’t show that precise one-to-one mappings between particular mental states and particular brain states have been identified. And so, it doesn’t make genuine mind-reading any more likely.

[Image] It may not be possible to perfectly map brain states onto mental states. Maxim Gaigul / Shutterstock[7]

Now take the “decoding” of inner speech by a system combining non-invasive brain scans with generative AI, as reported in this study[8]. This system was designed to “decode” the contents of continuous narratives from brain scans while participants were listening to podcasts, reciting stories in their heads, or watching films. The system isn’t very accurate – but the fact it did better than random chance at predicting these mental contents is still seriously impressive.

So, let’s imagine the system could predict continuous narratives from brain scans with total accuracy. Like the neural implant, the system would only be optimised for that task: it wouldn’t be effective at tracking any other mental activity.

Read more: How close are we to reading minds? A new study decodes language and meaning from brain scans[9]

How much mental activity could this system monitor? That depends: what proportion of our mental lives consists of imagining, perceiving or otherwise thinking about continuous, well-formed narratives that can be expressed in straightforward language?

Not much.

Our mental lives are flickering, lightning-fast, multiple-stream affairs, involving real-time percepts, memories, expectations and imaginings, all at once. It’s hard to see how a transcript produced by even the most fine-tuned brain scanner, coupled to the smartest AI, could capture all of that faithfully.

The future of mind reading

In the past few years, AI development has shown a tendency to vault over seemingly insurmountable hurdles. So it’s unwise to rule out the possibility of AI-powered mind-reading entirely.

But given the complexity of our mental lives, and how little we know about the brain – neuroscience is still in its infancy, after all – confident predictions about AI-powered mind-reading should be taken with a grain of salt.

Read more https://theconversation.com/can-ai-read-our-minds-probably-not-but-that-doesnt-mean-we-shouldnt-be-worried-227057
