
Is Conscious-Seeming AI Already Here?

Emergent Behaviors & Sentience

At this year’s Digital-Life-Design conference in Munich, Anil Seth of the University of Sussex gave a talk on sentient AI and the matter of whether or not we can create artificial consciousness. Towards the end, he reached the topic of “conscious-seeming AI,” something I’ve been thinking about myself ever since watching that short interview with Realbotix’s Aria.

No, Aria isn’t exactly a mind-blowing robot. But the combination of a humanoid machine with a large language model, allowing her to respond to questions from the interviewer and even attempt to flip her hair, really makes me wonder just how far off we are from seemingly conscious robots that can mimic human behavior.

According to Seth, “Conscious-seeming AI is either here already or will be very soon. There’s no technological or philosophical roadblock here. We only need things that are sufficiently seductive of our own biases.”

This, he says, raises some concerns, because even if we know a machine is not conscious, that still may not change how we feel about it. We’re very good at anthropomorphizing objects, after all.

We’ll have two choices, he says, in the case of a conscious-seeming AI. One, we can choose to care about it despite knowing it’s simply a machine, potentially even sacrificing human interests for the sake of it (which would be like caring about your toaster on an emotional level). Or two, we can choose to not care about the machines that seem conscious, which may negatively affect our own behavior and how we see ourselves as humans.

Seth’s presentation is only about 20 minutes long, so give it a listen! After that, read on for more musings on AI and consciousness.

Emergent Behaviors of Generative AI

So what about actual consciousness in AI?

Large language models (LLMs), which I’ve had a lot of fun playing with, are a subset of the larger ‘generative AI’ umbrella. These days, you can easily install one on your own PC. At their most basic, they work as simple token predictors, repeatedly guessing the most likely word (or word fragment) to follow the text so far. However, according to AI consultant Henrik Kniberg, over time these models have begun to produce “emergent capabilities,” things they can do that have surprised even the very people who designed them. The AI can see patterns in its training data and understand complex concepts.
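
If you want to see the “token predictor” idea for yourself, here’s a minimal sketch using the Hugging Face transformers library and a small model you can run locally (the model name “gpt2” and the prompt are just examples I’ve chosen, not anything from Kniberg):

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# "transformers" library and PyTorch are installed
# (pip install transformers torch). "gpt2" is just an example of a
# small model that runs comfortably on an ordinary PC.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The robot looked at me and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every token in its vocabulary,
    # at every position in the prompt.
    logits = model(**inputs).logits

# Look at the scores for the position after the last word, and list
# the five tokens the model considers most likely to come next.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), f"{score.item():.2f}")
```

Everything a chatbot says is built by repeating that one step, each newly chosen token getting appended to the prompt before the next prediction. Which is exactly why the emergent capabilities are so surprising.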

Some of these emergent behaviors have involved solving long-standing math problems, playing chess, or roleplaying as different characters. The question is, could a future emergent behavior be the formation of consciousness?

About a year ago, a somewhat exasperated marketing writer for an AI company reached out, asking “Does anyone have a way to help people understand that just because ChatGPT sounds human, doesn’t mean it is human?” While some of the conversation was muddled by the conflation of intelligence with consciousness, it still led to many interesting points from commenters.

You might be surprised to find out that many felt the poster’s footing wasn’t as stable as he or she may have thought. The answer to whether current AI has reached, or can reach, something akin to “personhood” just isn’t that simple, at least not simple enough for everyone to agree on.

You might also remember the situation with Google’s LaMDA, in which an engineer claimed that the AI was, in fact, sentient, and went public with this idea. This ultimately led to the engineer being put on administrative leave.

Are We Special?

All of this leads me to a classic question: Can consciousness just sort of pop up in AI? The nature of consciousness itself is an ongoing debate in scientific and philosophical communities, so the answer right now is…we don’t know.

Philosophers such as Daniel Dennett and Tor Nørretranders muse that human consciousness is a “user illusion” created by the brain, which they believe is fundamentally a machine. This is the physicalist point of view: everything can be boiled down to physical properties and the laws of physics, with nothing extra or beyond.

In the same sense that consciousness may arise out of a kind of emergent illusion brought on by physical systems, could consciousness unexpectedly arise from generative AI, creating a user illusion for the AI itself? Or might that something extra occur, somehow, giving AI that spark of consciousness?

When it comes to conscious-seeming AI, going back to Anil Seth’s comments, the answer is either yes, it’s already happened, or it will soon. More troubling, I guess, is what a conscious-seeming AI might mean for how we view ourselves. If a large language model can understand complex concepts and pretend to be human, is what we’re doing all that special? In an article on Medium last year, James F. O’Brien, Professor of Computer Science at UC Berkeley, pondered this question. He asked simply whether the things that LLMs can do are actually easy, or whether “on some objective scale humans may not actually be that smart?”

What will AI show us about ourselves as we move forward?


Rob Schwarz

Writer, blogger, and part-time peddler of mysterious tales. Editor-in-chief of Stranger Dimensions.