There’s an awful lot of guff, cant and nonsense spouted around the topic of “artificial intelligence” and “consciousness”, most of it the product of ignorant journalists and over-excitable computer geeks. Elon Musk may be onto something when he warns us to throttle back on the spread of AI, but it’s not because some kind of Skynet-style all-powerful, god-like computer intelligence is going to take over the world.
The real threat is that clever-but-dumb computer programs will rob humans of too much of their agency. Sound fanciful? Consider how many of your simplest decisions, such as what to veg out in front of the telly with, are being made for you by computer algorithms. Consider how computer algorithms are being used to control what you may say in the public sphere.
But the algorithms are sinister not because they’re intelligent, but because they’re dumb. How many times have you been Facebook jailed for an out-of-context comment or photo?
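To see just how dumb, consider what such a filter actually does under the hood. Here’s a minimal sketch in Python, a hypothetical illustration rather than any platform’s real code: it matches words against a ban list with zero grasp of context.

```python
# A toy keyword filter: a hypothetical sketch, not any platform's actual
# system. It matches banned words with no understanding of context,
# which is exactly why innocent posts get flagged.

BANNED_WORDS = {"attack", "shoot", "kill"}

def moderate(post: str) -> str:
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return "FLAGGED" if words & BANNED_WORDS else "OK"

print(moderate("We're going to kill it at the pub quiz tonight!"))  # FLAGGED
print(moderate("The documentary examines why the attack failed."))  # FLAGGED
print(moderate("Genuinely menacing posts can slip past, too."))     # OK
```

No intelligence anywhere in sight, just string matching, yet this is the kind of machinery deciding what may be said.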
And what we call “Artificial Intelligence” isn’t: it’s just very sophisticated computer programming. But isn’t that the same thing? No.
Mistaking sophisticated computer programming for genuine intelligence goes back at least to Arthur C Clarke’s and Stanley Kubrick’s 2001: A Space Odyssey. Clarke claimed (in the novel) that HAL 9000’s ability to carry on a human-like conversation, the benchmark of the Turing Test, proved it was genuinely conscious. It doesn’t, as I’ll explain later.
Compounding the mistake, countless journalists have completely misunderstood the events of the film. The usual account runs like this: the HAL 9000 supercomputer deliberately decides to follow a completely different plan from the one programmed, killing part of the crew.
This is utterly wrong: HAL in fact faces an unresolvable conflict in his programming. He is told to obey the crew’s commands and, at the same time, to pursue a secret mission objective the crew don’t know about. Inevitably, the crew give orders that conflict with that hidden objective. The only resolution available to the conflicted machine is to eliminate the source of the conflicting commands: ie kill the crew.
But why does HAL’s conversational ability not demonstrate genuine consciousness? To answer that, we have to get to grips with the terms we’re using, such as consciousness. What is consciousness? As it happens, even that is the subject of much debate with no clear resolution.
Being conscious does not just mean being aware of the outside world, but also being aware of oneself and one’s relationship to one’s surroundings. And for this, according to neuroscientist Moheb Costandi, the body is essential.
Philosophers use a technical term, qualia, to describe this. The idea is quite simple: qualia are the subjective “what it is like” of experience. In philosopher Frank Jackson’s famous thought-experiment, Mary is brought up in a world that’s entirely monochrome. It is explained to her what “red” is, but it is not until Mary sees her first rose that she really understands “red”.
Think also of something as simple as coriander. We can describe its taste and scent, but every person experiences it subjectively, and differently: to some it is delicious, to others disgusting.
We can also conceivably know everything there is to know about someone, their every experience since birth, but still we cannot know what it is like to be “inside their head”. This “what it is like” in our heads is called the “Cartesian theatre”, after the philosopher René Descartes.
Descartes […] believed that the mind and body are made of different substances: the body of a physical substance, and the mind of some mysterious, nonphysical material.
Even at the time, though, people like Princess Elisabeth of Bohemia pointed out some obvious problems with Descartes’s substance dualism. For instance, if mind is separate from body, how do our thoughts so obviously motivate our bodily actions?
Modern brain research, however, suggests that the mind is made of matter and emerges from brain activity. Even so, many neuroscientists still study the brain in isolation, without taking the whole body into consideration.
There are just as many problems with Mind-Brain Identity theory, as it’s called. For instance, if brain processes are identical to thoughts, then we should have as intimate an understanding of brain processes as we do of our own thoughts. Yet, until very recent scientific developments, we were (and remain, in practical experience) quite ignorant of brain processes.
Still, that mind and brain have something to do with one another seems undeniable, even if we don’t (and maybe never will) really understand it.
But that doesn’t mean that “AI” is or can ever be conscious.
Before the creation of artificial intelligence, intelligence had always been considered closely related to consciousness. But AI has fully demonstrated that intelligence can exist without consciousness.
An example of artificial intelligence is calculating operations, predictions or even the ability to play a game of chess. We also think of digital assistants such as Siri or Alexa, which are based on deep learning, ie the ability to learn autonomously using algorithms, data and previous experiences.
– MSN
Key here is the phrase, “calculating operations”. At root, that’s all AI really is: very, very fast and very, very complex calculations. That’s not the same as consciousness, because there’s no “what it is like” involved.
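To make “calculating operations” concrete, here is roughly what a single “neuron” of a modern AI does. This is a minimal sketch with made-up weights; real networks simply repeat this arithmetic billions of times.

```python
import math

# One "neuron" from a neural network: a weighted sum of inputs pushed
# through a squashing function. The weights below are invented purely
# for illustration; "training" just nudges such numbers up and down.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: squashes output to (0, 1)

print(neuron([0.5, 0.1, 0.9], [0.4, -0.7, 0.2], bias=0.1))  # ~0.6
```

Multiply, add, squash, repeat. Nowhere in that arithmetic is there anyone for whom the answer is like anything.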
As for conversational ability, philosopher John Searle knocked that one on the head back in 1980. In his “Chinese Room” thought experiment, Searle imagined a subject with absolutely no knowledge of Chinese, locked in a little room. Chinese speakers can write questions on a slip of paper and insert them into a slot that feeds them to the subject. The subject then consults a table of “If, Then” commands: “If squiggle, then squoggle.” Note, though, that the subject has no idea what squiggle and squoggle actually mean. He’s just following a program.
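Searle’s room translates almost directly into code. A minimal sketch, with the “squiggles” as placeholders just as in the thought experiment:

```python
# Searle's rule book as a lookup table. The program shuffles symbols
# from input slip to output slip; at no point does it know, or need
# to know, what any of them mean.

RULE_BOOK = {
    "squiggle": "squoggle",
    "squoggle": "squiggle squiggle",
}

def chinese_room(slip: str) -> str:
    # "If squiggle, then squoggle": pure shape-matching, no understanding.
    return RULE_BOOK.get(slip, "squiggle")  # default reply, equally meaningless

print(chinese_room("squiggle"))  # -> squoggle
```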
This is what is happening with a computer, even an AI: it’s following a program. The program, the book of commands, can be as fearsomely complex as you want it to be, but at no point does the subject (the computer) actually understand any of it.
So, AI isn’t going to start thinking for itself. The real danger is that we’ll keep on letting it “think” for us. We’ll give up being thinking beings with agency, and let dumb-but-clever machines order us around.