Discussions of artificial intelligence often probe the mystical property of awareness and consciousness. The Turing test offers an operational criterion for artificial intelligence: a person converses with either another person or a computer without knowing which. If, over a series of tests, the computer is indistinguishable from a human, that computer is said to be intelligent. Such a test does not capture the entirety of intelligence, and even so, no computer has attained it.
Suppose a computer did pass the Turing test. Could such a computer have true conscious understanding, or would it just be simulating intelligence? Philosopher John Searle proposed an interesting thought experiment that claims to refute the possibility of a strong artificial intelligence with conscious understanding. Searle's experiment is known as the Chinese room experiment.
In this experiment, there is an enclosed room with a person inside. Let’s call the person Bob. There is also a person outside the room, who we’ll call Alice. Alice speaks Chinese and passes sentences written in Chinese on pieces of paper through a slot, allowing Bob to receive the characters. Bob then looks up the characters in a computer program printed in a book. The instructions tell him which characters to write down in reply, which he sends back to Alice.
Because the program is so good, Alice thinks that Bob speaks Chinese. However, Bob does not know a word of Chinese and is merely following the instructions. In other words, there is no actual understanding. Since any such computer program could be placed in this experiment, it supposedly shows that no computer would have any more true, conscious understanding of what it is doing than Bob looking up instructions in a book.
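The mechanics of the room amount to blind rule-following: an input message is matched against a table of rules and a prescribed reply is emitted. Here is a minimal sketch of that idea, where the toy rule table and its two entries are invented placeholders standing in for Searle's "program in a book":

```python
# Toy sketch of the Chinese room's rule-following. The rulebook
# entries below are invented placeholders, not a real conversation
# program; they only illustrate lookup without understanding.
RULEBOOK = {
    "你好": "你好！",            # a greeting maps to a greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def bob(message: str) -> str:
    """Follow the rulebook blindly; no understanding is required."""
    return RULEBOOK.get(message, "？")

print(bob("你好"))  # Bob produces a fitting reply without knowing Chinese
```

Of course, a program that actually passed the Turing test would be vastly more complex than a lookup table, but Searle's point is that, to Bob, even that program is just more rules of this kind.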
The first time I read about this experiment, I found it very unconvincing. To illustrate the flaws in the argument, we can consider two variants: one version where the computer program is running on a computer and another version where the program is executed by Bob through the printed version.
These versions are supposed to be equivalent. If Bob is running the program by hand through printed books, it seems even clearer that there is no understanding of Chinese inside the room, only a simulated appearance of it. However, I find this unconvincing. For if Bob really can follow the program's instructions through the book, however tedious that may be, why should we assume he gains no knowledge of Chinese through the procedure? By following the lines of logic, I would think it quite hard for Bob not to notice patterns and gain some rudimentary knowledge of the information contained in the pieces of paper being passed through the room.
Furthermore, what if consciousness and awareness are precisely phenomena that arise when a sufficiently complex reasoning system exists? For example, if the program is running on a computer, then that system would seem to be exactly the sort of place where consciousness would arise, because it is sufficiently complex to house it. The version with Bob and the books is supposed to refute that. However, why shouldn't consciousness also inhabit that reasoning system?
The scenario of Bob in a room with books seems unbelievable only because it is a guy in a room with books. In practice, however, it would be impossible for a human to actually execute such a complex program in less time than the age of the universe. If Bob were inhumanly fast enough to make the program believable, then executing the program from the books with his mind should also be enough for an understanding of Chinese to arise, either as an additional conscious presence produced by the execution of the computer program or through Bob's own gradual but inevitable growth in understanding of Chinese as he executes it. In the latter case, it is like giving someone a pattern of numbers like 1, 4, 9, 16, … without telling them how the sequence was generated; the person comes to understand how it was generated anyway, because we recognize patterns.
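The sequence analogy can itself be made mechanical: inferring the rule behind 1, 4, 9, 16, … amounts to testing a hypothesis against the data. A minimal sketch, with the candidate hypothesis (the n-th term is n squared) chosen by me for illustration:

```python
# Test the hypothesis that a sequence follows the rule: n-th term = n**2.
# This mirrors the pattern-recognition step in the text: given only the
# terms, check whether a guessed generating rule fits every one of them.
def matches_squares(seq):
    """Return True if seq is exactly the start of 1, 4, 9, 16, ..."""
    return all(term == (i + 1) ** 2 for i, term in enumerate(seq))

print(matches_squares([1, 4, 9, 16]))  # True: the square rule fits
print(matches_squares([1, 4, 9, 15]))  # False: the last term breaks it
```

The point is only that recognizing a generating rule from its outputs is a perfectly ordinary inferential act, which is what the essay suggests Bob would inevitably perform on the messages passing through the room.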
It seems far more likely that a reasoning system complex enough to carry out a conversation at the level of appearing human would also need awareness to do so. There are billions of humans on the planet, each with a biological computer in the form of a brain, and all of them seem to have awareness and consciousness. How could that be if consciousness were not an inevitable consequence of our advanced reasoning capabilities?
I do not necessarily believe that truly aware artificial intelligence is possible, only that Searle's experiment does not provide evidence either way.