Turing on Consciousness

March 16, 2023

These days, ‘sentience’ and ‘AI’ often occur in the same article. It seems timely to ask what Alan Turing actually thought about the relation between these two topics. To that end, I’m going to look at what he said about this matter in his famous 1950 paper ‘Computing Machinery and Intelligence’ (Mind 59:433–460).

Strictly speaking, he said nothing – i.e., he does not use the word ‘sentience’. He does, however, use the word ‘consciousness’ a few times, and, of the terms that occur in his paper, it is the one that comes closest to what contemporaries usually mean by ‘sentience’. ‘Conscious(ness)’ occurs on pages 445, 447, 449 and 451, and I will look at each occurrence.

Discussion of an objection labeled “The Argument from Consciousness” runs from pages 445 to 447. Turing’s source for this objection is “Professor Jefferson’s Lister Oration for 1949”. The passage Turing quotes from this work does not use the word ‘consciousness’, but it does focus on the absence in machines of feelings such as pleasure or grief, and on their inability to be “charmed by sex, be angry or depressed when it cannot get what it wants”.

One might have expected Turing to have addressed the question whether machines might have feelings. His response, however, does nothing of the sort. Instead, he says that “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.” He then goes on to point out that if one applied this requirement to the question whether another human thinks, one would be led to solipsism. Finally, he dismisses that view by saying “Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.” [Both quotes from p. 446.]

This bit of discussion responds to an extreme view that is not expressed in what Turing quotes from Prof. Jefferson. It thus cries out for an explanation. The one that seems to me most consistent with the rest of the paper (as we shall see) is that Turing was serious about the title of his paper, in particular about the term ‘intelligence’. He did not think of intelligence as including feeling (or what many would now call ‘phenomenal consciousness’), and addressing feeling would, from his point of view, have been a digression from what he intended to talk about.

The last paragraph of the discussion of the Argument from Consciousness is this:

“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

And what question is that? Again, just look at Turing’s title: he is concerned with computing machinery and intelligence.

The word ‘unconscious’ occurs on p. 448, but it refers to applying a principle without realizing one is doing so. This occurrence offers nothing on the question of the possibility of consciousness in machines.

Page 448 also has one paragraph about the inability of machines to enjoy strawberries and cream. Turing says: “Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic.”

This clearly indicates that Turing is not interested in, and is not making any claims about, phenomenal consciousness (e.g., taste qualities or pleasure). He has made computing machines. He’s claiming there will be some that are intelligent in the not too distant future. He is not claiming they can enjoy anything; he’s not even trying to get them to do that. To do so would be idiotic. Turing evidently does not think it’s idiotic to try to get machines to be intelligent.

What’s important about not being able to enjoy strawberries and cream, he says, is that it contributes to some other disabilities, e.g., to entering into genuine friendships.

The argument from consciousness is mentioned again near the bottom of p. 449, but not much is said. It seems to be a complaint that if there is a method for doing something, then doing it is “rather base”. In any case, it’s clear that Turing’s brief remark here is in no way an indication that he thinks his machines have phenomenal consciousness, or that he’s trying to build one that does, or that he thinks he needs to show such a possibility in order to show the possibility of machine intelligence.

P. 451 also mentions the argument from consciousness. But then Turing says this is “a line of argument we must consider closed.” There is, again, no indication that he thinks machines are, or might become, phenomenally conscious.

“Consciousness” does not occur on p. 457, but a brief discussion will help support the interpretation I am offering. At this point, Turing is imagining a child machine being taught. He notes that reward and punishment are part of teaching children and anticipates that a learning machine will have to have a ‘punishment-signal’ and a ‘reward-signal’. These signals are defined by reference to their role in decreasing or increasing, respectively, the probability of repetition of events that shortly preceded them. Immediately after giving these definitions, Turing says “These definitions do not presuppose any feelings on the part of the machine.”
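Turing’s definitions here are purely functional, and that point can be made vivid in a few lines of code. The following Python sketch is my own illustration, not anything Turing specifies: the class name, the weight representation, and the multiplicative update rule are all assumptions. What matters is that the ‘signals’ do exactly what his definitions require, changing the probability of repetition of the preceding event, and nothing in the mechanism presupposes feelings.

```python
import random

# Illustrative sketch (not Turing's own design): a 'reward-signal'
# raises the probability that the event which shortly preceded it is
# repeated; a 'punishment-signal' lowers it. Nothing here models feeling.

class ChildMachine:
    def __init__(self, actions):
        # Equal initial propensity for each possible action.
        self.weights = {a: 1.0 for a in actions}
        self.last_action = None

    def act(self):
        # Choose an action with probability proportional to its weight.
        actions = list(self.weights)
        self.last_action = random.choices(
            actions, weights=[self.weights[a] for a in actions]
        )[0]
        return self.last_action

    def reward_signal(self):
        # Increase the probability of repeating the preceding action.
        self.weights[self.last_action] *= 1.5

    def punishment_signal(self):
        # Decrease the probability of repeating the preceding action.
        self.weights[self.last_action] *= 0.5
```

A trainer who ‘rewards’ this machine changes only numbers that bias future behavior; the sketch thereby makes Turing’s remark concrete: the definitions are satisfied without any pleasure or pain on the part of the machine.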

In sum, the passages I’ve reviewed compel us to think that Turing clearly distinguished having intelligence from having phenomenal consciousness (being sentient, having qualitative experiences, having subjective experience, etc.). And he was clear that the project of making an intelligent machine was entirely distinct from trying to make a sentient (conscious, feeling) machine: the first was expected to succeed, the second was not even worth trying.


An Unusual Aphrodisiac

October 10, 2011

Imagine you’re a prehistoric heterosexual man who’s going into battle tomorrow. The thought that there’s a fair chance of your dying might so completely occupy your mind that you’d be uninterested in anything save, perhaps, sharpening your spear.

On the other hand, your attitude might be that if you’re going to be checking out tomorrow, you’d like to have one last time with a woman tonight.

We are more likely to be descendants of the second type of man than the first. So, we might expect that there would be a tendency among men for thoughts of their own death to raise their susceptibility to sexual arousal.

In contrast, women who were more erotically motivated when they believed their own death might be just around the corner would not generally have produced more offspring than their less susceptible sisters. So, there is no reason to expect that making thoughts of death salient should affect sexual preparedness in women.

These ideas have recently been tested in two studies by Omri Gillath and colleagues. Of course, they didn’t send anybody into battle. Instead, they used two methods – one conscious, one not – to make the idea of death salient.

In the first study, one group of participants wrote responses to questions about the emotions they had while thinking about their own death and events related to it. Another group responded to similarly phrased questions about dental pain. The point of this contrast was to distinguish whether an arousal (if found) was specific to death, or whether it was due more generally to dwelling on unpleasant topics.

After responding to the questions, participants were shown either five sexual pictures (naked women for men, naked men for women) or five non-sexual pictures (sports cars for men, luxury houses for women). Previous studies had found that all the pictures were about equal for their respective groups on overall perceived attractiveness. Participants had all self-identified as heterosexual. They had five minutes to carefully examine their set of five pictures.

Participants were each connected to a device that measured their heart rate. The key result was that the men who answered the questions about death and viewed erotic pictures had a significantly higher average heart rate during the picture viewing than any other group. That means that, on average, they had a higher rate than other men who saw the same pictures, but had answered questions about dental pain. They also had a higher rate than other men who had answered questions about death, but then saw non-sexual pictures. And they had a higher rate than women who answered either question and viewed either pictures of naked men or non-sexual pictures.

In the second study, the death/pain saliency difference was induced by flashing the word “dead” (for half the participants) or the word “pain” (for the other half) before each item in a series of pictures. The presentation of the words was very brief (22 thousandths of a second) and came between masks (strings of four Xs). With the masks, that is too short for the word to be consciously recognized. The pictures either contained a person or did not. Half of the pictures that contained a person were sexual, half were not. Pictures remained visible until the participant responded.

The response was to move a lever if, but only if, the picture contained a person. The movement was either pulling the lever toward oneself, or pushing it away. There were 40 consecutive opportunities for pulling, and 40 for pushing; half of participants started with pulling, half started with pushing.

The logic of this experiment depends on a connection previously established by Chen and Bargh (1999) between rapidity of certain responses and the value of what is being responded to. Pulling brings things closer to you, and if what’s before your mind is something you like, then that will speed the pulling (relative to pulling in response to something you’d ordinarily try to avoid, or something toward which you are neutral).

The reasoning, then, is that those who had a higher degree of sexual preparedness should pull faster in response to erotic materials than those who were not so highly prepared. Gillath and colleagues hypothesized that participants who received the brief exposure to “dead” and then saw an erotic picture should be faster pullers than those who received a brief exposure to “pain” before an erotic picture.

And that is what they found – for men. There was no such result for women. Nor did the brief exposure to “dead” result in faster pulling after being presented with non-sexual pictures; the faster reaction times depended on both the exposure to “dead” and the sexual nature of the following picture.

These two studies are certainly interesting in relation to the evolutionary thinking that led them to be undertaken. But I also find them fascinating in relation to a more general point. The second study provides evidence that our brains can (a) make a distinction (between pain and death) and (b) relate it to another difference (sexual vs. non-sexual material) completely unconsciously and extremely rapidly. And the first study, although done at a much slower time scale and with consciousness of the materials used to manipulate mood (i.e., the writing about death vs. pain), showed an effect on heart rate, which is not something that was under participants’ control. The brain processes of which we are unaware (except when revealed in studies like these) are amazing indeed.

[O. Gillath, M. J. Landau, E. Selcuk and J. L. Goldenberg (2011) “Effects of low survivability cues and participant sex on physiological and behavioral responses to sexual stimuli”, Journal of Experimental Social Psychology 47:1219-1224. The previous study mentioned in the discussion of Study 2 is M. Chen and J. A. Bargh (1999) “Consequences of automatic evaluation: Immediate behavioral dispositions to approach or avoid the stimulus”, Personality and Social Psychology Bulletin 25:215-224. ]


Challenges for a Humanoid Robot

June 5, 2011

For most of June, I will be attending a conference in Japan and then visiting in China and Indonesia. So, I won’t be posting another blog entry until July.

But I’ve written an article under the above title that’s due to be posted on Monday, June 6th in the Forum section of the On The Human web site. In it, I distinguish several abilities that one might imagine for humanoid robots. I contrast the attitudes that we might have toward devices with different sets of these abilities, and compare possible attitudes toward humanoid robots with attitudes toward ourselves.

Please go to http://onthehuman.org and click on the Forum button.

Comments can be left at the end of the article during the two weeks after its posting. I’ve set aside some blocks of time in my travel schedule for responding, so you’ll be able not only to interact with me, but to see what others have to say, and my replies to them.


A Methodological Puzzle

December 20, 2010

A recent study by Wilson et al. argues for a function that is done by the prefrontal cortex (PFC) but is apparently not done exclusively by any one of its parts. This function is processing of temporally complex events. Temporally complex events are stimuli in which several features that are needed to learn a task are presented sequentially, and are not all available at any one time.

This result is particularly interesting because the authors argue for there being other functions that do seem to be located in different parts of the PFC. This is supported in several ways. One method involves selectively destroying parts of macaque monkey brains, and finding, for each part, a task on which performance is highly impaired by destruction of that part but much less impaired by destruction of the others.

The further function of the whole PFC (i.e., the processing of temporally complex events) is then shown by a task on which performance is only slightly impaired by destruction of each of the parts, but is severely impaired by destruction of the whole PFC.

“Here we have argued that the PFC as a whole has an overarching function that is not localized to any particular subregion, and we have proposed that this role is related to its involvement in the processing of temporally complex events.” (p. 538)

As good studies in science should do, this one raises interesting questions. One that intrigues me is this. How should we go about distinguishing functions? How do we tell whether we have found two independent functions, rather than one function that works by making use of another? Could it be that there is a function that (a) can be performed anywhere in the PFC and (b) is drawn upon in performing each of the functions that also require something further that can be done only in a specific part of the PFC?

I have no answer to offer to this question, but I think I can clarify it by reference to another case. There is a brain part (fusiform gyrus) that is often referred to as “the face processing area”. Work from Eric Cooper’s lab suggests, however, that this is an overly narrow description of what this area does. That’s because its activity seems required whenever we have to make finer discriminations that depend on relative distances and not just on general features of where things are. Faces are alike in having the nose between the eyes and above the mouth, so we have to be able to appreciate different distances between these features in order to recognize a particular person. But we also have to use this ability to distinguish, e.g., different makes and models of cars. A credenza and a dresser are both essentially boxes, so we have to be able to analyze relative proportions to tell the difference.

In short, the suggestion is that the famous “face processing area” would be better thought of as performing the function of making discriminations that depend on relative distances and not just gross placement of parts.

To find out what a part of the brain does, researchers must use some definite task. And then, as responsible scientists, they must relate their descriptions of the functions performed to the tasks they used. Otherwise, they would be merely speculating about how the mind works.

But, somewhat paradoxically, this necessary policy may have a built-in cost. Descriptions that are driven by investigative tasks may turn out to be overly narrow, and that may skew our conception of how each brain part contributes to our whole organization. To avoid this pitfall, we have to keep our minds open. It’s always possible that what seems to be a part that performs a specific function may do something more general than what could legitimately be concluded from any single study.

[Wilson, C. E., Gaffan, D., Browning, P. G. F. and Baxter, M. G. (2010) “Functional localization within the prefrontal cortex: missing the forest for the trees?”, Trends in Neurosciences 33(12):533-540. Work from Cooper’s lab can be found in, e.g., Brooks, B. E. and Cooper, E. E. (2006) “What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?”, Journal of Experimental Psychology: Learning, Memory, and Cognition 32(4):684-698.]