These days, ‘sentience’ and ‘AI’ often occur in the same article. It seems timely to ask what Alan Turing actually thought about the relation between these two topics. To that end, I’m going to look at what he said about this matter in his famous 1950 paper ‘Computing Machinery and Intelligence’ (Mind 59: 433–460).
Strictly speaking, he said nothing – i.e., he does not use the word ‘sentience’. He does, however, use the word ‘consciousness’ a few times, and, of the terms that occur in his paper, it is the one that comes closest to what contemporaries usually mean by ‘sentience’. ‘Conscious(ness)’ occurs on pages 445, 447, 449, and 451, and I will look at each occurrence.
Discussion of an objection labeled “The Argument from Consciousness” runs from pages 445 to 447. Turing’s source for this objection is “Professor Jefferson’s Lister Oration for 1949”. The passage Turing quotes from this work does not use the word ‘consciousness’, but it does focus on the absence in machines of feelings of pleasure or grief, and their inability to be “charmed by sex, be angry or depressed when it cannot get what it wants”.
One might have expected Turing to have addressed the question whether machines might have feelings. His response, however, does nothing of the sort. Instead, he says that “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.” He then goes on to point out that if one applied this requirement to the question whether another human thinks, one would be led to solipsism. Finally, he dismisses that view by saying “Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.” [Both quotes from p. 446.]
This bit of discussion responds to an extreme view that is not expressed in what Turing quotes from Prof. Jefferson. It thus cries out for an explanation. The one that seems to me most consistent with the rest of the paper (as we shall see) is that Turing was serious about the title of his paper, in particular the term ‘intelligence’. He did not think of intelligence as including feeling (or what many would now call ‘phenomenal consciousness’), and addressing feeling would, from his point of view, be digressing from what he intended to talk about.
The last paragraph of the discussion of the Argument from Consciousness is this:
“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”
And what question is that? Again, just look at Turing’s title: he is concerned with computing machinery and intelligence.
The word ‘unconscious’ occurs on 448, but it refers to applying a principle without realizing one is doing so. This occurrence offers nothing on the question of the possibility of consciousness in machines.
P. 448 has one paragraph about the inability of machines to enjoy strawberries and cream. Turing says: “Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic.”
This clearly indicates that Turing was not interested in, and was not making any claims about, phenomenal consciousness (e.g., taste qualities or pleasure). He has made computing machines. He’s claiming there will be some that are intelligent in the not-too-distant future. He is not claiming they can enjoy anything; he’s not even trying to get them to do that. To do so would be idiotic. Turing evidently does not think it’s idiotic to try to get machines to be intelligent.
What’s important about not being able to enjoy strawberries and cream, he says, is that it contributes to some other disabilities, e.g., to entering into genuine friendships.
The argument from consciousness is mentioned again near the bottom of p. 449, but not much is said. It seems to be a complaint that if there is a method for doing something, the doing of it is “rather base”. – In any case, it’s clear that Turing’s brief remark here is in no way an indication that he thinks his machines have phenomenal consciousness, or that he’s trying to build one that does, or that he thinks he needs to show such a possibility in order to show the possibility of machine intelligence.
P. 451 also mentions the argument from consciousness. But then Turing says this is “a line of argument we must consider closed.” There is, again, no indication that he thinks machines are, or might become, phenomenally conscious.
“Consciousness” does not occur on p. 457, but a brief discussion will help support the interpretation I am offering. At this point, Turing is imagining a child machine being taught. He notes that reward and punishment are part of teaching children and anticipates that a learning machine will have to have a ‘punishment-signal’ and a ‘reward-signal’. These signals are defined by reference to their role in decreasing or increasing, respectively, the probability of repetition of the events that shortly preceded them. Immediately after giving these definitions, Turing says “These definitions do not presuppose any feelings on the part of the machine.”
In sum, the passages I’ve reviewed compel the conclusion that Turing clearly distinguished having intelligence from having phenomenal consciousness (being sentient, having qualitative experiences, having subjective experience, etc.). And he was clear that the project of making an intelligent machine was entirely distinct from trying to make a sentient (conscious, feeling) machine: the first was expected to succeed, the second was not even worth trying.