Turing on Consciousness

March 16, 2023

These days, ‘sentience’ and ‘AI’ often occur in the same article. It seems timely to ask what Alan Turing actually thought about the relation between these two topics. To that end, I’m going to look at what he said about this matter in his famous 1950 paper ‘Computing Machinery and Intelligence’ (Mind 59:433-460).

Strictly speaking, he said nothing – i.e., he does not use the word ‘sentience’. He does, however, use the word ‘consciousness’ a few times, and, of the terms that occur in his paper, it is the one that comes closest to what contemporaries usually mean by ‘sentience’. ‘Conscious(ness)’ occurs on pages 445, 447, 449, and 451, and I will look at each occurrence.

Discussion of an objection labeled “The Argument from Consciousness” runs from pages 445 to 447. Turing’s source for this objection is “Professor Jefferson’s Lister Oration for 1949”. The passage Turing quotes from this work does not use the word ‘consciousness’, but it does focus on machines’ inability to feel pleasure or grief, and their inability to be “charmed by sex, be angry or depressed when it cannot get what it wants”.

One might have expected Turing to have addressed the question whether machines might have feelings. His response, however, does nothing of the sort. Instead, he says that “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking.” He then goes on to point out that if one applied this requirement to the question whether another human thinks, one would be led to solipsism. Finally, he dismisses that view by saying “Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.” [Both quotes from p. 446.]

This bit of discussion responds to an extreme view that is not expressed in what Turing quotes from Prof. Jefferson. It thus cries out for an explanation. The one that seems to me most consistent with the rest of the paper (as we shall see) is that Turing was serious about the title of his paper, in particular the term ‘intelligence’. He did not think of intelligence as including feeling (or what many would now call ‘phenomenal consciousness’), and addressing feeling would, from his point of view, have been a digression from what he intended to talk about.

The last paragraph of the discussion of the Argument from Consciousness is this:

“I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.”

And what question is that? Again, just look at Turing’s title: he is concerned with computing machinery and intelligence.

The word ‘unconscious’ occurs on p. 448, but it refers to applying a principle without realizing one is doing so. This occurrence offers nothing on the question of the possibility of consciousness in machines.

Page 448 has one paragraph about the inability of machines to enjoy strawberries and cream. Turing says: “Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic.”

This clearly indicates that Turing was not interested in, and was not making any claims about, phenomenal consciousness (e.g., taste qualities or pleasure). He has made computing machines. He’s claiming there will be some that are intelligent in the not too distant future. He is not claiming they can enjoy anything; he’s not even trying to get them to do that. To do so would be idiotic. Turing evidently does not think it’s idiotic to try to get machines to be intelligent.

What’s important about not being able to enjoy strawberries and cream, he says, is that it contributes to some other disabilities – e.g., to the difficulty of entering into genuine friendships.

The argument from consciousness is mentioned again near the bottom of p. 449, but not much is said. It seems to be a complaint that if there is a method for doing something, then the doing of it is “rather base”. In any case, it’s clear that Turing’s brief remark here is in no way an indication that he thinks his machines have phenomenal consciousness, or that he’s trying to build one that does, or that he thinks he needs to show such a possibility in order to show the possibility of machine intelligence.

Page 451 also mentions the argument from consciousness. But then Turing says this is “a line of argument we must consider closed.” There is, again, no indication that he thinks machines are, or might become, phenomenally conscious.

“Consciousness” does not occur on p. 457, but a brief discussion will help support the interpretation I am offering. At this point, Turing is imagining a child machine being taught. He notes that reward and punishment are part of teaching children and anticipates that a learning machine will have to have a ‘punishment-signal’ and a ‘reward-signal’. These signals are defined by reference to their role in decreasing or increasing, respectively, the probability of repetition of events that shortly preceded them. Immediately after giving these definitions, Turing says “These definitions do not presuppose any feelings on the part of the machine.”
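
Turing’s definitions here are purely functional, so they can be illustrated without attributing feelings to anything. Below is a minimal sketch (in Python) of a learning machine trained by such signals; the sketch is my own illustration, not Turing’s, and the action names and learning rate are invented for the example.

    import random

    class ChildMachine:
        # A reward-signal raises, and a punishment-signal lowers, the
        # probability of repeating the event that shortly preceded it --
        # Turing's definitions, with no feelings presupposed.
        def __init__(self, actions, rate=0.2):
            self.weights = {a: 1.0 for a in actions}  # no initial preference
            self.rate = rate
            self.last_action = None

        def act(self):
            # Choose an action with probability proportional to its weight.
            actions, weights = zip(*self.weights.items())
            self.last_action = random.choices(actions, weights=weights)[0]
            return self.last_action

        def reward_signal(self):
            # Make the event that shortly preceded the signal more probable.
            self.weights[self.last_action] *= (1 + self.rate)

        def punishment_signal(self):
            # Make the event that shortly preceded the signal less probable.
            self.weights[self.last_action] *= (1 - self.rate)

    # A teacher rewards one action and punishes the other; after many
    # trials the rewarded action comes to dominate.
    machine = ChildMachine(["answer politely", "answer rudely"])
    for _ in range(200):
        if machine.act() == "answer politely":
            machine.reward_signal()
        else:
            machine.punishment_signal()
    print(machine.weights)

The point of the exercise is Turing’s own: nothing in the definitions, and nothing in the code, presupposes that the machine feels anything.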

In sum, the passages I’ve reviewed compel us to think that Turing clearly distinguished having intelligence from having phenomenal consciousness (being sentient, having qualitative experiences, having subjective experience, etc.). And he was clear that the project of making an intelligent machine was entirely distinct from trying to make a sentient (conscious, feeling) machine: the first was expected to succeed, the second was not even worth trying.


Zombie Neuroscience

October 14, 2014

In an opinion piece in the New York Times Sunday Review (October 12, 2014, p. 12), Michael Graziano asks “Are We Really Conscious?” His answer is that we are probably not conscious. If his theory is right, our belief in awareness is merely a “distorted account” of attention, which in reality consists of “the enhancing of some [neural] signals at the expense of others”.

This distorted account develops in all of us, and seems to us to be almost impossible to deny. But beliefs that the Earth is the center of the universe, that our cognitive capacities required a special creation, and that white light is light that is purified of all colors, have seemed quite natural and compelling, yet have turned out to be wrong. We should therefore, Graziano argues, be skeptical of our intuitive belief that we are conscious.

In short, Graziano is saying that your impression that you are conscious is likely a false belief. “When we introspect and seem to find that ghostly thing – awareness, consciousness, the way green looks or pain feels – our cognitive machinery is accessing internal models and those models are providing information that is wrong.”

One might well wonder what is supposed to be “ghostly” about the experience of green you think you have when you look at, say, an unripe banana, or a pain that you might believe occurs when you miss a nail and hit your thumb with a hammer. But, of course, if you are already convinced that there are no such things, then you must think that their apparent presence is merely the holding of false beliefs. If you then try to say what these false beliefs are beliefs about, you will be hard pressed to produce anything but ghosts. There are, of course, neural events that are necessary for causing these allegedly false beliefs about the way bananas look or pains – but these beliefs are not beliefs about those neural events, nor are they beliefs about any neural events at all. (People believed that unripe bananas looked green and that they had pains long before anyone had any belief whatsoever about neural events.)

Graziano’s positive story about awareness is that it is a caricature: “a cartoonish reconstruction of attention” (where, recall, attention is enhancement of some signals at the expense of others). This description raises a puzzle as to what the difference is between a cartoonish reconstruction of a signal enhancement that is caused by light reflected from an unripe banana, and a cartoonish reconstruction of a signal enhancement caused by a blow to your thumb. But perhaps this puzzle can be resolved in this way: The banana causes enhancement of one set of signals, the blow to the thumb causes enhancement of a different set of signals, and which false belief you acquire depends on which set of signals is enhanced.

A problem remains, however. Your beliefs that you are experiencing green or that you are in pain are certainly not beliefs about your signal enhancements. They are not beliefs about wavelengths or neural activations caused by blows to your thumb (though, of course, wavelengths and neural activations are among the causes of your having beliefs about what colors or pains you are experiencing). There is nothing else relevant here that Graziano recognizes as real. Your false beliefs are beliefs about nothing that is real.

We can, of course, have beliefs about things that are not real – for example, unicorns, conspiracies that never took place, profits that will in fact never materialize. In all such cases, however, we can build the non-existent targets of the beliefs by imaginatively combining things that do exist. For example, we have seen horses and animals with horns, so we can build a horse with a horn in our thoughts by imaginative combination.

But green and pain are not like unicorns. They have no parts that are not themselves colors or feelings. There are no Xs and Ys that are not themselves colors or feelings, such that we can build green or pain in our imagination by putting together Xs and Ys. So, if we were to accept Graziano’s dismissal of color experiences and pains as unreal, we would have to allow that we can have beliefs about things that neither exist, nor can be imaginatively constructed. We have, however, no account of how there could be such a belief. The words “way green things look” and “pain” could not so much as mean anything, if we suppose that there are no actual examples to which these words apply, and no way of giving them meaning by imaginative construction.

Graziano invokes impressive authorities – Copernicus, Darwin, and Newton – in support of skepticism about intuitions that once seemed incontestable. (See list in the second paragraph above.) He presents his theory as coming from his “lab at Princeton”.

The view he proposes, however, is not a result supported by scientific investigation. It is supported by the other authorities to which he appeals – Patricia Churchland and Daniel Dennett. These writers are philosophers who offer to solve the notoriously difficult mind-body (or, consciousness-brain) problem by the simple expedient of cutting off consciousness. Voilà. No more problem.

But it is not good philosophy to affirm a view that commits one to there being beliefs of a kind for which one can give no account.

It is important to understand that resisting the dismissal of consciousness is fully compatible with affirming that there are indeed neural causes for our behavior. Hammer blows to the thumb cause neural signals, which cause reflexive withdrawals. Somewhat later, the interaction of these signals with neurons in our brains causes behaviors such as swearing and taking painkillers. But hammer blows to the thumb also cause pains. It is indeed difficult to understand how or why pains should result from neural events that such blows cause. But it is not a solution to this problem to dismiss the pains as unrealities. Nor is it true that science teaches us that we ought to deny consciousness.


The Cambridge Declaration on Consciousness

August 24, 2012

On July 7, 2012 a “prominent international group” of brain scientists issued The Cambridge Declaration on Consciousness. The full document has four paragraphs of justification, leading to the declaration itself, which follows.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

Back in the ’90s I published a paper under the title “Some Nonhuman Animals Can Have Pains in a Morally Relevant Sense”. (In case you’re wondering, that view had been denied by Peter Carruthers in a paper in a top tier journal.) So, not surprisingly, I am quite sympathetic to the sense of this declaration.

I also approve of the declaration’s prioritizing of neurological similarities over behavior. The philosophy textbook presentation of the supposedly best reason for thinking that other people have minds goes like this:

1. When I behave in certain ways, I have accompanying thoughts and feelings.

2. Other people behave in ways similar to me. Therefore, very probably,

3. Other people have accompanying thoughts and feelings that are similar to mine.

This argument is often criticized as very weak. Part of my paper’s argument was that we have a much better reason for thinking our fellows have minds, namely:

1. Stimulation of my sense organs (e.g., being stuck with a pin) causes me to have sensations (e.g., a pain).

2. Other people are constructed very much like me. Therefore, very probably,

3. Stimulation of other people in similar ways causes them to have sensations similar to mine.

If one approaches the matter in this second way, it is natural to extend the argument to nonhuman animals to the extent that they are found to be constructed like us. This is the main line of approach in the Cambridge Declaration (although some of the lead-up paragraphs also sound like the first argument).

In sum, I am inclined to accept the sense of the Cambridge Declaration, and to agree that the evidence and reasoning presented make its stand a reasonable one.

But still, there is something peculiar about this Declaration, even aside from its being unusual for academic conferences to issue position statements. The question is, Why? Just what is odd about it?

One of the Declaration’s authors, Christof Koch, recently gave an interview on the radio. (The link is to the written transcript.) In it, he characterizes fMRI scans as a “very blunt” instrument. The point is that the smallest region that can be resolved by an fMRI scan contains about half a million neurons, some of which may be firing quite actively while others are hardly firing at all. So, our scanning techniques do not tell us what neural firing patterns occur, but only where there are some highly active neurons.
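
A rough calculation shows why a figure like half a million is plausible. (The voxel size and neuron density below are round, ballpark assumptions of my own, not figures from the interview.)

    # Back-of-envelope check on the "very blunt" point.
    voxel_side_mm = 3.0            # a typical fMRI voxel edge (assumed)
    neurons_per_mm3 = 20_000       # rough cortical density (assumed)
    neurons_per_voxel = (voxel_side_mm ** 3) * neurons_per_mm3
    print(f"{neurons_per_voxel:,.0f} neurons per voxel")  # 540,000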

Ignorance of neural patterns is relevant here. Another point that Koch makes in the interview is that there are many neurons – about three quarters of what you have – in the cerebellum. Damage to this part of the brain disrupts smooth and finely tuned movements, such as are required for dancing, rock climbing and speech, but has little or no effect on consciousness.

So, it is not just many neurons’ being active, or there being a complex system of neural activations of some kind or other that brings about consciousness. It is some particular kind of complexity, some particular kind of pattern in neural activations.

I am optimistic. I think that some day we will figure out just what kind of patterned activity in neurons causes consciousness. But it is clear that we do not now know what kind of neural activity is required.

The peculiarity of the Cambridge Declaration, then, is that it seems to be getting ahead of our actual evidence, yet it was signed by members of a group who must be in the best position to be acutely aware of that fact. Of course, ‘not appear[ing] to preclude’ consciousness in nonhuman animals is a very weak and guarded formulation. The remainder of the declaration, however, is more positively committal.

The best kind of argument for consciousness in nonhuman animals would go like this:

1. Neural activity patterns of type X cause consciousness in us.

2. Certain nonhuman animals have neural activity patterns of type X. Therefore, very likely,

3. Those nonhuman animals have consciousness.

Since we do not now know how to fill in the “X”, we cannot now give this best kind of argument. The signers of the Declaration must know this.

[The radio interviewer is Steve Paulson, and the date is August 22, 2012. The paper of mine referred to above is in Biology and Philosophy (1997), v.12:51-71. Peter Carruthers’ paper is “Brute Experience”, The Journal of Philosophy (1989), v.86:258-269.]


Responsibility and Brains

August 2, 2012

In a thoughtful recent article, John Monterosso and Barry Schwartz rightly point out that whatever action we may take, it is always true that our brains “made us do it”. They are concerned that recognition of this fact may undermine our practice of holding people responsible for what they do, and say that “It’s important that we don’t succumb to the allure of neuroscientific explanations and let everyone off the hook”.

The concern arises from some of their experiments, in which participants read descriptions of harmful actions and were provided with background on those who had done them. The background was given either (a) in psychological terms (for example, a history of childhood abuse) or (b) in terms of brain anomalies (for example, imbalance of neurotransmitters).

The striking result was that participants’ views about responsibility for the harmful actions depended on which kind of terms were used in giving them the background. Those who got the background in psychological terms were likely to regard the actions as intentional and as reflecting the character of the persons who had done them. Those who got the background in terms of brain facts were more likely to view the actions as “automatic” and only weakly related to the performers’ true character.

The authors of the article describe this difference as “naive dualism” – the belief that actions are brought about either by intentions or by physical laws that govern our brains, and that responsibility accrues to the first but not the second. Naive dualism unfortunately ignores the fact that intentions must be realized in neural form – must be brain events – if they are to result in the muscle contractions that produce our actions.

They recommend a better question for thinking about responsibility and the background conditions of our actions. Whether the background condition is given as a brain fact, or a psychological property, or a certain kind of history, we should ask about the strength of the correlation between that condition and the performance of harmful actions. If most of those who have background condition X act harmfully, their level of responsibility may be low, regardless of which kind of background fact X is. If most of those with background condition X do not act badly, then having X should not be seen as diminishing responsibility – again, regardless of which kind of background condition X may be.

This way of judging responsibility is compatible with recognizing that we all have an interest in the preservation of safety. Even if bad behavior can hardly be avoided by some people, and their responsibility is low, it may be perfectly legitimate to take steps to prevent them from inflicting damage on others.

I’ll add here a speculation on a mechanism by which citing brain facts may lead us to assign people less responsibility than we should. Many reports of brain facts emphasize the role of some part of the brain. But we do not think of people as their parts: Jones is not his insula or his amygdala, Smith is not her frontal lobes or her neurotransmitters. So, some reports of background conditions in terms of brain facts may lead us to think of actions as the result of people’s parts, and thus not as the actions of the whole person.

A corrective to this kind of mistake is to bear in mind that our encouragements of good behavior and our threats of punishment are addressed to whole persons. Whole persons generally do know what is expected of them, and in most cases knowledge of these expectations offsets deficiencies that may occur in some of our parts. Our brains are organized systems, and the effects of their parts can become visible only when the whole system is mobilized toward carrying out an action.

[The article referred to is John Monterosso and Barry Schwartz, “Did Your Brain Make You Do It?”, The New York Times, Sunday, July 29, 2012, Sunday Review section, p. 12.]


Science and Free Will

May 9, 2012

At last month’s “Toward a Science of Consciousness” conference in Tucson, Pomona College researcher Eve Isham reported on several studies under the title “Saving Free Will From Science”. These studies cast doubt on conclusions that are often drawn from a series of famous studies carried out by Benjamin Libet.

Participants in Libet’s experiments wore a device on their head that measures electrical activity at points under the scalp. They were instructed to make a small movement (e.g., a flick of the wrist) whenever they felt like doing so. They watched a clock-like device, in which a dot rotates around the edge of a circle, completing a revolution about every two and a half seconds. There are marks and numbers around the edge of the circle. Their task was not only to make a movement when they felt like it; they were also to take note of where the dot was (a) when they formed the intention to make their movement, and (b) when they actually moved. They were asked to report these two times right after they made their movement.

Let’s call these times Int and Mov. What Libet found was that participants’ electrical activity showed a telltale rise a short time (roughly, a half second) before the reported time Int. This telltale marker is called a “readiness potential”, usually abbreviated as RP. (There are complicated corrections that must be made for the time it takes for signals to travel along neurons; the interval between RP and Int is what remains after these corrections have been taken into account.)

The key points are that RP comes before Int, and that Int is the moment when the intention to move first becomes conscious. The conclusion that many have drawn is that the process that is going to result in a movement starts unconsciously before a person becomes aware of an intention to move. So, the intention comes too late to be the real cause of the movement. But if people’s intentions to move are not the real causes of their movements, then they don’t have free will.

(Libet tried to avoid this conclusion by holding that there was enough time for a person to insert a “veto” and stop the movement. This has earned him a quip: he undercuts free will, but allows for free won’t. Few have found this view attractive.)

Critics of Libet’s work have raised many questions about the experimental design, and I have long regarded these experiments as resting on assumptions that seem difficult to pin down with sufficient accuracy. Isham’s presentation significantly deepened these doubts.

Although I have read many papers on Libet’s work, I had never seen the clock in motion. Isham showed a video, and as I watched, I imagined myself as a participant in Libet’s experiments. I flicked my wrist a few times, and thought of what I would have reported as the times of my intention to move, and my actual movement.

I found that it was extremely difficult to try to estimate two times. In fact, I found it hopeless, and soon gave up, settling for trying to get an accurate estimate of the time of my intention to move.

But even this simpler task was difficult, and I had no confidence that I could locate the time of my intention more accurately than to within about an eighth of a revolution (that’s the distance of about eight minutes around the circumference of a regular clock face). While I was trying to do the task, the dot seemed to be moving very fast.
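
To put that uncertainty into Libet’s units: at the commonly cited period of about 2.56 seconds per revolution, an eighth of a revolution is roughly 320 milliseconds, which is of the same order as the RP-to-Int interval itself. (The arithmetic below is mine, not Isham’s.)

    # How much clock time does an eighth-of-a-revolution uncertainty cover?
    period_s = 2.56                # commonly cited period of Libet's clock
    uncertainty_ms = period_s * (1 / 8) * 1000
    print(f"reporting uncertainty: {uncertainty_ms:.0f} ms")  # 320 ms
    # The RP is said to precede Int by roughly half a second, so the
    # uncertainty in a single report is a large fraction of that interval.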

It’s natural to wonder whether accuracy might be improved by using a clock where the dot was not traveling so fast. That’s one of the variations on Libet’s setup that Isham tried. And here is the decisive result: When she did that, the estimates of the time of intention were earlier. RP was still a little bit earlier than the reported Int time, but the interval was very significantly reduced.

Isham reported several other variations on Libet’s design, all of which led to the same general result: the estimates of Int depend on clock speed and several other factors that shouldn’t make a difference, but do. These results offer strong support for her conclusion that the time of intention is not consciously accessible to us, and that we cannot use Libet-style experiments to undercut the view that our intentions cause our actions.

Readers of Your Brain and You, or some other posts on this blog, may recall that I am sympathetic to the conclusions that are often drawn from Libet’s work. So, I don’t think Isham has saved free will from science. But her work gives us more reason than we have previously had for not basing anti-free-will conclusions on Libet-style investigations.

[The abstract of Isham’s conference paper is available here.]


What’s New in Consciousness?

April 18, 2012

I am just back from the four-day “Toward a Science of Consciousness” conference in Tucson. I heard 32 papers on a wide variety of topics and I’m trying to tell myself what I’ve learned about consciousness. Today, I’ll focus on the most fundamental of the difficulties in this area, known to aficionados as “the Hard Problem of consciousness”. The following four statements explain the problem in a way that leaves open many different kinds of response to it.

1. Data concerning our sensations and feelings correlate with data concerning the firings of neurons in our brains.

There is overwhelming evidence for this statement. Strokes in the back of the head, or severe injuries there, cause loss of vision. The extent of the area in which one can no longer see is well correlated with the extent of the damage. Pin pricks normally hurt; but if injury or drugs prevent the neural signals they cause from reaching the brain, you won’t feel a thing. Direct electrical stimulation of some brain locations produces experiences. And so on.

2. What neurons do is to fire or not fire.

Neurons can fire faster or slower. There can be bursts of rapid firing, separated by intervals of relative quiescence, and there can be patterns of such bursts. With 100 billion or so neurons, that allows an enormous range of possible combinations of neural activities. We know of no other aspects of neural activity that are relevant to what we are experiencing.

3. We experience a large number of qualities – the colors, the tastes, the smells, the sounds, feelings like warmth, itchiness, nausea, and pain; and feelings such as jealousy, anger, joy, and remorse.

4. The qualities noted in 3. seem quite different from the neural firing patterns in 2, and from any complex physical property.

The Hard Problem is this: What is the relation between our experiences and their qualities, and our neural firing patterns? How can we explain why 1. is true?

There are two fundamental responses to the Hard Problem, and many ways of developing each of them. They are:

Physicalism. Everything is physical. Experiences are the same things as neural events of some particular kind. (It is not claimed that we know what particular kind of neural event is the same as any particular kind of experience.) The explanation of 1. is that the data concerning experiences and neural events are correlated, since experiences and neural events are just the same thing. It’s like the explanation of why accurate data about Samuel Clemens’ whereabouts would be perfectly correlated with accurate data about Mark Twain’s whereabouts.

Dualism. Experiences are not identical to neural events. 1. is true either because neural events cause experiences, or because some events have both neural properties and other properties that cannot be discovered by the methods of science.

Today, the dominant view (about two to one, by my unscientific estimate) is physicalism. The reason is suggested by my descriptions. Dualists evidently have to say how neural events can cause experiences, or explain the relation between the properties known to science and the properties not known to science. Physicalists have no such task: if there is just one thing under two names, there is no “relation” or “connection” to be explained.

But physicalism has another task, namely, to explain how 4. can be true. According to physicalism, blueness = some complex physical property X, the feeling of nausea = some complex physical property Y, and so on. How could these pairs of items even seem to be different, if they were really just the same?

Of course, it will not do to say that blue is the way a physical property X appears to us, the feeling of nausea is the way a physical property Y appears to us, and so on. That would just introduce ways things appear. But then we would just have to ask how physical properties are related to their ways of appearing. They do not appear as what (according to physicalism) they really are; i.e., they do not appear as complex physical properties. So how, according to physicalism, could it happen that X can appear as blue, but not as complex physical property X, if blue = X?

The new developments reported at the conference that were of most interest to me were attempts by physicalists to deal with this last question. Here are brief summaries of two key ideas.

A. To feel something is nothing other than to record and adjust action in the following ways. (i) Recognize dependence of changes of input to one’s sensory devices upon movement of one’s own body. (ii) Recognize changes of input to one’s sensory devices from sources that do not depend on one’s own body (and distinguish these from the changes in (i)). (iii) Process some parts of input more intensively than others. (When we do this, it is called attending to some aspects of our situation more than others.)

We understand how these features could be instantiated in a robot; so we understand how we could make a robot – a purely physical thing – that feels.
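
For what it’s worth, here is one way to picture proposal A as a program. This is only my toy rendering of conditions (i)-(iii); the names and details are invented, and nothing this simple was claimed at the conference.

    class RobotA:
        # Toy rendering of proposal A's three conditions.
        def __init__(self, n_channels):
            self.attention = [1.0] * n_channels  # (iii) per-channel gain
            self.last_input = [0.0] * n_channels
            self.moving = False

        def motor_command(self, moving):
            # The robot keeps track of when it is moving its own body.
            self.moving = moving

        def sense(self, new_input):
            report = []
            for i, new in enumerate(new_input):
                change = new - self.last_input[i]
                if change and self.moving:
                    source = "self-caused"  # (i) change tracks own movement
                elif change:
                    source = "external"     # (ii) change from the world
                else:
                    source = "none"
                # (iii) process some parts of input more intensively than others
                report.append((source, change * self.attention[i]))
            self.last_input = list(new_input)
            return report

        def attend(self, channel, gain):
            # Enhance one channel's signals at the expense of the others.
            self.attention[channel] = gain

Proposal A says that a system doing all of this thereby feels; whether that is credible is, of course, exactly what is at issue.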

B. What is in the world is all physical. Experienced qualities like blue and the feeling of nausea are not in the world – they are its “form”, not its “content”. So, there is no question of “relating” experienced qualities to what is in the world – in fact, it is misleading to speak of “experienced qualities” at all, since that phrase suggests (falsely, on this view) they are something that is in the world.

It’s time for disclosure: I am a dualist. Not surprisingly, I didn’t find that either of these efforts offers a good solution to what I see as the key problem for physicalism. I’ve done my best to represent A. and B. fairly, but you should, of course, remember that what you’re getting here is what a dualist has been able to make of physicalists’ efforts.


How Does Your Food Taste?

March 26, 2012

One question I’ve thought about, without much success, is what exactly is different about our neural firings when we have different sensations. Imagine, for example, looking at a ripe, red tomato on a white tablecloth. Now imagine everything the same, except that you’ve replaced the tomato with an unripe green one of the same size. It’s a sure bet that something different is happening in the back of your head in these two cases. We know some parts of the story about this kind of difference, but it would be nice to have a full and detailed accounting.

I thought I might get some enlightenment about this question by looking up what’s known about tastes. I suspected that this might be a somewhat simpler case, because I knew that despite the complexity of the lovely tastes that emerge from my wife’s exquisite cooking, they are all combinations of just five basic tastes – sweet, salty, sour, bitter, and umami.

Or, then again, maybe not.

The article that causes my doubts was published by David V. Smith and colleagues in 2000, but I was unaware of it until recently. The Wikipedia entry for “Taste” is organized around the above five “basic” tastes, so perhaps others have also missed this article.

The key problem about taste arises from the fact that taste buds are generally sensitive to many different chemicals. They differ in the degree to which different substances will raise their activation, but they will show some amount of increased activation for many different components of the foods we eat.

This fact supports the view that the cause of a taste is a set of relative degrees of activation across many taste buds. Moreover, we cannot think of each taste bud as devoted to one of the five tastes, and the pattern of relative degrees of activation as a pattern of five kinds of response. Instead, there is a pattern of greater and lesser activations of cells, each of which responds in one degree or another to many kinds of inputs.

A natural question is how the five-basic-taste theory has seemed so good for so long. According to Smith and colleagues, the answer lies in a certain methodological practice. This method involves identifying different kinds of taste buds by their “best response”. That means, for example, that a cell will be classified as a “sweet” cell if its activation is raised more by glucose than by other inputs such as salt or quinine.

Such classifications tend to obscure the point that a so-called “sweet” cell will also be activated by things that aren’t sweet. It may be highly activated by other substances, just not as much as by glucose.

An even more important point is that the results of this method depend on one’s choice of substances used in comparing relative activations. Smith and colleagues identify another set of chemicals that are quite different from the usual ones, but that, using the same methodology, would give a different set of “basic” tastes. This result casts doubt on the utility of the “best response” methodology for identifying basic tastes.
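
The methodological point can be demonstrated with made-up numbers. In the sketch below, the response profiles are invented for illustration; the moral is that the “best response” label depends on the stimulus panel as much as on the cell.

    # Invented firing rates of three broadly tuned taste cells.
    cells = {
        "cell_1": {"glucose": 9, "NaCl": 7, "HCl": 3, "quinine": 2, "MSG": 6},
        "cell_2": {"glucose": 4, "NaCl": 8, "HCl": 6, "quinine": 1, "MSG": 5},
        "cell_3": {"glucose": 5, "NaCl": 3, "HCl": 2, "quinine": 7, "MSG": 8},
    }

    def best_response_label(profile, panel):
        # Classify a cell by whichever panel stimulus excites it most.
        return max(panel, key=lambda s: profile[s])

    panel_a = ["glucose", "NaCl", "HCl", "quinine"]  # the usual four
    panel_b = ["NaCl", "MSG", "HCl"]                 # a different panel

    for name, profile in cells.items():
        print(name, best_response_label(profile, panel_a),
              best_response_label(profile, panel_b))
    # cell_1 is "glucose-best" (a "sweet" cell) on panel A but "NaCl-best"
    # on panel B, though its profile never changed; and either label hides
    # its substantial responses to everything else.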

More generally, it suggests, at least to me, that the story about the neural activations that underlie the differences among our sensations will turn out to be very complex, and will involve patterns of activity across a large number of neural cells.

Bon appetit!

[David V. Smith, Steven J. St. John, and John D. Boughter, Jr. (2000) “Neuronal cell types and taste quality coding”, Physiology and Behavior 69:77-85.]


Be Creative! Have a Drink!

March 5, 2012

In a recently published experiment, researchers administered a task to a group of intoxicated participants and a group of sober controls. The task is thought to involve creativity, and the intoxicated participants did better than the controls.

The task was the Remote Associates Test (RAT). Each item in the test consists of three words, e.g., Peach, Arm, and Tar, and the task is to find a fourth word that will make a good two-word phrase with each of the given words. Perhaps the first word immediately suggests something – in this example, perhaps Tree – but that doesn’t make sense with the other two, and one had better look elsewhere. If one finds it difficult to let go of this first thought, it will take longer to find a good answer than if one can be open to further gifts from one’s brain, and let other, less closely (more remotely) associated words appear in one’s consciousness. Sooner, later, or perhaps never, the good candidate, Pit, may arrive. Participants responded as soon as they thought they had a good answer, which they then provided. If they had not responded by the end of 1 minute, they were asked to guess, and were then given the next item.
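
To make the scoring concrete, here is a toy checker for a RAT item. The phrase set is invented for the example; actual studies use normed items and human judgment.

    # A candidate solves a RAT item if it forms a familiar two-word
    # phrase with every cue, in either order. Tiny invented phrase set:
    KNOWN_PHRASES = {
        ("peach", "pit"), ("arm", "pit"), ("tar", "pit"),
        ("peach", "tree"), ("tree", "house"),
    }

    def solves(cues, candidate):
        return all((cue, candidate) in KNOWN_PHRASES or
                   (candidate, cue) in KNOWN_PHRASES
                   for cue in cues)

    print(solves(["peach", "arm", "tar"], "pit"))   # True
    print(solves(["peach", "arm", "tar"], "tree"))  # False: fits 'peach' only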

The intoxicated participants got that way through consuming a vodka and cranberry juice drink, which resulted in an average blood alcohol content of .075. (The range was from .059 to .091. For comparison, the legal limit for driving in the US is .08.)

On average, the intoxicated group solved more problems than the sober controls. They also reached their solutions faster. Further, for each correct solution, participants were asked to rate the degree to which they had reached their solution by strategically searching, versus by “insight” (an Aha! moment). The intoxicated participants were more likely than the sober controls to attribute their successes to insight.

The researchers offer an explanation of these results. In brief, the account is this. Sobriety is good for maintaining attention, staying focused on goals, remembering where one is in a calculation, keeping track of one’s assumptions, and so on. But what’s needed for high performance on the RAT is the relaxing of attention, easy letting go of ideas that don’t pan out, and openness to letting one’s network of associations work without hindrance. A bit of alcohol helps reduce focused attention, so it helps with being open to letting one’s network operate without inhibition. The authors point out that this explanation is consilient with a number of other studies, including some that examined effects of frontal lobe damage, and others that showed an advantage for those who had an opportunity to sleep between exposure to a problem type and attempts to provide solutions.

Of course, creativity requires the knowledge necessary to recognize a useful idea when it arises. (In our illustration, for example, participants have to be able to recognize that Pit fits with each of the given words.) But this experiment suggests that there is another part in a creative process, a period in which unconscious processes do their work, combining materials that are already present, but have not previously been brought together in just the way that is necessary for creative success.

I think I have often profited from following the advice, “Sleep on it”, so perhaps I should worry about confirmation bias (the tendency to give more weight than one should to evidence that agrees with what one already accepts, and to discount conflicting information). But I’m glad to have this study, identified by the authors as the first empirical demonstration of alcohol’s effects on creative problem solving.

[Andrew F. Jarosz, Gregory J. H. Colflesh, and Jennifer Wiley (2012) “Uncorking the muse: Alcohol intoxication facilitates creative problem solving”, Consciousness and Cognition 21:487-493.]


Does Thinking About God Have a Down Side?

February 13, 2012

Research led by University of Waterloo psychologist Kristin Laurin has yielded a result that’s surprising to me, and that raises questions about a possible unwanted effect of work by many thinkers, including myself.

Laurin and her colleagues did several experiments, and tested two main hypotheses. I’ll focus on one: God thoughts lead to reduction in active pursuit of goals. This hypothesis was tested in three experiments, all of which supported it. A summary of just one of the experiments will explain what the hypothesis means, and give some idea of how it can be investigated. (The other hypothesis, which did not surprise me, was this: God thoughts lead to increase in resistance to temptation.)

How can you be sure people have recently had “God representations” in mind? One way is to give them the task of composing sentences from lists of words they are given, and include words like “God”, “divine”, and “sacred” on the lists. That was the setup for one group of participants. Another group of participants was given the same task with other lists that contained none of those words, but did contain words for positively valued items (e.g., “sun”, “flowers”, “party”). A third group did the same task using lists with neutral words.

To get at the effect of the differences among these groups, Laurin and her colleagues asked all participants to do a new verbal task. They were told that high scoring on this second task was a good predictor of success in their chosen field (engineering, as it happens). The task was to write down as many English words as they could in 5 minutes that are composed of just the letters R, S, T, L, I, E, and A.
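
A participant’s score is just a count of valid words. For concreteness, a few lines of Python can list the qualifying words from a dictionary file; the path below is a placeholder for any English word list, and letters may repeat.

    # Which English words are composed of just R, S, T, L, I, E, A?
    ALLOWED = set("rstliea")

    with open("/usr/share/dict/words") as f:  # placeholder word list
        words = {w.strip().lower() for w in f}

    valid = sorted(w for w in words if w.isalpha() and set(w) <= ALLOWED)
    print(len(valid), valid[:5])  # e.g., 'aerial', 'altar', 'artist', ...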

The key result – predicted by the researchers but surprising to me – was that participants who had received the list of religion-related words on the first task did less well on this second task than the other participants – they averaged 19.5 words, compared to 30.4 and 30.3 for the participants who had gotten non-religion-related words that were positive, or neutral, respectively.

Several weeks before this experiment was conducted, the authors had given a questionnaire to their participants that included a religion identification question. They were thus able to test whether their experimental result depended on participants’ religious classification. They found that their result did not depend on religious classification, even when that classification was “atheist” (about half of the participants in this study).

The authors suggest a mechanism for their observed effect, namely that exposure to the religion-related words in the lists in the first task “activated the idea of an omnipotent, controlling force that has influence over [participants’] outcomes”. In a second study, they found experimental support for this mechanism, and concluded that “only those who believed external forces could influence their career success demonstrated reduced active goal pursuit following the God prime” (where receiving the “God prime” = receiving the religion-related lists in the first task).

This conclusion gives me pause, for the following reason. As is evident from several posts on this and several other blogs, recent books, and newspaper reports, there are many lines of research that show the importance of unconscious processes. A large number of effects on our behavior come from circumstances of which we are unaware, or circumstances that we consciously notice, but that influence our behavior in ways we do not realize. In the last decade, and continuing today, our dependence on processes that are unconscious, and therefore not under our control, has become more and more widely publicized.

It thus seems that there is a serious question whether the increasing recognition of the effects of unconscious processes may have an unwanted, deleterious effect of reducing our motivation to actively pursue our goals. Your Brain and You resolved somewhat similar issues about the relation of unconscious processes to responsibility and certain attitudes toward ourselves. But, of course, it could not consider this recent experiment, and it did not address the question of what effect the recognition of our dependence on unconscious processes might have upon our degree of motivation to pursue our goals.

I do not think I have an answer to this question, but I wonder whether the following distinction may turn out to be relevant. Getting people to think about a god is getting them to think about an agent – an entity with its own purposes and ability to enact them. On the other hand, accepting that there are causes of behavior that lie beyond our control is not the same as accepting that our outcomes depend on another agent’s purposes. So, it seems possible that the growing recognition of the importance of unconscious processes to our thoughts and actions may not lead to reduced motivation to achieve our goals.

[Laurin, K., Kay, A. C. and Fitzsimons, G. M. (2012) “Divergent Effects of Activating Thoughts of God on Self-Regulation”, Journal of Personality and Social Psychology, 102(1):4-21.]


Is There an Appearance/Reality Distinction for Pain?

January 23, 2012

In a recent article, philosopher Kevin Reuter has provided an interesting example of experimental philosophy that challenges a widely held view.

The background is that many philosophers (including me) hold that there is no appearance/reality distinction for pain. Pain is nothing but a feeling, so if you have a painful feeling there is no question but that you have a pain. You can be fooled about what is causing you to have the pain; for example, you might think you’ve got a tumor when it’s just a cyst. But you can’t be fooled about whether you are suffering. (Another author in the same journal humorously imagines lack of success for a doctor who would refuse to prescribe painkillers, explaining that the patient is only having an appearance of pain, not a real one.)

There are parallels for our “outer” senses. You can, for example, be fooled about what color a thing is, because you might be looking at it in bad lighting. But you can’t be fooled about the way it *looks*. You might inadvertently pick the wrong word for the color a thing looks to you, but it hardly makes sense to say that a thing might seem to look to you other than the way it does look to you. The way a thing looks just is its appearance, and while things in your kitchen can appear other than they really are, appearances themselves can do no such thing.

Many leading views say that the same thing holds for pain. There is simply no difference between feeling a pain, or having something appear to you as a pain, and actually having a pain.

Many leading philosophers also believe that this view – “There is no appearance/reality distinction for pains” – is not a philosophical theory. They are not claiming to say what people *ought* to believe about pains and they are not claiming to have made a philosophical discovery. They regard themselves as merely making explicit what is already implicit in the way people in general speak about their pains.

It is this attribution to the general public of the “No appearance/reality distinction for pains” view that Reuter directly challenges.

The key ground for the challenge is something one does not often see in a philosophy paper. It is a statistical analysis of remarks by non-philosophers – in this case, remarks found on health-related internet sites. Reuter gives details about his methods of search and analysis, but I will just summarize the key results, which I think his evidence clearly supports.

To wit: (1) People use both “I feel a pain” and “I have a pain” (and grammatical variants) in reporting both mild pains and severe pains. However, (2) “feel” is used about as often as “have” when mild pains are referred to, whereas “have” is used far more often than “feel” when the reported pain is severe (about 6 times as often on average, ranging from equally often to 14 times as often, depending on exactly what word – e.g., “major”, “severe”, “bad” – is used).
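
The measurement behind these results is, at bottom, frequency counting. Here is a sketch of the kind of tally involved; the pattern, the word lists, and the example sentences are mine, not Reuter’s, whose analysis worked from search hits on health-related sites.

    import re
    from collections import Counter

    SEVERE = {"severe", "major", "bad", "terrible"}
    MILD = {"mild", "slight", "dull", "little"}

    def tally(texts):
        # Count "feel/have ... <adjective> pain" reports by verb and severity.
        counts = Counter()
        pattern = re.compile(r"\b(feel|have)\b.{0,20}?\b(\w+)\s+pain", re.I)
        for text in texts:
            for verb, adj in pattern.findall(text):
                severity = ("severe" if adj.lower() in SEVERE else
                            "mild" if adj.lower() in MILD else None)
                if severity:
                    counts[(verb.lower(), severity)] += 1
        return counts

    posts = ["I have a severe pain in my shoulder",
             "I feel a slight pain when I bend my knee",
             "I have a mild pain behind my eyes"]
    print(tally(posts))  # {('have', 'severe'): 1, ('feel', 'mild'): 1, ...}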

Result (2) is then combined with another observation: When people use variants of “seems” (e.g., “feels”, “looks”, “sounds like”, etc.) in the case of senses such as touch, vision, or audition, they are making an appearance/reality distinction, and they are indicating lower confidence in their judgment. For example, if you speak of a blue tie, or say a tie is blue, you are confidently committing yourself to the claim that the tie is blue. But if you say it looks blue, you are leaving open the possibility that it might not really be blue, and that the way it looks – its appearance – is misleading as to how it really is.

The conclusion is then drawn that the difference in frequency of use of “feel” versus “have” that correlates with mildness versus severity of pain indicates that, at least for mild pains, people – users of health-related internet sites – are making an appearance/reality distinction.

Of course, this conclusion depends on supposing that there is not a better explanation of the correlation between “feel”/”have” and mild/severe. Reuter considers several more or less plausible alternative explanations, and adequately rebuts them. The most plausible of these is that “I have a pain” is, implicitly, a request for help. If the pain is mild, there may be no need for help, so the person reduces the help-seeking implication by using “feel” instead of “have”.

Reuter’s point about this suggestion is that more direct means of seeking aid are easily available, so it is unlikely that pain reports have the function of indirectly asking for help.

There is, however, a variant of this alternative that Reuter does not consider. People know that others are likely to empathize with a reporter of pain. So, if the pain is mild, the person who reports it may want to convey something like “Don’t worry, don’t feel bad for me, it’s only a little pain”. Perhaps using “feel” is a way of indicating this lack of need for empathy.

Of course, it’s unlikely that anyone thinks explicitly that this is what they are doing. So, we might wonder whether such an unconscious adjustment of language is too subtle to be plausible. I do not think so. Consider the shades of politeness in the following list:

Shut the door.

Shut the door, ok?

Would you shut the door?

Please shut the door.

Would you shut the door, please?

If you’ll shut the door, we’ll be less likely to be interrupted.

Which of these we use depends on how we are related to the person we’re addressing, and on circumstances. We do use different degrees of politeness, and we may sometimes pay careful attention to how to put a request. But on many occasions, we tailor what we say to relationships and circumstances without reflecting on or attending to our choice of phrasing, or even realizing that we are adjusting our words to relationships and circumstances. So, perhaps we are sometimes engaging in a similar, unreflective shading of politeness when we say that we “feel a pain” instead of that we “have a pain”.

Whether or not that is a good explanation, we should not forget result (1): People sometimes use “feel” even for severe pains that they cannot plausibly be taken to regard as unreal.

[Kevin Reuter (2011) “Distinguishing the Appearance from the Reality of Pain”, Journal of Consciousness Studies 18(9-10):94-109.]