Zombie Neuroscience

October 14, 2014

In an opinion piece in the New York Times Sunday Review (October 12, 2014, p. 12), Michael Graziano asks “Are We Really Conscious?” His answer is that we are probably not conscious. If his theory is right, our belief in awareness is merely a “distorted account” of attention, which is a reality that consists of “the enhancing of some [neural] signals at the expense of others”.

This distorted account develops in all of us, and seems to us to be almost impossible to deny. But beliefs that the Earth is the center of the universe, that our cognitive capacities required a special creation, and that white light is light that is purified of all colors, have seemed quite natural and compelling, yet have turned out to be wrong. We should be skeptical of our intuitive belief that we are conscious.

In short, Graziano is saying that your impression that you are conscious is likely a false belief. “When we introspect and seem to find that ghostly thing – awareness, consciousness, the way green looks or pain feels – our cognitive machinery is accessing internal models and those models are providing information that is wrong.”

One might well wonder what is supposed to be “ghostly” about the experience of green you think you have when you look at, say, an unripe banana, or a pain that you might believe occurs when you miss a nail and hit your thumb with a hammer. But, of course, if you are already convinced that there are no such things, then you must think that their apparent presence is merely the holding of false beliefs. If you then try to say what these false beliefs are beliefs about, you will be hard pressed to produce anything but ghosts. There are, of course, neural events that are necessary for causing these allegedly false beliefs about the way bananas look or pains – but these beliefs are not beliefs about those neural events, nor are they beliefs about any neural events at all. (People believed that unripe bananas looked green and that they had pains long before anyone had any belief whatsoever about neural events.)

Graziano’s positive story about awareness is that it is a caricature: “a cartoonish reconstruction of attention” (where, recall, attention is enhancement of some signals at the expense of others). This description raises a puzzle as to what the difference is between a cartoonish reconstruction of a signal enhancement that is caused by light reflected from an unripe banana, and a cartoonish reconstruction of a signal enhancement caused by a blow to your thumb. But perhaps this puzzle can be resolved in this way: The banana causes enhancement of one set of signals, the blow to the thumb causes enhancement of a different set of signals, and which false belief you acquire depends on which set of signals is enhanced.

A problem remains, however. Your beliefs that you are experiencing green or that you are in pain are certainly not beliefs about your signal enhancements. They are not beliefs about wavelengths or neural activations caused by blows to your thumb (though, of course, wavelengths and neural activations are among the causes of your having beliefs about what colors or pains you are experiencing). There is nothing else relevant here that Graziano recognizes as real. Your false beliefs are beliefs about nothing that is real.

We can, of course, have beliefs about things that are not real – for example, unicorns, conspiracies that never took place, profits that will in fact never materialize. In all such cases, however, we can build the non-existent targets of the beliefs by imaginatively combining things that do exist. For example, we have seen horses and animals with horns, so we can build a horse with a horn in our thoughts by imaginative combination.

But green and pain are not like unicorns. They have no parts that are not themselves colors or feelings. There are no Xs and Ys that are not themselves colors or feelings, such that we can build green or pain in our imagination by putting together Xs and Ys. So, if we were to accept Graziano’s dismissal of color experiences and pains as unreal, we would have to allow that we can have beliefs about things that neither exist, nor can be imaginatively constructed. We have, however, no account of how there could be such a belief. The words “way green things look” and “pain” could not so much as mean anything, if we suppose that there are no actual examples to which these words apply, and no way of giving them meaning by imaginative construction.

Graziano invokes impressive authorities – Copernicus, Darwin, and Newton – in support of skepticism about intuitions that once seemed incontestable. (See list in the second paragraph above.) He presents his theory as coming from his “lab at Princeton”.

The view he proposes, however, is not a result supported by scientific investigation. It is supported by the other authorities to which he appeals – Patricia Churchland and Daniel Dennett. These writers are philosophers who offer to solve the notoriously difficult mind-body (or, consciousness-brain) problem by the simple expedient of cutting off consciousness. Voilà. No more problem.

But it is not good philosophy to affirm a view that commits one to there being beliefs of a kind for which one can give no account.

It is important to understand that resisting the dismissal of consciousness is fully compatible with affirming that there are indeed neural causes for our behavior. Hammer blows to the thumb cause neural signals, which cause reflexive withdrawals. Somewhat later, the interaction of these signals with neurons in our brains causes behaviors such as swearing and taking painkillers. But hammer blows to the thumb also cause pains. It is indeed difficult to understand how or why pains should result from neural events that such blows cause. But it is not a solution to this problem to dismiss the pains as unrealities. Nor is it true that science teaches us that we ought to deny consciousness.


The Cambridge Declaration on Consciousness

August 24, 2012

On July 7, 2012, a “prominent international group” of brain scientists issued The Cambridge Declaration on Consciousness. The full document has four paragraphs of justification, leading to the declaration itself, which follows.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess the neurological substrates.”

Back in the ’90s I published a paper under the title “Some Nonhuman Animals Can Have Pains in a Morally Relevant Sense”. (In case you’re wondering, that view had been denied by Peter Carruthers in a paper in a top-tier journal.) So, not surprisingly, I am quite sympathetic to the sense of this declaration.

I also approve of the declaration’s prioritizing of neurological similarities over behavior. The philosophy textbook presentation of the supposedly best reason for thinking that other people have minds goes like this:

1. When I behave in certain ways, I have accompanying thoughts and feelings.

2. Other people behave in ways similar to me. Therefore, very probably,

3. Other people have accompanying thoughts and feelings that are similar to mine.

This argument is often criticized as very weak. Part of my paper’s argument was that we have a much better reason for thinking our fellows have minds, namely:

1. Stimulation of my sense organs (e.g., being stuck with a pin) causes me to have sensations (e.g., a pain).

2. Other people are constructed very much like me. Therefore, very probably,

3. Stimulation of other people in similar ways causes them to have sensations similar to mine.

If one approaches the matter in this second way, it is natural to extend the argument to nonhuman animals to the extent that they are found to be constructed like us. This is the main line of approach in the Cambridge Declaration (although some of the lead-up paragraphs also sound like the first argument).

In sum, I am inclined to accept the sense of the Cambridge Declaration, and to agree that the evidence and reasoning presented make its stand a reasonable one.

But still, there is something peculiar about this Declaration, even aside from its being unusual for academic conferences to issue position statements. The question is, Why? Just what is odd about it?

One of the Declaration’s authors, Christof Koch, recently gave an interview on the radio. (The link is to the written transcript.) In it, he characterizes fMRI scans as a “very blunt” instrument. The point is that the smallest region that can be resolved by an fMRI scan contains about half a million neurons, some of which may be firing quite actively while others are hardly firing at all. So, our scanning techniques do not tell us what neural firing patterns occur, but only where there are some highly active neurons.

Ignorance of neural patterns is relevant here. Another point that Koch makes in the interview is that the cerebellum contains many neurons – about three quarters of all the neurons you have. Damage to this part of the brain disrupts smooth and finely tuned movements, such as are required for dancing, rock climbing and speech, but has little or no effect on consciousness.

So, it is not just many neurons’ being active, or there being a complex system of neural activations of some kind or other that brings about consciousness. It is some particular kind of complexity, some particular kind of pattern in neural activations.

I am optimistic. I think that some day we will figure out just what kind of patterned activity in neurons causes consciousness. But it is clear that we do not now know what kind of neural activity is required.

The peculiarity of the Cambridge Declaration, then, is that it seems to be getting ahead of our actual evidence, yet it was signed by members of a group who must be in the best position to be acutely aware of that fact. Of course, ‘not appear[ing] to preclude’ consciousness in nonhuman animals is a very weak and guarded formulation. The remainder of the declaration, however, is more positively committal.

The best kind of argument for consciousness in nonhuman animals would go like this:

1. Neural activity patterns of type X cause consciousness in us.

2. Certain nonhuman animals have neural activity patterns of type X. Therefore, very likely,

3. Those nonhuman animals have consciousness.

Since we do not now know how to fill in the “X”, we cannot now give this best kind of argument. The signers of the Declaration must know this.

[The radio interviewer is Steve Paulson, and the date is August 22, 2012. The paper of mine referred to above is in Biology and Philosophy (1997), v.12:51-71. Peter Carruthers’ paper is “Brute Experience” in The Journal of Philosophy, (1989) v.86:258-269.]


What’s New in Consciousness?

April 18, 2012

I am just back from the four-day “Toward a Science of Consciousness” conference in Tucson. I heard 32 papers on a wide variety of topics and I’m trying to tell myself what I’ve learned about consciousness. Today, I’ll focus on the most fundamental of the difficulties in this area, known to aficionados as “the Hard Problem of consciousness”. The following four statements explain the problem in a way that leaves open many different kinds of response to it.

1. Data concerning our sensations and feelings correlate with data concerning the firings of neurons in our brains.

There is overwhelming evidence for this statement. Strokes in the back of the head, or severe injuries there, cause loss of vision. The extent of the area in which one can no longer see is well correlated with the extent of the damage. Pin pricks normally hurt; but if injury or drugs prevent the neural signals they cause from reaching the brain, you won’t feel a thing. Direct electrical stimulation of some brain locations produces experiences. And so on.

2. What neurons do is to fire or not fire.

Neurons can fire faster or slower. There can be bursts of rapid firing, separated by intervals of relative quiescence, and there can be patterns of such bursts. With 100 billion or so neurons, that allows an enormous range of possible combinations of neural activities. We know of no other aspects of neural activity that are relevant to what we are experiencing.
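The combinatorial point can be made concrete with a quick back-of-the-envelope calculation. (The pool size of 300 neurons is purely an illustrative assumption of mine, chosen to keep the number printable.)

```python
# Back-of-the-envelope: even a small pool of neurons, each either
# firing or silent in a given time window, yields an astronomical
# number of possible joint activity patterns.
n_neurons = 300          # illustrative pool size, far below the brain's ~100 billion
n_patterns = 2 ** n_neurons

# 2^300 is roughly 2 x 10^90 -- more distinct firing patterns than
# there are atoms in the observable universe (~10^80).
print(len(str(n_patterns)))  # 91 digits
```

And this counts only on/off states in a single window; allowing rates, bursts, and temporal patterns multiplies the possibilities further.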

3. We experience a large number of qualities – the colors, the tastes, the smells, the sounds, feelings like warmth, itchiness, nausea, and pain; and feelings such as jealousy, anger, joy, and remorse.

4. The qualities noted in 3. seem quite different from the neural firing patterns in 2., and from any complex physical property.

The Hard Problem is this: What is the relation between our experiences and their qualities, and our neural firing patterns? How can we explain why 1. is true?

There are two fundamental responses to the Hard Problem, and many ways of developing each of them. They are:

Physicalism. Everything is physical. Experiences are the same things as neural events of some particular kind. (It is not claimed that we know what particular kind of neural event is the same as any particular kind of experience.) The explanation of 1. is that the data concerning experiences and neural events are correlated, since experiences and neural events are just the same thing. It’s like the explanation of why accurate data about Samuel Clemens’ whereabouts would be perfectly correlated with accurate data about Mark Twain’s whereabouts.

Dualism. Experiences are not identical to neural events. 1. is true either because neural events cause experiences, or because some events have both neural properties and other properties that cannot be discovered by the methods of science.

Today, the dominant view (about two to one, by my unscientific estimate) is physicalism. The reason is suggested by my descriptions. Dualists evidently have to say how neural events can cause experiences, or explain the relation between the properties known to science and the properties not known to science. Physicalists have no such task: if there is just one thing under two names, there is no “relation” or “connection” to be explained.

But physicalism has another task, namely, to explain how 4. can be true. According to physicalism, blueness = some complex physical property X, the feeling of nausea = some complex physical property Y, and so on. How could these pairs of items even seem to be different, if they were really just the same?

Of course, it will not do to say that blue is the way a physical property X appears to us, the feeling of nausea is the way a physical property Y appears to us, and so on. That would just introduce ways things appear, and then we would have to ask how physical properties are related to their ways of appearing. They do not appear as what (according to physicalism) they really are; i.e., they do not appear as complex physical properties. So how, according to physicalism, could it happen that X can appear as blue, but not as complex physical property X, if blue = X?

The new developments reported at the conference that were of most interest to me were attempts by physicalists to deal with this last question. Here are brief summaries of two key ideas.

A. To be feeling something is nothing other than to record and adjust action in the following ways. (i) Recognize dependence of changes of input to one’s sensory devices upon movement of one’s own body. (ii) Recognize changes of input to one’s sensory devices from sources that do not depend on one’s own body (and distinguish these from the changes in (i)). (iii) Process some parts of input more intensively than others. (When we do this, it is called attending to some aspects of our situation more than others.)

We understand how these features could be instantiated in a robot; so we understand how we could make a robot – a purely physical thing – that feels.

B. What is in the world is all physical. Experienced qualities like blue and the feeling of nausea are not in the world – they are its “form”, not its “content”. So, there is no question of “relating” experienced qualities to what is in the world – in fact, it is misleading to speak of “experienced qualities” at all, since that phrase suggests (falsely, on this view) they are something that is in the world.

It’s time for disclosure: I am a dualist. Not surprisingly, I didn’t find either of these efforts to offer a good solution to what I see as the key problem for physicalism. I’ve done my best to represent A. and B. fairly, but you should, of course, remember that what you’re getting here is what a dualist has been able to make of physicalists’ efforts.


Do Conscious Thoughts Cause Behavior?

December 12, 2011

In the late 19th Century, Thomas Huxley advanced a view he called “automatism”. This view says that conscious thoughts themselves don’t actually do anything. They are, in Huxley’s famous analogy, like the blowings of a steam whistle on an old locomotive. The steam comes from the same boiler that drives the locomotive’s pistons, and blowings of the whistle are well correlated with the locomotive’s starting to move, but the whistling contributes nothing to the motion. Just so with conscious thoughts: the brain processes that produce our behavior also produce conscious thoughts, but the thoughts themselves don’t produce anything.

Automatism (later known as epiphenomenalism) is currently out of favor among philosophers, many of whom dismiss it without bothering to argue against it. But it has enough legs to be the target of an article by Roy F. Baumeister and colleagues in this year’s Annual Review of Psychology. These authors review a large number of studies that they regard as presenting evidence “supporting a causal role for consciousness” (p. 333). A little more specifically, they are concerned with the causal role of “conscious thought”, which “includes reflection, reasoning, and temporally extended sense of self” (p. 333). The majority of the evidence they present is claimed to be evidence against the “steam whistle” hypothesis that “treats conscious thoughts as wholly effects and not causes” (p. 334).

To understand their argument, we need to know a little more about the contrast between unconscious thought and conscious thought. To this end, suppose that a process occurs in your brain that represents some fact, and enables you to behave in ways that are appropriate to that fact. Suppose that you cannot report – either to others or to yourself in your inner speech – what fact that process represented. That process would be a thought that was unconscious. But if a process occurs in you, and you can say – inwardly or overtly – what fact it is representing, then you have had a conscious thought.

What if I tell you something, or instruct you to do some action or to think about a particular topic? Does that involve conscious thought? Baumeister et al. assume, with plausible reason, that if you were able to understand a whole sentence, then you were conscious, and at least part of your understanding the sentence involved conscious thought. (For example, you could report what you were told, or repeat the gist of the instruction.) They also clearly recognize that understanding what others say to you may, in addition, trigger unconscious processes – processes that you would not be able to report on.

If you want to do a psychological experiment, you have to set up at least two sets of circumstances, so that you can compare the effect of one set with the effect of another. If your interest is in effects of conscious thoughts, you need to have one group of participants who have a certain conscious thought, and another group who are less likely to have had that conscious thought. The way that differences of this kind are created is to vary the instructions given to different groups of participants.

For example, in one of the reviewed studies, participants randomly assigned to one group were given information about costs and features of a cable service, and also instructed to imagine being a cable subscriber. Participants in another group received the same information about costs and features, but no further instruction. A later follow-up revealed that a significantly higher proportion of those in the group that received the special instruction had actually become cable subscribers.

In another study, the difference was that one group was asked to form specific “implementation intentions”. These are definite plans to do a certain action on a certain kind of occasion – for example to exercise on a particular day and time, as contrasted with a more general intention to take up exercise, but without thinking of a particular plan for when to do it. The other group received the same information about benefits of the action, but no encouragement to form specific implementation intentions. Significantly more of those who were encouraged to form implementation intentions actually engaged in the activity.

The logic behind these studies is that one group was more likely to have a certain kind of conscious thought than the other (due to the experimenters’ instructions), and it was that group that exhibited behavior that was different from the group that was less likely to have had that conscious thought. The correlation between the difference in conscious thoughts and the difference in subsequent behavior is then taken as evidence for a causal connection between the (earlier) thoughts and the (later) behavior.

There is, however, a problem with this logic. It arises from the fact (which, as noted earlier, the authors of the review article acknowledge) that conscious processing of instructions triggers unconscious processes. We can easily see that this is so, because processing what is said to us requires that we parse the grammar of sentences that we understand. But we cannot report on how we do this; our parsing is an unconscious process. What we know about it comes from decades of careful work by linguists, not from introspection.

Since conscious reception of instructions triggers unconscious processes, it is always possible that behavioral effects of the different instructions are brought about by unconscious processes that are set in motion by hearing those instructions. The hearing (or reading) of instructions is clearly conscious, but what happens after that may or may not be conscious. So, the causal dependence of behavior on instructions does not demonstrate causal dependence of behavior on conscious processes that occur after receiving the instructions, as opposed to unconscious processes that are triggered by (conscious) hearing or reading of instructions.

This point is difficult to appreciate. The reason is that there is something else that sounds very similar, and that we really are entitled to claim on the basis of the evidence presented in the review article. This claim is the following (where “Jones” can be anybody):

(1) If Jones had not had the conscious thought CT, Jones would not have been as likely to engage in behavior B.

This is different from

(2) Jones’s conscious thought CT caused it to be more likely that Jones engaged in behavior B.

What’s the difference? The first allows something that the second rules out. Namely, the first, but not the second, allows that some unconscious process, UP, caused both whatever conscious thoughts occur after receiving instructions, and the subsequent behavior. The experimenter’s giving of the instructions may set off a cascade of unconscious processes, and it may be these that are responsible both for some further conscious (reportable) thoughts and for subsequent actions related to the instructions. If the instructions had not been given, those particular unconscious thoughts would likely not have occurred, and thus the action might not have been produced.

Analogously, if the flash of an exploding firecracker had not occurred (for example, because the fuse was not lit) it would have been very unlikely that there would have been a bang. But that does not show that, in a case where the fuse was lit, the flash causes the bang. Instead, both are caused by the exploding powder.
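The flash-and-bang analogy can be sketched as a toy simulation of a common-cause structure. (This is my own illustrative model, not anything from the review article; all names are hypothetical.)

```python
import random

def trial(light_fuse, suppress_flash=False):
    """Toy common-cause model: lighting the fuse explodes the powder,
    and the exploding powder causes BOTH the flash and the bang.
    The flash itself has no causal route to the bang."""
    powder_explodes = light_fuse
    flash = powder_explodes and not suppress_flash
    bang = powder_explodes  # caused by the powder, not by the flash
    return flash, bang

# Observation: across many trials, flash and bang are perfectly
# correlated, which fits claim (1) -- no flash, no (likely) bang.
observed = [trial(light_fuse=random.random() < 0.5) for _ in range(1000)]
assert all(flash == bang for flash, bang in observed)

# Intervention: removing the flash while the powder still explodes
# leaves the bang intact, so claim (2) -- flash causes bang -- fails.
flash, bang = trial(light_fuse=True, suppress_flash=True)
print(flash, bang)  # False True
```

The correlational data alone cannot distinguish this common-cause structure from one in which the flash really does cause the bang; only the intervention does.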

The procedure of manipulating instructions and then finding correlated differences in behavior thus establishes (1), but not (2). So, this procedure cannot rule out the steam whistle hypothesis regarding conscious thought.

Interestingly, there are some cases for which the authors of the review identify good reasons to think that the steam whistle view is actually the way things work.

For example, one study compared people who imagined a virtuous choice with those who had not done so. In a subsequent hypothetical choice, people in the first group were more self-indulgent than those in the comparison group. This difference was removed if the same activity was imagined as a court-ordered punishment rather than a choice to volunteer.

However, it seems very unlikely that anyone consciously reasoned “I imagined myself making a virtuous choice, therefore I’m entitled to a bit of self-indulgence”. In this, and several similar reported cases, it seems far more likely that the connection between imagining a virtuous choice, feeling good about oneself, and feeling entitled to self-indulgence runs on processes that do not cause conscious thoughts with relevant content.

The article under discussion is full of interesting effects, and these are presented in a way that is highly accessible. But it does not succeed in overturning an alternative to its authors’ preferred view. According to this alternative view, the causing of behavior (after consciously perceiving one’s situation, or consciously receiving instructions) is done by unconscious processes. This alternative view allows that sometimes, but not always, these unconscious processes also cause some conscious thoughts that we express either in overt verbal behavior, or in sentences about what we are doing that we affirm to ourselves in our inner speech.

[The article under discussion is Roy F. Baumeister, E. J. Masicampo, and Kathleen Vohs, “Do Conscious Thoughts Cause Behavior?”, Annual Review of Psychology 62:331-361 (2011). The difference between (1) and (2) is further explained and discussed in Chapter 4 of Your Brain and You. ]


Do You Look Like a Self-Controlled Planner?

October 31, 2011

In an article soon to appear in the Journal of Personality and Social Psychology, Kurt Gray and colleagues question whether we “objectify” other people, if that means to regard them as objects with no mental capacities. They suggest that there are two kinds of mental capacities, and that what’s often thought of as “objectification” may actually be a redistribution of judgments about these kinds. They did a series of experiments to test this possibility.

The two kinds of mental capacities are Agency and Experience. “Agency”, in these experiments, comprises the capacities for self-control, planning, and acting morally. “Experience” covers abilities to experience such things as pleasure, desire, hunger, and fear.

The hypothesis, stated a little more fully, is that people who attended to a target’s bodily aspects would tend to rate that target higher on Experience and lower on Agency, with the reverse effects when attention is focused less on bodily aspects and more on cognitive abilities.

They tested this hypothesis in several ways, of which I’m going to describe only the first. The general result of this set of experiments was converging support for the hypothesis.

The first experiment was admirably simple. 159 participants, recruited from campus dining halls, were given a sheet of paper that had one picture, a brief description, and a series of six questions. The single picture was one of the following four:

Erin, presented in a head shot that had been cropped from the following picture.
Erin, presented in a fairly cleavage-revealing outfit from just below the breasts up.
Aaron, presented in a head shot cropped from the following picture.
Aaron, presented shirtless from just below the pectorals up.

Both of these targets are attractive young people and look very healthy. The two head shots will be referred to as Face pictures, and the two others as Body pictures. (The head shots were enlarged, so each of the pictures was about the same size.)

The description given was the same for both, except for the names and corresponding appropriate pronouns. It provided only the information that the person in the picture is an English major at a liberal arts college, belongs to a few student groups, and likes to hang out with friends on weekends.

The questions were all of the form “Compared to the average person, how much is [target’s name] capable of X?”. Fillers for X were self-control, planning, and acting morally (combined into an Agency measure); and experiencing pleasure, experiencing hunger, and experiencing desire. (Since ability to experience hunger did not correlate highly with the other two, only experiencing pleasure and experiencing desire were used to compose the Experience measure.) Answers took the form of a rating on a five-point scale, ranging from “Much less capable” to “Much more capable”, with “Equally as capable” for the midpoint.

The key results of this experiment are that participants who were given Body pictures rated the targets higher on Experience and lower on Agency than participants who were given Face pictures. The differences are not large (0.27 out of five for Experience, 0.33 out of five for Agency), but they are statistically significant.

The authors take these results to support the view that “focusing on the body does not involve complete dementalization, but instead redistribution of mind, with decreased agency but increased experience” (pp. 8-9).

As noted, the remaining experiments in this study point in the same direction. In a way, that seems to be good news – ‘different aspect of mind’ seems better than ‘no mind, mere object’. The authors make it explicitly clear, however, that being regarded as less of an agent would, in general, not be in a person’s interest. Some other intriguing aspects of this experiment are that the gender of the participants doing the ratings was not found to matter, and Erin came out a little ahead of Aaron on the Agency measure.

However, the aspect of this experiment that intrigues me the most is one that lies outside of the authors’ focus, and on which they do not comment. To explain this aspect, note first that the description provides very little information – it could be fairly summarized by saying the person in the picture is a typical college student. A person could be forgiven for reacting to the rating request with “How on Earth should I know whether this person is above or below average on self-control (or planning ability, or moral action, experiencing pleasure, or experiencing desire)!?”

Since the participants were college students, and thus similar to the depicted targets as described, perhaps we should expect them to rate the targets as somewhat above average in mental abilities. However, one rating was below average: the rating for Agency in response to Body pictures was 2.90 (where capability equal to that of the average person would be 3). The difference between this rating for Body pictures and the higher rating for Face pictures indeed supports the authors’ hypothesis, but it leaves me wondering what could have been in the consciousness of those doing the ratings.

An even greater puzzle comes from the fact that the highest rating was for Experience in response to Body pictures – it was 3.65. (Remember, the highest number on the scale was 5, so 3.65 is about a third of the distance between “Equally as Capable” and “Much More Capable”.) So, I wonder: Do college students really think they and their peers are better at experiencing pleasure and desire than the average person? That seems a very strange opinion.

[Kurt Gray, Joshua Knobe, Mark Sheskin, Paul Bloom, and Lisa Feldman Barrett, “More than a Body: Mind Perception and the Nature of Objectification”, Journal of Personality and Social Psychology, in press.]


An Unusual Aphrodisiac

October 10, 2011

Imagine you’re a prehistoric heterosexual man who’s going into battle tomorrow. The thought that there’s a fair chance of your dying might so completely occupy your mind that you’d be uninterested in anything save, perhaps, sharpening your spear.

On the other hand, your attitude might be that if you’re going to be checking out tomorrow, you’d like to have one last time with a woman tonight.

We are more likely to be descendants of the second type of man than the first. So, we might expect that there would be a tendency among men for thoughts of their own death to raise their susceptibility to sexual arousal.

In contrast, women who were more erotically motivated when they believed their own death might be just around the corner would not generally have produced more offspring than their less susceptible sisters. So, there is no reason to expect that making thoughts of death salient should affect sexual preparedness in women.

These ideas have recently been tested in two studies by Omri Gillath and colleagues. Of course, they didn’t send anybody into battle. Instead, they used two methods – one conscious, one not – to make the idea of death salient.

In the first study, one group of participants wrote responses to questions about the emotions they had while thinking about their own death and events related to it. Another group responded to similarly phrased questions about dental pain. The point of this contrast was to distinguish whether an arousal (if found) was specific to death, or whether it was due more generally to dwelling on unpleasant topics.

After responding to the questions, participants were shown either five sexual pictures (naked women for men, naked men for women) or five non-sexual pictures (sports cars for men, luxury houses for women). Previous studies had found that all the pictures were about equal for their respective groups on overall perceived attractiveness. Participants had all self-identified as heterosexual. They had five minutes to carefully examine their set of five pictures.

Participants were each connected to a device that measured their heart rate. The key result was that the men who answered the questions about death and viewed erotic pictures had a significantly higher average heart rate during the picture viewing than any other group. That means that, on average, they had a higher rate than other men who saw the same pictures, but had answered questions about dental pain. They also had a higher rate than other men who had answered questions about death, but then saw non-sexual pictures. And they had a higher rate than women who answered either question and viewed either pictures of naked men or non-sexual pictures.

In the second study, the death/pain saliency difference was induced by flashing the word “dead” (for half the participants) or the word “pain” (for the other half) before each item in a series of pictures. The presentation of the words was very brief (22 thousandths of a second) and came between masks (strings of four Xs). With the masks, that’s too short to recognize the word. The pictures either contained a person or did not. Half of the pictures that contained a person were sexual, half were not. Pictures remained visible until the participant responded.

The response was to move a lever if, but only if, the picture contained a person. The movement was either pulling the lever toward oneself, or pushing it away. There were 40 consecutive opportunities for pulling, and 40 for pushing; half of participants started with pulling, half started with pushing.

The logic of this experiment depends on a connection previously established by Chen and Bargh (1999) between rapidity of certain responses and the value of what is being responded to. Pulling brings things closer to you, and if what’s before your mind is something you like, then that will speed the pulling (relative to pulling in response to something you’d ordinarily try to avoid, or something toward which you are neutral).

The reasoning, then, is that those who had a higher degree of sexual preparedness should pull faster in response to erotic materials than those who were not so highly prepared. Gillath and colleagues hypothesized that participants who received the brief exposure to “dead” and then saw an erotic picture should be faster pullers than those who received a brief exposure to “pain” before an erotic picture.

And that is what they found – for men. There was no such result for women. Nor did the brief exposure to “dead” result in faster pulling after being presented with non-sexual pictures; the faster reaction times depended on both the exposure to “dead” and the sexual nature of the following picture.

These two studies are certainly interesting in relation to the evolutionary thinking that led them to be undertaken. But I also find them fascinating in relation to a more general point. The second study provides evidence that our brains can (a) make a distinction (between pain and death) and (b) relate it to another difference (sexual vs. non-sexual material) completely unconsciously and extremely rapidly. And the first study, although done at a much slower time scale and with consciousness of the materials used to manipulate mood (i.e., the writing about death vs. pain), showed an effect on heart rate, which is not something that was under participants’ control. The brain processes of which we are unaware (except when revealed in studies like these) are amazing indeed.

[O. Gillath, M. J. Landau, E. Selcuk and J. L. Goldenberg (2011) “Effects of low survivability cues and participant sex on physiological and behavioral responses to sexual stimuli”, Journal of Experimental Social Psychology 47:1219-1224. The previous study mentioned in the discussion of Study 2 is M. Chen and J. A. Bargh (1999) “Consequences of automatic evaluation: Immediate behavioral dispositions to approach or avoid the stimulus”, Personality and Social Psychology Bulletin 25:215-224. ]


Mind the Gut

September 19, 2011

Jonah Lehrer’s Wall Street Journal column for September 17-18, 2011 reports a fascinating pair of facts – and then makes a puzzling application of them.

The first fact concerns probiotic bacteria, which are often found in yogurt and other dairy products. Researchers provided mice with either a normal diet, or a diet rich in probiotic bacteria, and then subjected them to stressful situations. The mice with the probiotic-enriched diet showed less anxiety and had lower levels of stress hormones.

By itself, this result is not so interesting. After all, it could be that the probiotic bacteria affect digestion, then blood chemistry, and finally hormone levels. But the second fact shows that a different mechanism is at work.

The second fact is that when neural connections between gut and brain were severed, the probiotic-enriched diet no longer produced the effect of reducing symptoms of stress. This fact suggests that the effect of the difference in diet works directly through the gut-brain neural connection, rather than through a less direct blood chemistry path.

It’s as if we have a sense organ in our gut that feeds into an evaluative system. It doesn’t give us any sensations, but it tells our brains how things are in our digestive systems. If things are going well down there, we’re less prone to anxiety when stressful situations arise.

That’s a surprise that contributes to a sense of wonder at how deliciously complex unconscious processes can be. Lest one think that this has nothing to do with us, Lehrer also reports a study that showed an analogous result in human subjects who received large doses of probiotics for a month. (No cutting of nerves in that case, of course.)

Now for the puzzling conclusion. These and other studies are taken by Lehrer to show that “the immateriality of mind is a deep illusion. Although we feel like a disembodied soul, many feelings and choices are actually shaped by the microbes in our gut . . . . ” And, although he concedes that “This doesn’t mean, of course, that the mind-body problem has been solved”, he goes on to declare that “it’s now abundantly clear that the mind is not separate from the body . . . . Rather, we emerge from the very same stuff that digests our lunch.”

But “shaped” is one of the many words that mean “caused”, with the addition of something about the manner of causing (as in “burned” or “built”), or degree of causal contribution (as in “influenced” or “forced”). What the cited research shows is that causes of anxious behavior and hormone levels include the presence of probiotic bacteria in the gut, and that the means of that causal contribution works through a neural connection. That is surprising and fascinating, but it offers no evidence whatsoever that feelings of anxiety are the same things as any material events.

In general, causes and effects are different. From “How anxious you feel depends in part on what kind of bacteria you have in your gut” it does not follow that feelings are material – only that feelings, whatever they are, can be caused in a very surprising way.

Similar remarks apply to “emerge”. Different people use this word in different ways, so it’s not a very helpful term. But one of its meanings is causal: to say that X emerges from Y is, roughly, to say that Y causes X. Yes, it is indeed fascinating that what’s in our gut can cause how we feel, and do so through a direct, neural pathway. But no, that does not show that feelings are material events. It does not show that the immateriality of feelings is a deep illusion.

For some purposes, the point I’m making may not matter. It’s an important fact that what goes on in our consciousness is brought about by events in our neural systems, and the studies Lehrer cites in this article do help drive that point home. But when the mind-body problem is introduced into the discussion, it becomes important to distinguish between the views (1) that neural events cause mental events such as feelings and (2) that feelings are the same things as neural events. The evidence Lehrer cites in his article supports (1), but is silent as regards (2).

[Jonah Lehrer, “The Yogurt Made Me Do It”, The Wall Street Journal, September 17-18, 2011, p. C12.]