Zombie Neuroscience

October 14, 2014

In an opinion piece in the New York Times Sunday Review (October 12, 2014, p. 12), Michael Graziano asks “Are We Really Conscious?” His answer is that we are probably not conscious. If his theory is right, our belief in awareness is merely a “distorted account” of attention, which is a reality that consists of “the enhancing of some [neural] signals at the expense of others”.

This distorted account develops in all of us, and seems to us to be almost impossible to deny. But beliefs that the Earth is the center of the universe, that our cognitive capacities required a special creation, and that white light is light that is purified of all colors, have seemed quite natural and compelling, yet have turned out to be wrong. We should be skeptical of our intuitive belief that we are conscious.

In short, Graziano is saying that your impression that you are conscious is likely a false belief. “When we introspect and seem to find that ghostly thing – awareness, consciousness, the way green looks or pain feels – our cognitive machinery is accessing internal models and those models are providing information that is wrong.”

One might well wonder what is supposed to be “ghostly” about the experience of green you think you have when you look at, say, an unripe banana, or a pain that you might believe occurs when you miss a nail and hit your thumb with a hammer. But, of course, if you are already convinced that there are no such things, then you must think that their apparent presence is merely the holding of false beliefs. If you then try to say what these false beliefs are beliefs about, you will be hard pressed to produce anything but ghosts. There are, of course, neural events that are necessary for causing these allegedly false beliefs about the way bananas look or pains – but these beliefs are not beliefs about those neural events, nor are they beliefs about any neural events at all. (People believed that unripe bananas looked green and that they had pains long before anyone had any belief whatsoever about neural events.)

Graziano’s positive story about awareness is that it is a caricature: “a cartoonish reconstruction of attention” (where, recall, attention is enhancement of some signals at the expense of others). This description raises a puzzle as to what the difference is between a cartoonish reconstruction of a signal enhancement that is caused by light reflected from an unripe banana, and a cartoonish reconstruction of a signal enhancement caused by a blow to your thumb. But perhaps this puzzle can be resolved in this way: The banana causes enhancement of one set of signals, the blow to the thumb causes enhancement of a different set of signals, and which false belief you acquire depends on which set of signals is enhanced.

A problem remains, however. Your beliefs that you are experiencing green or that you are in pain are certainly not beliefs about your signal enhancements. They are not beliefs about wavelengths or neural activations caused by blows to your thumb (though, of course, wavelengths and neural activations are among the causes of your having beliefs about what colors or pains you are experiencing). There is nothing else relevant here that Graziano recognizes as real. Your false beliefs are beliefs about nothing that is real.

We can, of course, have beliefs about things that are not real – for example, unicorns, conspiracies that never took place, profits that will in fact never materialize. In all such cases, however, we can build the non-existent targets of the beliefs by imaginatively combining things that do exist. For example, we have seen horses and animals with horns, so we can build a horse with a horn in our thoughts by imaginative combination.

But green and pain are not like unicorns. They have no parts that are not themselves colors or feelings. There are no Xs and Ys that are not themselves colors or feelings, such that we can build green or pain in our imagination by putting together Xs and Ys. So, if we were to accept Graziano’s dismissal of color experiences and pains as unreal, we would have to allow that we can have beliefs about things that neither exist, nor can be imaginatively constructed. We have, however, no account of how there could be such a belief. The words “way green things look” and “pain” could not so much as mean anything, if we suppose that there are no actual examples to which these words apply, and no way of giving them meaning by imaginative construction.

Graziano invokes impressive authorities – Copernicus, Darwin, and Newton – in support of skepticism about intuitions that once seemed incontestable. (See list in the second paragraph above.) He presents his theory as coming from his “lab at Princeton”.

The view he proposes, however, is not a result supported by scientific investigation. It is supported by the other authorities to which he appeals – Patricia Churchland and Daniel Dennett. These writers are philosophers who offer to solve the notoriously difficult mind-body (or, consciousness-brain) problem by the simple expedient of cutting off consciousness. Voilà. No more problem.

But it is not good philosophy to affirm a view that commits one to there being beliefs of a kind for which one can give no account.

It is important to understand that resisting the dismissal of consciousness is fully compatible with affirming that there are indeed neural causes for our behavior. Hammer blows to the thumb cause neural signals, which cause reflexive withdrawals. Somewhat later, the interaction of these signals with neurons in our brains causes behaviors such as swearing and taking painkillers. But hammer blows to the thumb also cause pains. It is indeed difficult to understand how or why pains should result from neural events that such blows cause. But it is not a solution to this problem to dismiss the pains as unrealities. Nor is it true that science teaches us that we ought to deny consciousness.

The Cambridge Declaration on Consciousness

August 24, 2012

On July 7, 2012 a “prominent international group” of brain scientists issued The Cambridge Declaration on Consciousness. The full document has four paragraphs of justification, leading to the declaration itself, which follows.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess the neurological substrates.”

Back in the ’90s I published a paper under the title “Some Nonhuman Animals Can Have Pains in a Morally Relevant Sense”. (In case you’re wondering, that view had been denied by Peter Carruthers in a paper in a top-tier journal.) So, not surprisingly, I am quite sympathetic to the sense of this declaration.

I also approve of the declaration’s prioritizing of neurological similarities over behavior. The philosophy textbook presentation of the supposedly best reason for thinking that other people have minds goes like this:

1. When I behave in certain ways, I have accompanying thoughts and feelings.

2. Other people behave in ways similar to me. Therefore, very probably,

3. Other people have accompanying thoughts and feelings that are similar to mine.

This argument is often criticized as very weak. Part of my paper’s argument was that we have a much better reason for thinking our fellows have minds, namely:

1. Stimulation of my sense organs (e.g., being stuck with a pin) causes me to have sensations (e.g., a pain).

2. Other people are constructed very much like me. Therefore, very probably,

3. Stimulation of other people in similar ways causes them to have sensations similar to mine.

If one approaches the matter in this second way, it is natural to extend the argument to nonhuman animals to the extent that they are found to be constructed like us. This is the main line of approach in the Cambridge Declaration (although some of the lead-up paragraphs also sound like the first argument).

In sum, I am inclined to accept the sense of the Cambridge Declaration, and to agree that the evidence and reasoning presented make its stand a reasonable one.

But still, there is something peculiar about this Declaration, even aside from its being unusual for academic conferences to issue position statements. The question is, Why? Just what is odd about it?

One of the Declaration’s authors, Christof Koch, recently gave an interview on the radio. (The link is to the written transcript.) In it, he characterizes fMRI scans as a “very blunt” instrument. The point is that the smallest region that can be resolved by an fMRI scan contains about half a million neurons, some of which may be firing quite actively while others are hardly firing at all. So, our scanning techniques do not tell us what neural firing patterns occur, but only where there are some highly active neurons.
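The “half a million neurons” figure is easy to reproduce with a back-of-envelope calculation. The voxel size and neuron density below are assumed round numbers for illustration, not figures from the interview:

```python
# Rough, illustrative figures (assumed, not from Koch's interview):
# a typical fMRI voxel is about 2 x 2 x 2 mm, and cortical gray matter
# holds very roughly 65,000 neurons per cubic millimeter.
voxel_volume_mm3 = 2 * 2 * 2      # assumed voxel dimensions, in mm^3
neurons_per_mm3 = 65_000          # assumed cortical neuron density
neurons_per_voxel = voxel_volume_mm3 * neurons_per_mm3
print(neurons_per_voxel)          # on the order of half a million
```

Whatever the exact density one assumes, the conclusion is the same: a single voxel lumps together hundreds of thousands of neurons whose individual firing patterns the scan cannot distinguish.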

Ignorance of neural patterns is relevant here. Another point that Koch makes in the interview is that a great many neurons – about three quarters of all those you have – are in the cerebellum. Damage to this part of the brain disrupts smooth and finely tuned movements, such as are required for dancing, rock climbing and speech, but it has little or no effect on consciousness.

So, it is not just many neurons’ being active, or there being a complex system of neural activations of some kind or other that brings about consciousness. It is some particular kind of complexity, some particular kind of pattern in neural activations.

I am optimistic. I think that some day we will figure out just what kind of patterned activity in neurons causes consciousness. But it is clear that we do not now know what kind of neural activity is required.

The peculiarity of the Cambridge Declaration, then, is that it seems to be getting ahead of our actual evidence, yet it was signed by members of a group who must be in the best position to be acutely aware of that fact. Of course, ‘not appear[ing] to preclude’ consciousness in nonhuman animals is a very weak and guarded formulation. The remainder of the declaration, however, is more positively committal.

The best kind of argument for consciousness in nonhuman animals would go like this:

1. Neural activity patterns of type X cause consciousness in us.

2. Certain nonhuman animals have neural activity patterns of type X. Therefore, very likely,

3. Those nonhuman animals have consciousness.

Since we do not now know how to fill in the “X”, we cannot now give this best kind of argument. The signers of the Declaration must know this.

[The radio interviewer is Steve Paulson, and the date is August 22, 2012. The paper of mine referred to above is in Biology and Philosophy (1997), v.12:51-71. Peter Carruthers’ paper is “Brute Experience” in The Journal of Philosophy, (1989) v.86:258-269.]

What’s New in Consciousness?

April 18, 2012

I am just back from the four-day “Toward a Science of Consciousness” conference in Tucson. I heard 32 papers on a wide variety of topics and I’m trying to tell myself what I’ve learned about consciousness. Today, I’ll focus on the most fundamental of the difficulties in this area, known to aficionados as “the Hard Problem of consciousness”. The following four statements explain the problem in a way that leaves open many different kinds of response to it.

1. Data concerning our sensations and feelings correlate with data concerning the firings of neurons in our brains.

There is overwhelming evidence for this statement. Strokes in the back of the head, or severe injuries there, cause loss of vision. The extent of the area in which one can no longer see is well correlated with the extent of the damage. Pin pricks normally hurt; but if injury or drugs prevent the neural signals they cause from reaching the brain, you won’t feel a thing. Direct electrical stimulation of some brain locations produces experiences. And so on.

2. What neurons do is to fire or not fire.

Neurons can fire faster or slower. There can be bursts of rapid firing, separated by intervals of relative quiescence, and there can be patterns of such bursts. With 100 billion or so neurons, that allows an enormous range of possible combinations of neural activities. We know of no other aspects of neural activity that are relevant to what we are experiencing.
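The “enormous range of possible combinations” can be made vivid with a little arithmetic. Under the crude simplification that each neuron is simply “on” or “off” at a moment (ignoring rates, bursts, and timing entirely), the number of joint states doubles with every neuron added:

```python
# Crude simplification for illustration: treat each neuron as binary
# ("firing" / "not firing"), ignoring rates, bursts, and timing.
def joint_states(n_neurons: int) -> int:
    """Number of distinct on/off patterns across n_neurons."""
    return 2 ** n_neurons

print(joint_states(10))              # 1024 patterns for just 10 neurons
print(len(str(joint_states(300))))   # a 300-neuron patch: a 91-digit number
```

With 100 billion neurons the exponent itself is 10^11, so the count of patterns dwarfs any physical quantity – and real neurons add rate and timing dimensions on top of this binary simplification.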

3. We experience a large number of qualities – the colors, the tastes, the smells, the sounds, feelings like warmth, itchiness, nausea, and pain; and feelings such as jealousy, anger, joy, and remorse.

4. The qualities noted in 3. seem quite different from the neural firing patterns in 2, and from any complex physical property.

The Hard Problem is this: What is the relation between our experiences and their qualities, and our neural firing patterns? How can we explain why 1. is true?

There are two fundamental responses to the Hard Problem, and many ways of developing each of them. They are:

Physicalism. Everything is physical. Experiences are the same things as neural events of some particular kind. (It is not claimed that we know what particular kind of neural event is the same as any particular kind of experience.) The explanation of 1. is that the data concerning experiences and neural events are correlated, since experiences and neural events are just the same thing. It’s like the explanation of why accurate data about Samuel Clemens’ whereabouts would be perfectly correlated with accurate data about Mark Twain’s whereabouts.

Dualism. Experiences are not identical to neural events. 1. is true either because neural events cause experiences, or because some events have both neural properties and other properties that cannot be discovered by the methods of science.

Today, the dominant view (about two to one, by my unscientific estimate) is physicalism. The reason is suggested by my descriptions. Dualists evidently have to say how neural events can cause experiences, or explain the relation between the properties known to science and the properties not known to science. Physicalists have no such task: if there is just one thing under two names, there is no “relation” or “connection” to be explained.

But physicalism has another task, namely, to explain how 4. can be true. According to physicalism, blueness = some complex physical property X, the feeling of nausea = some complex physical property Y, and so on. How could these pairs of items even seem to be different, if they were really just the same?

Of course, it will not do to say that blue is the way a physical property X appears to us, the feeling of nausea is the way a physical property Y appears to us, and so on. That would just introduce ways things appear. But then we would just have to ask how physical properties are related to their ways of appearing. They do not appear as what (according to physicalism) they really are; i.e., they do not appear as complex physical properties. So how, according to physicalism, could it happen that X can appear as blue, but not as complex physical property X, if blue = X?

The new developments reported at the conference that were of most interest to me were physicalists’ attempts to deal with this last question. Here are brief summaries of two key ideas.

A. To be feeling something is nothing other than to record and adjust action in the following ways. (i) Recognize dependence of changes of input to one’s sensory devices upon movement of one’s own body. (ii) Recognize changes of input to one’s sensory devices from sources that do not depend on one’s own body (and distinguish these from the changes in (i)). (iii) Process some parts of input more intensively than others. (When we do this, it is called attending to some aspects of our situation more than others.)

We understand how these features could be instantiated in a robot; so we understand how we could make a robot – a purely physical thing – that feels.

B. What is in the world is all physical. Experienced qualities like blue and the feeling of nausea are not in the world – they are its “form”, not its “content”. So, there is no question of “relating” experienced qualities to what is in the world – in fact, it is misleading to speak of “experienced qualities” at all, since that phrase suggests (falsely, on this view) they are something that is in the world.

It’s time for disclosure: I am a dualist. Not surprisingly, I didn’t find either of these efforts to offer a good solution to what I see as the key problem for physicalism. I’ve done my best to represent A. and B. fairly, but you should, of course, remember that what you’re getting here is what a dualist has been able to make of physicalists’ efforts.

Do Conscious Thoughts Cause Behavior?

December 12, 2011

In the late 19th Century, Thomas Huxley advanced a view he called “automatism”. This view says that conscious thoughts themselves don’t actually do anything. They are, in Huxley’s famous analogy, like the blowings of a steam whistle on an old locomotive. The steam comes from the same boiler that drives the locomotive’s pistons, and blowings of the whistle are well correlated with the locomotive’s starting to move, but the whistling contributes nothing to the motion. Just so with conscious thoughts: the brain processes that produce our behavior also produce conscious thoughts, but the thoughts themselves don’t produce anything.

Automatism (later known as epiphenomenalism) is currently out of favor among philosophers, many of whom dismiss it without bothering to argue against it. But it has enough legs to be the target of an article by Roy F. Baumeister and colleagues in this year’s Annual Review of Psychology. These authors review a large number of studies that they regard as presenting evidence “supporting a causal role for consciousness” (p. 333). A little more specifically, they are concerned with the causal role of “conscious thought”, which “includes reflection, reasoning, and temporally extended sense of self” (p. 333). The majority of the evidence they present is claimed to be evidence against the “steam whistle” hypothesis that “treats conscious thoughts as wholly effects and not causes” (p. 334).

To understand their argument, we need to know a little more about the contrast between unconscious thought and conscious thought. To this end, suppose that a process occurs in your brain that represents some fact, and enables you to behave in ways that are appropriate to that fact. Suppose that you cannot report – either to others or to yourself in your inner speech – what fact that process represented. That process would be a thought that was unconscious. But if a process occurs in you, and you can say – inwardly or overtly – what fact it is representing, then you have had a conscious thought.

What if I tell you something, or instruct you to do some action or to think about a particular topic? Does that involve conscious thought? Baumeister et al. assume, with plausible reason, that if you were able to understand a whole sentence, then you were conscious, and at least part of your understanding the sentence involved conscious thought. (For example, you could report what you were told, or repeat the gist of the instruction.) They also clearly recognize that understanding what others say to you may, in addition, trigger unconscious processes – processes that you would not be able to report on.

If you want to do a psychological experiment, you have to set up at least two sets of circumstances, so that you can compare the effect of one set with the effect of another. If your interest is in effects of conscious thoughts, you need to have one group of participants who have a certain conscious thought, and another group who are less likely to have had that conscious thought. The way that differences of this kind are created is to vary the instructions given to different groups of participants.

For example, in one of the reviewed studies, participants randomly assigned to one group were given information about costs and features of a cable service, and also instructed to imagine being a cable subscriber. Participants in another group received the same information about costs and features, but no further instruction. A later follow-up revealed that a significantly higher proportion of those in the group that received the special instruction had actually become cable subscribers.

In another study, the difference was that one group was asked to form specific “implementation intentions”. These are definite plans to do a certain action on a certain kind of occasion – for example to exercise on a particular day and time, as contrasted with a more general intention to take up exercise, but without thinking of a particular plan for when to do it. The other group received the same information about benefits of the action, but no encouragement to form specific implementation intentions. Significantly more of those who were encouraged to form implementation intentions actually engaged in the activity.

The logic behind these studies is that one group was more likely to have a certain kind of conscious thought than the other (due to the experimenters’ instructions), and it was that group that exhibited behavior that was different from the group that was less likely to have had that conscious thought. The correlation between the difference in conscious thoughts and the difference in subsequent behavior is then taken as evidence for a causal connection between the (earlier) thoughts and the (later) behavior.

There is, however, a problem with this logic. It arises from the fact (which, as noted earlier, the authors of the review article acknowledge) that conscious processing of instructions triggers unconscious processes. We can easily see that this is so, because processing what is said to us requires that we parse the grammar of sentences that we understand. But we cannot report on how we do this; our parsing is an unconscious process. What we know about it comes from decades of careful work by linguists, not from introspection.

Since conscious reception of instructions triggers unconscious processes, it is always possible that behavioral effects of the different instructions are brought about by unconscious processes that are set in motion by hearing those instructions. The hearing (or reading) of instructions is clearly conscious, but what happens after that may or may not be conscious. So, the causal dependence of behavior on instructions does not demonstrate causal dependence of behavior on conscious processes that occur after receiving the instructions, as opposed to unconscious processes that are triggered by (conscious) hearing or reading of instructions.

This point is difficult to appreciate. The reason is that there is something else that sounds very similar, and which we really are entitled to claim on the basis of the evidence presented in the review article. This claim is the following (where “Jones” can be anybody):

(1) If Jones had not had the conscious thought CT, Jones would not have been as likely to engage in behavior B.

This is different from

(2) Jones’s conscious thought CT caused it to be more likely that Jones engaged in behavior B.

What’s the difference? The first allows something that the second rules out. Namely, the first, but not the second, allows that some unconscious process, UP, caused both whatever conscious thoughts occur after receiving instructions and the subsequent behavior. The experimenter’s giving of the instructions may set off a cascade of unconscious processes, and it may be these that are responsible both for some further conscious (reportable) thoughts and for subsequent actions related to the instructions. If the instructions had not been given, those particular unconscious processes would likely not have occurred, and thus the action might not have been produced.

Analogously, if the flash of an exploding firecracker had not occurred (for example, because the fuse was not lit) it would have been very unlikely that there would have been a bang. But that does not show that, in a case where the fuse was lit, the flash causes the bang. Instead, both are caused by the exploding powder.

The procedure of manipulating instructions and then finding correlated differences in behavior thus establishes (1), but not (2). So, this procedure cannot rule out the steam whistle hypothesis regarding conscious thought.
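The gap between (1) and (2) can be made concrete with a toy simulation – my own illustrative construction, not a model from the review article. Here an unconscious process UP, triggered by the instructions, causes both the conscious thought CT and the behavior B; CT causes nothing, yet B remains correlated with CT:

```python
import random

def trial(rng: random.Random):
    # Toy common-cause structure: instructions trigger an unconscious
    # process UP; UP produces both the conscious thought CT and the
    # behavior B. CT itself has no causal influence on B.
    up = rng.random() < 0.9           # UP usually fires after instructions
    ct = up and rng.random() < 0.8    # UP sometimes yields a reportable thought
    b = up and rng.random() < 0.7     # B depends on UP alone, never on CT
    return ct, b

rng = random.Random(0)
trials = [trial(rng) for _ in range(20_000)]
with_ct = [b for ct, b in trials if ct]
without_ct = [b for ct, b in trials if not ct]
p_b_given_ct = sum(with_ct) / len(with_ct)
p_b_given_no_ct = sum(without_ct) / len(without_ct)
print(p_b_given_ct, p_b_given_no_ct)   # B is likelier when CT occurred
```

In this toy world (1) is true – had CT not occurred, UP had probably not fired, so B would have been less likely – while (2) is false by construction, since B never consults CT. Correlation of this kind is exactly what the instruction-manipulation studies measure.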

Interestingly, there are some cases for which the authors of the review identify good reasons to think that the steam whistle view is actually the way things work.

For example, one study compared people who imagined a virtuous choice with those who had not done so. In a subsequent hypothetical choice, people in the first group were more self-indulgent than those in the comparison group. This difference was removed if the same activity was imagined as a court-ordered punishment rather than a choice to volunteer.

However, it seems very unlikely that anyone consciously reasoned “I imagined myself making a virtuous choice, therefore I’m entitled to a bit of self-indulgence”. In this, and several similar reported cases, it seems far more likely that the connection between imagining a virtuous choice, feeling good about oneself, and feeling entitled to self-indulgence runs along on processes that do not cause conscious thoughts with relevant content.

The article under discussion is full of interesting effects, and these are presented in a way that is highly accessible. But it does not succeed in overturning an alternative to its authors’ preferred view. According to this alternative view, the causing of behavior (after consciously perceiving one’s situation, or consciously receiving instructions) is done by unconscious processes. This alternative view allows that sometimes, but not always, these unconscious processes also cause some conscious thoughts that we express either in overt verbal behavior, or in sentences about what we are doing that we affirm to ourselves in our inner speech.

[The article under discussion is Roy F. Baumeister, E. J. Masicampo, and Kathleen Vohs, “Do Conscious Thoughts Cause Behavior?”, Annual Review of Psychology 62:331-361 (2011). The difference between (1) and (2) is further explained and discussed in Chapter 4 of Your Brain and You. ]

Do You Look Like a Self-Controlled Planner?

October 31, 2011

In an article soon to appear in the Journal of Personality and Social Psychology, Kurt Gray and colleagues question whether we “objectify” other people, if that means to regard them as objects with no mental capacities. They suggest that there are two kinds of mental capacities, and that what’s often thought of as “objectification” may actually be a redistribution of judgments about these kinds. They did a series of experiments to test this possibility.

The two kinds of mental capacities are Agency and Experience. “Agency”, in these experiments, comprises the capacities for self-control, planning, and acting morally. “Experience” covers abilities to experience pleasure, desire, and hunger or fear.

The hypothesis, stated a little more fully, is that people who attended to a target’s bodily aspects would tend to rate those targets higher on Experience and lower on Agency, with reverse effects when attention is focused less on bodily aspects and more on cognitive abilities.

They tested this hypothesis in several ways, of which I’m going to describe only the first. The general result of this set of experiments was converging support for the hypothesis.

The first experiment was admirably simple. 159 participants, recruited from campus dining halls, were given a sheet of paper that had one picture, a brief description, and a series of six questions. The single picture was one of the following four:

Erin, presented in a head shot that had been cropped from the following picture.
Erin, presented in a fairly cleavage-revealing outfit from just below the breasts up.
Aaron, presented in a head shot cropped from the following picture.
Aaron, presented shirtless from just below the pectorals up.

Both of these targets are attractive young people and look very healthy. The two head shots will be referred to as Face pictures, and the two others as Body pictures. (The head shots were enlarged, so each of the pictures was about the same size.)

The description given was the same for both, except for the names and corresponding appropriate pronouns. It provided only the information that the person in the picture is an English major at a liberal arts college, belongs to a few student groups, and likes to hang out with friends on weekends.

The questions were all of the form “Compared to the average person, how much is [target’s name] capable of X?”. Fillers for X were self-control, planning, and acting morally (combined into an Agency measure); and experiencing pleasure, experiencing hunger, and experiencing desire. (Since ability to experience hunger did not correlate highly with the other two, only experiencing pleasure and experiencing desire were used to compose the Experience measure.) Answers took the form of a rating on a five point scale, ranging from “Much less capable” to “Much more capable”, with “Equally as capable” for the midpoint.

The key results of this experiment are that participants who were given Body pictures rated the targets higher on Experience and lower on Agency than participants who were given Face pictures. The differences are not large (.27 out of five for Experience, .33 out of five for Agency), but they are statistically significant.

The authors take these results to support the view that “focusing on the body does not involve complete dementalization, but instead redistribution of mind, with decreased agency but increased experience” (pp. 8-9).

As noted, the remaining experiments in this study point in the same direction. In a way, that seems to be good news – ‘different aspect of mind’ seems better than ‘no mind, mere object’. The authors make it explicitly clear, however, that being regarded as less of an agent would, in general, not be in a person’s interest. Some other intriguing aspects of this experiment are that the gender of the participants doing the ratings was not found to matter, and Erin came out a little ahead of Aaron on the Agency measure.

However, the aspect of this experiment that intrigues me the most is one that lies outside of the authors’ focus, and on which they do not comment. To explain this aspect, note first that the description provides very little information – it could be fairly summarized by saying the person in the picture is a typical college student. A person could be forgiven for reacting to the rating request with “How on Earth should I know whether this person is above or below average on self-control (or planning ability, or moral action, experiencing pleasure, or experiencing desire)!?”

Since the participants were college students, and thus similar to the depicted targets as described, perhaps we should expect them to rate the targets as somewhat above average in mental abilities. However, one rating was below average: the rating for Agency in response to Body pictures was 2.90 (where capability equal to that of the average person would be 3). The difference between this rating for Body pictures and the higher rating for Face pictures indeed supports the authors’ hypothesis, but it leaves me wondering what could have been in the consciousness of those doing the ratings.

An even greater puzzle comes from the fact that the highest rating was for Experience in response to Body pictures – it was 3.65. (Remember, the highest number on the scale was 5, so 3.65 is about a third of the distance between “Equally as Capable” and “Much More Capable”.) So, I wonder: Do college students really think they and their peers are better at experiencing pleasure and desire than the average person? That seems a very strange opinion.
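The “about a third” gloss is just scale arithmetic; a minimal sketch of the computation (the function name is mine, for illustration):

```python
def fraction_toward_maximum(rating, midpoint=3.0, maximum=5.0):
    # How far a mean rating sits between "Equally as Capable" (3)
    # and "Much More Capable" (5), as a fraction of that interval.
    return (rating - midpoint) / (maximum - midpoint)

print(round(fraction_toward_maximum(3.65), 3))   # 0.325, i.e. about a third
```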

[ Kurt Gray, Joshua Knobe, Mark Sheskin, Paul Bloom, and Lisa Feldman Barrett, “More than a Body: Mind Perception and the Nature of Objectification” Journal of Personality and Social Psychology, in press. ]

An Unusual Aphrodisiac

October 10, 2011

Imagine you’re a prehistoric heterosexual man who’s going into battle tomorrow. The thought that there’s a fair chance of your dying might so completely occupy your mind that you’d be uninterested in anything save, perhaps, sharpening your spear.

On the other hand, your attitude might be that if you’re going to be checking out tomorrow, you’d like to have one last time with a woman tonight.

We are more likely to be descendants of the second type of man than the first. So, we might expect that there would be a tendency among men for thoughts of their own death to raise their susceptibility to sexual arousal.

In contrast, women who were more erotically motivated when they believed their own death might be just around the corner would not generally have produced more offspring than their less susceptible sisters. So, there is no reason to expect that making thoughts of death salient should affect sexual preparedness in women.

These ideas have recently been tested in two studies by Omri Gillath and colleagues. Of course, they didn’t send anybody into battle. Instead, they used two methods – one conscious, one not – to make the idea of death salient.

In the first study, one group of participants wrote responses to questions about the emotions they had while thinking about their own death and events related to it. Another group responded to similarly phrased questions about dental pain. The point of this contrast was to distinguish whether an arousal (if found) was specific to death, or whether it was due more generally to dwelling on unpleasant topics.

After responding to the questions, participants were shown either five sexual pictures (naked women for men, naked men for women) or five non-sexual pictures (sports cars for men, luxury houses for women). Previous studies had found that all the pictures were about equal for their respective groups on overall perceived attractiveness. Participants had all self-identified as heterosexual. They had five minutes to carefully examine their set of five pictures.

Participants were each connected to a device that measured their heart rate. The key result was that the men who answered the questions about death and viewed erotic pictures had a significantly higher average heart rate during the picture viewing than any other group. That means that, on average, they had a higher rate than other men who saw the same pictures, but had answered questions about dental pain. They also had a higher rate than other men who had answered questions about death, but then saw non-sexual pictures. And they had a higher rate than women who answered either question and viewed either pictures of naked men or non-sexual pictures.

In the second study, the death/pain saliency difference was induced by flashing the word “dead” (for half the participants) or the word “pain” (for the other half) before each item in a series of pictures. The presentation of the words was very brief (22 thousandths of a second) and came between masks (strings of four Xs). With the masks, that’s too short to recognize the word. The pictures either contained a person or did not. Half of the pictures that contained a person were sexual, half were not. Pictures remained visible until the participant responded.

The response was to move a lever if, but only if, the picture contained a person. The movement was either pulling the lever toward oneself, or pushing it away. There were 40 consecutive opportunities for pulling, and 40 for pushing; half of participants started with pulling, half started with pushing.

The logic of this experiment depends on a connection previously established by Chen and Bargh (1999) between rapidity of certain responses and the value of what is being responded to. Pulling brings things closer to you, and if what’s before your mind is something you like, then that will speed the pulling (relative to pulling in response to something you’d ordinarily try to avoid, or something toward which you are neutral).

The reasoning, then, is that those who had a higher degree of sexual preparedness should pull faster in response to erotic materials than those who were not so highly prepared. Gillath and colleagues hypothesized that participants who received the brief exposure to “dead” and then saw an erotic picture should be faster pullers than those who received a brief exposure to “pain” before an erotic picture.

And that is what they found – for men. There was no such result for women. Nor did the brief exposure to “dead” result in faster pulling after being presented with non-sexual pictures; the faster reaction times depended on both the exposure to “dead” and the sexual nature of the following picture.

These two studies are certainly interesting in relation to the evolutionary thinking that led them to be undertaken. But I also find them fascinating in relation to a more general point. The second study provides evidence that our brains can (a) make a distinction (between pain and death) and (b) relate it to another difference (sexual vs. non-sexual material) completely unconsciously and extremely rapidly. And the first study, although done at a much slower time scale and with consciousness of the materials used to manipulate mood (i.e., the writing about death vs. pain), showed an effect on heart rate, which is not something that was under participants’ control. The brain processes of which we are unaware (except when revealed in studies like these) are amazing indeed.

[O. Gillath, M. J. Landau, E. Selcuk and J. L. Goldenberg (2011) “Effects of low survivability cues and participant sex on physiological and behavioral responses to sexual stimuli”, Journal of Experimental Social Psychology 47:1219-1224. The previous study mentioned in the discussion of Study 2 is M. Chen and J. A. Bargh (1999) “Consequences of automatic evaluation: Immediate behavioral dispositions to approach or avoid the stimulus”, Personality and Social Psychology Bulletin 25:215-224. ]

Mind the Gut

September 19, 2011

Jonah Lehrer’s Wall Street Journal column for September 17-18, 2011 reports a fascinating pair of facts – and then makes a puzzling application of them.

The first fact concerns probiotic bacteria, which are often found in yogurt and other dairy products. Researchers provided mice with either a normal diet, or a diet rich in probiotic bacteria, and then subjected them to stressful situations. The mice with the probiotic-enriched diet showed less anxiety and had lower levels of stress hormones.

By itself, this result is not so interesting. After all, it could be that the probiotic bacteria affect digestion, then blood chemistry, and finally hormone levels. But the second fact shows that a different mechanism is at work.

The second fact is that when neural connections between gut and brain were severed, the probiotic-enriched diet no longer produced the effect of reducing symptoms of stress. This fact suggests that the effect of the difference in diet works directly through the gut-brain neural connection, rather than through a less direct blood chemistry path.

It’s as if we have a sense organ in our gut that feeds into an evaluative system. It doesn’t give us any sensations, but it tells our brains how things are in our digestive systems. If things are going well down there, we’re less prone to anxiety when stressful situations arise.

That’s a surprise that contributes to a sense of wonder at how deliciously complex unconscious processes can be. Lest one think that this has nothing to do with us, Lehrer also reports a study that showed an analogous result in human subjects who received large doses of probiotics for a month. (No cutting of nerves in that case, of course.)

Now for the puzzling conclusion. These and other studies are taken by Lehrer to show that “the immateriality of mind is a deep illusion. Although we feel like a disembodied soul, many feelings and choices are actually shaped by the microbes in our gut . . . . ” And, although he concedes that “This doesn’t mean, of course, that the mind-body problem has been solved”, he goes on to declare that “it’s now abundantly clear that the mind is not separate from the body . . . . Rather, we emerge from the very same stuff that digests our lunch.”

But “shaped” is one of the many words that mean “caused”, with the addition of something about the manner of causing (as in “burned” or “built”), or degree of causal contribution (as in “influenced” or “forced”). What the cited research shows is that causes of anxious behavior and hormone levels include the presence of probiotic bacteria in the gut, and that the means of that causal contribution works through a neural connection. That is surprising and fascinating, but it offers no evidence whatsoever that feelings of anxiety are the same things as any material events.

In general, causes and effects are different. From “How anxious you feel depends in part on what kind of bacteria you have in your gut” it does not follow that feelings are material – only that feelings, whatever they are, can be caused in a very surprising way.

Similar remarks apply to “emerge”. Different people use this word in different ways, so it’s not a very helpful term. But one of its meanings is “causes”. Yes, it is indeed fascinating that what’s in our gut can cause how we feel, and do so through a direct, neural pathway. But no, that does not show that feelings are material events. It does not show that immateriality of feelings is a deep illusion.

For some purposes, the point I’m making may not matter. It’s an important fact that what goes on in our consciousness is brought about by events in our neural systems, and the studies Lehrer cites in this article do help drive that point home. But when the mind-body problem is introduced into the discussion, it becomes important to distinguish between the views (1) that neural events cause mental events such as feelings and (2) that feelings are the same things as neural events. The evidence Lehrer cites in his article supports (1), but is silent as regards (2).

[Jonah Lehrer, “The Yogurt Made Me Do It”, The Wall Street Journal, September 17-18, 2011, p. C12.]

Appearances and Aboutness

March 22, 2011

The stimulus for today’s post is an article by Raymond Tallis that appeared in The New Atlantis last Fall. This article takes a stand on many issues that interest me and that I’ve written about in Your Brain and You. I find myself in fundamental agreement with some of what Tallis says, and also in fundamental disagreement with other points he makes.

Tallis makes far too many points to take up in one post. I’ll confine myself to two. The first is a point of agreement: There is no account that our sciences give of why there should be any appearances of things whatsoever. “Appearances” include the painful way damage to your body feels to you, the way a cup of hot coffee or a glass of iced tea feels to you, the way things look to you (bright or dim, this or that color), the way things taste and smell to you, and so on.

This point may be most easily seen with tastes and smells. Chemistry tells us that there are molecules of various kinds in the foods we eat and in the air near many flowers. Neuroscience tells us that molecules of each kind cause activation in some of our specialized sensory receptor cells, and not in others. Each of these cells stimulates some, but not all, of our neurons that lie deeper in our brains.

The specialized cells and their connections explain how we can react differently to different molecules arriving on our tongues or in our nostrils. But nowhere in the sciences is there an explanation of why or how the firing of our neurons causes orange flavor, chocolate flavor, lilac scent or outhouse odor.

Many contemporary philosophers are content to say that experiencing a flavor or a scent just is the very same thing as having a set of neural firings of a particular kind. This claim, however, does nothing to explain how it is possible for an experienced flavor or scent to be the same thing as a bunch of activities in nerve cells. The best that can be said for such an identity view is that it is simple, and that it cannot be proven to imply a contradiction. That’s a pretty weak reason: Berkeley’s view that there are just experiences and no corresponding material things is also simple and cannot be proven to imply a contradiction.

(Some self-professed identity theorists cheat. They make their view sound less implausible by writing of two “aspects” of neural events, or saying that neural events “lie behind” experiences, or that in experiences we “take a perspective” on neural events that is different from, and unavailable to, scientists who might be detecting the so-called same events with their instruments. But these palliative phrases all introduce some form of distinction between experiences and neural events, and they are not compatible with identity claims.)

My agreement that natural science does not explain appearances does not extend to Tallis’ favored way of arguing for this conclusion. That argument depends on “intentionality”, and the first thing to do is explain this word.

“Intentionality”, when a philosopher says it, means “Aboutness”. As in, your thought is about something. Of course, if you intend to do something – say, you intend to vote for candidate X – your intention is about something – in this case, it’s about voting for candidate X. But if you believe that Aunt Tillie is arriving tomorrow, your belief is about Aunt Tillie’s arrival. So, even though it’s just a belief and not an intention to do something, it has intentionality (as philosophers use this term) – that is, in plain English, it’s about something.

Some philosophers, including me, avoid “intentionality” whenever they can, and talk about aboutness instead, except when they discuss others who do use it. Many states besides intentions to act and beliefs can be about things or situations: these include hopes, desires, fears, doubts, supposings, wonderings, etc. One thing that makes aboutness interesting is that you can think about things or situations that do not exist. There are no unicorns and there are no men on the Moon at this writing, but that doesn’t stop anyone from thinking about those possibilities.

What about perceptions – are they about what is seen, heard, and so on? Tallis answers Yes, and this answer is a basic premise of the way he argues about appearances. My own answer is No.

This difference is fundamental, and it is a hot topic of discussion in the philosophy journals. A majority of philosophers are probably closer to Tallis’ view than to mine. There can be no hope of settling this issue in one blog post.

But it is relatively easy to provide a reason that raises some suspicion that seeing is very different from thinking. It’s a reason for separating appearance in visual experience from the processing of information about what is seen. And it is a reason that you can provide for yourself in your own home – as follows.

Sit by a window and look out at the buildings or trees or whatever is in the scene before you. (But if it’s a brick wall on the other side of a narrow alley, try a different window. You’ll need a scene where you can see something at significantly different distances.) Now, cover up one eye for about 20 seconds.

While you’re waiting, think about the character of your visual experience. I predict that you’ll agree that the world does *not* suddenly look flat. Nearby houses, for example, will still strike you as being seen as near, more distant houses as farther away. That may strike you as odd, because you may have learned that depth perception depends on cues from both eyes; and people who lose an eye do have some difficulty with such things as reaching for a glass of water. But there are many cues relating to distance. For example, you may know that houses in your neighborhood are roughly the same size. A distant house, however, takes up less of your visual field than a nearer one, and that helps you see it as more distant.

OK, now uncover your blocked eye. If you’re like me, you will experience a palpable restoration of a sense of depth. This too is somewhat puzzling: depth doesn’t dramatically disappear when you cover the eye, but the restoration when you uncover it is striking. I don’t know how to explain that, but it’s evident to me. (If anyone tries this and does not find what I find, I would be very interested to hear about it.)

What does this experience tell us? A point to note is that you will not have changed any judgments about what you’re looking at. Your thoughts about what is there will be the same. It is only a sense of depth – something like the difference between the 2D and 3D versions of movies – that is different. This difference is quite unlike a difference of opinion; it’s not a difference in what you think. It’s a difference in your visual experience.

It is almost routine in the philosophical literature to move from (a) the presence of depth in visual experience to (b) claiming that visual experience is about what is seen. But depth and aboutness are two different things. Visual appearances are one thing, judgments about what is seen are another. The judgments are often automatic, of course. You do not have to give yourself a conscious argument to get from appearances to things. You just effortlessly take it that you’re looking at a house, a car, an apple, or whatever. But the little experiment should help you see that the visual experience itself is a different thing from the judgment about what’s being seen.

[The article I’m responding to is by Raymond Tallis, “What Neuroscience Cannot Tell Us About Ourselves”, The New Atlantis, Number 29, Fall 2010, pp. 3-25. Thanks to Maureen Ogle for calling my attention to this article.]

Unconscious Processing and Political Smears

January 24, 2011

In a 2010 paper, Spee Kosloff and colleagues report several studies involving political smears that they conducted during the month before the 2008 election. The smears were that Obama is a closet Muslim extremist and that McCain is senile.

One of the studies measured an effect that worked entirely below the level of consciousness. Participants were presented with strings of letters that were either words or nonwords (of English) and they pressed one of two buttons to indicate whether the string was or was not a word. Before they saw the letter string, two other things happened. (1) They saw the word “trial” (in the same place where the letter string would appear) for about three quarters of a second – amply long enough to see it and read it. (2) Between the “trial” and the letter string, there was a very brief exposure (28.5 thousandths, or about 1/35, of a second) of either “Obama” or “McCain”, also where the word “trial” had been and where the letter string would immediately appear. This exposure is too brief to read; most participants were unaware that there had been a word flashed between “trial” and the string to be classified as a word or nonword. (The roughly 15% who detected that there had been a word reported having had no idea what it was.)

Among the letter strings that were words, most had no relevance to political smears (e.g., “rectangle”, “lamp”), but a few were laden with such relevance. They were either Muslim-related terms, e.g., “Koran”, “mosque”, or senility-related terms, e.g., “dementia”, “Alzheimers”.

The measure in this study was the time it took from the onset of one of the laden words to the participant’s decision that it was a word. (Only correct decisions about word status were included in the data to be analyzed.)

The key results of this experiment are these. (a) Obama supporters decided that senility-related terms were words faster after “McCain” had been briefly flashed than after “Obama” had been briefly flashed. (b) Obama supporters were also faster than McCain supporters to decide that senility-related terms were words after “McCain” had been flashed. (c) and (d) are parallel results for McCain supporters and decisions about Muslim-related terms after the brief presentations of “Obama”.

In short, a presentation of a word too briefly to be consciously read can cause a measurable difference in the time it takes to decide that a politically laden word is a word. And this difference depends on the relation between the flashed word and one’s political views.

It may be tempting to downplay the significance of this result. It’s a special case, one might say.  The task is artificial. The differences in reaction time are smaller than time differences that are relevant to real world action.

But I think that such a dismissive reaction would be unfortunate. The special case and the artificial experimental set up are necessary to get a clear observational result. But the conclusion that the evidence thus gained supports is that an unconscious stimulus can engage a cognitive process (i.e., one’s views about a candidate) and can do so entirely outside of consciousness. The lesson I am inclined to draw from this study is that we have some direct evidence that unconscious processes can have cognitive richness.

A second experiment tested the effect of making race salient (and consciously so) on a group of participants who indicated that they were undecided as to which candidate they supported.

The experimental task was, again, to decide whether a letter string was a word or a nonword. Among those strings that were words, most were neutral fillers, but a few were Muslim-related terms. The measure was the time from onset of the letter string to the pressing of the button indicating the word or nonword decision. The only decisions of interest are those that correctly classified Muslim-related words as words.

The key manipulation was that immediately before the decisions on letter strings began, participants filled out a questionnaire about themselves, which either did or did not include a question about race. This question provided six racial categories and asked participants to circle those “that are personally relevant to your identity”. No participants circled “African American”. Those who got the question are the “race salient” group, with the remainder being the non-race-salient group.

Participants saw a readable word, “trial”, followed by a too-brief-to-read, 1/35 of a second exposure of “McCain” or “Obama”, followed by the letter string to be classified as a word or nonword.

The most interesting, and somewhat disturbing, results of this experiment are these. (a) Undecided participants who were briefly exposed to “Obama” and who had answered the race question were faster to correctly classify Muslim-related words as words than similarly exposed undecided participants whose form did not include the race question. (b) This difference was not present when the briefly exposed word was “McCain”.

Since the smear that Obama is a closet Muslim extremist was often repeated prior to the 2008 election, it is presumed that all participants were aware of it. It appears from this study that this background awareness did not become activated if the matter of race had not been very recently made salient. But if it was made salient, then it was sufficiently activated to quicken the response on the word status decision task.

The race question itself was, evidently, consciously processed by those whose questionnaire included it. But the decision task was done rapidly and there is no question of the participants having consciously deliberated about the relation between race and the word status decision. So, even though the salience of race worked through a conscious input, the process by which it reduced decision time worked outside of conscious deliberation.

Once again, it might seem that the results of these two experiments show something about unconscious processing, but are unimportant for larger life because the differences in reaction times are far smaller than the time it takes us to think of and decide to execute any conscious, deliberate action (including, as always, speaking or indicating intent with a gesture). For example, the fact that a decision about a word took a fraction of a second less would not show that the decision was any different from what it would have been without the brief exposures or the inclusion of the race question. A third study, however, casts doubt on such a hopeful view.

In the third study, participants read articles of about 600 words, one that elaborated on the Obama smear and one that elaborated on the McCain smear. The articles were written by the experimenters and designed to be parallel in the types of support offered for Obama’s closet Muslim extremism or McCain’s senility, but they were produced in a format that made them look like copies of newspaper articles. After reading one or the other, participants were asked to rate their degree of endorsement of the thesis of the article they read.

Data were analyzed separately for those who had identified as Obama supporters, McCain supporters, or undecided. Care was taken to ensure that the participants knew that the experimenters would not be able to connect the responses to individual participants.

The key manipulation was that a questionnaire about the participants’ demographics, given immediately before the rest of the study, either included or did not include the race question (the same as in experiment 2) or a similar question about participants’ age group.

As expected, declared supporters of each candidate gave low endorsement to the smear of their candidate and higher endorsement to the smear of their candidate’s opponent. A key finding emerged from the results for the undecideds. Those who had received the race question gave higher ratings to the Obama smear than those who had not, and those who had received the age question gave higher ratings to the McCain smear than those who had not. The authors conclude that “It appears that undecided individuals can become motivated to accept smears of multiple candidates when situational factors render intergroup differences salient.” (p. 392)

This experiment is evidence for a sobering result. That one accepts a certain racial classification or a certain age classification as applying to oneself cannot be a reason for accepting or rejecting a smear of a candidate. The race and age of the candidates were well known to everyone. The manipulation consisting of including or not including the self-classification question regarding race or age did not supply new knowledge or a reason. Nonetheless it had an effect. It seems that this effect, therefore, worked through a process that did not engage conscious processing of the kind we would recognize as weighing reasons or evaluating evidence. Thus, this experiment provides evidence that even when inputs (reading and answering the race or age classification question) and outputs (making a mark on a scale indicating endorsement of an article’s thesis) are fully conscious, there can be processes that work outside of consciousness, yet produce effects on the conscious output.

[Kosloff, S., Greenberg, J., Schmader, T., Dechesne, M. and Weise, D. (2010) “Smearing the Opposition: Implicit and Explicit Stigmatization of the 2008 Presidential Candidates and the Current U. S. President”, Journal of Experimental Psychology: General 139(3): 383-398. This paper contains several results not stated here, and a fourth experiment that confirms and extends the results of experiment 3.]

Glimpses of the Unconscious Mind

January 9, 2011

There is a little experiment that I’ve sometimes recommended as a way of appreciating what our brains do unconsciously. It concerns the phenomenon of finding that one has a tune ‘running through one’s head’. Namely, the next time this happens to you, stop and try to think why you have this particular tune in mind right now.

When I’ve tried this, I’ve often had success. What happens is that I’ll recall that a few minutes before I noticed the tune, some key word or phrase from its lyrics happened to occur in a conversation. The conversation was not about the song, or anything closely related to it, and the word or phrase did not trigger any inner speech that had the sense of “Oh, that word/phrase is from <such and such piece of music>.” No: there were several minutes of attending to a conversation on completely unrelated matters, and then “out of the blue” the inner humming of some tune.

(I regret to report that discovery of the explanation for the tune’s running through one’s head does nothing to get rid of its annoying repetition.)

The successes I can recall all worked through the words associated with the tune. But, in his new book, Antonio Damasio reports a more interesting case that worked a little differently. In brief, he found himself thinking of a certain colleague, Dr. B. Damasio had not talked with Dr. B. recently. They were not collaborating on a project, there was no need to see Dr. B., and no plan to do so. Damasio had seen Dr. B. walking by his office window sometime earlier, but that was only remembered later and had not been an object of attention at the time. Damasio wondered why he was thinking of Dr. B.

The explanation that came to mind on reflection was that Damasio had happened, quite unintentionally, to have moved in a way that was similar to Dr. B’s somewhat distinctive gait. Damasio’s explanation is that the accidental circumstance of moving in a way similar to Dr. B. triggered an unconscious process that resulted in Dr. B’s coming to mind.

What makes this case so interesting to me is that, unlike my tune cases, it does not plausibly work through the language system. Of course, I know a few words I could use to describe a person’s gait, but even if I worked hard at it, I think the best I could do would be a vague description that would apply to many people. I suspect it’s the same for most of us. It’s just not believable that Damasio had a verbal description of gait that was specific for Dr. B. And even if he were capable of such a feat of literary skill, he had not been trying to describe his own movement, and so there would have been no route to making a connection by association through words.

What’s left is that the thought of Dr. B. was produced through a process that was not only not conscious, but also not verbal. The memory of Dr. B.’s motion was called up by Damasio’s own motion directly by the similarity of the motions, not through the medium of verbal representations of those motions.

(The suggestion that the motion may not have been accidental, but was caused by Damasio’s having seen his colleague walking by his window, in no way undercuts the point here. For, that would also be a case of unconscious processing leading to, this time, actual movement, without having gone through verbal representations of the stimulus or the resulting motion.)

An attractive analogy for a leading strand in Your Brain and You is that unconscious, nonverbal processing underlies our mental processing in the way that rock strata underlie the ground we walk on. Unless we’re lucky enough to be in a place like the Grand Canyon, we see the strata clearly only occasionally, where there is an outcrop. Damasio’s case seems to me to be one of these outcrops, where we can get a clear glimpse of our unconscious, nonverbal mind at work.

[A. Damasio, Self Comes to Mind: Constructing the Conscious Brain (New York: Pantheon, 2010). The coming to mind of Dr. B. is discussed on pp. 104-106.]
