Temptation

May 15, 2011

A recent review article by T. F. Heatherton and D. D. Wagner draws many studies together in support of a simple view of temptation and resisting temptation. The latter is also known as self-regulation. That’s what you exhibit when you do not do something that conflicts with your long term goals, even though it is something you would quite enjoy doing.

The view, referred to as a “balance model”, comes complete with an illustration of a beam balanced on a fulcrum. The beam may tip in the direction of giving in to temptation for two kinds of reasons. One is that the temptation, or the pleasure anticipated from it, becomes more forcefully represented. That might happen because of signs of the temptation’s presence or easy availability, like the smell of tobacco smoke or the sight of a high calorie food.

The other route to tipping the balance toward temptation is weakening of the opposition to giving in to temptation. That might happen because of fatigue, drug indulgence (most familiarly, alcohol), disease, or interference from passing a strong electromagnet near the head (TMS: transcranial magnetic stimulation).

The article cites evidence that opposition to giving in to temptation importantly involves the prefrontal cortex (PFC). In contrast, different temptations are reflected in increased activation of neurons in different brain regions. Fortunately, the PFC has connections to these different regions, and a rise in activity in PFC neurons tends to lower activity in neurons in these other regions.

The struggle to resist temptation, then, is reflected in the competition between activity in the PFC and activity in regions it’s connected to that are turned on by reminders of rewards that you would enjoy, but that would conflict with your longer term goals.
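Since the model is, at bottom, a comparison of two quantities, a toy numerical sketch may help fix ideas. This is my own illustration, not anything offered in the article; the signals, numbers, and threshold logic are all invented for the example:

```python
# Toy sketch of the "balance model" of temptation. Purely illustrative:
# the variables and numbers are invented, not taken from the article.

def gives_in(reward_signal: float, control_signal: float) -> bool:
    """The beam tips toward temptation when the reward-region signal
    outweighs the PFC-based control signal."""
    return reward_signal > control_signal

baseline_reward, baseline_control = 0.4, 0.6
print(gives_in(baseline_reward, baseline_control))        # False: control holds

# Route 1: a cue (the smell of smoke, the sight of rich food) strengthens
# the representation of the temptation.
print(gives_in(baseline_reward + 0.3, baseline_control))  # True: reward boosted

# Route 2: fatigue, alcohol, disease, or TMS weakens the opposition.
print(gives_in(baseline_reward, baseline_control - 0.3))  # True: control weakened
```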

As noted, this balance model rests on many studies, so it has a lot going for it. There is, however, a minor oddity in the description that goes with this model. The article’s authors contrast brain regions representing the value of a tempting stimulus with “prefrontal regions associated with self-control” (pp. 134-135). They refer to the idea that “resisting temptations reflects competition between impulses and self-control” (p. 136).

It would be more natural to think of the opposition as a tension between regions that represent the attractions of immediately available temptations, and a region that represents long term goals. It is, after all, the achievement of long term goals that is the reason for resisting temptations, and it should be increased activation of representation of long term goals that competes with representation of immediate pleasures.

If we think of the PFC as representing long term goals, we can properly locate “self-control” as a feature of the person whose brain houses both the PFC and the other regions that represent temptations. Self-control is what you (the whole of you) exhibit when you avoid treating yourself to an immediately available pleasure, where indulging in it would conflict with a long term goal (sobriety, weight loss, avoidance of STDs, saving money for the future, and so on). The PFC is only a part of you. Activation of its neurons can certainly inhibit activation of neurons in other regions, but it is not a self, and it’s not the right kind of thing to exert “self-control”.

If we think of the PFC as representing long term goals, its ability to inhibit activity in a variety of other regions also seems quite natural. After all, to hold a long term goal is to be directed on a result in the face of whatever obstacles may present themselves. We are not omniscient. We do not know in advance what the obstacles to a long term goal will be. So, the more temptation-promoting brain regions our representations of long term goals are able to inhibit, the more useful they will be to us.

[The model discussed here is found in Heatherton, T. F. and Wagner, D. D. (2011) “Cognitive neuroscience of self-regulation failure”, Trends in Cognitive Sciences 15(3):132-139.]


A Key Problem of Mental Causation

April 25, 2011

Early this month, I attended the 2011 meeting of the Central Division of the American Philosophical Association in Minneapolis. There were several sessions devoted to philosophy of mind, and it occurred to me that it would be appropriate for this blog to report on what the philosophers of mind are up to these days – or, at least, some of it.

There are difficulties and limitations of such a project. The presenters intend to be at the cutting edge of philosophical research and they have only 25 minutes to make their case. So, inevitably, the papers at conferences like this one are thick with jargon. The arguments are complex, and they typically assume familiarity with several papers that have been published in recent years. Furthermore, a paper is not likely to be accepted at a conference like this one unless it argues against a view that’s been espoused by at least one respected member of the profession.

So, there is no hope of being able to report on what philosophers as a group think about issues in philosophy of mind. They hold divergent views on a variety of claims. (There was a more generally directed conference session that considered the question of how much could be listed under the heading of “What philosophers know”. On analogy with “What chemists know”, what could we say that philosophers agree on as very well established and not likely to be disputed in the future? Persuasive reasons were given that there is precious little that falls into this category.)

But there ought to be hope that one could say in plain English what the philosophers who spoke at philosophy of mind sessions were interested in, or what Key Problem they were trying to solve. I think this hope can be fulfilled, and that’s what I’ll try to do in this post.

The Key Problem stems from a background view that is widely accepted: Everything we do is brought about by events in the neurons and synapses of our brains, and these neuro-synaptic events are sufficient to account for all our movements. It is neuro-synaptic events that send the impulses that contract our muscles, and the contractions of our muscles constitute our behavior.

Reasonable people have dissented from this view, and some of them are living. It’s even possible that some of them were at the Minneapolis conference. But in today’s philosophical climate that is unlikely; and certainly, no one expressed doubt about this background view in discussion.

The Key Problem arises because it is very attractive to hold each of three further views. But, on pain of contradiction, you can’t accept the background view and hold all three. The further claims are these.

(1) Mental events (for example, thinking of something you believe) are not the same as neuro-synaptic events.

It’s very likely that you and I both believe that Pittsburgh is farther west than Philadelphia. But it’s extremely unlikely that there is any kind of happening in our neurons and synapses that is exactly the same in both of us. So, we can’t say that bringing a belief to mind is exactly the same thing as having a certain kind of neuro-synaptic event occur in a brain.

(2) Mental events causally contribute to human actions.

The commonsensical backing for this one lies in the many everyday remarks that say someone did something because they had a certain belief. “Jane was mean to Susan because she believed Susan was trying to steal her boyfriend.” “John took the longer route because he believed a section of road on the shorter route had been closed for repairs.” “Tom called off the picnic because he believed it was going to rain.” And so on. No one wants to hold that when we say these things, we are speaking falsely or nonsensically.

(3) If an event has one sufficient cause, nothing else can cause it – unless that other thing would have caused the event all by itself.

This one is a little trickier than the others. To begin at the end, there can sometimes be two sufficient causes of an event, as in a firing squad. Smith’s bullet enters the right side of the traitor’s brain at the same instant that Jones’s bullet enters the left side. Each is a sufficient cause of the traitor’s death, and, importantly, each would have caused the traitor’s death even if the other shooter had missed.

But the philosophers who are having the discussion I’m reporting agree that beliefs do not contribute to actions in this way. That is, none of them suppose that a belief could bring about an action even if there had been no neuro-synaptic cause that was sufficient to cause the action.

But if there was a neuro-synaptic cause that was sufficient for an action, and a belief is not the same thing as that neuro-synaptic cause (as (1) says), then it seems that there is nothing left for believing to do – no way in which believing could causally come into play.

You can’t accept the background view and all three of these statements as they are written. If mental events, such as beliefs, causally contribute to actions (as (2) says), then either they have to be the same as physical events (rejecting (1)) or they can causally contribute even though not the same as a physical event. That will mean rejecting (3), unless mental events can independently cause actions. But that would require giving up either (1) or the background view.
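For those who like to see the argument’s skeleton laid bare, here is one way to regiment it. This is my own reconstruction of the reasoning just given, not a formulation taken from any of the conference papers; I use “n” for a neuro-synaptic event, “m” for a mental event (a believing), and “a” for an action:

```latex
% A schematic reconstruction of the Key Problem (my regimentation,
% for illustration only).
\begin{align*}
\text{(B)}\;& n \text{ is a sufficient cause of } a.\\
\text{(1)}\;& m \neq n.\\
\text{(2)}\;& m \text{ causes } a.\\
\text{(3)}\;& \text{if } a \text{ has a sufficient cause } n, \text{ then any } x \neq n\\
            & \text{causes } a \text{ only if } x \text{ would have caused } a \text{ by itself.}\\
\text{(4)}\;& m \text{ would not have caused } a \text{ by itself (agreed on all sides).}
\end{align*}
% From (B), (1), and (3): m causes a only if m would have caused a by
% itself. With (4): m does not cause a, contradicting (2). So at least
% one of (B), (1), (2), (3) has to go.
```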

Perhaps you will think that one of the three statements is clearly the obvious candidate for abandonment. But it’s likely that the next two people you talk with about this will have a different candidate that seems just as obvious to them. That, at any rate, is how it works among philosophers.

In a nutshell: Most contemporary philosophers share a vision according to which all that we do is a product of what happens in our neurons and synapses. But then it seems hard to get beliefs to do any causal work. Yet most want to say that our beliefs do causally contribute to what we do.

[Of course, I have a view about the Key Problem – it’s in chapters 3 and 4 of Your Brain and You. Today’s post is aimed only at explaining what the Key Problem is, not at its solution.]


“Free Won’t”?

April 6, 2011

V. S. Ramachandran’s The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human is a cornucopia of fascinating material. Many of the ideas offered are well established, but Ramachandran is not afraid to venture some risky hypotheses. Some of these give a common explanation for several findings whose connection to each other is not initially obvious. These explanations provide good models for what neural understanding of ourselves ought to look like, even if the evidence may eventually turn out against some of them.

One of the featured topics of the book is mirror neurons. These are neurons that fire (a) when you do an action of some kind, for example, grasping something with your hand; and (b) when you see someone else doing that kind of action. Mirror neurons were first discovered in macaques by G. Rizzolatti’s team in the late 1980s. They have been shown to exist in people, and have been actively investigated by many researchers.

I am not going to go into the controversy over whether deficiency in the mirror neuron system is the key to understanding autism. The chapter in which this claim is offered is a riveting section of the book, and it is up to those with access to laboratories to sort out how well the theory can be supported.

Instead, I’m going to reflect a bit on a suggestion Ramachandran makes in answer to the question of what prevents us from blindly imitating every action we see. Or, since we also have neurons that fire both when we have a sensation and when we see others in circumstances likely to cause the same kind of sensation, why don’t we actually feel a touch on our own bodies when we see others touched?

Well, according to a 2009 review by Keysers and Gazzola, there are studies showing that there are some people who do have such sensations. (They say about 1% of people.) And Ramachandran mentions echopraxia, a condition in which “the patient sometimes mimics gestures uncontrollably” (p. 125). So the question really is, why aren’t these phenomena more common than they are?

A simple answer that Ramachandran does not discuss is that, in most of us, only *some* of the neurons in the premotor areas are mirror neurons. It may be that there is some tendency for us to mimic observed gestures, but we just don’t have enough mirror neurons to lead to actual execution of that tendency, in normal conditions.

What Ramachandran does say in answer to his question is this: “In the case of motor mirror neurons, one answer is that there may be frontal inhibitory circuits that suppress the automatic mimicry when it is inappropriate. In a delicious paradox, this need to inhibit unwanted or impulsive actions may have been a major reason for the evolution of free will. Your left inferior parietal lobe constantly conjures up vivid images of multiple options for action that are available in any given context, and your frontal cortex suppresses all but one of them. Thus it has been suggested that “free won’t” may be a better term than free will” (p. 124).

It seems evident to me that we are not ordinarily aware of suppressing tendencies to do what we see others doing. For example, suppose we’re having a drink with friends, and have already said “Cheers” and taken our first sips. Thereafter, we all see each other picking up our respective glasses, but in general we don’t take our further sips in near-unison. Yet we are not consciously aware of suppressing the lifting of our own glass every time we see one of our friends lift theirs. So one intriguing aspect of Ramachandran’s picture is that, if we combine it with the observation just made, we would be led to conclude that there is a tremendous amount of unconscious processing going on whenever we see others acting.

This consequence would be one illustration of our dependence on unconscious processes – a theme that will be familiar to readers of Your Brain and You, or of several other posts on this blog. But I think that most of those who write about “free will” would not find these processes to be a plausible explanation of the beginning of “free will” – or of “free won’t”.

That is because the centrally important cases for those who write about “free will” are situations in which one is quite fully conscious of at least two alternatives, and in which one has time to think about the consequences of doing each of them. For example, if one tells a lie to avoid revealing some embarrassing fact about oneself, one generally knows what the fact is and what the lie would be, and one has at least a little while to consider the consequences of honesty, the likelihood of being caught in a lie, and the consequences of having one’s lie exposed. Writers about “free will” are generally thinking of cases where people are aware of a good reason to do something and also aware of a good reason to do something else. The real question about “free will” is what (if anything) makes one of these reasons prevail over the other.

It’s the same for “free won’t”. That’s just a special case of “free will”, where one of the options is phrased as “I won’t do action A – I’ll just sit still and do nothing”. If we have a case where one is not conscious of a reason to sit still and do nothing, it won’t be a case that writers on “free will” will regard as having anything “free” in it.

Of course, there has to be a difference between thinking of a possible action and doing it, and so the ability to inhibit seems to be a required condition for possession of what anyone would count as “free will”. But inhibition by itself does not take us very far. Even a lion stalking its prey inhibits its running until favorable circumstances trigger its attack.

[Works referred to are V. S. Ramachandran, The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human (New York: W.W. Norton & Co, 2011). Keysers, C. and Gazzola, V. (2009) “Expanding the mirror: vicarious activity for actions, emotions, and sensations”, Current Opinion in Neurobiology, 19:666-671. Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., Rizzolatti, G. (1992) “Understanding motor events: a neurophysiological study”, Experimental Brain Research 91:176-180.]


Appearances and Aboutness

March 22, 2011

The stimulus for today’s post is an article by Raymond Tallis that appeared in The New Atlantis last Fall. The article takes a stand on many issues that interest me and that I’ve written about in Your Brain and You. I find myself in fundamental agreement with some of what Tallis says, and also in fundamental disagreement with other points he makes.

Tallis makes far too many points to take up in one post. I’ll confine myself to two. The first is a point of agreement: There is no account that our sciences give of why there should be any appearances of things whatsoever. “Appearances” include the painful way damage to your body feels to you, the way a cup of hot coffee or a glass of iced tea feels to you, the way things look to you (bright or dim, this or that color), the way things taste and smell to you, and so on.

This point may be most easily seen with tastes and smells. Chemistry tells us that there are molecules of various kinds in the foods we eat and in the air near many flowers. Neuroscience tells us that molecules of each kind cause activation in some of our specialized sensory receptor cells, and not in others. Each of these cells stimulates some, but not all, of our neurons that lie deeper in our brains.

The specialized cells and their connections explain how we can react differently to different molecules arriving on our tongues or in our nostrils. But nowhere in the sciences is there an explanation of why or how the firing of our neurons causes orange flavor, chocolate flavor, lilac scent or outhouse odor.

Many contemporary philosophers are content to say that experiencing a flavor or a scent just is the very same thing as having a set of neural firings of a particular kind. This claim, however, does nothing to explain how it is possible for an experienced flavor or scent to be the same thing as a bunch of activities in nerve cells. The best that can be said for such an identity view is that it is simple, and that it cannot be proven to imply a contradiction. That’s a pretty weak reason: Berkeley’s view that there are just experiences and no corresponding material things is also simple and cannot be proven to imply a contradiction.

(Some self-professed identity theorists cheat. They make their view sound less implausible by writing of two “aspects” of neural events, or saying that neural events “lie behind” experiences, or that in experiences we “take a perspective” on neural events that is different from, and unavailable to, scientists who might be detecting the so-called same events with their instruments. But these palliative phrases all introduce some form of distinction between experiences and neural events, and they are not compatible with identity claims.)

My agreement that natural science does not explain appearances does not extend to Tallis’ favored way of arguing for this conclusion. That argument depends on “intentionality”, and the first thing to do is explain this word.

“Intentionality”, when a philosopher says it, means “Aboutness”. As in, your thought is about something. Of course, if you intend to do something – say, you intend to vote for candidate X – your intention is about something – in this case, it’s about voting for candidate X. But if you believe that Aunt Tillie is arriving tomorrow, your belief is about Aunt Tillie’s arrival. So, even though it’s just a belief and not an intention to do something, it has intentionality (as philosophers use this term) – that is, in plain English, it’s about something.

Some philosophers, including me, avoid “intentionality” whenever they can, and talk about aboutness instead, except when they discuss others who do use it. Many states besides intentions to act and beliefs can be about things or situations: these include hopes, desires, fears, doubts, supposings, wonderings, etc. One thing that makes aboutness interesting is that you can think about things or situations that do not exist. There are no unicorns and there are no men on the Moon at this writing, but that doesn’t stop anyone from thinking about those possibilities.

What about perceptions – are they about what is seen, heard, and so on? Tallis answers Yes, and this answer is a basic premise of the way he argues about appearances. My own answer is No.

This difference is fundamental, and it is a hot topic of discussion in the philosophy journals. A majority of philosophers are probably closer to Tallis’ view than to mine. There can be no hope of settling this issue in one blog post.

But it is relatively easy to provide a reason that raises some suspicion that seeing is very different from thinking. It’s a reason for separating appearance in visual experience from the processing of information about what is seen. And it is a reason that you can provide for yourself in your own home – as follows.

Sit by a window and look out at the buildings or trees or whatever is in the scene before you. (But if it’s a brick wall on the other side of a narrow alley, try a different window. You’ll need a scene where you can see something at significantly different distances.) Now, cover up one eye for about 20 seconds.

While you’re waiting, think about the character of your visual experience. I predict that you’ll agree that the world does *not* suddenly look flat. Nearby houses, for example, will still strike you as being seen as near, more distant houses as farther away. That may strike you as odd, because you may have learned that depth perception depends on cues from both eyes; and people who lose an eye do have some difficulty with such things as reaching for a glass of water. But there are many cues relating to distance. For example, you may know that houses in your neighborhood are roughly the same size. A distant house, however, takes up less of your visual field than a nearer one, and that helps you see it as more distant.

OK, now uncover your blocked eye. If you’re like me, you will experience a palpable restoration of a sense of depth. This too is somewhat puzzling: depth doesn’t dramatically disappear when you cover the eye, but the restoration when you uncover it is striking. I don’t know how to explain that, but it’s evident to me. (If anyone tries this and does not find what I find, I would be very interested to hear about it.)

What does this experience tell us? A point to note is that you will not have changed any judgments about what you’re looking at. Your thoughts about what is there will be the same. It is only a sense of depth – something like the difference between the 2D and 3D versions of movies – that is different. This difference is quite unlike a difference of opinion; it’s not a difference in what you think. It’s a difference in your visual experience.

It is almost routine in the philosophical literature to move from (a) the presence of depth in visual experience to (b) claiming that visual experience is about what is seen. But depth and aboutness are two different things. Visual appearances are one thing, judgments about what is seen are another. The judgments are often automatic, of course. You do not have to give yourself a conscious argument to get from appearances to things. You just effortlessly take it that you’re looking at a house, a car, an apple, or whatever. But the little experiment should help you see that the visual experience itself is a different thing from the judgment about what’s being seen.

[The article I’m responding to is by Raymond Tallis, “What Neuroscience Cannot Tell Us About Ourselves”, The New Atlantis, Number 29, Fall 2010, pp. 3-25. Thanks to Maureen Ogle for calling my attention to this article.]


Tracing Causation

March 6, 2011

A colleague told me she’d found something puzzling about a recent article in Science News, and wondered what I might think of it. The article turned out to be quite interesting, so here’s a very brief summary, with my reactions.

The article is “Cerebral Delights” (Susan Gaidos, Science News, 2/26/11, v. 179(5), p. 22). It reports some research on the amygdala, a pair of almond-shaped structures located symmetrically in the left and right sides of the brain. It has long been known that some cells in the amygdala increase their activity in circumstances we find fearful. The leading discovery reported in the article is that cells in these same structures also play a role in situations that provide something pleasant.

One line of research involved inserting electrodes in monkeys’ amygdalae and presenting them with either a rewarding sip of water or an annoying puff of air to the face. The finding was that different neurons increased their activation, depending on which of these stimuli was used.

Converging results came from another experiment in humans, who already had electrodes implanted in preparation for a medical procedure (necessary for reasons unrelated to the experiment). Assignments by these volunteers of different values to foods were correlated with differences in activation of some individual cells in their amygdalae.

A third line of approach again involved monkeys. Like people, normal monkeys will shift to a less desired food after having a chance to eat a preferred food for a while. Monkeys with damaged amygdalae, however, did not switch their food choices in this way.

Finally, a fourth study found that monkeys with damaged amygdalae made choices that led to larger rewards in preference to smaller ones, just as normal monkeys did. But they were different from normal monkeys on measures of pupil diameter and heart rate – measures typically indicative of emotional response.

These are important results that increase our understanding of our brains. I find them particularly interesting because they give a useful example of complexity of function in a limited region. They help us avoid the fallacious move from “Region R is involved in task T” to “What region R does is contribute to T” – fallacious, because there may be many tasks to which a given region may contribute.

So, why might my colleague have been puzzled? Our conversation was brief, so I’m not sure. But the likely answer lies in the many abilities that this article seems to attribute to the amygdala. The amygdala is said to “help shape behavior”, to “act as a spotlight, calling attention to sensory input that is new, exciting and important”. It is said to “evaluate information”, “assign value”, serve “as a kind of watchdog to identify potential threats and danger”, and “judge” the emotional value of stimuli. (Some of these phrases appear to have been introduced by the reporter, but others seem to have come from some of the researchers themselves.)

Phrases like these should make us pause. The evidence is that cells in the amygdala are activated by certain circumstances, and that behavior can be altered by interfering with its operations. That’s good reason to think that the amygdala is indeed a part of the causal chain leading from perception to behavior. But judging? Assigning value? The evidence does not show that such complex operations – operations we normally ascribe to whole persons – are done in, or by, the amygdala. *Maybe* they are done there, but the research doesn’t show that. It leaves open the question as to *how* a set of circumstances that is potentially dangerous gets to increase the activation of one cell in the amygdala rather than another, or how a situation that offers rewards gets to activate an amygdala cell that’s different from the ones activated by potential danger.

In short, as far as the reported study conclusions go, it could be that the actual sorting out of what is to be avoided from what is to be pursued is done elsewhere, and that what the amygdala does is to communicate the results quickly to the many brain regions that must be mobilized for appropriate action. Or it may be that evaluation results from interaction of cells in the amygdala with cells in other regions. In either case, attributing evaluation to the amygdala would be stopping too soon in tracing the causes of our behavior. Instead of stopping there, we should continue to press the causal question – in this case, What are the prior processes that cause one set of neurons rather than another in the amygdala to increase their activation?

[Two papers reported in the article under discussion are Morrison, S. E. and Salzman, C. D. (2010) “Re-valuing the amygdala”, Current Opinion in Neurobiology, 20:221-230; and Jenison, R. L., Rangel, A., Oya, H., Kawasaki, H. and Howard, M. A. (2011) “Value encoding in single neurons in the human amygdala during decision making”, Journal of Neuroscience, 31(1):331-338. In addition, conference presentations by S. Rhodes & E. Murray, and by P. Rudebeck are discussed.]


Can We Control Our Attention?

February 11, 2011

My answer to this question in Your Brain and You is, briefly, “Partially, sometimes”. Jonah Lehrer takes a rather different view in a January 18, 2011 post, “Control the Spotlight”, on his blog at http://www.wired.com/wiredscience/frontal-cortex/.

In the background is some work by psychologist Walter Mischel and colleagues. Four-year-old children got to identify two items, both desirable but one preferred to the other. They were told that the experimenter would leave the room for a while and that they could have the preferred item if they waited for the experimenter’s return, but they could end the waiting period at any time by ringing a bell. If they did ring the bell, the experimenter would return immediately but, they were told, they would then get only the less preferred item. The measure of interest was how long a child would hold out before ringing the bell. The children were unobtrusively observed during the waiting period.

Many of the children rang for the experimenter’s return in a relatively short time, but some (“high delayers” in Lehrer’s phrase) held out for upwards of 10 minutes.

Among many interesting results of this work, two especially stand out. One is that when followed up more than 10 years later, high delayers were found, on average, to have fewer problems and significantly higher S.A.T. scores than those who ended the waiting period relatively early. The other is a point of strategy: instead of merely gritting their teeth, the high delayers distracted themselves with some activity, e.g., singing a song, or playing some sort of game.

It’s this last point that seems to lead Lehrer to statements like the following. What is often thought of as “willpower” is “really about properly directing the spotlight of attention, learning how to control that short list of thoughts in working memory”. . . . “When we properly control the spotlight, we can resist negative thoughts and dangerous temptations.” . . . . “Our decisions are driven by the facts and feelings bouncing around in the brain – the allocation of attention allows us to direct this haphazard process, as we consciously select the thoughts we want to think about.” . . . . “And yet, we can still control the spotlight of attention, focusing on those ideas that will help us succeed. In the end, this may be the only thing we can control.”

But “consciously select[ing] what thoughts we want to think about” attributes to us more than we can reliably do. Evidently, we cannot select what thoughts will occur to us in the first place – for to select something requires that we are already thinking of it. And we often cannot continue to think about a topic we want to think about: we have all experienced finding ourselves thinking about X when we should be, and want to be, thinking about Y.

An alternative take on the kids’ behavior is suggested by the following reflections. (1) It may not so much as occur to a child to engage in a distracting activity. (2) Even if the idea of doing something distracting occurs, it might happen that no particular activity comes to mind. (3) It could happen that some activity comes to mind, but proves insufficiently engaging – i.e., the child’s thoughts might keep returning to the immediately available reward.

The high delayers did not have any of these possibilities happen to them. But that fortunate fact is not the sort of thing anyone can control by controlling their attention. For the first two cases, the reason is this: they depend on having something come to mind, and you cannot bring something to mind intentionally unless you’ve already thought of it – in which case it has already come to mind. And in the third case, as noted, we know that sometimes we just “can’t keep our minds on our work”, and that’s a way of saying that our control is only partial.

The high delayers are children whose genetic endowment, developmental circumstances, experiences, parental treatment, and resulting habits have already put them into a state that distinguishes them from their peers. It should not be surprising that being in an advantageous state when young is correlated with having desirable characteristics years later.

It seems to me that Lehrer is on firmer ground when he says that “The unconscious mind, it turns out, is most of the mind”. This remark applies to the factors that cause the spotlight of attention to be upon whatever it is highlighting at a particular moment. The operator of a nonmetaphorical spotlight in a real theater consciously directs the spotlight to the featured performer. But it is not a helpful use of the spotlight metaphor to imagine that behind our ‘spotlight of attention’ we are consciously deciding where to point it. The things we are in a position to decide about are things that are already in the spotlight of our attention.

[For background see Mischel, W., Shoda, Y. and Rodriguez, M. L. “Delay of Gratification in Children”, Science 244:933-938 (1989). This article reports many interesting manipulations that neither Lehrer nor I have attempted to summarize. These variations of conditions (with consequent differences of outcomes) seem to me to support the view that what is attended, and how what is attended is thought of, is highly dependent on many circumstances external to the participating subject.]


Unconscious Processing and Political Smears

January 24, 2011

In a 2010 paper, Spee Kosloff and colleagues report several studies involving political smears that they conducted during the month before the 2008 election. The smears were that Obama is a closet Muslim extremist and that McCain is senile.

One of the studies measured an effect that worked entirely below the level of consciousness. Participants were presented with strings of letters that were either words or nonwords (of English) and they pressed one of two buttons to indicate whether the string was or was not a word. Before they saw the letter string, two other things happened. (1) They saw the word “trial” (in the same place where the letter string would appear) for about three quarters of a second – amply long enough to see it and read it. (2) Between the “trial” and the letter string, there was a very brief exposure (28.5 thousandths, or about 1/35, of a second) of either “Obama” or “McCain”, also where the word “trial” had been and where the letter string would immediately appear. This exposure is too brief to read; most participants were unaware that there had been a word flashed between “trial” and the string to be classified as a word or nonword. (The roughly 15% who detected that there had been a word reported having had no idea what it was.)

Among the letter strings that were words, most had no relevance to political smears (e.g., “rectangle”, “lamp”), but a few were laden with such relevance. They were either Muslim-related terms, e.g., “Koran”, “mosque”, or senility-related terms, e.g., “dementia”, “Alzheimers”.

The measure in this study was the time it took from the onset of one of the laden words to the participant’s decision that it was a word. (Only correct decisions about word status were included in the data to be analyzed.)

The key results of this experiment are these. (a) Obama supporters decided that senility-related terms were words faster after “McCain” had been briefly flashed than after “Obama” had been briefly flashed. (b) Obama supporters were also faster than McCain supporters to decide that senility-related terms were words after “McCain” had been flashed. (c) and (d) are parallel results for McCain supporters and decisions about Muslim-related terms after the brief presentations of “Obama”.

In short, a presentation of a word too briefly to be consciously read can cause a measurable difference in the time it takes to decide that a politically laden word is a word. And this difference depends on the relation between the flashed word and one’s political views.
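To make the trial structure concrete, here is a bare-bones sketch of a single trial in ordinary Python. The display and response functions are stand-ins I have invented for illustration; real studies use specialized stimulus software with frame-accurate timing, which print() and sleep() cannot provide:

```python
import time

def show(text: str, seconds: float) -> None:
    """Stand-in for a real stimulus display (invented for illustration;
    actual experiments require frame-accurate presentation)."""
    print(text)
    if seconds > 0:
        time.sleep(seconds)

def run_trial(prime: str, target: str) -> float:
    """One masked-priming lexical decision trial; returns the reaction
    time from target onset to the word/nonword response."""
    show("trial", 0.75)    # readable word, ~750 ms
    show(prime, 0.0285)    # "Obama" or "McCain", ~28.5 ms: too brief to read
    start = time.perf_counter()
    show(target, 0)        # letter string stays up until a response
    input("Press Enter for 'word' (stand-in for the two response buttons): ")
    return time.perf_counter() - start

# Only correct 'word' responses to laden targets enter the analysis,
# e.g. a senility-related target after a "McCain" prime:
rt = run_trial("McCain", "dementia")
print(f"RT: {rt:.3f} s")
```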

It may be tempting to downplay the significance of this result. It’s a special case, one might say. The task is artificial. The differences in reaction time are smaller than time differences that are relevant to real world action.

But I think that such a dismissive reaction would be unfortunate. The special case and the artificial experimental set up are necessary to get a clear observational result. But the conclusion that the evidence thus gained supports is that an unconscious stimulus can engage a cognitively rich process (one that involves one’s views about a candidate) and can do so entirely outside of consciousness. The lesson I am inclined to draw from this study is that we have some direct evidence that unconscious processes can have cognitive richness.

A second experiment tested the effect of making race salient (and consciously so) on a group of participants who indicated that they were undecided as to which candidate they supported.

The experimental task was, again, to decide whether a letter string was a word or a nonword. Among those strings that were words, most were neutral fillers, but a few were Muslim-related terms. The measure was the time from onset of the letter string to the pressing of the button indicating the word or nonword decision. The only decisions of interest are those that correctly classified Muslim-related words as words.

The key manipulation was that immediately before the decisions on letter strings began, participants filled out a questionnaire about themselves, which either did or did not include a question about race. This question provided six racial categories and asked participants to circle those “that are personally relevant to your identity”. No participants circled “African American”. Those who got the question are the “race salient” group, with the remainder being the non race salient group.

Participants saw a readable word, “trial”, followed by a too-brief-to-read, 1/35 of a second exposure of “McCain” or “Obama”, followed by the letter string to be classified as a word or nonword.

The most interesting, and somewhat disturbing, results of this experiment are these. (a) Undecided participants who were briefly exposed to “Obama” and who had answered the race question were faster to correctly classify Muslim-related words as words than similarly exposed undecided participants whose form did not include the race question. (b) This difference was not present when the briefly exposed word was “McCain”.

Since the smear that Obama is a closet Muslim extremist was often repeated prior to the 2008 election, it is presumed that all participants were aware of it. It appears from this study that this background awareness did not become activated if the matter of race had not been very recently made salient. But if it was made salient, then it was sufficiently activated to quicken the response on the word status decision task.

The race question itself was, evidently, consciously processed by those whose questionnaire included it. But the decision task was done rapidly and there is no question of the participants having consciously deliberated about the relation between race and the word status decision. So, even though the salience of race worked through a conscious input, the process by which it reduced decision time worked outside of conscious deliberation.

Once again, it might seem that the results of these two experiments show something about unconscious processing, but are unimportant for larger life because the differences in reaction times are far smaller than the time it takes us to think of and decide to execute any conscious, deliberate action (including, as always, speaking or indicating intent with a gesture). For example, the fact that a decision about a word took a fraction of a second less would not show that the decision was any different from what it would have been without the brief exposures or the inclusion of the race question. A third study, however, casts doubt on such a hopeful view.

In the third study, participants read articles of about 600 words, one that elaborated on the Obama smear and one that elaborated on the McCain smear. The articles were written by the experimenters and designed to be parallel in the types of support offered for Obama’s closet Muslim extremism or McCain’s senility, but they were produced in a format that made them look like copies of newspaper articles. After reading one or the other, participants were asked to rate their degree of endorsement of the thesis of the article they read.

Data were analyzed separately for those who had identified as Obama supporters, McCain supporters, or undecided. Care was taken to ensure that the participants knew that the experimenters would not be able to connect the responses to individual participants.

The key manipulation was that a questionnaire about the participants’ demographics, given immediately before the rest of the study, either included or did not include the race question (the same as in experiment 2) or a similar question about participants’ age group.

As expected, declared supporters of each candidate gave low endorsement to the smear of their candidate and higher endorsement to the smear of their candidate’s opponent. A key finding emerged from the results for the undecideds. Those who had received the race question gave higher ratings to the Obama smear than those who had not, and those who had received the age question gave higher ratings to the McCain smear than those who had not. The authors conclude that “It appears that undecided individuals can become motivated to accept smears of multiple candidates when situational factors render intergroup differences salient.” (p. 392)

This experiment is evidence for a sobering result. That one accepts a certain racial classification or a certain age classification as applying to oneself cannot be a reason for accepting or rejecting a smear of a candidate. The race and age of the candidates were well known to everyone. The manipulation, consisting of including or not including the self-classification question regarding race or age, did not supply new knowledge or a reason. Nonetheless it had an effect. It seems that this effect, therefore, worked through a process that did not engage conscious processing of the kind we would recognize as weighing reasons or evaluating evidence. Thus, this experiment provides evidence that even when inputs (reading and answering the race or age classification question) and outputs (making a mark on a scale indicating endorsement of an article’s thesis) are fully conscious, there can be processes that work outside of consciousness, yet produce effects on the conscious output.

[Kosloff, S., Greenberg, J., Schmader, T., Dechesne, M. and Weise, D. (2010) “Smearing the Opposition: Implicit and Explicit Stigmatization of the 2008 Presidential Candidates and the Current U. S. President”, Journal of Experimental Psychology: General 139(3): 383-398. This paper contains several results not stated here, and a fourth experiment that confirms and extends the results of experiment 3.]


Glimpses of the Unconscious Mind

January 9, 2011

There is a little experiment that I’ve sometimes recommended as a way of appreciating what our brains do unconsciously. It concerns the phenomenon of finding that one has a tune ‘running through one’s head’. The next time this happens to you, stop and try to think why you have this particular tune in mind right now.

When I’ve tried this, I’ve often had success. What happens is that I’ll recall that a few minutes before I noticed the tune, some key word or phrase from its lyrics happened to occur in a conversation. The conversation was not about the song, or anything closely related to it, and the word or phrase did not trigger any inner speech that had the sense of “Oh, that word/phrase is from <such and such piece of music>.” No: there were several minutes of attending to a conversation on completely unrelated matters, and then “out of the blue” the inner humming of some tune.

(I regret to report that discovery of the explanation for the tune’s running through one’s head does nothing to get rid of its annoying repetition.)

The successes I can recall all worked through the words associated with the tune. But, in his new book, Antonio Damasio reports a more interesting case that worked a little differently. In brief, he found himself thinking of a certain colleague, Dr. B. Damasio had not talked with Dr. B. recently. They were not collaborating on a project, there was no need to see Dr. B., and no plan to do so. Damasio had seen Dr. B. walking by his office window sometime earlier, but that was only remembered later and had not been an object of attention at the time. Damasio wondered why he was thinking of Dr. B.

The explanation that came to mind on reflection was that Damasio had happened, quite unintentionally, to have moved in a way that was similar to Dr. B’s somewhat distinctive gait. Damasio’s explanation is that the accidental circumstance of moving in a way similar to Dr. B. triggered an unconscious process that resulted in Dr. B’s coming to mind.

What makes this case so interesting to me is that, unlike my tune cases, it does not plausibly work through the language system. Of course, I know a few words I could use to describe a person’s gait, but even if I worked hard at it, I think the best I could do would be a vague description that would apply to many people. I suspect it’s the same for most of us. It’s just not believable that Damasio had a verbal description of gait that was specific for Dr. B. And even if he were capable of such a feat of literary skill, he had not been trying to describe his own movement, and so there would have been no route to making a connection by association through words.

What’s left is that the thought of Dr. B. was produced through a process that was not only not conscious, but also not verbal. The memory of Dr. B.’s motion was called up by Damasio’s own motion directly by the similarity of the motions, not through the medium of verbal representations of those motions.

(The suggestion that the motion may not have been accidental, but was caused by Damasio’s having seen his colleague walking by his window, in no way undercuts the point here. For, that would also be a case of unconscious processing leading to, this time, actual movement, without having gone through verbal representations of the stimulus or the resulting motion.)

An attractive analogy for a leading strand in Your Brain and You is that unconscious, nonverbal processing underlies our mental processing in the way that rock strata underlie the ground we walk on. Unless we’re lucky enough to be in a place like the Grand Canyon, we see the strata clearly only occasionally, where there is an outcrop. Damasio’s case seems to me to be one of these outcrops, where we can get a clear glimpse of our unconscious, nonverbal mind at work.

[A. Damasio, Self Comes to Mind: Constructing the Conscious Brain (New York: Pantheon, 2010). The coming to mind of Dr. B. is discussed on pp. 104-106.]


A Methodological Puzzle

December 20, 2010

A recent study by Wilson et al. argues for a function that is done by the prefrontal cortex (PFC) but is apparently not done exclusively by any one of its parts. This function is processing of temporally complex events. Temporally complex events are stimuli in which several features that are needed to learn a task are presented sequentially, and are not all available at any one time.

This result is particularly interesting because the authors argue for there being other functions that do seem to be located in different parts of the PFC. This is supported in several ways. One method involves selectively destroying parts of macaque monkey brains, and finding, for each part, a task on which performance is highly impaired by destruction of that part but much less impaired by destruction of the others.

The further function of the whole PFC (i.e., the processing of temporally complex events) is then shown by a task on which performance is only slightly impaired by destruction of each of the parts, but is severely impaired by destruction of the whole PFC.

“Here we have argued that the PFC as a whole has an overarching function that is not localized to any particular subregion, and we have proposed that this role is related to its involvement in the processing of temporally complex events.” (p. 538)
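Schematically, the pattern of evidence behind this claim can be displayed as follows. The scores here are invented placeholders, meant only to exhibit the logic of the inference, not data from the paper:

```python
# Invented impairment scores (0 = none, 1 = mild, 2 = severe), chosen only
# to display the inference pattern: each focal lesion selectively impairs
# one task, while only the whole-PFC lesion severely impairs the
# temporally complex task.
impairment = {
    "subregion 1 destroyed": {"task A": 2, "task B": 1, "temporally complex": 1},
    "subregion 2 destroyed": {"task A": 1, "task B": 2, "temporally complex": 1},
    "whole PFC destroyed":   {"task A": 2, "task B": 2, "temporally complex": 2},
}
```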

As good studies in science should do, this one raises interesting questions. One that intrigues me is this. How should we go about distinguishing functions? How do we tell whether we have found two independent functions, rather than one function that works by making use of another? Could it be that there is a function that (a) can be performed anywhere in the PFC and (b) is drawn upon in performing each of the functions that also require something further that can be done only in a specific part of the PFC?

I have no answer to offer to this question, but I think I can clarify it by reference to another case. There is a brain part (the fusiform gyrus) that is often referred to as “the face processing area”. Work from Eric Cooper’s lab suggests, however, that this is an overly narrow description of what this area does. That’s because its activity seems required whenever we have to make finer discriminations that depend on relative distances and not just on general features of where things are. Faces are alike in having the nose between the eyes and above the mouth, so we have to be able to appreciate different distances between these features in order to recognize a particular person. But we also have to use this ability to distinguish, e.g., different makes and models of cars. A credenza and a dresser are both essentially boxes, so we have to be able to analyze relative proportions to tell the difference.

In short, the suggestion is that the famous “face processing area” would be better thought of as performing the function of making discriminations that depend on relative distances and not just gross placement of parts.

To find out what a part of the brain does, researchers must use some definite task. And then, as responsible scientists, they must relate their descriptions of the functions performed to the tasks they used. Otherwise, they would be merely speculating about how the mind works.

But, somewhat paradoxically, this necessary policy may have a built-in cost. Descriptions that are driven by investigative tasks may turn out to be overly narrow, and that may skew our conception of how each brain part contributes to our whole organization. To avoid this pitfall, we have to keep our minds open. It’s always possible that what seems to be a part that performs a specific function may do something more general than what could legitimately be concluded from any single study.

[Wilson, C. E., Gaffan, D., Browning, P. G. F. and Baxter, M. G. (2010) “Functional localization within the prefrontal cortex: missing the forest for the trees?”, Trends in Neurosciences 33(12):533-540. Work from Cooper’s lab can be found in, e.g., Brooks, B. E. and Cooper, E. E. (2006) “What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?”, Journal of Experimental Psychology: Learning, Memory, and Cognition 32(4):684-698.]


Explanation and Sensations

December 10, 2010

Readers of Your Brain and You will know that I deny that sensations are identical with neural events. Neural events cause sensations, but the effects are different from their causes. The qualities that distinguish sensations – colors, tastes, smells, itchiness, hunger, and so on – are found in the sensations, not in the coordinated groups of neuron firings that cause the sensations.

But I do agree that water is identical with H2O and that the heat in, say, a frying pan, is nothing but the average energy in the motion (or, mean kinetic energy) of the molecules that compose the pan.

So, what is the difference between these two kinds of case? In a nutshell, the identities I accept are backed by explanations, but the alleged identity of sensations and neural events is a bald piece of ideology that is not backed by explanations.

Of course, physicalists, who propose that pain is nothing but neural firing of a certain kind (let’s call it Neural Kind NK1), and that having an orange afterimage is nothing but having a neural firing of a different kind (let’s call it NK2), will not take this claim of difference lying down. They’ll ask what I mean by “is backed by an explanation”. And they’ll ask whether there is really any difference in the explanatory value of “water = H2O” and “pain = NK1”.

Those are fair questions, so let’s try to answer them. I’ll address the first one by focusing on just one case, which is representative of many others. Namely, “water = H2O” is supported by its explaining why water dissolves salt.

What the chemists tell us is that H2O molecules, in virtue of their polarity, can surround the individual ions (Na+ and Cl-) of which salt (NaCl) is composed. So, when we put salt into water, its ions don’t stay bound together in the crystal; they get separated, so that H2O molecules lie between them. And that’s just what we mean by “dissolves salt”. So, the hypothesis that water = H2O explains why salt dissolves in water. And its giving us access to such an explanation is a reason for accepting the identity of water with H2O. Other cases go similarly, and give us further support.
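In standard chemical shorthand (a textbook gloss I am adding here, not anything drawn from the philosophical literature under discussion), the process is:

```latex
\[
\mathrm{NaCl_{(s)}} \;\xrightarrow{\ \mathrm{H_2O}\ }\; \mathrm{Na^{+}_{(aq)}} + \mathrm{Cl^{-}_{(aq)}}
\]
```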

Well, not so fast! While there’s a good idea expressed here, it won’t do as it stands, because it’s simply not true that “dissolves salt” just means “has its ions surrounded by molecules of the solvent”. After all, people knew that water dissolves salt long before they had any ideas about ions or molecules. And they knew that it also dissolves sugar, for example, but not gold.

So, how can we rescue the good idea without having to say something that is plainly false? Well, how did people know that water dissolves salt when they didn’t know anything about molecules? They put salt into water and, after a bit of stirring, they couldn’t see it any more. But if they put their wedding ring in water and stirred for a long time, there would be the ring, as plainly visible as when they took it off their finger.

What this example suggests is that a better description of what “water = H2O” explains, in our particular case, is why things look the way they do. When we put salt into water and stir, we can’t see it any more. Given that we can’t see individual molecules, the chemists’ story about what happens when we put salt into water explains why we can’t see it after stirring.

The generalization is that identity claims like “water = H2O”, together with other claims we already accept (such as that we can’t see individual molecules), explain why our evidence is what it is.

Here are a few other examples to illustrate the point. (1) Water evaporates, desks do not. E.g., if there is water on a kitchen counter and we don’t do anything, the counter will be dry a few hours later; but our desks don’t disappear. That’s something we see. Chemists’ stories about strength of bonding and the transfer of kinetic energy explain why we see the dry counter, but don’t lose our desks.

(2) We can put a cool thing up against a warm thing, and feel that the first has gotten warmer after a while. If heat is just the motion energy of molecules, and motion energy can be transferred by contact, we have an explanation of why the cooler body warms up.

(3) It would be a thankless task to try to define “life”. But we can see that things we agree are living typically grow. A strictly biological story (ingestion, digestion, metabolism) explains why organisms get bigger. This supports the idea that living things are nothing but biological systems; and similar explanations of other typical functions converge on the same result.

What about the second question? If “pain = NK1” were like “water = H2O”, then there being NK1 should explain why some piece of evidence is what it is. No such explanation seems to be in the offing. If “having an orange afterimage = having NK2” were like “heat = mean kinetic energy”, there should be some piece of evidence that is explained by NK2. Again, no such explanation seems available. It’s not even clear why any neural event should ever be correlated with any sensation at all.

Physicalists may suggest, however, that such explanations are available. Namely, we see that pains cause pain behavior (e.g., withdrawing, favoring injured parts, avoiding repeated contact with the source of the pain), and we see that having orange afterimages causes reports of their occurrence. “Pain = NK1” and “having an orange afterimage = having NK2” explain these pieces of evidence.

But these causal connections are not pieces of evidence: they are inferred conclusions. Moreover, the inferences depend on assuming the identities. Actions, including utterances of reports, are – as physicalists agree – caused by neural events. The neural events are themselves adequate to bring about the behavior. The causal relevance of pain or having an orange afterimage comes in only by using the identity claims. The pattern is: NK1 causes behavior B, pain = NK1, therefore pain causes behavior B.

It follows that these alleged pieces of evidence cannot be used to support identity claims; for to make such a use would be circular reasoning. The pattern would be this:

[1] pain = NK1

[2] NK1 causes behavior B

[3] Therefore, pain causes behavior B.

[4] We can explain why (or how) pain causes behavior B by assuming that pain = NK1.

[5] We are entitled to identity claims if they are backed by explanations.

[6] Therefore, we are entitled to the identity claim that pain = NK1.

But you can’t get to [3] without assuming [1]; and the legitimacy of that assumption is the conclusion of this reasoning. So this kind of “support” for “pain = NK1” depends on assuming what was to be supported.

Even those who think that “pain = NK1” or “having an orange afterimage = NK2” are true ought to be able to see that such claims cannot be supported in a way that parallels our reasons for accepting “water is H2O”, “heat is mean kinetic energy” and similar identity claims.