Gazzaniga’s Modules

January 3, 2012

I’ve been reading Michael Gazzaniga’s 2009 Gifford Lectures, now published as Who’s In Charge? Free Will and the Science of the Brain. I can’t say that I think he’s untied the knot of the free will problem, but the book contains some interesting observations about split brain patients, brain scans, and modules. Most of this post is about modules, but Gazzaniga’s remarks about brain scans deserve top billing.

These remarks come in the last and most useful chapter, which focuses on problems in bringing neuroscience into court. Gazzaniga provides a long list of such problems, and anyone who is interested in neuroscience and the law should certainly read it.

The general theme is this. Extracting statistically significant conclusions from brain scans is an extremely complex business. One thing that has to be done to support meaningful statements about the relation between brain regions and our mental abilities is to average scans across multiple individuals. This kind of averaging is part of what is used to generate the familiar pictures of brain regions “lighting up”.

But in any court proceeding, the question is about the brain of one individual only. Brain scans of normal, law-abiding individuals often differ quite noticeably from averages of scans of people doing the same task. So, inferences from an individual’s showing a difference from average in a brain scan to conclusions about that individual’s abilities, proclivities, or degree of responsibility are extremely difficult and risky.
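
To make the statistical point vivid, here is a toy sketch in Python – invented numbers, nothing remotely like a real fMRI pipeline – in which a region “lights up” in the group average even though a given individual’s scan looks quite different:

```python
# Toy illustration (not real fMRI analysis): a region can look active in
# a group average even though individual activation patterns vary widely.
import random

random.seed(1)
N_SUBJECTS, N_REGIONS, TASK_REGION = 30, 10, 3  # all hypothetical

scans = []
for _ in range(N_SUBJECTS):
    scan = [random.gauss(0.0, 1.0) for _ in range(N_REGIONS)]
    scan[TASK_REGION] += random.gauss(1.0, 1.0)  # task effect varies by person
    scans.append(scan)

group_avg = [sum(s[r] for s in scans) / N_SUBJECTS for r in range(N_REGIONS)]
print("group average:", [round(a, 2) for a in group_avg])  # region 3 stands out
print("subject 0:    ", [round(a, 2) for a in scans[0]])   # may not stand out
# Inferring one person's abilities or responsibility from how their scan
# departs from the average is exactly the risky step described above.
```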

The individual differences in brain scans show that our brains are not standard-issue machines. Brains that are wired differently can lead to actions that have the same practical result across a wide variety of circumstances. This implies that there is a limit to how specialized each part of our brains can be.

But what about modularity? Don’t we have to think of ourselves as composed of relatively small sub-networks that are dedicated to their given tasks?

Here is where things get interesting; for in Gazzaniga’s book, there seem to be two concepts of “module” at work, although the distinction between them is not clearly drawn.

The first arises out of some observations that have been known for a long time, but are not often referred to. (They’re on pp. 32-33 of Gazzaniga’s book.) One of these is that while our brains are 2.75 times larger than those of chimpanzees, we have only 1.25 times more neurons. So, on average, our neurons are more distant from each other. What fills the “extra” space is connections among neurons; but if the same degree of connectivity among neurons were maintained over the extra distance, there would have to be many more miles of connecting lines (axons) than there actually are. So, in us, the degree of connectivity is, on average, less than that in chimps. There are still groups of close-lying neural cells that are richly connected, but the connections of one group to another are sometimes relatively sparse. We have thus arrived, by a sort of physiological derivation, at modules.
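
The arithmetic here can be made explicit. A back-of-envelope sketch, under toy assumptions (uniformly spaced neurons, wiring cost proportional to number of connections times distance), runs as follows:

```python
# Back-of-envelope version of the scaling argument; the two ratios come
# from the post, everything else is a simplifying assumption.
volume_ratio = 2.75   # human brain volume relative to chimpanzee
neuron_ratio = 1.25   # human neuron count relative to chimpanzee

volume_per_neuron = volume_ratio / neuron_ratio     # 2.2x more room per neuron
spacing = volume_per_neuron ** (1 / 3)              # ~1.30x greater separation

# If each neuron kept the SAME number of connections, total axon length
# would have to scale with neuron count times typical distance:
wiring = neuron_ratio * spacing                     # ~1.63x the axon length
print(f"spacing ratio ~{spacing:.2f}, required wiring ratio ~{wiring:.2f}")
# Long-range wire is spatially and metabolically expensive, so the cheaper
# arrangement is the one the post describes: dense local clusters with
# sparser long-range links between them.
```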

It must be noticed, however, that this explanation of the existence of modules does not say anything about what kind of functions the small, relatively well connected groups might be performing. So, this explanation does not contribute any reason for supposing that there are “modules” for anything so complex as – to take a famous case – detecting people who cheat on social rules. There is good evidence that we have a well developed ability to detect cheaters, and that this ability fails to extend to similar problems that are not phrased in terms of social rules. But it is another question whether there is one brain part that is dedicated to this task, or whether we can do it because we have several small groups of neurons, each of which does a less complicated task, and whose combined result enables us to detect cheaters with ease.

Modularity reaches its apogee when Gazzaniga introduces the “interpreter module”. The job of this item is to rationalize what all the other modules are doing. It is the module that is supposed to provide our ability to make up a narrative that will present us – to others, and to ourselves – as reasonable actors, planning in accord with our established desires and beliefs, and carrying out what we have told ourselves we intend to do.

According to the interpreter module story, we can see this inveterate rationalizer at work in many ways. It reveals itself in startlingly clear ways in cases of damaged brains. Some of the patients of interest here are split brain patients; others have lost various abilities due to stroke or accidents. Some parts of their brains receive less than the normal amount of input from other parts. Their interpreter modules have incomplete information, and the stories they concoct about what their owners are doing and why are sometimes quite bizarre.

But people with intact brains can be shown to be doing the same sort of rationalizing. For example, Nisbett and Wilson (1977) had people watch a movie. For one group, the circumstances were normal; for another, the circumstances were the same except for a noisy power saw in the hall outside. Participants were asked to rate several aspects of the movie, such as its interest and its likelihood of affecting other viewers. Then they were asked whether the noise had affected their ratings. In fact, there was no significant difference between the ratings of the non-distracted group and those of the group exposed to the noise. But a majority of those in the latter group believed that the noise had affected their ratings.
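
For readers who like to see the logic laid out, here is a minimal sketch – all numbers invented – of the two comparisons involved: a plain two-sample t-test on the ratings, and the separate tally of what participants said about the noise:

```python
# Minimal sketch of the Nisbett & Wilson logic (hypothetical 1-10 ratings).
from statistics import mean, stdev
import math

quiet = [7, 6, 8, 7, 5, 7, 6, 8, 7, 6]   # invented ratings, no distraction
noisy = [6, 7, 7, 8, 6, 5, 7, 7, 6, 8]   # invented ratings, power saw outside

def t_stat(a, b):
    # Pooled-variance two-sample t statistic.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

print(f"t = {t_stat(quiet, noisy):.2f}")  # near zero: no detectable effect

# The striking finding is the mismatch: the ratings don't differ, yet most
# participants in the noisy group REPORT that the noise affected them --
# a confabulated causal story about their own mental processes.
said_noise_mattered = 7 / 10   # hypothetical majority, as in the post
print(f"fraction claiming an effect: {said_noise_mattered:.0%}")
```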

While false beliefs about our own mental processes are well established, I am suspicious of the “interpreter” story. The interpreter is called a “module” and is supposed to explain how we bring off the interesting task of stitching together all that goes on, and all that we do, into a coherent (or, at least, coherent sounding) narrative. But we might be doing any of very many things, under many different circumstances. To make a coherent story, we must remain consistent, or nearly so, with our beliefs about how the physical and social worlds operate. We must anticipate the likely reactions of others to what we say we are doing. So, according to the interpreter module story, this “module” must have access to an enormous body of input from other modules, and be able to process it into (at least the simulacrum of) a coherent story.

To me, that sounds an awful lot like an homunculus – a little person embodied in a relatively tightly interconnected sub-network, that takes in a lot of inputs and reasons to a decently plausible output that gets expressed through the linguistic system. That image does nothing to explain how we generate coherent, or approximately coherent, narratives; it just gives a name to a mystery.

It would be better to say that we have an ability to rationalize – to give a more or less coherent verbal narrative – over a great many circumstances and actions. Our brains enable us to do this. Our brains have parts with various distances between them; somehow, the combined interactions of all these parts results in linguistic output that passes, most of the time, as coherent. We wish we understood how this combined interaction manages to result in concerted speech and action over extended periods of time; but, as yet, we don’t.

[Michael S. Gazzaniga, Who’s In Charge? Free Will and the Science of the Brain, The Gifford Lectures for 2009 (New York: Harper Collins, 2011). Nisbett, R. E. & Wilson, T. D. (1977) “Telling More Than We Can Know: Verbal Reports on Mental Processes”, Psychological Review 84:231-259. This paper describes many other cases of false belief about our mental processes. The physiological comparison between humans and chimpanzees, and its significance, are referenced to Shariff, G. A. (1953) “Cell counts in the primate cerebral cortex”, Journal of Comparative Neurology 98:381-400; Deacon, T. W. (1990) “Rethinking mammalian brain evolution”, American Zoologist 30:629-705; and Ringo, J. (1991) “Neuronal interconnection as a function of brain size”, Brain, Behavior and Evolution 38:1-6.]


Do Conscious Thoughts Cause Behavior?

December 12, 2011

In the late 19th Century, Thomas Huxley advanced a view he called “automatism”. This view says that conscious thoughts themselves don’t actually do anything. They are, in Huxley’s famous analogy, like the blowings of a steam whistle on an old locomotive. The steam comes from the same boiler that drives the locomotive’s pistons, and blowings of the whistle are well correlated with the locomotive’s starting to move, but the whistling contributes nothing to the motion. Just so with conscious thoughts: the brain processes that produce our behavior also produce conscious thoughts, but the thoughts themselves don’t produce anything.

Automatism (later known as epiphenomenalism) is currently out of favor among philosophers, many of whom dismiss it without bothering to argue against it. But it has enough legs to be the target of an article by Roy F. Baumeister and colleagues in this year’s Annual Review of Psychology. These authors review a large number of studies that they regard as presenting evidence “supporting a causal role for consciousness” (p. 333). A little more specifically, they are concerned with the causal role of “conscious thought”, which “includes reflection, reasoning, and temporally extended sense of self” (p. 333). The majority of the evidence they present is claimed to be evidence against the “steam whistle” hypothesis that “treats conscious thoughts as wholly effects and not causes” (p. 334).

To understand their argument, we need to know a little more about the contrast between unconscious thought and conscious thought. To this end, suppose that a process occurs in your brain that represents some fact, and enables you to behave in ways that are appropriate to that fact. Suppose that you cannot report – either to others or to yourself in your inner speech – what fact that process represented. That process would be a thought that was unconscious. But if a process occurs in you, and you can say – inwardly or overtly – what fact it is representing, then you have had a conscious thought.

What if I tell you something, or instruct you to do some action or to think about a particular topic? Does that involve conscious thought? Baumeister et al. assume, with plausible reason, that if you were able to understand a whole sentence, then you were conscious, and at least part of your understanding the sentence involved conscious thought. (For example, you could report what you were told, or repeat the gist of the instruction.) They also clearly recognize that understanding what others say to you may, in addition, trigger unconscious processes – processes that you would not be able to report on.

If you want to do a psychological experiment, you have to set up at least two sets of circumstances, so that you can compare the effect of one set with the effect of another. If your interest is in effects of conscious thoughts, you need to have one group of participants who have a certain conscious thought, and another group who are less likely to have had that conscious thought. The way that differences of this kind are created is to vary the instructions given to different groups of participants.

For example, in one of the reviewed studies, participants randomly assigned to one group were given information about costs and features of a cable service, and also instructed to imagine being a cable subscriber. Participants in another group received the same information about costs and features, but no further instruction. A later follow-up revealed that a significantly higher proportion of those in the group that received the special instruction had actually become cable subscribers.

In another study, the difference was that one group was asked to form specific “implementation intentions”. These are definite plans to do a certain action on a certain kind of occasion – for example to exercise on a particular day and time, as contrasted with a more general intention to take up exercise, but without thinking of a particular plan for when to do it. The other group received the same information about benefits of the action, but no encouragement to form specific implementation intentions. Significantly more of those who were encouraged to form implementation intentions actually engaged in the activity.

The logic behind these studies is that one group was more likely to have a certain kind of conscious thought than the other (due to the experimenters’ instructions), and it was that group that exhibited behavior that was different from the group that was less likely to have had that conscious thought. The correlation between the difference in conscious thoughts and the difference in subsequent behavior is then taken as evidence for a causal connection between the (earlier) thoughts and the (later) behavior.

There is, however, a problem with this logic. It arises from the fact (which, as noted earlier, the authors of the review article acknowledge) that conscious processing of instructions triggers unconscious processes. We can easily see that this is so, because processing what is said to us requires that we parse the grammar of sentences that we understand. But we cannot report on how we do this; our parsing is an unconscious process. What we know about it comes from decades of careful work by linguists, not from introspection.

Since conscious reception of instructions triggers unconscious processes, it is always possible that behavioral effects of the different instructions are brought about by unconscious processes that are set in motion by hearing those instructions. The hearing (or reading) of instructions is clearly conscious, but what happens after that may or may not be conscious. So, the causal dependence of behavior on instructions does not demonstrate causal dependence of behavior on conscious processes that occur after receiving the instructions, as opposed to unconscious processes that are triggered by (conscious) hearing or reading of instructions.

This point is difficult to appreciate. The reason is that there is something else that sounds very similar, and that we really are entitled to claim on the basis of the evidence presented in the review article. The claim is the following (where “Jones” can be anybody):

(1) If Jones had not had the conscious thought CT, Jones would not have been as likely to engage in behavior B.

This is different from

(2) Jones’s conscious thought CT caused it to be more likely that Jones engaged in behavior B.

What’s the difference? The first allows something that the second rules out. Namely, the first, but not the second, allows that some unconscious process, UP, caused both whatever conscious thoughts occur after receiving instructions, and the subsequent behavior. The experimenter’s giving of the instructions may set off a cascade of unconscious processes, and it may be these that are responsible both for some further conscious (reportable) thoughts and for subsequent actions related to the instructions. If the instructions had not been given, those particular unconscious thoughts would likely not have occurred, and thus the action might not have been produced.

Analogously, if the flash of an exploding firecracker had not occurred (for example, because the fuse was not lit) it would have been very unlikely that there would have been a bang. But that does not show that, in a case where the fuse was lit, the flash causes the bang. Instead, both are caused by the exploding powder.

The procedure of manipulating instructions and then finding correlated differences in behavior thus establishes (1), but not (2). So, this procedure cannot rule out the steam whistle hypothesis regarding conscious thought.
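
The common-cause structure is easy to simulate. In the following toy model (a sketch, not a model of any real experiment), the conscious thought CT and the behavior B are both effects of an unconscious process UP, and CT has, by construction, no influence on B:

```python
# Toy common-cause model of the (1)-vs-(2) distinction. Instructions
# trigger an unconscious process UP; UP causes BOTH the conscious thought
# CT and the behavior B. There is no arrow from CT to B.
import random

random.seed(0)

def trial(instructed: bool):
    up = instructed and random.random() < 0.9   # UP triggered by instructions
    ct = up and random.random() < 0.8           # UP -> conscious thought
    b = up and random.random() < 0.7            # UP -> behavior (not via CT!)
    return ct, b

trials = [trial(True) for _ in range(10_000)]
with_ct = [b for ct, b in trials if ct]
without_ct = [b for ct, b in trials if not ct]
print(f"P(B | CT)    = {sum(with_ct) / len(with_ct):.2f}")     # ~0.70
print(f"P(B | no CT) = {sum(without_ct) / len(without_ct):.2f}")  # ~0.45
# Behavior is much more likely when the conscious thought occurred, so
# claim (1) holds -- even though, by construction, CT causes nothing.
# Only an intervention on CT itself, holding UP fixed, could separate
# (1) from (2), and the instruction-manipulation design never does that.
```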

Interestingly, there are some cases for which the authors of the review identify good reasons to think that the steam whistle view is actually the way things work.

For example, one study compared people who imagined a virtuous choice with those who had not done so. In a subsequent hypothetical choice, people in the first group were more self-indulgent than those in the comparison group. This difference was removed if the same activity was imagined as a court-ordered punishment rather than a choice to volunteer.

However, it seems very unlikely that anyone consciously reasoned “I imagined myself making a virtuous choice, therefore I’m entitled to a bit of self-indulgence”. In this, and several similar reported cases, it seems far more likely that the connection between imagining a virtuous choice, feeling good about oneself, and feeling entitled to self-indulgence runs along on processes that do not cause conscious thoughts with relevant content.

The article under discussion is full of interesting effects, and these are presented in a way that is highly accessible. But it does not succeed in overturning an alternative to its authors’ preferred view. According to this alternative view, the causing of behavior (after consciously perceiving one’s situation, or consciously receiving instructions) is done by unconscious processes. This alternative view allows that sometimes, but not always, these unconscious processes also cause some conscious thoughts that we express either in overt verbal behavior, or in sentences about what we are doing that we affirm to ourselves in our inner speech.

[The article under discussion is Roy F. Baumeister, E. J. Masicampo, and Kathleen Vohs, “Do Conscious Thoughts Cause Behavior?”, Annual Review of Psychology 62:331-361 (2011). The difference between (1) and (2) is further explained and discussed in Chapter 4 of Your Brain and You. ]


Thinking About Modules

November 21, 2011

In a recent Wall Street Journal review article, Raymond Tallis expresses dissatisfaction with what he calls “biologism” – the view that nothing fundamental separates humanity from animality. Biologism is described as having two “cardinal manifestations”.

The first is that the mind is the brain, or its activity. This view is held to have the consequence that one of the most powerful ways to understand ourselves is through scanning the brain’s activities.

The second manifestation of biologism is the claim that “Darwinism explains not only how the organism Homo sapiens came into being (as, of course, it does) but also what motivates people and shapes their day-to-day behavior”.

Tallis suggests that putting these ideas together leads to the following view. The brain evolved under natural selection, the mind is the (activities of the) brain, our behavior depends on the mind/brain, therefore the mind and our behavior can be explained by evolution. A further implication is claimed, namely, that “The mind is a cluster of apps or modules securing the replication of genes that are expressed in our bodies”. Studying the mind can be broken down into studying (by brain scans) the operation of these modules.

Tallis laments the wide acceptance of this way of looking at ourselves. He affirms that brain activity is a necessary condition of all of our consciousness, but holds that “many aspects of everyday human consciousness elude neural reduction”.

But how could aspects of our consciousness elude neural reduction, if everything in our consciousness depends on the workings of the brain? Tallis answers: “For we belong to a boundless, infinitely elaborated community of minds that has been forged out of a trillion cognitive handshakes over hundreds of thousands of years. . . . Because it is a community of minds, it cannot be inspected by looking at the activity of the solitary brain.”

This statement, however, is not an answer to the question of how aspects of our consciousness can elude neural reduction. It explains, instead, why we cannot understand facts about societies by looking at a solitary brain, and why we cannot reconstruct the evolutionary history of our species by looking at the brain of one individual. But the question about elusiveness of neural reduction concerns the consciousness of individuals. It’s about how individual minds work, and what gives rise to each person’s behavior.

Aside from rare cases of feral children, individuals grow up in societies. Even so, their motivations and behavior depend on their individual brains. Individuals must have some kind of representation of societal facts and norms in their own brains, if those brains are to produce behaviors that are socially appropriate and successful. At present, alas, we do not understand what form those representations take, nor how they are able to contribute, jointly with other representations, to intelligent behavior. But the question of how the individual mind works is a clear one, and the search for an answer is one of the most exciting inquiries of our time.

Despite my dissatisfaction with Tallis’s account, I am sympathetic to some of his doubts about reduction of motivation and behavior to the operations of modules. The true source of the problem, however, is not our attention to the mind/brain of solitary individuals.

The real problem is, instead, uncritical acceptance of modules. The modular way of looking at things does not follow from Tallis’s two cardinal manifestations. They say, in sum, that whatever we think and whatever we do depends on the activities of a brain that developed under principles of Darwinian evolution. They do not say one word about modules. They do not imply any theory of how the evolved brain does what it does.

These remarks are in no way a denial of modules, and in some cases, there is very good reason to accept them. But, even accepting that there are many modules, it does not follow that for any given motivation or behavior, X, there is a module that is dedicated to providing X – i.e., that functions to provide X and does not do anything else. Moreover, it is clear that our evolved brain allows for learning. If we learn two things, they may be related, and if we recognize a relation among things that we had to learn in the first place, there cannot be a module for recognizing that relation.

Caution about introducing modules for specific mental or behavioral features that may interest us is compatible with supposing not only that there are many modules, but even with supposing that the operation of several modules is required for everything we do. That’s because a plurality of modules carries with it the possibility of variability in how they are connected. Such variability may depend on genetic differences, developmental differences, and/or differences in learning. In any case of combined action of several modules, therefore, there will be no simple relation between a motivation or a behavior and a single module, nor any simple relation between a motivation or behavior and a collection of modules.

So, even granting Tallis’s two cardinal manifestations and a commitment to extensively modular brain organization, we cannot expect any simple relation to hold between some ability that interests us and the operation of a module dedicated to that ability. So, I agree with Tallis that we should be suspicious of facile “discoveries” of a module for X, where X may be, e.g., an economic behavior or an aesthetic reaction. But I think that the complexities that lie behind this suspicion are to be found in the complexity of the workings of individual brains. Our social relations with others provide distinctive material for us to think about, but they will not explain how we do our thinking about them.

[Raymond Tallis, “Rethinking Thinking”, The Wall Street Journal for November 12-13, 2011, pp. C5 and C8. Readers of Your Brain and You will be familiar with reasons for regarding sensations as effects of, rather than the same thing as, neural activities; but this kind of non-reducibility is not relevant to the issues discussed in this post. They will also be aware of reasons for saying that we do not presently understand how individual minds work.]


Do You Look Like a Self-Controlled Planner?

October 31, 2011

In an article soon to appear in the Journal of Personality and Social Psychology, Kurt Gray and colleagues question whether we “objectify” other people, if that means to regard them as objects with no mental capacities. They suggest that there are two kinds of mental capacities, and that what’s often thought of as “objectification” may actually be a redistribution of judgments about these kinds. They did a series of experiments to test this possibility.

The two kinds of mental capacities are Agency and Experience. “Agency”, in these experiments, comprises the capacities for self-control, planning, and acting morally. “Experience” covers abilities to experience pleasure, desire, and hunger or fear.

The hypothesis, stated a little more fully, is that people who attend to a target’s bodily aspects will tend to rate that target higher on Experience and lower on Agency, with the reverse effect when attention is focused less on bodily aspects and more on cognitive abilities.

They tested this hypothesis in several ways, of which I’m going to describe only the first. The general result of this set of experiments was converging support for the hypothesis.

The first experiment was admirably simple. 159 participants, recruited from campus dining halls, were given a sheet of paper that had one picture, a brief description, and a series of six questions. The single picture was one of the following four:

Erin, presented in a head shot that had been cropped from the following picture.
Erin, presented in a fairly cleavage-revealing outfit from just below the breasts up.
Aaron, presented in a head shot cropped from the following picture.
Aaron, presented shirtless from just below the pectorals up.

Both of these targets are attractive young people and look very healthy. The two head shots will be referred to as Face pictures, and the two others as Body pictures. (The head shots were enlarged, so each of the pictures was about the same size.)

The description given was the same for both, except for the names and corresponding appropriate pronouns. It provided only the information that the person in the picture is an English major at a liberal arts college, belongs to a few student groups, and likes to hang out with friends on weekends.

The questions were all of the form “Compared to the average person, how much is [target’s name] capable of X?”. Fillers for X were self-control, planning, and acting morally (combined into an Agency measure); and experiencing pleasure, experiencing hunger, and experiencing desire. (Since ability to experience hunger did not correlate highly with the other two, only experiencing pleasure and experiencing desire were used to compose the Experience measure.) Answers took the form of a rating on a five point scale, ranging from “Much less capable” to “Much more capable”, with “Equally as capable” for the midpoint.

The key results of this experiment are that participants who were given Body pictures rated the targets higher on Experience and lower on Agency than participants who were given Face pictures. The differences are not large (.27 out of five for Experience, .33 out of five for Agency), but they are statistically significant.
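
To be concrete about how the measures were composed, here is a small sketch (the ratings are invented; only the item groupings come from the study):

```python
# Sketch of the composite measures. The real study's group means differed
# by .33 for Agency and .27 for Experience; these ratings are made up.
AGENCY_ITEMS = ["self-control", "planning", "acting morally"]
EXPERIENCE_ITEMS = ["pleasure", "desire"]  # hunger dropped (low correlation)

def composites(ratings: dict):
    agency = sum(ratings[i] for i in AGENCY_ITEMS) / len(AGENCY_ITEMS)
    experience = sum(ratings[i] for i in EXPERIENCE_ITEMS) / len(EXPERIENCE_ITEMS)
    return agency, experience

# One hypothetical participant from each condition (1-5 scale):
face_viewer = {"self-control": 4, "planning": 3, "acting morally": 3,
               "pleasure": 3, "desire": 3}
body_viewer = {"self-control": 3, "planning": 3, "acting morally": 3,
               "pleasure": 4, "desire": 4}

a, e = composites(face_viewer)
print(f"face viewer:  Agency {a:.2f}, Experience {e:.2f}")  # higher Agency
a, e = composites(body_viewer)
print(f"body viewer:  Agency {a:.2f}, Experience {e:.2f}")  # higher Experience
```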

The authors take these results to support the view that “focusing on the body does not involve complete dementalization, but instead redistribution of mind, with decreased agency but increased experience” (pp. 8-9).

As noted, the remaining experiments in this study point in the same direction. In a way, that seems to be good news – ‘different aspect of mind’ seems better than ‘no mind, mere object’. The authors make it explicitly clear, however, that being regarded as less of an agent would, in general, not be in a person’s interest. Some other intriguing aspects of this experiment are that the gender of the participants doing the ratings was not found to matter, and Erin came out a little ahead of Aaron on the Agency measure.

However, the aspect of this experiment that intrigues me the most is one that lies outside of the authors’ focus, and on which they do not comment. To explain this aspect, note first that the description provides very little information – it could be fairly summarized by saying the person in the picture is a typical college student. A person could be forgiven for reacting to the rating request with “How on Earth should I know whether this person is above or below average on self-control (or planning ability, or moral action, experiencing pleasure, or experiencing desire)!?”

Since the participants were college students, and thus similar to the depicted targets as described, perhaps we should expect them to rate the targets as somewhat above average in mental abilities. However, one rating was below average: the rating for Agency in response to Body pictures was 2.90 (where capability equal to that of the average person would be 3). The difference between this rating for Body pictures and higher rating for Face pictures indeed supports the authors’ hypothesis, but it leaves me wondering what could have been in the consciousness of those doing the ratings.

An even greater puzzle comes from the fact that the highest rating was for Experience in response to Body pictures – it was 3.65. (Remember, the highest number on the scale was 5, so 3.65 is about a third of the distance between “Equally as Capable” and “Much More Capable”.) So, I wonder: Do college students really think they and their peers are better at experiencing pleasure and desire than the average person? That seems a very strange opinion.

[Kurt Gray, Joshua Knobe, Mark Sheskin, Paul Bloom, and Lisa Feldman Barrett, “More than a Body: Mind Perception and the Nature of Objectification”, Journal of Personality and Social Psychology, in press.]


An Unusual Aphrodisiac

October 10, 2011

Imagine you’re a prehistoric heterosexual man who’s going into battle tomorrow. The thought that there’s a fair chance of your dying might so completely occupy your mind that you’d be uninterested in anything save, perhaps, sharpening your spear.

On the other hand, your attitude might be that if you’re going to be checking out tomorrow, you’d like to have one last time with a woman tonight.

We are more likely to be descendants of the second type of man than the first. So, we might expect that there would be a tendency among men for thoughts of their own death to raise their susceptibility to sexual arousal.

In contrast, women who were more erotically motivated when they believed their own death might be just around the corner would not generally have produced more offspring than their less susceptible sisters. So, there is no reason to expect that making thoughts of death salient should affect sexual preparedness in women.

These ideas have recently been tested in two studies by Omri Gillath and colleagues. Of course, they didn’t send anybody into battle. Instead, they used two methods – one conscious, one not – to make the idea of death salient.

In the first study, one group of participants wrote responses to questions about the emotions they had while thinking about their own death and events related to it. Another group responded to similarly phrased questions about dental pain. The point of this contrast was to distinguish whether an arousal (if found) was specific to death, or whether it was due more generally to dwelling on unpleasant topics.

After responding to the questions, participants were shown either five sexual pictures (naked women for men, naked men for women) or five non-sexual pictures (sports cars for men, luxury houses for women). Previous studies had found that all the pictures were about equal for their respective groups in overall perceived attractiveness. Participants had all self-identified as heterosexual. They had five minutes to examine their set of five pictures carefully.

Participants were each connected to a device that measured their heart rate. The key result was that the men who answered the questions about death and viewed erotic pictures had a significantly higher average heart rate during the picture viewing than any other group. That means that, on average, they had a higher rate than other men who saw the same pictures, but had answered questions about dental pain. They also had a higher rate than other men who had answered questions about death, but then saw non-sexual pictures. And they had a higher rate than women who answered either question and viewed either pictures of naked men or non-sexual pictures.

In the second study, the death/pain saliency difference was induced by flashing the word “dead” (for half the participants) or the word “pain” (for the other half) before each item in a series of pictures. The presentation of the words was very brief (22 thousandths of a second) and came between masks (strings of four Xs). With the masks, that’s too short to recognize the word. The pictures either contained a person or did not. Half of the pictures that contained a person were sexual, half were not. Pictures remained visible until the participant responded.

The response was to move a lever if, but only if, the picture contained a person. The movement was either pulling the lever toward oneself, or pushing it away. There were 40 consecutive opportunities for pulling, and 40 for pushing; half of participants started with pulling, half started with pushing.

The logic of this experiment depends on a connection previously established by Chen and Bargh (1999) between rapidity of certain responses and the value of what is being responded to. Pulling brings things closer to you, and if what’s before your mind is something you like, then that will speed the pulling (relative to pulling in response to something you’d ordinarily try to avoid, or something toward which you are neutral).

The reasoning, then, is that those who had a higher degree of sexual preparedness should pull faster in response to erotic materials than those who were not so highly prepared. Gillath and colleagues hypothesized that participants who received the brief exposure to “dead” and then saw an erotic picture should be faster pullers than those who received a brief exposure to “pain” before an erotic picture.

And that is what they found – for men. There was no such result for women. Nor did the brief exposure to “dead” result in faster pulling after being presented with non-sexual pictures; the faster reaction times depended on both the exposure to “dead” and the sexual nature of the following picture.
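
Here is a toy sketch of the pattern, with made-up latencies, just to display the interaction: only the prime-plus-erotic-picture cell shows faster pulling.

```python
# Toy version of the Study 2 comparison (all latencies invented): mean
# lever-pull time to pictures after a masked "dead" vs. "pain" prime.
from statistics import mean

pull_ms = {
    ("dead", "erotic"):     [612, 598, 640, 605, 590],
    ("pain", "erotic"):     [655, 662, 641, 670, 648],
    ("dead", "non-sexual"): [660, 651, 672, 645, 658],
    ("pain", "non-sexual"): [662, 649, 668, 655, 661],
}

for cond, times in pull_ms.items():
    print(cond, f"{mean(times):.0f} ms")
# Only the ("dead", "erotic") cell is faster: the effect requires BOTH the
# death prime and the sexual content -- the interaction pattern reported
# for the male participants.
```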

These two studies are certainly interesting in relation to the evolutionary thinking that led them to be undertaken. But I also find them fascinating in relation to a more general point. The second study provides evidence that our brains can (a) make a distinction (between pain and death) and (b) relate it to another difference (sexual vs. non-sexual material) completely unconsciously and extremely rapidly. And the first study, although done at a much slower time scale and with consciousness of the materials used to manipulate mood (i.e., the writing about death vs. pain), showed an effect on heart rate, which is not something that was under participants’ control. The brain processes of which we are unaware (except when revealed in studies like these) are amazing indeed.

[O. Gillath, M. J. Landau, E. Selcuk and J. L. Goldenberg (2011) “Effects of low survivability cues and participant sex on physiological and behavioral responses to sexual stimuli”, Journal of Experimental Social Psychology 47:1219-1224. The previous study mentioned in the discussion of Study 2 is M. Chen and J. A. Bargh (1999) “Consequences of automatic evaluation: Immediate behavioral dispositions to approach or avoid the stimulus”, Personality and Social Psychology Bulletin 25:215-224. ]


Mind the Gut

September 19, 2011

Jonah Lehrer’s Wall Street Journal column for September 17-18, 2011 reports a fascinating pair of facts – and then makes a puzzling application of them.

The first fact concerns probiotic bacteria, which are often found in yogurt and other dairy products. Researchers provided mice with either a normal diet, or a diet rich in probiotic bacteria, and then subjected them to stressful situations. The mice with the probiotic-enriched diet showed less anxiety and had lower levels of stress hormones.

By itself, this result is not so interesting. After all, it could be that the probiotic bacteria affect digestion, then blood chemistry, and finally hormone levels. But the second fact shows that a different mechanism is at work.

The second fact is that when neural connections between gut and brain were severed, the probiotic-enriched diet no longer produced the effect of reducing symptoms of stress. This fact suggests that the effect of the difference in diet works directly through the gut-brain neural connection, rather than through a less direct blood chemistry path.

It’s as if we have a sense organ in our gut that feeds into an evaluative system. It doesn’t give us any sensations, but it tells our brains how things are in our digestive systems. If things are going well down there, we’re less prone to anxiety when stressful situations arise.

That’s a surprise that contributes to a sense of wonder at how deliciously complex unconscious processes can be. Lest one think that this has nothing to do with us, Lehrer also reports a study that showed an analogous result in human subjects who received large doses of probiotics for a month. (No cutting of nerves in that case, of course.)

Now for the puzzling conclusion. These and other studies are taken by Lehrer to show that “the immateriality of mind is a deep illusion. Although we feel like a disembodied soul, many feelings and choices are actually shaped by the microbes in our gut . . . . ” And, although he concedes that “This doesn’t mean, of course, that the mind-body problem has been solved”, he goes on to declare that “it’s now abundantly clear that the mind is not separate from the body . . . . Rather, we emerge from the very same stuff that digests our lunch.”

But “shaped” is one of the many words that mean “caused”, with the addition of something about the manner of causing (as in “burned” or “built”), or degree of causal contribution (as in “influenced” or “forced”). What the cited research shows is that causes of anxious behavior and hormone levels include the presence of probiotic bacteria in the gut, and that the means of that causal contribution works through a neural connection. That is surprising and fascinating, but it offers no evidence whatsoever that feelings of anxiety are the same things as any material events.

In general, causes and effects are different. From “How anxious you feel depends in part on what kind of bacteria you have in your gut” it does not follow that feelings are material – only that feelings, whatever they are, can be caused in a very surprising way.

Similar remarks apply to “emerge”. Different people use this word in different ways, so it’s not a very helpful term. But one of its meanings is “causes”. Yes, it is indeed fascinating that what’s in our gut can cause how we feel, and do so through a direct, neural pathway. But no, that does not show that feelings are material events. It does not show that immateriality of feelings is a deep illusion.

For some purposes, the point I’m making may not matter. It’s an important fact that what goes on in our consciousness is brought about by events in our neural systems, and the studies Lehrer cites in this article do help drive that point home. But when the mind-body problem is introduced into the discussion, it becomes important to distinguish between the views (1) that neural events cause mental events such as feelings and (2) that feelings are the same things as neural events. The evidence Lehrer cites in his article supports (1), but is silent as regards (2).

[Jonah Lehrer, “The Yogurt Made Me Do It”, The Wall Street Journal, September 17-18, 2011, p. C12.]


Are You an Addict?

August 29, 2011

Or, to be politically correct, Are you a person with addiction? That, at any rate, is the phrase used in a new Public Policy Statement: Definition of Addiction, put out by the American Society of Addiction Medicine, dated August 15, 2011.

Definitions are supposed to help their recipients correctly apply (and withhold) the defined terms. Since this document runs to eight pages, you might wonder how useful it will be in serving its implied purpose. You would be right to do so: in fact, the Statement itself says that one needs a professional to determine the presence of addiction. (Look in note 2. I’d quote the relevant sentence, but ASAM prohibits excerpting any part of the document without prior permission.)

What the Statement actually is, is an essay that makes many significant claims about addiction. I welcome this statement, because I have long found the concept of addiction to be unclear. What, for example, is the difference between being addicted to something, and just liking it a lot? One occasionally hears the phrase “sex addict”: Can one really be addicted to sex? If one goes to great lengths to obtain it, is one addicted? Or does one just greatly enjoy it? Romeo and Juliet are portrayed as suffering for their love, and as not refraining from expressing it behaviorally even though severe consequences were known to them. Were they addicted to each other?

For all its problems as a definition, the Statement does repay reading, and I encourage readers to do that. But eight pages’ worth of information is a lot to carry around in one’s head. Here, I’m going to try to identify, and make a few comments on, the points that I think will be most memorable.

The most important claim comes right at the beginning: Addiction is a disease of certain parts of the brain. The reward system is one of the affected parts, but there are others. This disease of the brain has many effects. Behavior – using a substance or engaging in an activity such as gambling – is, of course, among them. But other effects include cognitive and emotional ones. Addicts are likely to have different opinions than others about the seriousness of consequences or the causes of their behavior, and they often have unusual emotional reactions.

Here is a limitation of what is offered in the Statement. Parts of it go into considerable detail about some of the neural pathways that may be involved in reward and related functions, identifying connections between several brain areas and specific neurotransmitters. But there is no description of just what kind of difference in the operation of these pathways constitutes the difference between those with addictions and those without. In short, the disease that addiction is said to be is never specified in neural terms.

How, then, do certified professionals identify whether they are dealing with a person with addiction, or not? What makes the subject so complex – the reason why we need certified professionals for diagnoses – is that there is no small set of indicators that are always present.

There are, however, some signs that stand out, to me at least, as particularly important. These are (1) Persistence of a behavior despite accumulation of problems that are due to it. (2) Inability to refrain from a behavior even when undesired consequences of it are acknowledged. (3) Cognitive difficulties in accurately recognizing the relation between a behavior and problems in one’s life.

The classification of addiction as a disease is controversial partly because it forces upon us a question of responsibility. Because the Statement does not identify the nature of the disease in neural terms, it is unlikely to be of much help in resolving that question. Those who incline toward diminished responsibility will point out that one is not responsible for being sick, or for the consequences of having an illness. They may draw comfort from the Statement’s observation that genetic inheritance makes a large contribution to the origin of the disease.

Those oppositely inclined, however, are likely to feel that an addicted person still has control over whether to use a drug, or engage in a behavior, on each particular occasion on which an opportunity presents itself. Being addicted is not being out of control of one’s actions, in the way one would be if one were having a seizure.

In this context, it becomes clear why point (3) is of particular importance. People do not set out to misunderstand. But if they misunderstand the causes of their problems, they will be likely to act in ways that worsen them, or, at the very least, fail to solve them. If being addicted causes false beliefs about the causes of feelings of stress, for example, or causes mistakes in estimating the seriousness of consequences of addictive behavior, then people can be in control of the immediate action of, say, snorting a drug, yet lack the normal resources of reasoning about whether that is something they should do.

It’s as if their brains had a hidden agenda, favoring one set of desires by, in part, hiding from them the ways that satisfying those desires frustrates the satisfaction of other desires.

That kind of failure is disturbing, but we have to face up to it. Clearly recognizing the possibility of cognitive deficit will, I think, affect our attitude toward people with addictions. Even without understanding the underlying neural operations, we can see that admonishment is not well suited to fixing a cognitive problem. A treatment model that aims to restore accurate understanding of causes and consequences seems more appropriate to a condition in which such understanding is impaired, irrespective of how one’s cognitive processes came to be undermined.

[The Statement can be found at http://www.asam.org/DefinitionofAddiction-LongVersion.html .]


The Social Animal

August 8, 2011

In his recent Commentary article (see my previous post of 7/20/11), Peter Wehner mentions David Brooks’s recent book, The Social Animal. Wehner finds Brooks’s book “marvelous” and repeats a statement that Brooks quotes with approval from Jonathan Haidt: “unconscious emotions have supremacy but not dictatorship”.

I’m pleased to report that I too found much to admire in The Social Animal. In highly readable fashion, Brooks presents a feast of delectable morsels from studies in psychology and neuroscience. Many lines of evidence that reveal the operation of our unconscious brain processes are clearly explained, and we get insight into how they affect everything we do, including actions that have deep and lasting consequences for our lives.

Inevitably, recognition of our dependence on unconscious processes raises questions about the extent to which we control our actions, and the extent to which we are responsible for them. These questions come up for discussion on pages 290-292 of The Social Animal. It is these pages – less than 1% of the book – that I want to comment on today.

Most of what Brooks says in these pages is presented in terms of two analogies and a claim about narratives. I’m going to reflect on each of these. I believe that we can see some shortcomings of the analogies and the claim just by being persistent in asking causal questions.

The first analogy is that we are “born with moral muscles that we can build with the steady exercise of good habits” (290), just as we can develop our muscles by regular sessions at the gym.

But let us think for a moment about how habits get to be formed. Let us think back to early days, before a habit is established. Whether it’s going to the gym, or being a good Samaritan, you can’t have a habit unless you do some of the actions that will constitute the habit without yet having such a habit.

Some people habitually go to the gym; but what got them there the first time? Well, of course, they had good reasons. They may have been thinking of health, or status, or perhaps they wanted to look attractive to potential sex partners. Yet, many other people have the same reasons, but don’t go to the gym. What gets some people to go and not others?

That’s a fiendishly complex question. It depends on all sorts of serendipitous circumstances, such as whether one’s assigned roommate was an athlete,  whether a reminder of a reason arrived at a time when going to the gym was convenient, whether one overdid things the first time and looked back on a painful experience, or whether one felt pleasantly tired afterward.

The same degree of complexity surrounds the coming to be of a good Samaritan. In a more general context, Brooks notes that “Character emerges gradually out of the mysterious interplay of a million little good influences” (128). And he cites evidence that behavior is “powerfully influenced by context” (282). The upshot of these considerations is that whether a habit gets formed, and even whether an established habit is followed on a particular occasion, depends on a host of causes that we don’t control, and in many cases are not even aware of.

The second analogy is that of a camera that has automatic settings which can be overridden by switching to a “manual” setting. Similarly, Brooks suggests, we could not have a system of morality unless many of our moral concerns were built in, and were “automatic” products of most people’s genetic constitution and normal experience in families, schools, and society at large. But, like the camera, “in crucial moments, [these automatic moral concerns] can be overridden by the slower process of conscious reflection” (290).

Actions that follow a period of deliberation may indeed be different from actions that would have been done without deliberation. But if we take one step further in pursuit of causal questions, we have to ask where the deliberation comes from. Why do we hesitate? What makes us think that deliberation is called for?

The answers to these questions are, again, dependent on complex circumstances that we know little about and so are not under our control. To put the point in terms of the camera analogy, yes, if you decide on “manual” you can switch to that setting. But some people switch to manual some of the time, others do so in different circumstances, and some never do. What accounts for these differences? That’s a long, complex, and serendipitous matter. It depends on how you think of yourself as a photographer, whether you were lucky enough to have a mentor who encouraged you to make the effort to learn the manual techniques, whether you care enough about this particular shot. That history involves many events whose significance you could not have appreciated at the time, and over which you did not have control.

The claim about narratives is that “we do have some control over our stories. We do have a conscious say in selecting the narrative we will use to organize perceptions” (291). The moral significance of this point is that our stories can have moral weight: “We have the power to tell stories that deny another’s full humanity, or stories that extend it” (291).

We certainly have control over what words we will utter or not utter. But any story we tell about our place in society and our relations to other people has, first, to occur to us, and second, to strike us as acceptable, or better than alternative stories, once we have thought of it. On both counts, we are dependent on brain processes that lie outside our consciousness and that depend on long histories involving many events over which we have had no control.

We can provide a small illustration of this point by thinking about something Brooks brings up in another context. This is confirmation bias – the tendency to overvalue evidence that agrees with what we already think, and undervalue conflicting evidence. People don’t make this kind of error consciously. They tell themselves a story according to which they are objective evaluators who believe a conclusion because they have impartially weighed all the evidence. But, sometimes, they can find such a story about themselves acceptable only because they are unaware of their bias.

I am not being a pessimist here. Those who are lucky enough to read The Social Animal, or the experimental work that lies behind it, may very well be caused to take steps to reduce the influence of confirmation bias. The point remains that the acceptability of a narrative about one’s place in the scheme of things depends on many factors that lie in unconscious effects of complex and serendipitous histories.

[The book under discussion is by David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement (New York: Random House, 2011).]


Free Will, Morality, and Control

July 20, 2011

In a recent article in Commentary magazine, Peter Wehner inveighs against some of the views expressed in Sam Harris’s The Moral Landscape, and claims that “free will isn’t an illusion”.

Since different people mean different things by “free will”, we have to ask what Wehner means by this term. The most definite indication that Wehner provides is the following:

“Try as he might, Sam Harris cannot explain how morality is possible without free will. If every action is the result of biological inputs over which we have no control, moral accountability becomes impossible.”

Or, in other words, having free will requires that some of our actions are not the result of biological inputs over which we have no control. But what does this mean? What is a “biological input”?

Some of Wehner’s remarks suggest that “biological input” means something like “genetic constitution” or, perhaps, “genetic constitution plus developmental factors such as the state of one’s mother’s health during her pregnancy”. What Wehner seems to exclude from “biological inputs” is what we learn from our perceptual experience. This exclusion seems natural – the things you see and hear need not have anything to do with biology.

Even if you learn something by watching animals in a zoo, it would be unusual to think of yourself as having received biological inputs. You receive perceptual inputs at the zoo, and these enable you to know something about biological creatures.

But if “biological input” does not include what we learn from our perceptual experience, then Harris is not claiming that what we do depends only on “biological inputs”. I think it will be difficult to find anyone at all who holds such a view.

A view much more likely to be held is that all our actions are results of our biological inputs together with our perceptual inputs. I do not mean only perceptual inputs that are present at the time of acting (although those, of course, must be included). What we perceive changes us. It gives us memories, and provides information that we retain. It puts us into a state that is different from the state we would have been in if we had perceived something different. The state of our brains that we have at the time of an action is the result not only of present perceptions, but also of a long history of being in a state, perceiving, changing state, perceiving something further, changing state again, and so on and on.

Another key term in Wehner’s understanding of “free will” is “control”. You are in control of your action if you are doing what you aim to do, and you would have been doing something else if you had aimed to do that. Both of these conditions can be met if your actions are a result of current perceptions plus a brain state that you are in because of your original constitution and your history of perceptual inputs. So, you can have some control over your actions.

Of course, it is also true that there is much you are not in control of. You can open your eyes or keep them shut, but what you will see if they are open is not under your control. You can’t control your original constitution, and you can’t control the particular kind of change in your brain state that will be made by what you perceive.

Wehner worries that “If what Harris argues were true, our conception of morality would be smashed to pieces. If there is no free will, human beings are mere automatons, robots programmed to act (and not act) in certain ways. We cannot be held responsible for what we have no control over.”

But these alleged implications do not follow, if we understand Harris to be holding the more plausible view I’ve just sketched. The first point is relatively simple: We are not automatons if our actions are responsive to differences, not only in current perceptual inputs, but in matters of context that may have affected us at various times in the past.

The point about being a “robot programmed to act” is a little more complicated. We must distinguish between actions being canned and actions being the result of some definite process. Outputs of grocery store readers are canned – someone has to type in what amounts to a rule like this: If THIS bar code is read, then display THAT price on the monitor and add it to the total. But that is not the way you, or robots, or even programs work. In genuine programming, inputs trigger a process that leads to a result that no one has previously calculated. Even a chess playing program takes in board positions that no programmer has previously thought of, and processes that information until an output (its next move) is reached. Since these board positions were unforeseen, good responses to them cannot have been worked out by programmers, and the responding moves cannot have been canned.
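
The contrast is easy to display in code. Here is a sketch (hypothetical prices, and a deliberately trivial “evaluation” standing in for a real chess engine):

```python
# "Canned" lookup vs. a genuinely computed response -- a stand-in for the
# grocery-scanner/chess contrast (all values hypothetical).

PRICES = {"0123456789": 2.49, "9876543210": 0.99}   # every answer pre-typed

def canned(barcode: str) -> float:
    # Retrieval only: fails on any input nobody anticipated.
    return PRICES[barcode]

def computed(board: list) -> int:
    # A trivial "program": pick the move with the best score, even for
    # inputs (board positions) no one has ever seen before.
    return max(range(len(board)), key=lambda i: board[i])

print(canned("0123456789"))     # 2.49 -- retrieved, not worked out
print(computed([3, 9, 1, 7]))   # 1 -- computed fresh from an unforeseen input
```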

Regarding control, everyone must recognize that we have limits. But if we are not ill or befuddled by drugs, there will be many possible actions that we will do if we aim to do them, and won’t do if we don’t aim to do them; and so there will be many possible actions that are under our control.

[The article is Peter Wehner’s “On Neuroscience, Free Will, and Morality”, Commentary, June 8, 2011. Available at http://www.commentarymagazine.com/2011/06/08/on-neuroscience-free-will-and-morality. Several of the points made in this post are more fully explained in Your Brain and You. Sam Harris’s The Moral Landscape was published by The Free Press, New York, in 2010; an earlier post (12/1/2010) on this blog comments on another aspect of this book.]


Brain Infection and Aversion Reversals

July 1, 2011

Christof Koch recently reported on a grim little organism, Toxoplasma gondii. This one-celled protozoan reproduces sexually, but only in cat intestines. Mammals that encounter cat feces can become infected with the offspring. The eventual result is an attack on the host’s brain that reduces its aversion to the smell of cats. That’s bad news if you’re a small rodent. From T. gondii’s point of view, however, this is a great strategy for getting itself into a cat intestine, where the cycle can begin again.

There are several lines of reflection that knowledge of this organism may trigger. The one that intrigues me the most is that it may provide evidence for resolving a conundrum that seems at first sight to be irresolvable.

The conundrum can be introduced through a fact commonly reported by beer lovers – namely, that they did not like beer at all when they first tasted it. Many readers will be able to think of parallel examples with other drinks or foods. The reversal can go in the opposite direction, too. I used to like licorice, and also lobster, so much that I overdosed on them, with strikingly unpleasant results. Immediately afterward, I detested these foods, and I avoid them to the present day.

The conundrum is a question about what happens during this kind of reversal. Is it (A) or (B), or, perhaps, some combination of them?

(A) What happens after consuming the item (alcoholic euphoria in one case, nausea in the other) causes changes in your brain that make the item taste different. For example, you still don’t like the taste that beer produced in you when you first had it in your mouth, but now beer in your mouth no longer produces that taste. It produces a different one that you like better.

(B) What happens after consuming the item causes changes in your brain that alter your evaluation of the taste. For example, beer in your mouth produces exactly the same taste it did the first time you sipped it. It’s just that now you like that taste, whereas before you didn’t.

The problem is that it seems that any case of reversal could be equally well accounted for by either of these views. So, it seems that they are different views, but that nothing could count as supporting one more than it supports the other.
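
The two views can even be written down as little toy models that produce exactly the same observable behavior, which is just the conundrum restated (a sketch only; the labels are placeholders, not a theory of taste):

```python
# Hypothesis (A): experience changed the percept; the evaluation is fixed.
def model_a(after_reversal: bool) -> str:
    percept = "taste-2" if after_reversal else "taste-1"  # percept changes
    liked = {"taste-1": False, "taste-2": True}           # fixed evaluation
    return "accept" if liked[percept] else "reject"

# Hypothesis (B): the percept is fixed; experience changed the evaluation.
def model_b(after_reversal: bool) -> str:
    percept = "taste-1"   # same taste as ever (unchanged, so unused below)
    liked = after_reversal                                # evaluation flipped
    return "accept" if liked else "reject"

for t in (False, True):
    assert model_a(t) == model_b(t)   # indistinguishable from the outside
print("Same observable behavior under (A) and (B); behavior alone can't decide.")
```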

That situation is deeply troubling to some philosophers. A view that was very popular in the mid-Twentieth Century, and still not completely dead, held that there could not even be a difference in what two claims meant, unless it was understood how some conceivable evidence could count as supporting one of them better than the other. If that view were right, and (A) and (B) are equally good at accounting for all possible evidence, then the appearance that they are different accounts must somehow be an illusion.

T. gondii helps us untie this knot. That’s because of three facts that Koch reports. First, infected rodents show no abnormalities in their sense of smell for other things. More importantly,

     “the density of cysts [housing T. gondii] in the amygdala is almost double that in other brain structures involved in odor perception. Parts of the amygdala have been linked to anxiety and the sensation of fear.”

And,

     “the genome of T. gondii contains two genes related to mammalian genes involved in the regulation of dopamine, the molecule associated with reward and pleasure signals in the brain, including ours.”

While these facts do not rule out some difference in how cat urine smells to infected versus uninfected rodents, they offer more support for a view like (B), i.e., for the idea that infected rodents may smell cat urine just as before, but no longer react to it with fear (and may even find it somewhat pleasant). So there is, after all, a case in which some evidence counts more in favor of one of these views rather than the other.

By the way, many humans are also infected with T. gondii. Effects are not definitively established, but there are suggestive connections between such infection and psychiatric diseases and risky behavior. Hence Koch’s title: “Protozoa Could Be Controlling Your Brain”.

[Koch’s article is at http://www.scientificamerican.com/article.cfm?id=fatal-attraction&WT.mc_id=SA_CAT_MB_20110518 . Reversals have been discussed in D. C. Dennett, Consciousness Explained (Boston: Little, Brown & Co., 1991) and by me in Understanding Phenomenal Consciousness (Cambridge: Cambridge University Press, 2004).]

