Does Thinking About God Have a Down Side?

February 13, 2012

Research led by University of Waterloo psychologist Kristin Laurin has yielded a result that surprised me, and that raises questions about a possible unwanted effect of work by many thinkers, myself included.

Laurin and her colleagues did several experiments, and tested two main hypotheses. I’ll focus on one: God thoughts lead to reduction in active pursuit of goals. This hypothesis was tested in three experiments, all of which supported it. A summary of just one of the experiments will explain what the hypothesis means, and give some idea of how it can be investigated. (The other hypothesis, which did not surprise me, was this: God thoughts lead to increase in resistance to temptation.)

How can you be sure people have recently had “God representations” in mind? One way is to give them the task of composing sentences from lists of words they are given, and to include words like “God”, “divine”, and “sacred” on the lists. That was the setup for one group of participants. Another group of participants was given the same task with other lists that contained none of those words, but did contain words for positively valued items (e.g., “sun”, “flowers”, “party”). A third group did the same task using lists with neutral words.

To get at the effect of the differences among these groups, Laurin and her colleagues asked all participants to do a new verbal task. They were told that high scoring on this second task was a good predictor of success in their chosen field (engineering, as it happens). The task was to write down as many English words as they could in 5 minutes that are composed of just the letters R, S, T, L, I, E, and A.
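As a concrete illustration, the check behind scoring that task is easy to sketch in code. (The function name and sample answers below are my own, not from the study; I also assume letters may be reused, since the task as described does not forbid repetition.)

```python
# Hypothetical scoring check for the second task: does a word use only
# the letters R, S, T, L, I, E, A?

ALLOWED = set("rstliea")

def uses_only_allowed(word: str) -> bool:
    """True if every letter of `word` comes from the allowed set."""
    return word.isalpha() and set(word.lower()) <= ALLOWED

# A participant's answer sheet could then be scored by counting valid words:
answers = ["stale", "trail", "tiger", "rise", "liar"]
score = sum(uses_only_allowed(w) for w in answers)  # "tiger" fails: 'g' is not allowed
```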

The key result – predicted by the researchers but surprising to me – was that participants who had received the list of religion-related words on the first task did less well on this second task than the other participants: they averaged 19.5 words, compared with 30.4 and 30.3 for the participants who had gotten positive or neutral non-religion-related words, respectively.

Several weeks before this experiment was conducted, the authors had given a questionnaire to their participants that included a religion identification question. They were thus able to test whether their experimental result depended on participants’ religious classification. They found that it did not, even when that classification was “atheist” (about half of the participants in this study).

The authors suggest a mechanism for their observed effect, namely that exposure to the religion-related words in the lists in the first task “activated the idea of an omnipotent, controlling force that has influence over [participants’] outcomes”. In a second study, they found experimental support for this mechanism, and concluded that “only those who believed external forces could influence their career success demonstrated reduced active goal pursuit following the God prime” (where receiving the “God prime” = receiving the religion-related lists in the first task).

This conclusion gives me pause, for the following reason. As is evident from posts on this blog and several others, from recent books, and from newspaper reports, there are many lines of research showing the importance of unconscious processes. A large number of effects on our behavior come from circumstances of which we are unaware, or from circumstances that we consciously notice but that influence our behavior in ways we do not realize. Over the last decade, and continuing today, our dependence on processes that are unconscious, and therefore not under our control, has become more and more widely publicized.

There thus seems to be a serious question whether the increasing recognition of the effects of unconscious processes may have an unwanted, deleterious effect: reducing our motivation to actively pursue our goals. Your Brain and You addressed somewhat similar issues about the relation of unconscious processes to responsibility and to certain attitudes toward ourselves. But, of course, it could not consider this recent experiment, and it did not address the question of what effect the recognition of our dependence on unconscious processes might have on our motivation to pursue our goals.

I do not think I have an answer to this question, but I wonder whether the following distinction may turn out to be relevant. Getting people to think about a god is getting them to think about an agent – an entity with its own purposes and ability to enact them. On the other hand, accepting that there are causes of behavior that lie beyond our control is not the same as accepting that our outcomes depend on another agent’s purposes. So, it seems possible that the growing recognition of the importance of unconscious processes to our thoughts and actions may not lead to reduced motivation to achieve our goals.

[Laurin, K., Kay, A. C., and Fitzsimons, G. M. (2012) “Divergent Effects of Activating Thoughts of God on Self-Regulation”, Journal of Personality and Social Psychology 102(1):4-21.]


Is There an Appearance/Reality Distinction for Pain?

January 23, 2012

In a recent article, philosopher Kevin Reuter has provided an interesting example of experimental philosophy that challenges a widely held view.

The background is that many philosophers (including me) hold that there is no appearance/reality distinction for pain. Pain is nothing but a feeling, so if you have a painful feeling there is no question but that you have a pain. You can be fooled about what is causing you to have the pain; for example, you might think you’ve got a tumor when it’s just a cyst. But you can’t be fooled about whether you are suffering. (Another author in the same journal humorously imagines lack of success for a doctor who would refuse to prescribe painkillers, explaining that the patient is only having an appearance of pain, not a real one.)

There are parallels for our “outer” senses. You can, for example, be fooled about what color a thing is, because you might be looking at it in bad lighting. But you can’t be fooled about the way it *looks*. You might inadvertently pick the wrong word for the color a thing looks to you, but it hardly makes sense to say that a thing might seem to look to you other than the way it does look to you. The way a thing looks just is its appearance, and while things in your kitchen can appear other than they really are, appearances themselves can do no such thing.

Many leading views say that the same thing holds for pain. There is simply no difference between feeling a pain, or having something appear to you as a pain, and actually having a pain.

Many leading philosophers also believe that this view – “There is no appearance/reality distinction for pains” – is not a philosophical theory. They are not claiming to say what people *ought* to believe about pains and they are not claiming to have made a philosophical discovery. They regard themselves as merely making explicit what is already implicit in the way people in general speak about their pains.

It is this attribution to the general public of the “No appearance/reality distinction for pains” view that Reuter directly challenges.

The key ground for the challenge is something one does not often see in a philosophy paper. It is a statistical analysis of remarks by non-philosophers – in this case, remarks found on health-related internet sites. Reuter gives details about his methods of search and analysis, but I will just summarize the key results, which I think his evidence clearly supports.

To wit: (1) People use both “I feel a pain” and “I have a pain” (and grammatical variants) in reporting both mild pains and severe pains. However, (2) “feel” is used about as often as “have” when mild pains are referred to, whereas “have” is used far more often than “feel” when the reported pain is severe (about 6 times as often on average, ranging from equally often to 14 times as often, depending on exactly which word — e.g., “major”, “severe”, “bad” — is used).

Result (2) is then combined with another observation: When people use variants of “seems” (e.g., “feels”, “looks”, “sounds like”, etc.) in the case of senses such as touch, vision, or audition, they are making an appearance/reality distinction, and they are indicating lower confidence in their judgment. For example, if you speak of a blue tie, or say a tie is blue, you are confidently committing yourself to the claim that the tie is blue. But if you say it looks blue, you are leaving open the possibility that it might not really be blue, and that the way it looks – its appearance – is misleading as to how it really is.

The conclusion is then drawn that the difference in frequency of use of “feel” versus “have” that correlates with mildness versus severity of pain indicates that, at least for mild pains, people – users of health-related internet sites – are making an appearance/reality distinction.

Of course, this conclusion depends on supposing that there is not a better explanation of the correlation between “feel”/”have” and mild/severe. Reuter considers several more or less plausible alternative explanations, and adequately rebuts them. The most plausible of these is that “I have a pain” is, implicitly, a request for help. If the pain is mild, there may be no need for help, so the person reduces the help-seeking implication by using “feel” instead of “have”.

Reuter’s point about this suggestion is that more direct means of seeking aid are easily available, so it is unlikely that pain reports have the function of indirectly asking for help.

There is, however, a variant of this alternative that Reuter does not consider. People know that others are likely to empathize with a reporter of pain. So, if the pain is mild, the person who reports it may want to convey something like “Don’t worry, don’t feel bad for me, it’s only a little pain”. Perhaps using “feel” is a way of indicating this lack of need for empathy.

Of course, it’s unlikely that anyone thinks explicitly that this is what they are doing. So, we might wonder whether such an unconscious adjustment of language is too subtle to be plausible. I do not think so. Consider the shades of politeness in the following list:

Shut the door.

Shut the door, ok?

Would you shut the door?

Please shut the door.

Would you shut the door, please?

If you’ll shut the door, we’ll be less likely to be interrupted.

Which of these we use depends on how we are related to the person we’re addressing, and on circumstances. We do use different degrees of politeness, and we may sometimes pay careful attention to how to put a request. But on many occasions, we tailor what we say to relationships and circumstances without reflecting on or attending to our choice of phrasing, or even realizing that we are adjusting our words to relationships and circumstances. So, perhaps we are sometimes engaging in a similar, unreflective shading of politeness when we say that we “feel a pain” instead of that we “have a pain”.

Whether or not that is a good explanation, we should not forget result (1): People sometimes use “feel” even for severe pains that they cannot plausibly be taken to regard as unreal.

[Reuter, K. (2011) “Distinguishing the Appearance from the Reality of Pain”, Journal of Consciousness Studies 18(9-10):94-109.]

Gazzaniga’s Modules

January 3, 2012

I’ve been reading Michael Gazzaniga’s 2009 Gifford Lectures, now published as Who’s In Charge? Free Will and the Science of the Brain. I can’t say that I think he’s untied the knot of the free will problem, but the book contains some interesting observations about split brain patients, brain scans, and modules. Most of this post is about modules, but Gazzaniga’s remarks about brain scans deserve top billing.

These remarks come in the last and most useful chapter, which focuses on problems in bringing neuroscience into court. Gazzaniga provides a long list of such problems, and anyone who is interested in neuroscience and the law should certainly read it.

The general theme is this. Extracting statistically significant conclusions from brain scans is an extremely complex business. One thing that has to be done to support meaningful statements about the relation between brain regions and our mental abilities is to average scans across multiple individuals. This kind of averaging is part of what is used to generate the familiar pictures of brain regions “lighting up”.

But in any court proceeding, the question is about the brain of one individual only. Brain scans of normal, law-abiding individuals often differ quite noticeably from averages of scans of people doing the same task. So, inferences from an individual’s showing a difference from average in a brain scan to conclusions about that individual’s abilities, proclivities, or degree of responsibility are extremely difficult and risky.

The individual differences in brain scans show that our brains are not standard-issue machines. Brains that are wired differently can produce actions that have the same practical result across a wide variety of circumstances. And if the same result can be achieved by differently wired brains, no single, fixed bit of wiring can be necessary for it; this implies a limit to how specialized each part of our brains can be.

But what about modularity? Don’t we have to think of ourselves as composed of relatively small sub-networks that are dedicated to their given tasks?

Here is where things get interesting; for in Gazzaniga’s book, there seem to be two concepts of “module” at work, although the distinction between them is not clearly drawn.

The first arises out of some observations that have been known for a long time, but are not often referred to. (They’re on pp. 32-33 of Gazzaniga’s book.) One of these is that while our brains are 2.75 times larger than those of chimpanzees, we have only 1.25 times as many neurons. So, on average, our neurons are more distant from each other. What fills the “extra” space is connections among neurons; but if the same degree of connectivity among neurons were maintained over the extra distance, there would have to be many more miles of connecting lines (axons) than there actually are. So, in us, the degree of connectivity is, on average, less than it is in chimps. There are still groups of close-lying neural cells that are richly connected, but the connections of one group to another are sometimes relatively sparse. We have thus arrived, by a sort of physiological derivation, at modules.
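The scaling argument can be run as back-of-envelope arithmetic. Only the 2.75 and 1.25 ratios come from the book; the cube-root geometry and the resulting wiring estimate are my own rough sketch.

```python
# Back-of-envelope version of the scaling argument (my arithmetic,
# not Gazzaniga's, beyond the two starting ratios).

volume_ratio = 2.75    # human brain volume relative to chimpanzee
neuron_ratio = 1.25    # human neuron count relative to chimpanzee

# Volume available to each neuron grows by about 2.2x:
volume_per_neuron = volume_ratio / neuron_ratio

# Typical spacing between neurons scales as the cube root of that volume:
spacing_ratio = volume_per_neuron ** (1 / 3)    # roughly 1.3x farther apart

# Even if each neuron kept only a fixed number of partners, total axon
# length would have to grow with both neuron count and distance:
wiring_ratio = neuron_ratio * spacing_ratio     # roughly 1.6x the wiring
```

Maintaining the same *degree* of connectivity (each neuron reaching the same fraction of all the others) would be costlier still, since the number of neuron pairs grows with the square of the neuron count; hence the pressure toward sparsely connected clusters.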

It must be noticed, however, that this explanation of the existence of modules does not say anything about what kind of functions the small, relatively well connected groups might be performing. So, this explanation does not contribute any reason for supposing that there are “modules” for anything so complex as – to take a famous case – detecting people who cheat on social rules. There is good evidence that we have a well developed ability to detect cheaters, and that this ability fails to extend to similar problems that are not phrased in terms of social rules. But it is another question whether there is one brain part that is dedicated to this task, or whether we can do it because we have several small groups of neurons, each of which does a less complicated task, and whose combined result enables us to detect cheaters with ease.

Modularity reaches its apogee when Gazzaniga introduces the “interpreter module”. The job of this item is to rationalize what all the other modules are doing. It is the module that is supposed to provide our ability to make up a narrative that will present us – to others, and to ourselves – as reasonable actors, planning in accord with our established desires and beliefs, and carrying out what we have told ourselves we intend to do.

According to the interpreter module story, we can see this inveterate rationalizer at work in many ways. It reveals itself in startlingly clear ways in cases of damaged brains. Some of the patients of interest here are split brain patients; others have lost various abilities due to stroke or accidents. Some parts of their brains receive less than the normal amount of input from other parts. Their interpreter modules have incomplete information, and the stories they concoct about what their owners are doing and why are sometimes quite bizarre.

But people with intact brains can be shown to be doing the same sort of rationalizing. For example, Nisbett and Wilson (1977) had people watch a movie. For one group, the circumstances were normal, for another the circumstances were the same except for a noisy power saw in the hall outside. Participants were asked to rate several aspects of the movie, such as interest, and likelihood of affecting other viewers. Then they were asked whether the noise had affected their ratings. In fact, there was no significant difference in the ratings between the non-distracted group and the group exposed to the noise. But a majority of those in the latter group believed that the noise had affected their ratings.

While false beliefs about our own mental processes are well established, I am suspicious of the “interpreter” story. The interpreter is called a “module” and is supposed to explain how we bring off the interesting task of stitching together all that goes on, and all that we do, into a coherent (or, at least, coherent sounding) narrative. But we might be doing any of very many things, under many different circumstances. To make a coherent story, we must remain consistent, or nearly so, with our beliefs about how the physical and social worlds operate. We must anticipate the likely reactions of others to what we say we are doing. So, according to the interpreter module story, this “module” must have access to an enormous body of input from other modules, and be able to process it into (at least the simulacrum of) a coherent story.

To me, that sounds an awful lot like a homunculus – a little person embodied in a relatively tightly interconnected sub-network that takes in a lot of inputs and reasons to a decently plausible output that gets expressed through the linguistic system. That image does nothing to explain how we generate coherent, or approximately coherent, narratives; it just gives a name to a mystery.

It would be better to say that we have an ability to rationalize – to give a more or less coherent verbal narrative – over a great many circumstances and actions. Our brains enable us to do this. Our brains have parts with various distances between them; somehow, the combined interactions of all these parts results in linguistic output that passes, most of the time, as coherent. We wish we understood how this combined interaction manages to result in concerted speech and action over extended periods of time; but, as yet, we don’t.

[Michael S. Gazzaniga, Who’s In Charge? Free Will and the Science of the Brain, The Gifford Lectures for 2009 (New York: Harper Collins, 2011). Nisbett, R. E. & Wilson, T. D. (1977) “Telling More Than We Can Know: Verbal Reports on Mental Processes”, Psychological Review 84:231-259. This paper describes many other cases of false belief about our mental processes. The physiological comparison between humans and chimpanzees, and its significance, are referenced to Shariff, G. A. (1953) “Cell counts in the primate cerebral cortex”, Journal of Comparative Neurology 98:381-400; Deacon, T. W. (1990) “Rethinking mammalian brain evolution”, American Zoologist 30:629-705; and Ringo, J. (1991) “Neuronal interconnection as a function of brain size”, Brain, Behavior and Evolution 38:1-6.]

Do Conscious Thoughts Cause Behavior?

December 12, 2011

In the late 19th Century, Thomas Huxley advanced a view he called “automatism”. This view says that conscious thoughts themselves don’t actually do anything. They are, in Huxley’s famous analogy, like the blowings of a steam whistle on an old locomotive. The steam comes from the same boiler that drives the locomotive’s pistons, and blowings of the whistle are well correlated with the locomotive’s starting to move, but the whistling contributes nothing to the motion. Just so with conscious thoughts: the brain processes that produce our behavior also produce conscious thoughts, but the thoughts themselves don’t produce anything.

Automatism (later known as epiphenomenalism) is currently out of favor among philosophers, many of whom dismiss it without bothering to argue against it. But it has enough legs to be the target of an article by Roy F. Baumeister and colleagues in this year’s Annual Review of Psychology. These authors review a large number of studies that they regard as presenting evidence “supporting a causal role for consciousness” (p. 333). A little more specifically, they are concerned with the causal role of “conscious thought”, which “includes reflection, reasoning, and temporally extended sense of self” (p. 333). The majority of the evidence they present is claimed to be evidence against the “steam whistle” hypothesis that “treats conscious thoughts as wholly effects and not causes” (p. 334).

To understand their argument, we need to know a little more about the contrast between unconscious thought and conscious thought. To this end, suppose that a process occurs in your brain that represents some fact, and enables you to behave in ways that are appropriate to that fact. Suppose that you cannot report – either to others or to yourself in your inner speech – what fact that process represented. That process would be a thought that was unconscious. But if a process occurs in you, and you can say – inwardly or overtly – what fact it is representing, then you have had a conscious thought.

What if I tell you something, or instruct you to do some action or to think about a particular topic? Does that involve conscious thought? Baumeister et al. assume, with plausible reason, that if you were able to understand a whole sentence, then you were conscious, and at least part of your understanding the sentence involved conscious thought. (For example, you could report what you were told, or repeat the gist of the instruction.) They also clearly recognize that understanding what others say to you may, in addition, trigger unconscious processes – processes that you would not be able to report on.

If you want to do a psychological experiment, you have to set up at least two sets of circumstances, so that you can compare the effect of one set with the effect of another. If your interest is in effects of conscious thoughts, you need to have one group of participants who have a certain conscious thought, and another group who are less likely to have had that conscious thought. The way that differences of this kind are created is to vary the instructions given to different groups of participants.

For example, in one of the reviewed studies, participants randomly assigned to one group were given information about costs and features of a cable service, and also instructed to imagine being a cable subscriber. Participants in another group received the same information about costs and features, but no further instruction. A later follow up revealed that a significantly higher proportion of those in the group that received the special instruction had actually become cable subscribers.

In another study, the difference was that one group was asked to form specific “implementation intentions”. These are definite plans to do a certain action on a certain kind of occasion – for example to exercise on a particular day and time, as contrasted with a more general intention to take up exercise, but without thinking of a particular plan for when to do it. The other group received the same information about benefits of the action, but no encouragement to form specific implementation intentions. Significantly more of those who were encouraged to form implementation intentions actually engaged in the activity.

The logic behind these studies is that one group was more likely to have a certain kind of conscious thought than the other (due to the experimenters’ instructions), and it was that group that exhibited behavior that was different from the group that was less likely to have had that conscious thought. The correlation between the difference in conscious thoughts and the difference in subsequent behavior is then taken as evidence for a causal connection between the (earlier) thoughts and the (later) behavior.

There is, however, a problem with this logic. It arises from the fact (which, as noted earlier, the authors of the review article acknowledge) that conscious processing of instructions triggers unconscious processes. We can easily see that this is so, because processing what is said to us requires that we parse the grammar of sentences that we understand. But we cannot report on how we do this; our parsing is an unconscious process. What we know about it comes from decades of careful work by linguists, not from introspection.

Since conscious reception of instructions triggers unconscious processes, it is always possible that behavioral effects of the different instructions are brought about by unconscious processes that are set in motion by hearing those instructions. The hearing (or reading) of instructions is clearly conscious, but what happens after that may or may not be conscious. So, the causal dependence of behavior on instructions does not demonstrate causal dependence of behavior on conscious processes that occur after receiving the instructions, as opposed to unconscious processes that are triggered by (conscious) hearing or reading of instructions.

This point is difficult to appreciate. The reason is that there is something else that sounds very similar, and that we really are entitled to claim on the basis of the evidence presented in the review article. The claim is the following (where “Jones” can be anybody):

(1) If Jones had not had the conscious thought CT, Jones would not have been as likely to engage in behavior B.

This is different from

(2) Jones’s conscious thought CT caused it to be more likely that Jones engaged in behavior B.

What’s the difference? The first allows something that the second rules out. Namely, the first, but not the second, allows that some unconscious process, UP, caused both whatever conscious thoughts occur after receiving instructions and the subsequent behavior. The experimenter’s giving of the instructions may set off a cascade of unconscious processes, and it may be these that are responsible both for some further conscious (reportable) thoughts and for subsequent actions related to the instructions. If the instructions had not been given, those particular unconscious processes would likely not have occurred, and thus the action might not have been produced.

Analogously, if the flash of an exploding firecracker had not occurred (for example, because the fuse was not lit) it would have been very unlikely that there would have been a bang. But that does not show that, in a case where the fuse was lit, the flash causes the bang. Instead, both are caused by the exploding powder.
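The common-cause structure behind the (1)/(2) distinction can be made concrete in a toy simulation. This is entirely my own illustration; the probabilities are invented, and the point is only that a model with no causal arrow from conscious thought to behavior still makes claim (1) come out true.

```python
import random

# Toy common-cause model: instructions trigger an unconscious process UP,
# which in turn produces both a conscious thought CT and a behavior B.
# There is no causal arrow from CT to B.

def trial(rng: random.Random) -> tuple[bool, bool]:
    up = rng.random() < 0.9          # instructions usually trigger UP
    ct = up and rng.random() < 0.8   # UP sometimes causes the conscious thought
    b = up and rng.random() < 0.8    # UP, not CT, causes the behavior
    return ct, b

rng = random.Random(0)
trials = [trial(rng) for _ in range(10_000)]

# Claim (1) comes out true: behavior is much likelier when CT occurred...
with_ct = [b for ct, b in trials if ct]
without_ct = [b for ct, b in trials if not ct]
p_b_given_ct = sum(with_ct) / len(with_ct)
p_b_given_not_ct = sum(without_ct) / len(without_ct)

# ...even though claim (2) is false by construction: b is computed
# without ever consulting ct.
```

In this model the conditional probability of B given CT is much higher than given no CT, so an experimenter comparing the two groups would find exactly the kind of correlation the review reports, with CT doing no causal work at all.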

The procedure of manipulating instructions and then finding correlated differences in behavior thus establishes (1), but not (2). So, this procedure cannot rule out the steam whistle hypothesis regarding conscious thought.

Interestingly, there are some cases for which the authors of the review identify good reasons to think that the steam whistle view is actually the way things work.

For example, one study compared people who imagined a virtuous choice with those who had not done so. In a subsequent hypothetical choice, people in the first group were more self-indulgent than those in the comparison group. This difference was removed if the same activity was imagined as a court-ordered punishment rather than a choice to volunteer.

However, it seems very unlikely that anyone consciously reasoned “I imagined myself making a virtuous choice, therefore I’m entitled to a bit of self-indulgence”. In this, and several similar reported cases, it seems far more likely that the connection between imagining a virtuous choice, feeling good about oneself, and feeling entitled to self-indulgence runs along on processes that do not cause conscious thoughts with relevant content.

The article under discussion is full of interesting effects, and these are presented in a way that is highly accessible. But it does not succeed in overturning an alternative to its authors’ preferred view. According to this alternative view, the causing of behavior (after consciously perceiving one’s situation, or consciously receiving instructions) is done by unconscious processes. This alternative view allows that sometimes, but not always, these unconscious processes also cause some conscious thoughts that we express either in overt verbal behavior, or in sentences about what we are doing that we affirm to ourselves in our inner speech.

[The article under discussion is Roy F. Baumeister, E. J. Masicampo, and Kathleen Vohs, “Do Conscious Thoughts Cause Behavior?”, Annual Review of Psychology 62:331-361 (2011). The difference between (1) and (2) is further explained and discussed in Chapter 4 of Your Brain and You. ]

Thinking About Modules

November 21, 2011

In a recent Wall Street Journal review article, Raymond Tallis expresses dissatisfaction with what he calls “biologism” – the view that nothing fundamental separates humanity from animality. Biologism is described as having two “cardinal manifestations”.

The first is that the mind is the brain, or its activity. This view is held to have the consequence that one of the most powerful ways to understand ourselves is through scanning the brain’s activities.

The second manifestation of biologism is the claim that “Darwinism explains not only how the organism Homo sapiens came into being (as, of course, it does) but also what motivates people and shapes their day-to-day behavior”.

Tallis suggests that putting these ideas together leads to the following view. The brain evolved under natural selection, the mind is the (activities of the) brain, our behavior depends on the mind/brain, therefore the mind and our behavior can be explained by evolution. A further implication is claimed, namely, that “The mind is a cluster of apps or modules securing the replication of genes that are expressed in our bodies”. Studying the mind can be broken down into studying (by brain scans) the operation of these modules.

Tallis laments the wide acceptance of this way of looking at ourselves. He affirms that brain activity is a necessary condition of all of our consciousness, but holds that “many aspects of everyday human consciousness elude neural reduction”.

But how could aspects of our consciousness elude neural reduction, if everything in our consciousness depends on the workings of the brain? Tallis answers: “For we belong to a boundless, infinitely elaborated community of minds that has been forged out of a trillion cognitive handshakes over hundreds of thousands of years. . . . Because it is a community of minds, it cannot be inspected by looking at the activity of the solitary brain.”

This statement, however, is not an answer to the question of how aspects of our consciousness can elude neural reduction. It explains, instead, why we cannot understand facts about societies by looking at a solitary brain, and why we cannot reconstruct the evolutionary history of our species by looking at the brain of one individual. But the question about elusiveness of neural reduction concerns the consciousness of individuals. It’s about how individual minds work, and what gives rise to each person’s behavior.

Aside from rare cases of feral children, individuals grow up in societies. Even so, their motivations and behavior depend on their individual brains. Individuals must have some kind of representation of societal facts and norms in their own brains, if those brains are to produce behaviors that are socially appropriate and successful. At present, alas, we do not understand what form those representations take, nor how they are able to contribute, jointly with other representations, to intelligent behavior. But the question of how the individual mind works is a clear one, and the search for an answer is one of the most exciting inquiries of our time.

Despite my dissatisfaction with Tallis’s account, I am sympathetic to some of his doubts about reduction of motivation and behavior to the operations of modules. The true source of the problem, however, is not our attention to the mind/brain of solitary individuals.

The real problem is, instead, uncritical acceptance of modules. The modular way of looking at things does not follow from Tallis’s two cardinal manifestations. They say, in sum, that whatever we think and whatever we do depends on the activities of a brain that developed under principles of Darwinian evolution. They do not say one word about modules. They do not imply any theory of how the evolved brain does what it does.

These remarks are in no way a denial of modules, and in some cases, there is very good reason to accept them. But, even accepting that there are many modules, it does not follow that for any given motivation or behavior, X, there is a module that is dedicated to providing X – i.e., that functions to provide X and does not do anything else. Moreover, it is clear that our evolved brain allows for learning. If we learn two things, they may be related, and if we recognize a relation among things that we had to learn in the first place, there cannot be a module for recognizing that relation.

Caution about introducing modules for specific mental or behavioral features that may interest us is compatible not only with supposing that there are many modules, but even with supposing that the operation of several modules is required for everything we do. That’s because a plurality of modules carries with it the possibility of variability in how they are connected. Such variability may depend on genetic differences, developmental differences, and/or differences in learning. In any case of combined action of several modules, therefore, there will be no simple relation between a motivation or a behavior and a single module, nor any simple relation between a motivation or behavior and a collection of modules.

So, even granting Tallis’s two cardinal manifestations and a commitment to extensively modular brain organization, we cannot expect any simple relation to hold between some ability that interests us and the operation of a module dedicated to that ability. I thus agree with Tallis that we should be suspicious of facile “discoveries” of a module for X, where X may be, e.g., an economic behavior or an aesthetic reaction. But I think that the complexities that lie behind this suspicion are to be found in the complexity of the workings of individual brains. Our social relations with others provide distinctive material for us to think about, but they will not explain how we do our thinking about them.

[Raymond Tallis, “Rethinking Thinking”, The Wall Street Journal for November 12-13, 2011, pp. C5 and C8. Readers of _Your Brain and You_ will be familiar with reasons for regarding sensations as effects of, rather than the same thing as, neural activities; but this kind of non-reducibility is not relevant to the issues discussed in this post. They will also be aware of reasons for saying that we do not presently understand how individual minds work.]

Do You Look Like a Self-Controlled Planner?

October 31, 2011

In an article soon to appear in the Journal of Personality and Social Psychology, Kurt Gray and colleagues question whether we “objectify” other people, if that means to regard them as objects with no mental capacities. They suggest that there are two kinds of mental capacities, and that what’s often thought of as “objectification” may actually be a redistribution of judgments about these kinds. They did a series of experiments to test this possibility.

The two kinds of mental capacities are Agency and Experience. “Agency”, in these experiments, comprises the capacities for self-control, planning, and acting morally. “Experience” covers abilities to experience pleasure, desire, and hunger or fear.

The hypothesis, stated a little more fully, is that people who attend to a target’s bodily aspects will tend to rate that target higher on Experience and lower on Agency, with the reverse effect when attention is focused less on bodily aspects and more on cognitive abilities.

They tested this hypothesis in several ways, of which I’m going to describe only the first. The general result of this set of experiments was converging support for the hypothesis.

The first experiment was admirably simple. 159 participants, recruited from campus dining halls, were given a sheet of paper that had one picture, a brief description, and a series of six questions. The single picture was one of the following four:

Erin, presented in a head shot that had been cropped from the following picture.
Erin, presented in a fairly cleavage-revealing outfit from just below the breasts up.
Aaron, presented in a head shot cropped from the following picture.
Aaron, presented shirtless from just below the pectorals up.

Both of these targets are attractive young people and look very healthy. The two head shots will be referred to as Face pictures, and the two others as Body pictures. (The head shots were enlarged, so each of the pictures was about the same size.)

The description given was the same for both, except for the names and corresponding appropriate pronouns. It provided only the information that the person in the picture is an English major at a liberal arts college, belongs to a few student groups, and likes to hang out with friends on weekends.

The questions were all of the form “Compared to the average person, how much is [target’s name] capable of X?”. Fillers for X were self-control, planning, and acting morally (combined into an Agency measure); and experiencing pleasure, experiencing hunger, and experiencing desire. (Since ability to experience hunger did not correlate highly with the other two, only experiencing pleasure and experiencing desire were used to compose the Experience measure.) Answers took the form of a rating on a five-point scale, ranging from “Much less capable” to “Much more capable”, with “Equally as capable” for the midpoint.
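For concreteness, the way the two composite measures are formed can be sketched as follows. The ratings below are invented for illustration; only the combination rule (averaging the three Agency items, and averaging just the pleasure and desire items for Experience) comes from the study.

```python
# Sketch of forming the Agency and Experience composites from the
# six 1-5 ratings. All numbers here are hypothetical.

def composite(ratings, items):
    """Average a participant's ratings over the named items."""
    return sum(ratings[i] for i in items) / len(items)

# One hypothetical participant (1 = "Much less capable",
# 3 = "Equally as capable", 5 = "Much more capable").
ratings = {
    "self-control": 3, "planning": 4, "acting morally": 3,
    "pleasure": 4, "desire": 4, "hunger": 2,
}

agency = composite(ratings, ["self-control", "planning", "acting morally"])

# Hunger is excluded because, in the study, it did not correlate
# highly with the other two Experience items.
experience = composite(ratings, ["pleasure", "desire"])
```

On these made-up numbers, the participant would come out at about 3.33 on Agency and 4.0 on Experience.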

The key results of this experiment are that participants who were given Body pictures rated the targets higher on Experience and lower on Agency than participants who were given Face pictures. The differences are not large (.27 out of five for Experience, .33 out of five for Agency), but they are statistically significant.

The authors take these results to support the view that “focusing on the body does not involve complete dementalization, but instead redistribution of mind, with decreased agency but increased experience” (pp. 8-9).

As noted, the remaining experiments in this study point in the same direction. In a way, that seems to be good news – ‘different aspect of mind’ seems better than ‘no mind, mere object’. The authors make it explicitly clear, however, that being regarded as less of an agent would, in general, not be in a person’s interest. Some other intriguing aspects of this experiment are that the gender of the participants doing the ratings was not found to matter, and Erin came out a little ahead of Aaron on the Agency measure.

However, the aspect of this experiment that intrigues me the most is one that lies outside of the authors’ focus, and on which they do not comment. To explain this aspect, note first that the description provides very little information – it could be fairly summarized by saying the person in the picture is a typical college student. A person could be forgiven for reacting to the rating request with “How on Earth should I know whether this person is above or below average on self-control (or planning ability, or moral action, experiencing pleasure, or experiencing desire)!?”

Since the participants were college students, and thus similar to the depicted targets as described, perhaps we should expect them to rate the targets as somewhat above average in mental abilities. However, one rating was below average: the rating for Agency in response to Body pictures was 2.90 (where capability equal to that of the average person would be 3). The difference between this rating for Body pictures and the higher rating for Face pictures indeed supports the authors’ hypothesis, but it leaves me wondering what could have been in the consciousness of those doing the ratings.

An even greater puzzle comes from the fact that the highest rating was for Experience in response to Body pictures – it was 3.65. (Remember, the highest number on the scale was 5, so 3.65 is about a third of the distance between “Equally as capable” and “Much more capable”.) So, I wonder: Do college students really think they and their peers are better at experiencing pleasure and desire than the average person? That seems a very strange opinion.

[ Kurt Gray, Joshua Knobe, Mark Sheskin, Paul Bloom, and Lisa Feldman Barrett, “More than a Body: Mind Perception and the Nature of Objectification” Journal of Personality and Social Psychology, in press. ]

An Unusual Aphrodisiac

October 10, 2011

Imagine you’re a prehistoric heterosexual man who’s going into battle tomorrow. The thought that there’s a fair chance of your dying might so completely occupy your mind that you’d be uninterested in anything save, perhaps, sharpening your spear.

On the other hand, your attitude might be that if you’re going to be checking out tomorrow, you’d like to have one last time with a woman tonight.

We are more likely to be descendants of the second type of man than the first. So, we might expect that there would be a tendency among men for thoughts of their own death to raise their susceptibility to sexual arousal.

In contrast, women who were more erotically motivated when they believed their own death might be just around the corner would not generally have produced more offspring than their less susceptible sisters. So, there is no reason to expect that making thoughts of death salient should affect sexual preparedness in women.

These ideas have recently been tested in two studies by Omri Gillath and colleagues. Of course, they didn’t send anybody into battle. Instead, they used two methods – one conscious, one not – to make the idea of death salient.

In the first study, one group of participants wrote responses to questions about the emotions they had while thinking about their own death and events related to it. Another group responded to similarly phrased questions about dental pain. The point of this contrast was to distinguish whether an arousal (if found) was specific to death, or whether it was due more generally to dwelling on unpleasant topics.

After responding to the questions, participants were shown either five sexual pictures (naked women for men, naked men for women) or five non-sexual pictures (sports cars for men, luxury houses for women). Previous studies had found that all the pictures were about equal for their respective groups in overall perceived attractiveness. Participants had all self-identified as heterosexual. They had five minutes to examine their set of five pictures carefully.

Participants were each connected to a device that measured their heart rate. The key result was that the men who answered the questions about death and viewed erotic pictures had a significantly higher average heart rate during the picture viewing than any other group. That means that, on average, they had a higher rate than other men who saw the same pictures, but had answered questions about dental pain. They also had a higher rate than other men who had answered questions about death, but then saw non-sexual pictures. And they had a higher rate than women who answered either question and viewed either pictures of naked men or non-sexual pictures.

In the second study, the death/pain saliency difference was induced by flashing the word “dead” (for half the participants) or the word “pain” (for the other half) before each item in a series of pictures. The presentation of the words was very brief (22 thousandths of a second) and came between masks (strings of four Xs). With the masks, that is too short for the word to be consciously recognized. The pictures either contained a person or did not. Half of the pictures that contained a person were sexual, half were not. Pictures remained visible until the participant responded.

The response was to move a lever if, but only if, the picture contained a person. The movement was either pulling the lever toward oneself or pushing it away. There were 40 consecutive opportunities for pulling and 40 for pushing; half of the participants started with pulling, half with pushing.

The logic of this experiment depends on a connection previously established by Chen and Bargh (1999) between rapidity of certain responses and the value of what is being responded to. Pulling brings things closer to you, and if what’s before your mind is something you like, then that will speed the pulling (relative to pulling in response to something you’d ordinarily try to avoid, or something toward which you are neutral).

The reasoning, then, is that those who had a higher degree of sexual preparedness should pull faster in response to erotic materials than those who were not so highly prepared. Gillath and colleagues hypothesized that participants who received the brief exposure to “dead” and then saw an erotic picture should be faster pullers than those who received a brief exposure to “pain” before an erotic picture.
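The prediction amounts to a simple comparison of mean pull latencies across priming conditions. The sketch below uses invented reaction times; only the form of the comparison reflects the study’s design.

```python
# Sketch of the reaction-time comparison the second study rests on:
# mean lever-pull latency after a "dead" prime vs. a "pain" prime,
# for trials with sexual pictures. All numbers are hypothetical.

def mean_rt(trials, prime, sexual):
    """Mean reaction time (ms) over trials matching prime and picture type."""
    matching = [t["rt"] for t in trials
                if t["prime"] == prime and t["sexual"] == sexual]
    return sum(matching) / len(matching)

trials = [
    {"prime": "dead", "sexual": True, "rt": 510},
    {"prime": "dead", "sexual": True, "rt": 530},
    {"prime": "pain", "sexual": True, "rt": 590},
    {"prime": "pain", "sexual": True, "rt": 610},
]

# Faster pulling after the "dead" prime would indicate the predicted
# heightened approach motivation toward the erotic pictures.
faster = mean_rt(trials, "dead", True) < mean_rt(trials, "pain", True)
```

With these made-up latencies, the “dead”-primed pulls average 520 ms against 600 ms for the “pain”-primed pulls, so the comparison would come out in the predicted direction.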

And that is what they found – for men. There was no such result for women. Nor did the brief exposure to “dead” result in faster pulling after being presented with non-sexual pictures; the faster reaction times depended on both the exposure to “dead” and the sexual nature of the following picture.

These two studies are certainly interesting in relation to the evolutionary thinking that led them to be undertaken. But I also find them fascinating in relation to a more general point. The second study provides evidence that our brains can (a) make a distinction (between pain and death) and (b) relate it to another difference (sexual vs. non-sexual material) completely unconsciously and extremely rapidly. And the first study, although done at a much slower time scale and with consciousness of the materials used to manipulate mood (i.e., the writing about death vs. pain), showed an effect on heart rate, which is not something that was under participants’ control. The brain processes of which we are unaware (except when revealed in studies like these) are amazing indeed.

[O. Gillath, M. J. Landau, E. Selcuk and J. L. Goldenberg (2011) “Effects of low survivability cues and participant sex on physiological and behavioral responses to sexual stimuli”, Journal of Experimental Social Psychology 47:1219-1224. The previous study mentioned in the discussion of Study 2 is M. Chen and J. A. Bargh (1999) “Consequences of automatic evaluation: Immediate behavioral dispositions to approach or avoid the stimulus”, Personality and Social Psychology Bulletin 25:215-224. ]