Does Thinking About God Have a Down Side?

February 13, 2012

Research led by University of Waterloo psychologist Kristin Laurin has yielded a result that’s surprising to me, and that raises questions about a possible unwanted effect of work by many thinkers, including myself.

Laurin and her colleagues did several experiments, and tested two main hypotheses. I’ll focus on one: God thoughts lead to reduction in active pursuit of goals. This hypothesis was tested in three experiments, all of which supported it. A summary of just one of the experiments will explain what the hypothesis means, and give some idea of how it can be investigated. (The other hypothesis, which did not surprise me, was this: God thoughts lead to increase in resistance to temptation.)

How can you be sure people have recently had “God representations” in mind? One way is to give them the task of composing sentences from lists of words they are given, and include words like “God”, “divine”, and “sacred” on the lists. That was the setup for one group of participants. Another group of participants was given the same task with other lists that contained none of those words, but did contain words for positively valued items (e.g., “sun”, “flowers”, “party”). A third group did the same task using lists with neutral words.

To get at the effect of the differences among these groups, Laurin and her colleagues asked all participants to do a new verbal task. They were told that high scoring on this second task was a good predictor of success in their chosen field (engineering, as it happens). The task was to write down as many English words as they could in 5 minutes that are composed of just the letters R, S, T, L, I, E, and A.
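The constraint in this second task is simple enough to sketch in code. The following is only an illustration of the letter rule, using a made-up sample list rather than the experimenters' actual materials:

```python
# Illustration of the letter constraint in the word task: a word counts only
# if it is built entirely from the letters R, S, T, L, I, E, A.
# The sample list below is invented for illustration, not the study's materials.
ALLOWED = set("rstliea")

def qualifies(word: str) -> bool:
    """True if every letter of the word is drawn from the allowed set."""
    return bool(word) and set(word.lower()) <= ALLOWED

sample = ["stair", "tile", "least", "party", "trial", "divine", "rates"]
print([w for w in sample if qualifies(w)])  # "party" and "divine" use disallowed letters
```

A participant's score is just the number of qualifying words produced in the five minutes.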

The key result – predicted by the researchers but surprising to me – was that participants who had received the list of religion-related words on the first task did less well on this second task than the other participants – they averaged 19.5 words, compared to 30.4 and 30.3 for the participants who had gotten non-religion-related words that were positive or neutral, respectively.

Several weeks before this experiment was conducted, the authors had given a questionnaire to their participants that included a religion identification question. They were thus able to test whether their experimental result depended on participants’ religious classification. They found that their result did not depend on religious classification, even when that classification was “atheist” (about half of the participants in this study).

The authors suggest a mechanism for their observed effect, namely that exposure to the religion-related words in the lists in the first task “activated the idea of an omnipotent, controlling force that has influence over [participants’] outcomes”. In a second study, they found experimental support for this mechanism, and concluded that “only those who believed external forces could influence their career success demonstrated reduced active goal pursuit following the God prime” (where receiving the “God prime” = receiving the religion-related lists in the first task).

This conclusion gives me pause, for the following reason. As is evident from several posts on this and several other blogs, recent books, and newspaper reports, there are many lines of research that show the importance of unconscious processes. A large number of effects on our behavior come from circumstances of which we are unaware, or circumstances that we consciously notice, but that influence our behavior in ways we do not realize. In the last decade, and continuing today, our dependence on processes that are unconscious, and therefore not under our control, has become more and more widely publicized.

It thus seems that there is a serious question whether the increasing recognition of the effects of unconscious processes may have an unwanted, deleterious effect of reducing our motivation to actively pursue our goals. Your Brain and You addressed somewhat similar issues about the relation of unconscious processes to responsibility and to certain attitudes toward ourselves. But, of course, it could not consider this recent experiment, and it did not address the question of what effect the recognition of our dependence on unconscious processes might have upon our degree of motivation to pursue our goals.

I do not think I have an answer to this question, but I wonder whether the following distinction may turn out to be relevant. Getting people to think about a god is getting them to think about an agent – an entity with its own purposes and ability to enact them. On the other hand, accepting that there are causes of behavior that lie beyond our control is not the same as accepting that our outcomes depend on another agent’s purposes. So, it seems possible that the growing recognition of the importance of unconscious processes to our thoughts and actions may not lead to reduced motivation to achieve our goals.

[Laurin, K., Kay, A. C. and Fitzsimmons, G. M. (2012) “Divergent Effects of Activating Thoughts of God on Self-Regulation”, Journal of Personality and Social Psychology, 102(1):4-21.]

Are You an Addict?

August 29, 2011

Or, to be politically correct, Are you a person with addiction? That, at any rate, is the phrase used in a new Public Policy Statement: Definition of Addiction, put out by the American Society of Addiction Medicine, dated August 15, 2011.

Definitions are supposed to help their recipients correctly apply (and withhold) the defined terms. Since this document runs to eight pages, you might wonder how useful it will be in serving its implied purpose. You would be right to do so: in fact, the Statement itself says that one needs a professional to determine the presence of addiction. (Look in note 2. I’d quote the relevant sentence, but ASAM prohibits excerpting any part of the document without prior permission.)

What the Statement actually is, is an essay that makes many significant claims about addiction. I welcome this statement, because I have long found the concept of addiction to be unclear. What, for example, is the difference between being addicted to something, and just liking it a lot? One occasionally hears the phrase “sex addict”: Can one really be addicted to sex? If one goes to great lengths to obtain it, is one addicted? Or does one just greatly enjoy it? Romeo and Juliet are portrayed as suffering for their love, and as not refraining from expressing it behaviorally even though severe consequences were known to them. Were they addicted to each other?

For all its problems as a definition, the Statement does repay reading, and I encourage readers to do that. But eight pages worth of information is a lot to carry around in one’s head. Here, I’m going to try to identify, and make a few comments on, the points that I think will be most memorable.

The most important claim comes right at the beginning: Addiction is a disease of certain parts of the brain. The reward system is one of the affected parts, but there are others. This disease of the brain has many effects. Behavior – using a substance or engaging in a behavior such as gambling – of course, is among them. But other effects include cognitive and emotional ones. Addicts are likely to have different opinions than others about the seriousness of consequences or the causes of their behavior; and they often have unusual emotional reactions.

Here is a limitation of what is offered in the Statement. Parts of it go into considerable detail about some of the neural pathways that may be involved in reward and related functions, identifying connections between several brain areas and specific neurotransmitters. But there is no description of just what kind of difference in the operation of these pathways constitutes the difference between those with addictions and those without. In short, the disease that addiction is said to be is never specified in neural terms.

How, then, do certified professionals identify whether they are dealing with a person with addiction, or not? What makes the subject so complex – the reason why we need certified professionals for diagnoses – is that there is no small set of indicators that are always present.

There are, however, some signs that stand out, to me at least, as particularly important. These are (1) Persistence of a behavior despite accumulation of problems that are due to it. (2) Inability to refrain from a behavior even when undesired consequences of it are acknowledged. (3) Cognitive difficulties in accurately recognizing the relation between a behavior and problems in one’s life.

The classification of addiction as a disease is controversial partly because it forces upon us a question of responsibility. Because the Statement does not identify the nature of the disease in neural terms, it is unlikely to be of much help in resolving that question. Those who incline toward diminished responsibility will point out that one is not responsible for being sick, or for the consequences of having an illness. They may draw comfort from the Statement’s observation that genetic inheritance makes a large contribution to the origin of the disease.

Those oppositely inclined, however, are likely to feel that an addicted person still has control over whether to use a drug, or engage in a behavior, on each particular occasion on which an opportunity presents itself. Being addicted is not being out of control of one’s actions, in the way one would be if one were having a seizure.

In this context, it becomes clear why point (3) is of particular importance. People do not set out to misunderstand. But if they misunderstand the causes of their problems, they will be likely to act in ways that worsen them, or, at the very least, fail to solve them. If being addicted causes false beliefs about the causes of feelings of stress, for example, or causes mistakes in estimating the seriousness of consequences of addictive behavior, then people can be in control of the immediate action of, say, snorting a drug, yet lack the normal resources of reasoning about whether that is something they should do.

It’s as if their brains had a hidden agenda, favoring one set of desires by, in part, hiding from them the ways that satisfying those desires frustrates the satisfaction of other desires.

That kind of failure is disturbing, but we have to face up to it. Clearly recognizing the possibility of cognitive deficit will, I think, affect our attitude toward people with addictions. Even without understanding the underlying neural operations, we can see that admonishment is not well suited to fixing a cognitive problem. A treatment model that aims to restore accurate understanding of causes and consequences seems more appropriate to a condition in which such understanding is impaired, irrespective of how one’s cognitive processes came to be undermined.

[The Statement can be found at .]

The Social Animal

August 8, 2011

In his recent Commentary article (see my previous post of 7/20/11), Peter Wehner mentions David Brooks’s recent book, The Social Animal. Wehner finds Brooks’s book “marvelous” and repeats a statement that Brooks quotes with approval from Jonathan Haidt: “unconscious emotions have supremacy but not dictatorship”.

I’m pleased to report that I too found much to admire in The Social Animal. In highly readable fashion, Brooks presents a feast of delectable morsels from studies in psychology and neuroscience. Many lines of evidence that reveal the operation of our unconscious brain processes are clearly explained, and we get insight into how they affect everything we do, including actions that have deep and lasting consequences for our lives.

Inevitably, recognition of our dependence on unconscious processes raises questions about the extent to which we control our actions, and the extent to which we are responsible for them. These questions come up for discussion on pages 290-292 of The Social Animal. It is these pages – less than 1% of the book – that I want to comment on today.

Most of what Brooks says in these pages is presented in terms of two analogies and a claim about narratives. I’m going to reflect on each of these. I believe that we can see some shortcomings of the analogies and the claim just by being persistent in asking causal questions.

The first analogy is that we are “born with moral muscles that we can build with the steady exercise of good habits” (290), just as we can develop our muscles by regular sessions at the gym.

But let us think for a moment about how habits get to be formed. Let us think back to early days, before a habit is established. Whether it’s going to the gym, or being a good Samaritan, you can’t have a habit unless you do some of the actions that will constitute the habit without yet having such a habit.

Some people habitually go to the gym; but what got them there the first time? Well, of course, they had good reasons. They may have been thinking of health, or status, or perhaps they wanted to look attractive to potential sex partners. Yet, many other people have the same reasons, but don’t go to the gym. What gets some people to go and not others?

That’s a fiendishly complex question. It depends on all sorts of serendipitous circumstances, such as whether one’s assigned roommate was an athlete, whether a reminder of a reason arrived at a time when going to the gym was convenient, whether one overdid things the first time and looked back on a painful experience, or whether one felt pleasantly tired afterward.

The same degree of complexity surrounds the coming to be of a good Samaritan. In a more general context, Brooks notes that “Character emerges gradually out of the mysterious interplay of a million little good influences” (128). And he cites evidence that behavior is “powerfully influenced by context” (282). The upshot of these considerations is that whether a habit gets formed, and even whether an established habit is followed on a particular occasion, depends on a host of causes that we don’t control, and in many cases are not even aware of.

The second analogy is that of a camera that has automatic settings which can be overridden by switching to a “manual” setting. Similarly, Brooks suggests, we could not have a system of morality unless many of our moral concerns were built in, and were “automatic” products of most people’s genetic constitution and normal experience in families, schools, and society at large. But, like the camera, “in crucial moments, [these automatic moral concerns] can be overridden by the slower process of conscious reflection” (290).

Actions that follow a period of deliberation may indeed be different from actions that would have been done without deliberation. But if we take one step further in pursuit of causal questions, we have to ask where the deliberation comes from. Why do we hesitate? What makes us think that deliberation is called for?

The answers to these questions are, again, dependent on complex circumstances that we know little about and so are not under our control. To put the point in terms of the camera analogy, yes, if you decide on “manual” you can switch to that setting. But some people switch to manual some of the time, others do so in different circumstances, and some never do. What accounts for these differences? That’s a long, complex, and serendipitous matter. It depends on how you think of yourself as a photographer, whether you were lucky enough to have a mentor who encouraged you to make the effort to learn the manual techniques, whether you care enough about this particular shot. That history involves many events whose significance you could not have appreciated at the time, and over which you did not have control.

The claim about narratives is that “we do have some control over our stories. We do have a conscious say in selecting the narrative we will use to organize perceptions” (291). The moral significance of this point is that our stories can have moral weight: “We have the power to tell stories that deny another’s full humanity, or stories that extend it” (291).

We certainly have control over what words we will utter or not utter. But any story we tell about our place in society and our relations to other people has, first, to occur to us, and second, to strike us as acceptable, or better than alternative stories, once we have thought of it. On both counts, we are dependent on brain processes that lie outside our consciousness and that depend on long histories involving many events over which we have had no control.

We can provide a small illustration of this point by thinking about something Brooks brings up in another context. This is confirmation bias – the tendency to overvalue evidence that agrees with what we already think, and undervalue conflicting evidence. People don’t make this kind of error consciously. They tell themselves a story according to which they are objective evaluators who believe a conclusion because they have impartially weighed all the evidence. But, sometimes, they can find such a story about themselves acceptable only because they are unaware of their bias.

I am not being a pessimist here. Those who are lucky enough to read The Social Animal, or the experimental work that lies behind it, may very well be caused to take steps to reduce the influence of confirmation bias. The point remains that the acceptability of a narrative about one’s place in the scheme of things depends on many factors that lie in unconscious effects of complex and serendipitous histories.

[The book under discussion is by David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement (New York: Random House, 2011).]

Free Will, Morality, and Control

July 20, 2011

In a recent article in Commentary magazine, Peter Wehner inveighs against some of the views expressed in Sam Harris’s The Moral Landscape, and claims that “free will isn’t an illusion”.

Since different people mean different things by “free will”, we have to ask what Wehner means by this term. The most definite indication that Wehner provides is the following:

“Try as he might, Sam Harris cannot explain how morality is possible without free will. If every action is the result of biological inputs over which we have no control, moral accountability becomes impossible.”

Or, in other words, having free will requires that some of our actions are not the result of biological inputs over which we have no control. But what does this mean? What is a “biological input”?

Some of Wehner’s remarks suggest that “biological input” means something like “genetic constitution” or, perhaps, “genetic constitution plus developmental factors such as the state of one’s mother’s health during her pregnancy”. What Wehner seems to exclude from “biological inputs” is what we learn from our perceptual experience. This exclusion seems natural – the things you see and hear need not have anything to do with biology.

Even if you learn something by watching animals in a zoo, it would be unusual to think of yourself as having received biological inputs. You receive perceptual inputs at the zoo, and these enable you to know something about biological creatures.

But if “biological input” does not include what we learn from our perceptual experience, then Harris is not claiming that what we do depends only on “biological inputs”. I think it will be difficult to find anyone at all who holds such a view.

A view much more likely to be held is that all our actions are results of our biological inputs together with our perceptual inputs. I do not mean only perceptual inputs that are present at the time of acting (although those, of course, must be included). What we perceive changes us. It gives us memories, and provides information that we retain. It puts us into a state that is different from the state we would have been in if we had perceived something different. The state of our brains that we have at the time of an action is the result not only of present perceptions, but also of a long history of being in a state, perceiving, changing state, perceiving something further, changing state again, and so on and on.

Another key term in Wehner’s understanding of “free will” is “control”. You are in control of your action if you are doing what you aim to do, and you would have been doing something else if you had aimed to do that. Both of these conditions can be met if your actions are a result of current perceptions plus a brain state that you are in because of your original constitution and your history of perceptual inputs. So, you can have some control over your actions.

Of course, it is also true that there is much you are not in control of. You can open your eyes or keep them shut, but what you will see if they are open is not under your control. You can’t control your original constitution, and you can’t control the particular kind of change in your brain state that will be made by what you perceive.

Wehner worries that “If what Harris argues were true, our conception of morality would be smashed to pieces. If there is no free will, human beings are mere automatons, robots programmed to act (and not act) in certain ways. We cannot be held responsible for what we have no control over.”

But these alleged implications do not follow, if we understand Harris to be holding the more plausible view I’ve just sketched. The first point is relatively simple: We are not automatons if our actions are responsive to differences, not only in current perceptual inputs, but in matters of context that may have affected us at various times in the past.

The point about being a “robot programmed to act” is a little more complicated. We must distinguish between actions being canned and actions being the result of some definite process. Outputs of grocery store readers are canned – someone has to type in what amounts to a rule like this: If THIS bar code is read, then display THAT price on the monitor and add it to the total. But that is not the way you, or robots, or even programs work. In genuine programming, inputs trigger a process that leads to a result that no one has previously calculated. Even a chess-playing program takes in board positions that no programmer has previously thought of, and processes that information until an output (its next move) is reached. Since these board positions were unforeseen, good responses to them cannot have been worked out by programmers, and the responding moves cannot have been canned.
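The contrast between canned and computed responses can be shown in a few lines. This is a deliberately toy sketch of my own, not anything from Wehner or Harris: the price table is “canned” (every answer was typed in by a person), while the little move-chooser computes an answer to inputs nobody listed in advance.

```python
# "Canned" responses: every input/output pair was entered by a person in advance.
prices = {"0123": 2.49, "0456": 5.99}  # barcode -> price, typed in by someone

def scan(barcode: str) -> float:
    return prices[barcode]  # fails on any barcode nobody entered

# Computed responses: the rule copes with inputs no one listed in advance.
# This "engine" is a deliberately silly stand-in for real chess search.
def best_move(position: list[int]) -> str:
    scores = {"a": sum(position), "b": max(position), "c": len(position)}
    return max(scores, key=scores.get)

print(scan("0123"))          # retrieved from the table, not computed
print(best_move([3, 1, 4]))  # computed from a "position" never seen before
```

The scanner can only echo back what was stored; the move-chooser produces an output for any position it is handed, which is the sense in which a genuine program's responses are not canned.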

Regarding control, everyone must recognize that we have limits. But if we are not ill or befuddled by drugs, there will be many possible actions that we will do if we aim to do them, and won’t do if we don’t aim to do them; and so there will be many possible actions that are under our control.

[The article is Peter Wehner’s “On Neuroscience, Free Will, and Morality”, Commentary, June 8, 2011. Available at . Several of the points made in this post are more fully explained in Your Brain and You. Sam Harris’s The Moral Landscape was published by The Free Press, New York, in 2010; an earlier post (12/1/2010) on this blog comments on another aspect of this book.]


May 15, 2011

A recent review article by T. F. Heatherton and D. D. Wagner draws many studies together in support of a simple view of temptation and resisting temptation. The latter is also known as self-regulation. That’s what you exhibit when you do not do something that conflicts with your long term goals, even though it is something you would quite enjoy doing.

The view, referred to as a “balance model”, comes complete with an illustration of a beam balanced on a fulcrum. The beam may tip in the direction of giving in to temptation for two kinds of reasons. One is that the temptation, or the pleasure anticipated from it, becomes more forcefully represented. That might happen because of signs of the temptation’s presence or easy availability, like the smell of tobacco smoke, or the sight of a high calorie food.

The other route to tipping the balance toward temptation is weakening of the opposition to giving in to temptation. That might happen because of fatigue, drug indulgence (most familiarly, alcohol), disease, or interference from passing a strong electromagnet near the head (TMS: transcranial magnetic stimulation).

The article cites evidence that opposition to giving in to temptation importantly involves the prefrontal cortex (PFC). In contrast, different temptations are reflected in increased activation of neurons in different brain regions. Fortunately, the PFC has connections to these different regions, and a rise in activity in PFC neurons tends to lower activity in neurons in these other regions.

The struggle to resist temptation, then, is reflected in the competition between activity in the PFC and activity in regions it’s connected to that are turned on by reminders of rewards that you would enjoy, but that would conflict with your longer term goals.
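The two routes to tipping the beam can be caricatured in a few lines of code. The function and its numbers are illustrative assumptions of mine, not quantities from the Heatherton and Wagner article:

```python
# Caricature of the balance model: one gives in when the temptation side of
# the beam outweighs the self-regulation side. All values are made up.
def gives_in(temptation: float, control: float,
             cue_boost: float = 0.0, depletion: float = 0.0) -> bool:
    """cue_boost strengthens temptation (e.g., the smell of smoke);
    depletion weakens the opposing side (e.g., fatigue or alcohol)."""
    return (temptation + cue_boost) > (control - depletion)

print(gives_in(0.5, 0.8))                 # control outweighs temptation
print(gives_in(0.5, 0.8, cue_boost=0.4))  # a strong cue tips the beam
print(gives_in(0.5, 0.8, depletion=0.4))  # depleted control loses
```

The point of the sketch is just that the model predicts failure via either side of the inequality: a stronger temptation representation or a weakened opposition.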

As noted, this balance model rests on many studies, so it has a lot going for it. There is, however, a minor oddity in the description that goes with this model. The article’s authors contrast brain regions representing the value of a tempting stimulus with “prefrontal regions associated with self-control” (pp. 134-135). They refer to the idea that “resisting temptations reflects competition between impulses and self-control” (p. 136).

It would be more natural to think of the opposition as a tension between regions that represent the attractions of immediately available temptations, and a region that represents long term goals. It is, after all, the achievement of long term goals that is the reason for resisting temptations, and it should be increased activation of representation of long term goals that competes with representation of immediate pleasures.

If we think of the PFC as representing long term goals, we can properly locate “self control” as a feature of the person whose brain houses both the PFC and the other regions that represent temptations. Self control is what you (the whole of you) exhibit when you avoid treating yourself to an immediately available pleasure, where indulging in it would conflict with a long term goal (sobriety, weight loss, avoidance of STDs, saving money for the future, and so on). The PFC is only a part of you. Activation of its neurons can certainly inhibit activation of neurons in other regions, but it is not a self, and it’s not the right kind of thing to exert “self control”.

If we think of the PFC as representing long term goals, its ability to inhibit activity in a variety of other regions also seems quite natural. After all, to hold a long term goal is to be directed on a result in the face of whatever obstacles may present themselves. We are not omniscient. We do not know in advance what the obstacles to a long term goal will be. So, the more temptation-promoting brain regions our representations of long term goals are able to inhibit, the more useful they will be to us.

[The model discussed here is found in Heatherton, T. F and Wagner, D. D. (2011) “Cognitive neuroscience of self-regulation failure”, Trends in Cognitive Science 15(3):132-139.]

Can We Control Our Attention?

February 11, 2011

My answer to this question in Your Brain and You is, briefly, “Partially, sometimes”. Jonah Lehrer takes a rather different view in a January 18, 2011 post, “Control the Spotlight”, on his blog at .

In the background is some work by psychologist Walter Mischel and colleagues. Four-year-old children each identified two items, both desirable but one preferred to the other. They were told that the experimenter would leave the room for a while and that they could have the preferred item if they waited for the experimenter’s return, but they could end the waiting period at any time by ringing a bell. If they did ring the bell, the experimenter would return immediately but, they were told, they would then get only the less preferred item. The measure of interest was how long a child would hold out before ringing the bell. The children were unobtrusively observed during the waiting period.

Many of the children rang for the experimenter’s return in a relatively short time, but some (“high delayers” in Lehrer’s phrase) held out for upwards of 10 minutes.

Among many interesting results of this work, two especially stand out. One is that when followed up more than 10 years later, high delayers were found, on average, to have fewer problems and significantly higher S.A.T. scores than those who ended the waiting period relatively early. The other is a point of strategy: instead of merely gritting their teeth, the high delayers distracted themselves with some activity, e.g., singing a song, or playing some sort of game.

It’s this last point that seems to lead Lehrer to statements like the following. What is often thought of as “willpower” is “really about properly directing the spotlight of attention, learning how to control that short list of thoughts in working memory”. . . . “When we properly control the spotlight, we can resist negative thoughts and dangerous temptations.” . . . . “Our decisions are driven by the facts and feelings bouncing around in the brain – the allocation of attention allows us to direct this haphazard process, as we consciously select the thoughts we want to think about.” . . . . “And yet, we can still control the spotlight of attention, focusing on those ideas that will help us succeed. In the end, this may be the only thing we can control.”

But “consciously select[ing] the thoughts we want to think about” attributes to us more than we can reliably do. Evidently, we cannot select what thoughts will occur to us in the first place – for to select something requires that we already are thinking of it. And we often cannot continue to think about a topic we want to think about: We have all experienced finding ourselves thinking about X when we should be, and want to be, thinking about Y.

An alternative take on the kids’ behavior is suggested by the following reflections. (1) It may not so much as occur to a child to engage in a distracting activity. (2) Even if the idea of doing something distracting occurs, it might happen that no particular activity comes to mind. (3) It could happen that some activity comes to mind, but proves insufficiently engaging – i.e., the child’s thoughts might keep returning to the immediately available reward.

The high delayers did not have any of these possibilities happen to them. But that fortunate fact is not the sort of thing anyone can control by controlling their attention. For the first two cases, the reason is this: They depend on having something come to mind, and you cannot bring something to mind intentionally unless you’ve already thought of it – in which case it has already come to mind. And in the third case, as noted, we know that sometimes we just “can’t keep our minds on our work”, and that’s a way of saying that our control is only partial.

The high delayers are children whose genetic endowment, developmental circumstances, experiences, parental treatment, and resulting habits have already put them into a state that distinguishes them from their peers. It should not be surprising that being in an advantageous state when young is correlated with having desirable characteristics years later.

It seems to me that Lehrer is on firmer ground when he says that “The unconscious mind, it turns out, is most of the mind”. This remark applies to the factors that cause the spotlight of attention to be upon whatever it is highlighting at a particular moment. The operator of a nonmetaphorical spotlight in a real theater consciously directs the spotlight to the featured performer. But it is not a helpful use of the spotlight metaphor to imagine that behind our ‘spotlight of attention’ we are consciously deciding where to point it. The things we are in a position to decide about are things that are already in the spotlight of our attention.

[For background see Mischel, W., Shoda, Y. and Rodriguez, M. L. “Delay of Gratification in Children”, Science 244:933-938 (1989). This article reports many interesting manipulations that neither Lehrer nor I have attempted to summarize. These variations of conditions (with consequent differences of outcomes) seem to me to support the view that what is attended, and how what is attended is thought of, is highly dependent on many circumstances external to the participating subject.]

Canine Self-Control?

November 7, 2010

An article by Wray Herbert in Scientific American Mind (11/2/10) reports two related experiments with dogs, and reflects on self-control.

The dogs in the experiments by Holly Miller and colleagues at the University of Kentucky were all familiar with a toy (Tug-a-Jug) that contains treats that can be seen inside a clear cylinder. Normally, the dogs could manipulate the toy and obtain the treats. The toys used at the key point in each experiment had been altered so that they could not be opened.

The dogs were paired according to training history and then randomly assigned to one of two groups. In the experiments, owners of dogs in one of the groups commanded them to sit and then stay, then left the room. Owners of dogs in the other group placed them in a cage. If the owner of a commanded dog had to revisit the room to reissue the command, the owner of the paired caged dog visited the room at the same time interval. Dogs stayed or were caged for 10 minutes.

The interesting result in experiment 1 was that the caged dogs spent, on average, significantly more time than the commanded dogs in trying to open the toy.

In experiment 2, the dogs in each group were divided into two subgroups. Half got a sugar drink before being allowed to attempt to retrieve treats from the altered toy, while the other half got an artificially sweetened drink.

The commanded dogs that got the sugar then performed like the caged dogs in experiment 1. The commanded dogs that got the artificially sweetened drink did not: they gave up much more quickly. These results parallel those of several studies of “ego depletion” in human beings.

What first caught my attention in Herbert’s article was the following:

“These findings suggest that self-control may not be a crowning psychological achievement of human evolution and indeed may have nothing to do with self-awareness. It may simply be biology – and beastly biology at that. These are humbling results . . . .”

This passage got me to thinking about what could possibly be going on in the commanded dogs’ minds during the 10 minutes of staying. If they were people, I think they would be talking to themselves, something along the lines of “Got to wait. Boss says so. Boss won’t like it if I don’t wait for permission to move.” And so forth. There might be an exploration of the boss’s possible reasons, or the legitimacy of the boss’s instructions.

Dogs, of course, don’t have language, so they can’t be running a commentary of this kind. But there’s no reason to suppose they don’t have imagery, and I’m willing to speculate that they do. Perhaps they can have images of running around or exploring their surroundings. But they’ve been well trained. Perhaps they also have images of Master’s frowns or harsh words if they move, or an image of Master’s smiles and good play after a new command that allows movement.

Such imagery would evidently not amount to a narrative of the canine self. But it would have to be a kind of self-awareness, albeit a minimal one. In the first case, the image could not be of just some dog or other moving about – it would have to be an image of its moving. And images of Master’s smiles or frowns would have to be images of Master’s frowning or smiling while looking at it, not just of some master looking at some dog or other.

I think we can get a sense of this minimal kind of self-awareness by imagining ourselves doing something. That is not like imagining watching some person or other doing the same thing, and not even like imagining watching someone who looks just like ourselves doing it.

I’m also willing to speculate that there is a feeling of anxiety or tension in the commanded dogs. Images of moving may alternate with images of not moving and of Master’s good play. It seems possible there might even be a muscular oscillation in which there is almost a movement interleaved with inhibitions of movement.

The interest of these speculations is not so much whether they are true, but whether, if they were true, we would be willing to say that self-control “may simply be biology”. Part of me wants to say “Well, of course it’s biology! After all, whether the dogs move or not depends on whether their muscles contract or not; and that depends on the state of motor neuron activations, which depends on the state of neural activations in their brains. And that’s all biological activity.”

Maybe it’s the “simply” that seems not quite right. If a dog’s mind were as I’ve imagined it, it would have a psychological parallel to its biological goings on, i.e., a series of images and feelings of tension that represented the merits and demerits of moving.

There is also an interesting question about the idea of “self-control”. This is raised by my belief that most of us would be willing to say that the owners of the commanded dogs who stayed for 10 minutes had good control of their dogs. Are the dogs exerting self-control, or are their owners exerting control?

I’ll suggest this resolution: It’s both. The owners have control because they aim to have their dogs stay, and (because they’ve spent the necessary time on training) can get that to happen by issuing a command. The dogs have control if they aim to earn Master’s smiles (or avoid Master’s frowns) and their behavior actually concords with that aim. – Of course, to accept this resolution, one has to be prepared to allow that dogs can have aims of this kind. (Miller and colleagues do seem to accept this; in fact they go quite far in this direction: “The ability to coordinate rule-based memories and current behavior in a goal-directed way is pervasive across species” (p. 537).)

I was also intrigued by Herbert’s concluding paragraph:

“So perhaps humans are not unique . . . . It appears that the hallmark sense of human identity – our selfhood – is not a prerequisite for self-discipline. Whatever it is that makes us go to the gym and save for college is fueled by the same brain mechanisms that enable our hounds to sacrifice their own impulses and obey.”

If dogs can have images of themselves doing something, and these are different from images of other dogs doing the same kinds of thing, then some minimal sense of self-awareness may still be required for self-control. It is also important to follow Miller and colleagues in identifying the common “fuel” as glucose. That’s what restores the energy that seems to be depleted by the tension between what dogs (or people) would like to do now and their longer-term aims, or by effort spent on solving a difficult problem.

If we focus on the glucose as the fuel, I think we won’t feel humbled by the commonality that Miller and colleagues have found between us and dogs. For the commonality of the influence of glucose leaves it open that there are many “brain mechanisms” that we have and dogs do not have. There are, for example, brain mechanisms that produce our inner speech, which may contain statements of reasons for going to the gym, and these same mechanisms may also causally contribute to our actually going there. It would seem very difficult to represent such reasons by sequences of images, however complex or lengthy.

[Herbert’s article is “Dog Tired: What Mutts Can Teach Us about Self-Control”, available at . The article being reported on is Miller, H. C., Pattison, K. F., DeWall, C. N., Rayburn-Reeves, R. and Zentall, T. R. (April, 2010) “Self-Control Without a ‘Self’? Common Self-Control Processes in Humans and Dogs”, Psychological Science 21(4):534-538.]

Top-Down Control?

October 20, 2010

In a recent paper, distinguished neuroscientist Chris D. Frith calls attention to a simple but arresting point: “there are no brain areas that have only outputs and no inputs”. Instead, every area that provides its output to other areas also receives inputs from other areas (as well as feedback from areas to which it sends output).

For example, if we act, the motor neurons that drive our muscles fire. Areas in which these motor neurons lie receive input from areas that lie a little farther forward in the brain. These more forward areas receive input from an area still farther forward (the prefrontal cortex), which receives input from many other areas, and so on. We never come to an outputter whose activity is not conditioned by inputs from elsewhere.

The context for this point will be clear from the article’s title: “Free Will and Top-Down Control in the Brain”. Frith contrasts top-down control with bottom-up control. The “bottom” is sensory inputs and in bottom-up control, action is driven by sensory inputs. Reflexes would be the clearest sort of case: if your knee is tapped in the right way, your lower leg will move as a direct result of the tap.

In contrast, Frith takes top-down control to occur when goals or plans are involved in actions. A key point is that goals or plans do not depend directly on what we are immediately sensing. Which food you purchase for dinner may well depend on the quality of what you see in the grocery store, but your goal to get some food did not depend on what you were seeing or hearing when you went to the store.

Your goal to get food did, however, depend on conditions somewhere in your brain, and if there is no area that is solely an outputter, the goals you have are the result of contributions from many brain areas. In Frith’s thinking, a true “top level” would be a brain area that affects other brain areas, but is not affected by other areas. The significance of the fact that all brain areas receive inputs from other areas is that there is no “top level” in this sense. There is no unaffected effector from which our goals emanate.

Frith sees this fact as a problem for locating free will in an individual. His view seems to be that free will requires a top level that is only a top level, i.e., is not affected by anything else. “Nothing must control the controller.” Since there is no top level of this kind in the brain, we cannot find a physiological area in an individual person that provides free will.

One might draw the conclusion that there is no such thing as “free will” if this term is understood to require a top level that has no inputs. That would, of course, leave open the possibility of offering some other conception of “free will” that would not imply a requirement that we know is not satisfied.

Interestingly, that is not the conclusion that Frith draws. Instead, he considers experiments on free actions. Typical tasks in these experiments include moving one’s finger whenever one wishes to do so; or moving the right or left index finger, whichever one wishes, in response to a signal; or generating a series of “random” digits. Frith notes that results in such experiments depend on a social interaction between participants and experimenters – the latter give instructions, and the participants cooperate in agreeing to try to follow them. So, he says, “The top-down constraints that permit acts of will come from outside the individual brain. . . . If we are to understand the neural basis of free will, we must take into account the brain mechanisms that allow minds to interact”.

This does not seem to be a happy solution to the problem as Frith sets it up. That’s because social interactions plainly do not provide a “top level” in the sense of something that gives outputs but receives no inputs. Participants and experimenters are themselves subject to many social influences; there are no people who give outputs to others but receive no inputs from others.

It seems that a better conclusion to draw from Frith’s reflections is that there is no “free will” in the sense of an outputter that has no inputs. If there is such a thing as “free will” at all, there must be some other way of conceiving what it amounts to.

[The paper from which I’ve quoted is Chris D. Frith, “Free Will and Top-Down Control in the Brain”, in Murphy, N., Ellis, G. F. R., and O’Connor, T., Downward Causation and the Neurobiology of Free Will (Berlin: Springer-Verlag, 2009), pp. 199-209. Frith attributes the simple point with which this post begins to another distinguished neuroscientist, Semir Zeki.]

Do We Control Our Daydreams?

September 20, 2010

I’ve been reading Paul Bloom’s How Pleasure Works (New York & London: W. W. Norton, 2010), and ran across the arresting statement that “in a daydream you have perfect control” (p. 200).

Readers of Your Brain and You will recognize that this claim goes against the grain of chapter 5. But quite independently of the reasons given there, daydreaming seems a poor candidate for something we control. After all, daydreaming is supposed to be relaxing and pleasant. When we’re daydreaming we are not trying to do much of anything – we are taking a break from our effortful projects. In daydreaming, as Bloom also says (p. 198), our “minds are wandering”. But wandering – as we might do, for example, in a park – is exactly not trying to get anywhere in particular. If we are not aiming at some particular result, or series of actions, or series of mental images, it seems odd to think of ourselves as in control of what images come to mind. 

The puzzle I want to address today, however, is that what Bloom says seems initially plausible, even to me. Why should that be? Why should it seem natural to say we are in control of our daydreams when, on reflection, that does not seem to be so?

Part of the explanation is that Bloom describes daydreaming as involving “the creation of imaginary worlds” and portrays us as designers, casting directors, and screenwriters of these worlds and of the “imaginary beings [that we create] to populate” them (p. 198). But these descriptions actually apply to creators of fiction, i.e., writers of stories, novels, and plays. Such writers are not daydreaming: they are trying to do something, namely, to write a story that will be dramatic, convey a moral, tell us something about ourselves, and so on. They may have many ideas cross their minds, and they exercise control when they reject most of them as not contributing to the drama or atmosphere at which they are aiming. But trying to write a good story is not what we’re doing when we are daydreaming, or letting our minds wander.

A deeper clue comes from a contrast that Bloom draws between normal daydreaming and some cases of schizophrenia, “in which this other-self creation is involuntary and the victim of the disease believes that these selves are actual external agents such as demons, or aliens, or the CIA” (p. 198).

This contrast seems real. But “involuntary” does not seem to be the best description of it, since what occurs to us in normal daydreaming does not seem rightly described as “voluntary”. “Voluntary” actions are actions you consider beforehand and decide to do. But we do not set about trying to bring certain images to mind when we daydream – we relax and they come to us unbidden.

A better description of the contrast is that when we daydream, we have a palpable sense that the images we entertain are ours. This is not control, but something more like ownership. We do not control what comes into our minds, but we do have a sense that we are actively picturing to ourselves, and this is not just like passively perceiving something in the world outside our bodies.

This kind of active involvement seems similar to what happens in our inner speech. When we talk to ourselves, we have auditory imagery that is like hearing what we say to ourselves. But we also have a palpable sense that we are saying something to ourselves, and not merely hearing something being said. To lose this sense would be to “hear voices” – which would be disturbing, and a sign of illness.

The resolution of my puzzlement, then, is this. In daydreaming, we have a sense of active involvement in picturing to ourselves. We feel that our images are our images, something we produce. In many other cases, when we produce something, we have control over the character of what gets produced. So, our active involvement in picturing to ourselves is easy to confuse with control. But “producing” our images in this sense is not the same thing as controlling which images are popping up in our mind’s eye. Projectionists at your local theater are actively involved in producing the images on the screen, but they do not control the character of those images.
