Responsibility and Brains

August 2, 2012

In a thoughtful recent article, John Monterosso and Barry Schwartz rightly point out that whatever action we may take, it is always true that our brains “made us do it”. They are concerned that recognition of this fact may undermine our practice of holding people responsible for what they do, and say that “It’s important that we don’t succumb to the allure of neuroscientific explanations and let everyone off the hook”.

The concern arises from some of their experiments, in which participants read descriptions of harmful actions and were provided with background on those who had done them. The background was given either (a) in psychological terms (for example, a history of childhood abuse) or (b) in terms of brain anomalies (for example, imbalance of neurotransmitters).

The striking result was that participants’ views about responsibility for the harmful actions depended on which kind of terms were used in giving them the background. Those who got the background in psychological terms were likely to regard the actions as intentional and as reflecting the character of the persons who had done them. Those who got the background in terms of brain facts were more likely to view the actions as “automatic” and only weakly related to the performers’ true character.

The authors of the article describe this difference as “naive dualism” – the belief that actions are brought about either by intentions or by physical laws that govern our brains, and that responsibility accrues to the first but not the second. Naive dualism unfortunately ignores the fact that intentions must be realized in neural form – must be brain events – if they are to result in the muscle contractions that produce our actions.

They recommend a better question for thinking about responsibility and the background conditions of our actions. Whether the background condition is given as a brain fact, or a psychological property, or a certain kind of history, we should ask about the strength of the correlation between that condition and the performance of harmful actions. If most of those who have background condition X act harmfully, their level of responsibility may be low, regardless of which kind of background fact X is. If most of those with background condition X do not act badly, then having X should not be seen as diminishing responsibility – again, regardless of which kind of background condition X may be.
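The proposed test can be put in simple quantitative terms. Here is a toy sketch (my own formalization, not the authors'; the 50% threshold and the numbers are invented for illustration):

```python
# Toy formalization of the proposed test: judge mitigation by how often
# people with background condition X act harmfully, regardless of
# whether X is a brain fact, a psychological trait, or a history.
def mitigates_responsibility(harmful_with_x, total_with_x, threshold=0.5):
    """True if most people with condition X act harmfully, in which
    case responsibility may be low; False otherwise."""
    return harmful_with_x / total_with_x > threshold

# The same rule applies whatever kind of condition X is:
mitigates_responsibility(80, 100)  # most with X act badly: may mitigate
mitigates_responsibility(3, 100)   # most with X do not: no mitigation
```

The point of the sketch is only that the rule makes no reference to the vocabulary in which X is described.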

This way of judging responsibility is compatible with recognizing that we all have an interest in the preservation of safety. Even if some people can hardly avoid behaving badly, and their responsibility is accordingly low, it may be perfectly legitimate to take steps to prevent them from inflicting damage on others.

I’ll add here a speculation on a mechanism by which citing brain facts may lead us to assign people less responsibility than we should. Many reports of brain facts emphasize the role of some part of the brain. But we do not think of people as their parts: Jones is not his insula or his amygdala, Smith is not her frontal lobes or her neurotransmitters. So, some reports of background conditions in terms of brain facts may lead us to think of actions as the result of people’s parts, and thus not as the actions of the whole person.

A corrective to this kind of mistake is to bear in mind that our encouragements of good behavior and our threats of punishment are addressed to whole persons. Whole persons generally do know what is expected of them, and in most cases knowledge of these expectations offsets deficiencies that may occur in some of our parts. Our brains are organized systems, and the effects of their parts can become visible only when the whole system is mobilized toward carrying out an action.

[The article referred to is John Monterosso and Barry Schwartz, “Did Your Brain Make You Do It?”, The New York Times, Sunday, July 29, 2012, Sunday Review section, p. 12.]


Science and Free Will

May 9, 2012

At last month’s “Toward a Science of Consciousness” conference in Tucson, Pomona College researcher Eve Isham reported on several studies under the title “Saving Free Will From Science”. These studies cast doubt on conclusions that are often drawn from a series of famous studies carried out by Benjamin Libet.

Participants in Libet’s experiments wore a device on their heads that measured electrical activity at points under the scalp. They were instructed to make a small movement (e.g., a flick of the wrist) whenever they felt like doing so. They watched a clock-like device in which a dot rotates around the edge of a circle, completing a revolution about once every two and a half seconds. There are marks and numbers around the edge of the circle. Their task was not only to make a movement when they felt like it; they were also to take note of where the dot was (a) when they formed the intention to make their movement, and (b) when they actually moved. They were asked to report these two times right after they made their movement.

Let’s call these times Int and Mov. What Libet found was that participants’ electrical activity showed a telltale rise a short time (roughly, a half second) before the reported time Int. This telltale marker is called a “readiness potential”, usually abbreviated as RP. (There are complicated corrections that must be made for the time it takes for signals to travel along neurons; the interval between RP and Int is what remains after these corrections have been taken into account.)

The key points are that RP comes before Int, and that Int is the moment when the intention to move first becomes conscious. The conclusion that many have drawn is that the process that is going to result in a movement starts unconsciously before a person becomes aware of an intention to move. So, the intention comes too late to be the real cause of the movement. But if people’s intentions to move are not the real causes of their movements, then they don’t have free will.

(Libet tried to avoid this conclusion by holding that there was enough time for a person to insert a “veto” and stop the movement. This has earned him a quip: he undercuts free will, but allows for free won’t. Few have found this view attractive.)

Critics of Libet’s work have raised many questions about the experimental design, and I have long regarded these experiments as resting on assumptions that seem difficult to pin down with sufficient accuracy. Isham’s presentation significantly deepened these doubts.

Although I have read many papers on Libet’s work, I had never seen the clock in motion. Isham showed a video, and as I watched, I imagined myself as a participant in Libet’s experiments. I flicked my wrist a few times, and thought of what I would have reported as the times of my intention to move, and my actual movement.

I found that it was extremely difficult to try to estimate two times. In fact, I found it hopeless, and soon gave up, settling for focusing on trying to get an accurate estimate of the time of my intention to move.

But even this simpler task was difficult, and I had no sense of confidence that I could locate the time of my intention more accurately than about an eighth of a revolution (that’s the distance of about eight minutes around the circumference of a regular clock face). When trying to do the task, the dot seemed to be moving very fast.
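To get a rough sense of how large that reading error is, here is a back-of-the-envelope calculation (my own illustration; the 2.56-seconds-per-revolution figure and the roughly 500 ms RP-to-Int interval are the commonly reported values, not numbers taken from Isham's talk):

```python
# Convert an eighth-of-a-revolution reading error into milliseconds and
# compare it with the RP-to-Int interval it is supposed to resolve.
seconds_per_revolution = 2.56   # commonly reported speed of Libet's dot
reading_error_revs = 1 / 8      # my estimated precision from watching
rp_to_int_ms = 500              # rough interval between RP and Int

reading_error_ms = seconds_per_revolution * reading_error_revs * 1000
print(reading_error_ms)         # about 320 ms, a sizable fraction of 500 ms
```

If one cannot read the clock to better than about 320 ms, an inferred interval of roughly 500 ms is resting on very shaky measurements.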

It’s natural to wonder whether accuracy might be improved by using a clock where the dot was not traveling so fast. That’s one of the variations on Libet’s setup that Isham tried. And here is the decisive result: when she did that, the estimates of the time of intention were earlier. RP was still a little earlier than the reported Int time, but the interval was very significantly reduced.

Isham reported several other variations on Libet’s design, all of which led to the same general result: the estimates of Int depend on clock speed and several other factors that shouldn’t make a difference, but do. These results offer strong support for her conclusion that the time of intention is not consciously accessible to us, and that we cannot use Libet-style experiments to undercut the view that our intentions cause our actions.

Readers of Your Brain and You, or some other posts on this blog, may recall that I am sympathetic to the conclusions that are often drawn from Libet’s work. So, I don’t think Isham has saved free will from science. But her work gives us more reason than we have previously had for not basing anti-free-will conclusions on Libet-style investigations.

[The abstract of Isham’s conference paper is available here.]


The Social Animal

August 8, 2011

In his recent Commentary article (see my previous post of 7/20/11), Peter Wehner mentions David Brooks’s recent book, The Social Animal. Wehner finds Brooks’s book “marvelous” and repeats a statement that Brooks quotes with approval from Jonathan Haidt: “unconscious emotions have supremacy but not dictatorship”.

I’m pleased to report that I too found much to admire in The Social Animal. In highly readable fashion, Brooks presents a feast of delectable morsels from studies in psychology and neuroscience. Many lines of evidence that reveal the operation of our unconscious brain processes are clearly explained, and we get insight into how they affect everything we do, including actions that have deep and lasting consequences for our lives.

Inevitably, recognition of our dependence on unconscious processes raises questions about the extent to which we control our actions, and the extent to which we are responsible for them. These questions come up for discussion on pages 290-292 of The Social Animal. It is these pages – less than 1% of the book – that I want to comment on today.

Most of what Brooks says in these pages is presented in terms of two analogies and a claim about narratives. I’m going to reflect on each of these. I believe that we can see some shortcomings of the analogies and the claim just by being persistent in asking causal questions.

The first analogy is that we are “born with moral muscles that we can build with the steady exercise of good habits” (290), just as we can develop our muscles by regular sessions at the gym.

But let us think for a moment about how habits get formed. Think back to the early days, before a habit is established. Whether it’s going to the gym or being a good Samaritan, you can’t have a habit unless you first do some of the actions that will come to constitute it – and at that point you don’t yet have the habit.

Some people habitually go to the gym; but what got them there the first time? Well, of course, they had good reasons. They may have been thinking of health, or status, or perhaps they wanted to look attractive to potential sex partners. Yet, many other people have the same reasons, but don’t go to the gym. What gets some people to go and not others?

That’s a fiendishly complex question. It depends on all sorts of serendipitous circumstances, such as whether one’s assigned roommate was an athlete, whether a reminder of a reason arrived at a time when going to the gym was convenient, whether one overdid things the first time and looked back on a painful experience, or whether one felt pleasantly tired afterward.

The same degree of complexity surrounds the coming to be of a good Samaritan. In a more general context, Brooks notes that “Character emerges gradually out of the mysterious interplay of a million little good influences” (128). And he cites evidence that behavior is “powerfully influenced by context” (282). The upshot of these considerations is that whether a habit gets formed, and even whether an established habit is followed on a particular occasion, depends on a host of causes that we don’t control, and in many cases are not even aware of.

The second analogy is that of a camera that has automatic settings which can be overridden by switching to a “manual” setting. Similarly, Brooks suggests, we could not have a system of morality unless many of our moral concerns were built in, and were “automatic” products of most people’s genetic constitution and normal experience in families, schools, and society at large. But, like the camera, “in crucial moments, [these automatic moral concerns] can be overridden by the slower process of conscious reflection” (290).

Actions that follow a period of deliberation may indeed be different from actions that would have been done without deliberation. But if we take one step further in pursuit of causal questions, we have to ask where the deliberation comes from. Why do we hesitate? What makes us think that deliberation is called for?

The answers to these questions are, again, dependent on complex circumstances that we know little about and so are not under our control. To put the point in terms of the camera analogy, yes, if you decide on “manual” you can switch to that setting. But some people switch to manual some of the time, others do so in different circumstances, and some never do. What accounts for these differences? That’s a long, complex, and serendipitous matter. It depends on how you think of yourself as a photographer, whether you were lucky enough to have a mentor who encouraged you to make the effort to learn the manual techniques, whether you care enough about this particular shot. That history involves many events whose significance you could not have appreciated at the time, and over which you did not have control.

The claim about narratives is that “we do have some control over our stories. We do have a conscious say in selecting the narrative we will use to organize perceptions” (291). The moral significance of this point is that our stories can have moral weight: “We have the power to tell stories that deny another’s full humanity, or stories that extend it” (291).

We certainly have control over what words we will utter or not utter. But any story we tell about our place in society and our relations to other people has, first, to occur to us, and second, to strike us as acceptable, or better than alternative stories, once we have thought of it. On both counts, we are dependent on brain processes that lie outside our consciousness and that depend on long histories involving many events over which we have had no control.

We can provide a small illustration of this point by thinking about something Brooks brings up in another context. This is confirmation bias – the tendency to overvalue evidence that agrees with what we already think, and undervalue conflicting evidence. People don’t make this kind of error consciously. They tell themselves a story according to which they are objective evaluators who believe a conclusion because they have impartially weighed all the evidence. But, sometimes, they can find such a story about themselves acceptable only because they are unaware of their bias.

I am not being a pessimist here. Those who are lucky enough to read The Social Animal, or the experimental work that lies behind it, may very well be caused to take steps to reduce the influence of confirmation bias. The point remains that the acceptability of a narrative about one’s place in the scheme of things depends on many factors that lie in unconscious effects of complex and serendipitous histories.

[The book under discussion is by David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement (New York: Random House, 2011).]


Free Will, Morality, and Control

July 20, 2011

In a recent article in Commentary magazine, Peter Wehner inveighs against some of the views expressed in Sam Harris’s The Moral Landscape, and claims that “free will isn’t an illusion”.

Since different people mean different things by “free will”, we have to ask what Wehner means by this term. The most definite indication that Wehner provides is the following:

“Try as he might, Sam Harris cannot explain how morality is possible without free will. If every action is the result of biological inputs over which we have no control, moral accountability becomes impossible.”

Or, in other words, having free will requires that some of our actions are not the result of biological inputs over which we have no control. But what does this mean? What is a “biological input”?

Some of Wehner’s remarks suggest that “biological input” means something like “genetic constitution” or, perhaps, “genetic constitution plus developmental factors such as the state of one’s mother’s health during her pregnancy”. What Wehner seems to exclude from “biological inputs” is what we learn from our perceptual experience. This exclusion seems natural – the things you see and hear need not have anything to do with biology.

Even if you learn something by watching animals in a zoo, it would be unusual to think of yourself as having received biological inputs. You receive perceptual inputs at the zoo, and these enable you to know something about biological creatures.

But if “biological input” does not include what we learn from our perceptual experience, then Harris is not claiming that what we do depends only on “biological inputs”. I think it will be difficult to find anyone at all who holds such a view.

A view much more likely to be held is that all our actions are results of our biological inputs together with our perceptual inputs. I do not mean only perceptual inputs that are present at the time of acting (although those, of course, must be included). What we perceive changes us. It gives us memories, and provides information that we retain. It puts us into a state that is different from the state we would have been in if we had perceived something different. The state of our brains that we have at the time of an action is the result not only of present perceptions, but also of a long history of being in a state, perceiving, changing state, perceiving something further, changing state again, and so on and on.

Another key term in Wehner’s understanding of “free will” is “control”. You are in control of your action if you are doing what you aim to do, and you would have been doing something else if you had aimed to do that. Both of these conditions can be met if your actions are a result of current perceptions plus a brain state that you are in because of your original constitution and your history of perceptual inputs. So, you can have some control over your actions.

Of course, it is also true that there is much you are not in control of. You can open your eyes or keep them shut, but what you will see if they are open is not under your control. You can’t control your original constitution, and you can’t control the particular kind of change in your brain state that will be made by what you perceive.

Wehner worries that “If what Harris argues were true, our conception of morality would be smashed to pieces. If there is no free will, human beings are mere automatons, robots programmed to act (and not act) in certain ways. We cannot be held responsible for what we have no control over.”

But these alleged implications do not follow, if we understand Harris to be holding the more plausible view I’ve just sketched. The first point is relatively simple: We are not automatons if our actions are responsive to differences, not only in current perceptual inputs, but in matters of context that may have affected us at various times in the past.

The point about being a “robot programmed to act” is a little more complicated. We must distinguish between actions being canned and actions being the result of some definite process. Outputs of grocery store readers are canned – someone has to type in what amounts to a rule like this: If THIS bar code is read, then display THAT price on the monitor and add it to the total. But that is not the way you, or robots, or even programs work. In genuine programming, inputs trigger a process that leads to a result that no one has previously calculated. Even a chess-playing program takes in board positions that no programmer has previously thought of, and processes that information until an output (its next move) is reached. Since these board positions were unforeseen, good responses to them cannot have been worked out by programmers, and the responding moves cannot have been canned.
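The contrast can be made concrete with a small sketch (mine, not Wehner’s or Harris’s; the prices and the game are invented for illustration):

```python
# "Canned": every input-output pair was typed in ahead of time, like the
# grocery store reader's bar-code table. Unlisted inputs simply fail.
PRICES = {"0123": 2.49, "0456": 5.99}

def canned_price(bar_code):
    return PRICES[bar_code]  # raises KeyError for any unanticipated code

# "Programmed": a rule is written once, and responses to inputs no one
# foresaw are computed on demand. Here, a winning strategy for the
# subtraction game (take 1-3 objects from a heap; taking the last one
# wins): leave your opponent a multiple of 4.
def subtraction_game_move(heap_size):
    take = heap_size % 4
    return take if take != 0 else 1  # from a losing position, take 1

# No one tabulated a response for a heap of 1042, yet one is computed:
subtraction_game_move(1042)  # returns 2, leaving 1040 (a multiple of 4)
```

A chess program is the same idea at a larger scale: the rule (search plus evaluation) is fixed in advance, but the moves it yields for unforeseen positions were never stored anywhere.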

Regarding control, everyone must recognize that we have limits. But if we are not ill or befuddled by drugs, there will be many possible actions that we will do if we aim to do them, and won’t do if we don’t aim to do them; and so there will be many possible actions that are under our control.

[The article is Peter Wehner’s “On Neuroscience, Free Will, and Morality,” Commentary, June 8, 2011. Available at http://www.commentarymagazine.com/2011/06/08/on-neuroscience-free-will-and-morality. Several of the points made in this post are more fully explained in Your Brain and You. Sam Harris’s The Moral Landscape was published by The Free Press, New York, in 2010; an earlier post (12/1/2010) on this blog comments on another aspect of this book.]


“Free Won’t”?

April 6, 2011

V. S. Ramachandran’s The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human is a cornucopia of fascinating material. Many of the ideas offered are well established, but Ramachandran is not afraid to venture some risky hypotheses. Some of these give a common explanation for several findings whose connection to each other is not initially obvious. These explanations provide good models for what neural understanding of ourselves ought to look like, even if the evidence may eventually turn out against some of them.

One of the featured topics of the book is mirror neurons. These are neurons that fire (a) when you do an action of some kind, for example, grasping something with your hand; and (b) when you see someone else doing that kind of action. Mirror neurons were first discovered in macaques by G. Rizzolatti’s team in the late 1980s. They have been shown to exist in people, and have been actively investigated by many researchers.

I am not going to go into the controversy over whether deficiency in the mirror neuron system is the key to understanding autism. The chapter in which this claim is offered is a riveting section of the book, and it is up to those with access to laboratories to sort out how well the theory can be supported.

Instead, I’m going to reflect a bit on a suggestion Ramachandran makes in answer to the question of what prevents us from blindly imitating every action we see. Or, since we also have neurons that fire both when we have a sensation and when we see others in circumstances likely to cause the same kind of sensation, why don’t we actually feel a touch on our own bodies when we see others touched?

Well, according to a 2009 review by Keysers and Gazzola, there are studies showing that there are some people who do have such sensations. (They say about 1% of people.) And Ramachandran mentions echopraxia, a condition in which “the patient sometimes mimics gestures uncontrollably” (p. 125). So the question really is, why aren’t these phenomena more common than they are?

A simple answer that Ramachandran does not discuss is that, in most of us, only *some* of the neurons in the premotor areas are mirror neurons. It may be that there is some tendency for us to mimic observed gestures, but we just don’t have enough mirror neurons to lead to actual execution of that tendency, in normal conditions.

What Ramachandran does say in answer to his question is this: “In the case of motor mirror neurons, one answer is that there may be frontal inhibitory circuits that suppress the automatic mimicry when it is inappropriate. In a delicious paradox, this need to inhibit unwanted or impulsive actions may have been a major reason for the evolution of free will. Your left inferior parietal lobe constantly conjures up vivid images of multiple options for action that are available in any given context, and your frontal cortex suppresses all but one of them. Thus it has been suggested that “free won’t” may be a better term than free will” (p. 124).

It seems evident to me that we are not ordinarily aware of suppressing tendencies to do what we see others doing. For example, suppose we’re having a drink with friends, and have already said “Cheers” and taken our first sips. Thereafter, we all see each other picking up our respective glasses, but in general we don’t take our further sips in near-unison. Yet we are not consciously aware of suppressing the lifting of our own glass every time we see one of our friends lift theirs. So one intriguing aspect of Ramachandran’s picture is that, if we combine it with the observation just made, we would be led to conclude that there is a tremendous amount of unconscious processing going on whenever we see others acting.

This consequence would be one illustration of our dependence on unconscious processes – a theme that will be familiar to readers of Your Brain and You, or of several other posts on this blog. But I think that most of those who write about “free will” would not find these processes to be a plausible explanation of the beginning of “free will” – or of “free won’t”.

That is because the centrally important cases for those who write about “free will” are situations in which one is quite fully conscious of at least two alternatives, and in which one has time to think about the consequences of doing each of them. For example, if one tells a lie to avoid revealing some embarrassing fact about oneself, one generally knows what the fact is and what the lie would be, and one has at least a little while to consider the consequences of honesty, the likelihood of being caught in a lie, and the consequences of having one’s lie exposed. Writers about “free will” are generally thinking of cases where people are aware of a good reason to do something and also aware of a good reason to do something else. The real question about “free will” is what (if anything) makes one of these reasons prevail over the other.

It’s the same for “free won’t”. That’s just a special case of “free will”, where one of the options is phrased as “I won’t do action A – I’ll just sit still and do nothing”. If we have a case where one is not conscious of a reason to sit still and do nothing, it won’t be a case that writers on “free will” will regard as having anything “free” in it.

Of course, there has to be a difference between thinking of a possible action and doing it, and so the ability to inhibit seems to be a required condition for possession of what anyone would count as “free will”. But inhibition by itself does not take us very far. Even a lion stalking its prey inhibits its running until favorable circumstances trigger its attack.

[Works referred to are V. S. Ramachandran, The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human (New York: W.W. Norton & Co, 2011). Keysers, C. and Gazzola, V. (2009) “Expanding the mirror: vicarious activity for actions, emotions, and sensations”, Current Opinion in Neurobiology, 19:666-671. Di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., Rizzolatti, G. (1992) “Understanding motor events: a neurophysiological study”, Experimental Brain Research 91:176-180.]

