Do Conscious Thoughts Cause Behavior?

December 12, 2011

In the late 19th Century, Thomas Huxley advanced a view he called “automatism”. This view says that conscious thoughts themselves don’t actually do anything. They are, in Huxley’s famous analogy, like the blowings of a steam whistle on an old locomotive. The steam comes from the same boiler that drives the locomotive’s pistons, and blowings of the whistle are well correlated with the locomotive’s starting to move, but the whistling contributes nothing to the motion. Just so with conscious thoughts: the brain processes that produce our behavior also produce conscious thoughts, but the thoughts themselves don’t produce anything.

Automatism (later known as epiphenomenalism) is currently out of favor among philosophers, many of whom dismiss it without bothering to argue against it. But it has enough legs to be the target of an article by Roy F. Baumeister and colleagues in this year’s Annual Review of Psychology. These authors review a large number of studies that they regard as presenting evidence “supporting a causal role for consciousness” (p. 333). A little more specifically, they are concerned with the causal role of “conscious thought”, which “includes reflection, reasoning, and temporally extended sense of self” (p. 333). The majority of the evidence they present is claimed to be evidence against the “steam whistle” hypothesis that “treats conscious thoughts as wholly effects and not causes” (p. 334).

To understand their argument, we need to know a little more about the contrast between unconscious thought and conscious thought. To this end, suppose that a process occurs in your brain that represents some fact, and enables you to behave in ways that are appropriate to that fact. Suppose that you cannot report – either to others or to yourself in your inner speech – what fact that process represented. That process would be a thought that was unconscious. But if a process occurs in you, and you can say – inwardly or overtly – what fact it is representing, then you have had a conscious thought.

What if I tell you something, or instruct you to do some action or to think about a particular topic? Does that involve conscious thought? Baumeister et al. assume, with plausible reason, that if you were able to understand a whole sentence, then you were conscious, and at least part of your understanding the sentence involved conscious thought. (For example, you could report what you were told, or repeat the gist of the instruction.) They also clearly recognize that understanding what others say to you may, in addition, trigger unconscious processes – processes that you would not be able to report on.

If you want to do a psychological experiment, you have to set up at least two sets of circumstances, so that you can compare the effect of one set with the effect of another. If your interest is in effects of conscious thoughts, you need to have one group of participants who have a certain conscious thought, and another group who are less likely to have had that conscious thought. The way that differences of this kind are created is to vary the instructions given to different groups of participants.

For example, in one of the reviewed studies, participants randomly assigned to one group were given information about costs and features of a cable service, and also instructed to imagine being a cable subscriber. Participants in another group received the same information about costs and features, but no further instruction. A later follow-up revealed that a significantly higher proportion of those in the group that received the special instruction had actually become cable subscribers.

In another study, the difference was that one group was asked to form specific “implementation intentions”. These are definite plans to do a certain action on a certain kind of occasion – for example, to exercise at a particular time on a particular day – as contrasted with a more general intention to take up exercise that includes no particular plan for when to do it. The other group received the same information about the benefits of the action, but no encouragement to form specific implementation intentions. Significantly more of those who were encouraged to form implementation intentions actually engaged in the activity.

The logic behind these studies is that one group was more likely to have a certain kind of conscious thought than the other (due to the experimenters’ instructions), and it was that group that exhibited behavior that was different from the group that was less likely to have had that conscious thought. The correlation between the difference in conscious thoughts and the difference in subsequent behavior is then taken as evidence for a causal connection between the (earlier) thoughts and the (later) behavior.

There is, however, a problem with this logic. It arises from the fact (which, as noted earlier, the authors of the review article acknowledge) that conscious processing of instructions triggers unconscious processes. We can easily see that this is so, because processing what is said to us requires that we parse the grammar of sentences that we understand. But we cannot report on how we do this; our parsing is an unconscious process. What we know about it comes from decades of careful work by linguists, not from introspection.

Since conscious reception of instructions triggers unconscious processes, it is always possible that behavioral effects of the different instructions are brought about by unconscious processes that are set in motion by hearing those instructions. The hearing (or reading) of instructions is clearly conscious, but what happens after that may or may not be conscious. So, the causal dependence of behavior on instructions does not demonstrate causal dependence of behavior on conscious processes that occur after receiving the instructions, as opposed to unconscious processes that are triggered by (conscious) hearing or reading of instructions.

This point is difficult to appreciate. The reason is that there is something else that sounds very similar, and that we really are entitled to claim on the basis of the evidence presented in the review article. The claim is the following (where “Jones” can be anybody):

(1) If Jones had not had the conscious thought CT, Jones would not have been as likely to engage in behavior B.

This is different from:

(2) Jones’s conscious thought CT caused it to be more likely that Jones engaged in behavior B.

What’s the difference? The first allows something that the second rules out. Namely, the first, but not the second, allows that some unconscious process, UP, caused both whatever conscious thoughts occur after receiving instructions and the subsequent behavior. The experimenter’s giving of the instructions may set off a cascade of unconscious processes, and it may be these that are responsible both for some further conscious (reportable) thoughts and for subsequent actions related to the instructions. If the instructions had not been given, those particular unconscious processes would likely not have occurred, and thus the action might not have been produced.

Analogously, if the flash of an exploding firecracker had not occurred (for example, because the fuse was not lit) it would have been very unlikely that there would have been a bang. But that does not show that, in a case where the fuse was lit, the flash causes the bang. Instead, both are caused by the exploding powder.
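To see how (1) could be true while (2) is false, it may help to make the “steam whistle” structure concrete. Here is a minimal simulation sketch (the variable names UP, CT, and B follow the discussion above; the probabilities are invented purely for illustration):

```python
import random

def run_trial(instructions_given: bool) -> tuple[bool, bool]:
    """One participant under a common-cause ('steam whistle') structure:
    the instructions trigger an unconscious process UP, and UP causes BOTH
    the conscious thought CT and the behavior B. CT has no arrow into B."""
    up = instructions_given and random.random() < 0.8
    ct = up and random.random() < 0.9   # CT is an effect of UP
    b = up and random.random() < 0.7    # B is an effect of UP alone, not of CT
    return ct, b

random.seed(0)
trials = [run_trial(True) for _ in range(100_000)]

p_b_given_ct = sum(b for ct, b in trials if ct) / sum(ct for ct, _ in trials)
p_b_given_not_ct = sum(b for ct, b in trials if not ct) / sum(not ct for ct, _ in trials)
print(f"P(B given CT)     = {p_b_given_ct:.2f}")      # about 0.70
print(f"P(B given not-CT) = {p_b_given_not_ct:.2f}")  # about 0.20
```

Participants who have the conscious thought are far more likely to exhibit the behavior – so (1) holds – even though, by construction, the thought contributes nothing to the behavior, and (2) is false.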

The procedure of manipulating instructions and then finding correlated differences in behavior thus establishes (1), but not (2). So, this procedure cannot rule out the steam whistle hypothesis regarding conscious thought.

Interestingly, there are some cases for which the authors of the review identify good reasons to think that the steam whistle view is actually the way things work.

For example, one study compared people who imagined making a virtuous choice with those who had not done so. In a subsequent hypothetical choice, people in the first group were more self-indulgent than those in the comparison group. This difference disappeared if the same activity was imagined as a court-ordered punishment rather than as a voluntary choice.

However, it seems very unlikely that anyone consciously reasoned, “I imagined myself making a virtuous choice; therefore I’m entitled to a bit of self-indulgence”. In this and several similar reported cases, it seems far more likely that the connection between imagining a virtuous choice, feeling good about oneself, and feeling entitled to self-indulgence runs on processes that do not cause conscious thoughts with relevant content.

The article under discussion is full of interesting effects, and these are presented in a way that is highly accessible. But it does not succeed in overturning an alternative to its authors’ preferred view. According to this alternative view, the causing of behavior (after consciously perceiving one’s situation, or consciously receiving instructions) is done by unconscious processes. This alternative view allows that sometimes, but not always, these unconscious processes also cause some conscious thoughts that we express either in overt verbal behavior, or in sentences about what we are doing that we affirm to ourselves in our inner speech.

[The article under discussion is Roy F. Baumeister, E. J. Masicampo, and Kathleen Vohs, “Do Conscious Thoughts Cause Behavior?”, Annual Review of Psychology 62:331-361 (2011). The difference between (1) and (2) is further explained and discussed in Chapter 4 of Your Brain and You. ]


Mind the Gut

September 19, 2011

Jonah Lehrer’s Wall Street Journal column for September 17-18, 2011 reports a fascinating pair of facts – and then makes a puzzling application of them.

The first fact concerns probiotic bacteria, which are often found in yogurt and other dairy products. Researchers provided mice with either a normal diet, or a diet rich in probiotic bacteria, and then subjected them to stressful situations. The mice with the probiotic-enriched diet showed less anxiety and had lower levels of stress hormones.

By itself, this result is not so interesting. After all, it could be that the probiotic bacteria affect digestion, then blood chemistry, and finally hormone levels. But the second fact shows that a different mechanism is at work.

The second fact is that when neural connections between gut and brain were severed, the probiotic-enriched diet no longer produced the effect of reducing symptoms of stress. This fact suggests that the effect of the difference in diet works directly through the gut-brain neural connection, rather than through a less direct blood chemistry path.

It’s as if we have a sense organ in our gut that feeds into an evaluative system. It doesn’t give us any sensations, but it tells our brains how things are in our digestive systems. If things are going well down there, we’re less prone to anxiety when stressful situations arise.

That’s a surprise that contributes to a sense of wonder at how deliciously complex unconscious processes can be. Lest one think that this has nothing to do with us, Lehrer also reports a study that showed an analogous result in human subjects who received large doses of probiotics for a month. (No cutting of nerves in that case, of course.)

Now for the puzzling conclusion. These and other studies are taken by Lehrer to show that “the immateriality of mind is a deep illusion. Although we feel like a disembodied soul, many feelings and choices are actually shaped by the microbes in our gut . . . . ” And, although he concedes that “This doesn’t mean, of course, that the mind-body problem has been solved”, he goes on to declare that “it’s now abundantly clear that the mind is not separate from the body . . . . Rather, we emerge from the very same stuff that digests our lunch.”

But “shaped” is one of the many words that mean “caused”, with the addition of something about the manner of causing (as in “burned” or “built”), or degree of causal contribution (as in “influenced” or “forced”). What the cited research shows is that causes of anxious behavior and hormone levels include the presence of probiotic bacteria in the gut, and that the means of that causal contribution works through a neural connection. That is surprising and fascinating, but it offers no evidence whatsoever that feelings of anxiety are the same things as any material events.

In general, causes and effects are different. From “How anxious you feel depends in part on what kind of bacteria you have in your gut” it does not follow that feelings are material – only that feelings, whatever they are, can be caused in a very surprising way.

Similar remarks apply to “emerge”. Different people use this word in different ways, so it’s not a very helpful term. But one of its meanings is “causes”. Yes, it is indeed fascinating that what’s in our gut can cause how we feel, and do so through a direct, neural pathway. But no, that does not show that feelings are material events. It does not show that immateriality of feelings is a deep illusion.

For some purposes, the point I’m making may not matter. It’s an important fact that what goes on in our consciousness is brought about by events in our neural systems, and the studies Lehrer cites in this article do help drive that point home. But when the mind-body problem is introduced into the discussion, it becomes important to distinguish between the views (1) that neural events cause mental events such as feelings and (2) that feelings are the same things as neural events. The evidence Lehrer cites in his article supports (1), but is silent as regards (2).

[Jonah Lehrer, “The Yogurt Made Me Do It”, The Wall Street Journal, September 17-18, 2011, p. C12.]


The Social Animal

August 8, 2011

In his recent Commentary article (see my previous post of 7/20/11), Peter Wehner mentions David Brooks’s recent book, The Social Animal. Wehner finds Brooks’s book “marvelous” and repeats a statement that Brooks quotes with approval from Jonathan Haidt: “unconscious emotions have supremacy but not dictatorship”.

I’m pleased to report that I too found much to admire in The Social Animal. In highly readable fashion, Brooks presents a feast of delectable morsels from studies in psychology and neuroscience. Many lines of evidence that reveal the operation of our unconscious brain processes are clearly explained, and we get insight into how they affect everything we do, including actions that have deep and lasting consequences for our lives.

Inevitably, recognition of our dependence on unconscious processes raises questions about the extent to which we control our actions, and the extent to which we are responsible for them. These questions come up for discussion on pages 290-292 of The Social Animal. It is these pages – less than 1% of the book – that I want to comment on today.

Most of what Brooks says in these pages is presented in terms of two analogies and a claim about narratives. I’m going to reflect on each of these. I believe that we can see some shortcomings of the analogies and the claim just by being persistent in asking causal questions.

The first analogy is that we are “born with moral muscles that we can build with the steady exercise of good habits” (290), just as we can develop our muscles by regular sessions at the gym.

But let us think for a moment about how habits get to be formed. Let us think back to early days, before a habit is established. Whether it’s going to the gym, or being a good Samaritan, you can’t have a habit unless you do some of the actions that will constitute the habit without yet having such a habit.

Some people habitually go to the gym; but what got them there the first time? Well, of course, they had good reasons. They may have been thinking of health, or status, or perhaps they wanted to look attractive to potential sex partners. Yet, many other people have the same reasons, but don’t go to the gym. What gets some people to go and not others?

That’s a fiendishly complex question. It depends on all sorts of serendipitous circumstances, such as whether one’s assigned roommate was an athlete, whether a reminder of a reason arrived at a time when going to the gym was convenient, whether one overdid things the first time and looked back on a painful experience, or whether one felt pleasantly tired afterward.

The same degree of complexity surrounds the coming to be of a good Samaritan. In a more general context, Brooks notes that “Character emerges gradually out of the mysterious interplay of a million little good influences” (128). And he cites evidence that behavior is “powerfully influenced by context” (282). The upshot of these considerations is that whether a habit gets formed, and even whether an established habit is followed on a particular occasion, depends on a host of causes that we don’t control, and in many cases are not even aware of.

The second analogy is that of a camera that has automatic settings which can be overridden by switching to a “manual” setting. Similarly, Brooks suggests, we could not have a system of morality unless many of our moral concerns were built in, and were “automatic” products of most people’s genetic constitution and normal experience in families, schools, and society at large. But, like the camera, “in crucial moments, [these automatic moral concerns] can be overridden by the slower process of conscious reflection” (290).

Actions that follow a period of deliberation may indeed be different from actions that would have been done without deliberation. But if we take one step further in pursuit of causal questions, we have to ask where the deliberation comes from. Why do we hesitate? What makes us think that deliberation is called for?

The answers to these questions are, again, dependent on complex circumstances that we know little about and so are not under our control. To put the point in terms of the camera analogy, yes, if you decide on “manual” you can switch to that setting. But some people switch to manual some of the time, others do so in different circumstances, and some never do. What accounts for these differences? That’s a long, complex, and serendipitous matter. It depends on how you think of yourself as a photographer, whether you were lucky enough to have a mentor who encouraged you to make the effort to learn the manual techniques, whether you care enough about this particular shot. That history involves many events whose significance you could not have appreciated at the time, and over which you did not have control.

The claim about narratives is that “we do have some control over our stories. We do have a conscious say in selecting the narrative we will use to organize perceptions” (291). The moral significance of this point is that our stories can have moral weight: “We have the power to tell stories that deny another’s full humanity, or stories that extend it” (291).

We certainly have control over what words we will utter or not utter. But any story we tell about our place in society and our relations to other people has, first, to occur to us, and second, to strike us as acceptable, or better than alternative stories, once we have thought of it. On both counts, we are dependent on brain processes that lie outside our consciousness and that depend on long histories involving many events over which we have had no control.

We can provide a small illustration of this point by thinking about something Brooks brings up in another context. This is confirmation bias – the tendency to overvalue evidence that agrees with what we already think, and undervalue conflicting evidence. People don’t make this kind of error consciously. They tell themselves a story according to which they are objective evaluators who believe a conclusion because they have impartially weighed all the evidence. But, sometimes, they can find such a story about themselves acceptable only because they are unaware of their bias.

I am not being a pessimist here. Those who are lucky enough to read The Social Animal, or the experimental work that lies behind it, may very well be caused to take steps to reduce the influence of confirmation bias. The point remains that the acceptability of a narrative about one’s place in the scheme of things depends on many factors that lie in unconscious effects of complex and serendipitous histories.

[The book under discussion is by David Brooks, The Social Animal: The Hidden Sources of Love, Character, and Achievement (New York: Random House, 2011).]


A Key Problem of Mental Causation

April 25, 2011

Early this month, I attended the 2011 meeting of the Central Division of the American Philosophical Association in Minneapolis. There were several sessions devoted to philosophy of mind, and it occurred to me that it would be appropriate for this blog to report on what the philosophers of mind are up to these days – or, at least, some of it.

There are difficulties and limitations of such a project. The presenters intend to be at the cutting edge of philosophical research and they have only 25 minutes to make their case. So, inevitably, the papers at conferences like this one are thick with jargon. The arguments are complex, and they typically assume familiarity with several papers that have been published in recent years. Furthermore, a paper is not likely to be accepted at a conference like this one unless it argues against a view that’s been espoused by at least one respected member of the profession.

So, there is no hope of being able to report on what philosophers as a group think about issues in philosophy of mind. They hold divergent views on a variety of claims. (There was a more generally directed conference session that considered the question of how much could be listed under the heading of “What philosophers know”. On analogy with “What chemists know”, what could we say that philosophers agree on as very well established and not likely to be disputed in the future? Persuasive reasons were given that there is precious little that falls into this category.)

But there ought to be hope that one could say in plain English what the philosophers who spoke at philosophy of mind sessions were interested in, or what Key Problem they were trying to solve. I think this hope can be fulfilled, and that’s what I’ll try to do in this post.

The Key Problem stems from a background view that is widely accepted: Everything we do is brought about by events in the neurons and synapses of our brains, and these neuro-synaptic events are sufficient to account for all our movements. It is neuro-synaptic events that send the impulses that contract our muscles, and the contractions of our muscles constitute our behavior.

Reasonable people have dissented from this view, and some of them are living. It’s even possible that some of them were at the Minneapolis conference. But in today’s philosophical climate that is unlikely; and certainly, no one expressed doubt about this background view in discussion.

The Key Problem arises because it is very attractive to hold each of three further views. But, on pain of contradiction, you can’t accept the background view and hold all three. The further claims are these.

(1) Mental events (for example, thinking of something you believe) are not the same as neuro-synaptic events.

It’s very likely that you and I both believe that Pittsburgh is farther west than Philadelphia. But it’s extremely unlikely that there is any kind of happening in our neurons and synapses that is exactly the same in the two of us. So, we can’t say that bringing a belief to mind is exactly the same thing as having a certain kind of neuro-synaptic event occur in a brain.

(2) Mental events causally contribute to human actions.

The commonsensical backing for this one lies in the many everyday remarks that say someone did something because they had a certain belief. “Jane was mean to Susan because she believed Susan was trying to steal her boyfriend.” “John took the longer route because he believed a section of road on the shorter route had been closed for repairs.” “Tom called off the picnic because he believed it was going to rain”. And so on. No one wants to hold that when we say these things, we are speaking falsely or nonsensically.

(3) If an event has one sufficient cause, nothing else can cause it – unless that other thing would have caused the event all by itself.

This one is a little trickier than the others. To begin at the end, there can sometimes be two sufficient causes of an event, as in a firing squad. Smith’s bullet enters the right side of the traitor’s brain at the same instant that Jones’s bullet enters the left side. Each is a sufficient cause of the traitor’s death, and, importantly, each would have caused the traitor’s death even if the other shooter had missed.

But the philosophers who are having the discussion I’m reporting agree that beliefs do not contribute to actions in this way. That is, none of them suppose that a belief could bring about an action even if there had been no neuro-synaptic cause that was sufficient to cause the action.

But if there was a neuro-synaptic cause that was sufficient for an action, and a belief is not the same thing as that neuro-synaptic cause (as (1) says), then it seems that there is nothing left for believing to do – no way in which believing could causally come into play.

You can’t accept the background view and all three of these statements as they are written. If mental events, such as beliefs, causally contribute to actions (as (2) says), then either they have to be the same as physical events (rejecting (1)) or they can causally contribute even though not the same as a physical event. That will mean rejecting (3), unless mental events can independently cause actions. But that would require giving up either (1) or the background view.
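Since the inconsistency here is purely structural, it can be checked mechanically. Here is a toy propositional rendering (the encoding is my own gloss on the claims, not anything from the conference papers), which searches all truth assignments and finds none that satisfies everything at once:

```python
from itertools import product

def all_satisfied(identity: bool, mental_causes: bool, independent: bool) -> bool:
    background = True                      # a sufficient neuro-synaptic cause exists
    claim1 = not identity                  # mental events are not neural events
    claim2 = mental_causes                 # mental events causally contribute to actions
    # Claim 3 (exclusion): a second cause is ruled out unless it is the very
    # same event (identity) or would have sufficed all by itself (independent).
    claim3 = (not mental_causes) or identity or independent
    no_overdetermination = not independent # all parties agree beliefs aren't
                                           # independently sufficient (see above)
    return background and claim1 and claim2 and claim3 and no_overdetermination

print([combo for combo in product([False, True], repeat=3) if all_satisfied(*combo)])
# prints: []  -- no way to hold the background view and all three claims at once
```

Drop any one of the three claims and a satisfying assignment appears; which one to drop is exactly what the philosophers dispute.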

Perhaps you will think that one of the three statements is clearly the obvious candidate for abandonment. But it’s likely that the next two people you talk with about this will have a different candidate that seems just as obvious to them. That, at any rate, is how it works among philosophers.

In a nutshell: Most contemporary philosophers share a vision according to which all that we do is a product of what happens in our neurons and synapses. But then it seems hard to get beliefs to do any causal work. Yet most want to say that our beliefs do causally contribute to what we do.

[Of course, I have a view about the Key Problem – it’s in chapters 3 and 4 of Your Brain and You. Today’s post is aimed only at explaining what the Key Problem is, not at its solution.]


Tracing Causation

March 6, 2011

A colleague told me she’d found something puzzling about a recent article in Science News, and wondered what I might think of it. The article turned out to be quite interesting, so here’s a very brief summary, with my reactions.

The article is “Cerebral Delights” (Susan Gaidos, Science News, 2/26/11, v. 179(5), p. 22). It reports some research on the amygdala, a pair of almond-shaped structures symmetrically located in left and right sides of the brain. It has long been known that some cells in the amygdala increase their activity in circumstances we find fearful. The leading discovery reported in the article is that cells in these same structures also play a role in situations that provide something pleasant.

One line of research involved inserting electrodes in monkeys’ amygdalae and presenting them with either a rewarding sip of water or an annoying puff of air to the face. The finding was that different neurons increased their activation, depending on which of these stimuli was used.

Converging results came from another experiment in humans, who already had electrodes implanted in preparation for a medical procedure (necessary for reasons unrelated to the experiment). Assignments of different values to foods by these volunteers were correlated with differences in activation of some individual cells in their amygdalae.

A third line of approach again involved monkeys. Like people, normal monkeys will shift to a less desired food after having a chance to eat a preferred food for a while. Monkeys with damaged amygdalae, however, did not switch their food choices in this way.

Finally, a fourth study found that monkeys with damaged amygdalae made choices that led to larger rewards in preference to smaller ones, just as normal monkeys did. But they were different from normal monkeys on measures of pupil diameter and heart rate – measures typically indicative of emotional response.

These are important results that increase our understanding of our brains. I find them particularly interesting because they give a useful example of complexity of function in a limited region. They help us avoid the fallacious move from “Region R is involved in task T” to “What region R does is contribute to T” – fallacious, because there may be many tasks to which a given region may contribute.

So, why might my colleague have been puzzled? Our conversation was brief, so I’m not sure. But the likely answer lies in the many abilities that this article seems to attribute to the amygdala. The amygdala is said to “help shape behavior”, to “act as a spotlight, calling attention to sensory input that is new, exciting and important”. It is said to “evaluate information”, “assign value”, serve “as a kind of watchdog to identify potential threats and danger”, and “judge” the emotional value of stimuli. (Some of these phrases appear to have been introduced by the reporter, but others seem to have come from some of the researchers themselves.)

Phrases like these should make us pause. The evidence is that cells in the amygdala are activated by certain circumstances, and that behavior can be altered by interfering with its operations. That’s good reason to think that the amygdala is indeed a part of the causal chain leading from perception to behavior. But judging? Assigning value? The evidence does not show that such complex operations – operations we normally ascribe to whole persons – are done in, or by, the amygdala. *Maybe* they are done there, but the research doesn’t show that. It leaves open the question as to *how* a set of circumstances that is potentially dangerous gets to increase the activation of one cell in the amygdala rather than another, or how a situation that offers rewards gets to activate an amygdala cell that’s different from the ones activated by potential danger.

In short, as far as the reported study conclusions go, it could be that the actual sorting out of what is to be avoided from what is to be pursued is done elsewhere, and that what the amygdala does is to communicate the results quickly to the many brain regions that must be mobilized for appropriate action. Or it may be that evaluation results from interaction of cells in the amygdala with cells in other regions. In either case, attributing evaluation to the amygdala would be stopping too soon in tracing the causes of our behavior. Instead of stopping there, we should continue to press the causal question – in this case, What are the prior processes that cause one set of neurons rather than another in the amygdala to increase their activation?

[Two papers reported in the article under discussion are Morrison, S. E. & Salzman, C. D. (2010) “Re-valuing the amygdala”, Current Opinion in Neurobiology, 20:221-230; and Jenison, R. L., Rangel, A., Oya, H., Kawasaki, H., & Howard, M. A. (2011) “Value encoding in single neurons in the human amygdala during decision making”, Journal of Neuroscience, 31(1):331-338. In addition, conference presentations by S. Rhodes & E. Murray, and by P. Rudebeck are discussed.]


Explanation and Sensations

December 10, 2010

Readers of Your Brain and You will know that I deny that sensations are identical with neural events. Neural events cause sensations, but the effects are different from their causes. The qualities that distinguish sensations – colors, tastes, smells, itchiness, hunger, and so on – are found in the sensations, not in the coordinated groups of neuron firings that cause the sensations.

But I do agree that water is identical with H2O, and that the heat in, say, a frying pan is nothing but the average energy of motion (or mean kinetic energy) of the molecules that compose the pan.

So, what is the difference between these two kinds of case? In a nutshell, the identities I accept are backed by explanations, but the alleged identity of sensations and neural events is a bald piece of ideology that is not backed by explanations.

Of course, physicalists – who propose that pain is nothing but neural firing of a certain kind (let’s call it Neural Kind NK1), and that having an orange afterimage is nothing but having a neural firing of a different kind (let’s call it NK2) – will not take this claim of difference lying down. They’ll say, “What do you mean, ‘is backed by an explanation’?” And they’ll ask whether there is really any difference in the explanatory value of “water = H2O” and “pain = NK1”.

Those are fair questions, so let’s try to answer them. I’ll address the first one by focusing on just one case, which is representative of many others. Namely, “water = H2O” is supported by its explaining why water dissolves salt.

What the chemists tell us is that H2O molecules, in virtue of their chemical properties, can surround the individual units of salt (strictly, the Na+ and Cl– ions that compose it). So, when we put salt into water, those units don’t stay together; they get separated, so that H2O molecules lie between them. And that’s just what we mean by “dissolves salt”. So, the hypothesis that water = H2O explains why salt dissolves in water. And its giving us access to such an explanation is a reason for accepting the identity of water with H2O. Other cases go similarly, and give us further support.

Well, not so fast! While there’s a good idea expressed here, it won’t do as it stands, because it’s simply not true that “dissolves salt” just means “has its molecules surrounded by molecules of the solvent”. After all, people knew that water dissolves salt long before they had any ideas about molecules. And they knew that it also dissolves sugar, for example, but not gold.

So, how can we rescue the good idea without having to say something that is plainly false? Well, how did people know that water dissolves salt when they didn’t know anything about molecules? They put salt into water and, after a bit of stirring, they couldn’t see it any more. But if they put their wedding ring in water and stirred for a long time, there would be the ring, as plainly visible as when they took it off their finger.

What this example suggests is that a better description of what “water = H2O” explains, in our particular case, is why things look the way they do. When we put salt into water and stir, we can’t see it any more. Given that we can’t see individual molecules, the chemists’ story about what happens when we put salt into water explains why we can’t see it after stirring.

The generalization is that identity claims like “water = H2O”, together with other claims we already accept (such as that we can’t see individual molecules), explain why our evidence is what it is.

Here are a few other examples to illustrate the point. (1) Water evaporates, desks do not. E.g., if there is water on a kitchen counter and we don’t do anything, the counter will be dry a few hours later; but our desks don’t disappear. That’s something we see. Chemists’ stories about strength of bonding and the transfer of kinetic energy explain why we see the dry counter, but don’t lose our desks.

(2) We can put a cool thing up against a warm thing, and feel that the first has gotten warmer after a while. If heat is just the motion energy of molecules, and motion energy can be transferred by contact, we have an explanation of why the cooler body warms up.

(3) It would be a thankless task to try to define “life”. But we can see that things we agree are living typically grow. A strictly biological story (ingestion, digestion, metabolism) explains why organisms get bigger. This supports the idea that living things are nothing but biological systems; and similar explanations of other typical functions converge on the same result.

What about the second question? If “pain = NK1” were like “water = H2O”, then the occurrence of NK1 should explain why some piece of evidence is what it is. No such explanation seems to be in the offing. If “having an orange afterimage = having NK2” were like “heat = mean kinetic energy”, there should be some piece of evidence that is explained by NK2. Again, no such explanation seems available. It’s not even clear why any neural event should ever be correlated with any sensation at all.

Physicalists may suggest, however, that such explanations are available. Namely, we see that pains cause pain behavior (e.g., withdrawing, favoring injured parts, avoiding repeated contact with the source of the pain), and we see that having orange afterimages causes reports of their occurrence. “Pain = NK1” and “having an orange afterimage = having NK2” explain these pieces of evidence.

But these causal connections are not pieces of evidence: they are inferred conclusions. Moreover, the inferences depend on assuming the identities. Actions, including utterances of reports, are – as physicalists agree – caused by neural events. The neural events are themselves adequate to bring about the behavior. The causal relevance of pain or having an orange afterimage comes in only by using the identity claims. The pattern is: NK1 causes behavior B, pain = NK1, therefore pain causes behavior B.

It follows that these alleged pieces of evidence cannot be used to support identity claims; for to make such a use would be circular reasoning. The pattern would be this:

[1] pain = NK1

[2] NK1 causes behavior B

[3] Therefore, pain causes behavior B.

[4] We can explain why (or how) pain causes behavior B by assuming that pain = NK1.

[5] We are entitled to identity claims if they are backed by explanations.

[6] Therefore, we are entitled to the identity claim that pain = NK1.

But you can’t get to [3] without assuming [1]; and the legitimacy of that assumption is the conclusion of this reasoning. So this kind of “support” for “pain = NK1” depends on assuming what was to be supported.

Even those who think that “pain = NK1” or “having an orange afterimage = NK2” are true ought to be able to see that such claims cannot be supported in a way that parallels our reasons for accepting “water is H2O”, “heat is mean kinetic energy” and similar identity claims.


Does a Scientific View of Ourselves Undercut Science?

October 10, 2010

I’ve been reading “Human Freedom and ‘Emergence’ ” by the Stanford neurobiologist William T. Newsome. Newsome’s leading question is “What are we to make of human freedom when, from a scientific point of view, all forms of behavior are increasingly seen as the causal products of cellular interactions within the central nervous system . . . ?”

Newsome is particularly concerned with a point he quotes from a 1927 work of J. B. S. Haldane: “If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose my beliefs are true . . . and hence I have no reason for supposing my brain to be composed of atoms.” Newsome takes this point to suggest that a consistent scientist must make room for free will. But he recognizes that a consistent scientist must make such room without supposing there are exceptions to what science shows us about the laws that apply to the motions of atoms. These two demands seem to be in considerable tension.

Newsome seeks to resolve this tension by distinguishing between “constraint” and “determination”. His example is MS Word. Newsome says the operation of this program is constrained by the operations of transistors, resistors, capacitors, and power supplies of the computer that’s executing the program. This means that everything that happens while the program is running depends on how these items work. And the way they work is completely accounted for by their physical properties and the laws of nature that relate those properties. But Newsome also says that “the most incisive understanding of Microsoft Word lies at the higher level of organization of the software.” The behavior of computers “is determined at a higher level of organization – the software – not by the laws of physics or the principles of electronic circuitry.”

In my view, this is an interesting example, because it actually shows us how to undercut Haldane’s point and resolve Newsome’s worry without having to make the puzzling distinction between constraint and determination.

Why do I say the distinction is puzzling? Because “determination” is a term that suggests causation. (That is, for example, what Haldane means by “determined” in the sentence Newsome quotes.) But, first, that’s not what Newsome means by this term: “determination”, according to Newsome’s explanation, only means that there are higher level descriptions that can be used to express useful regularities. (Descriptions “at a higher level” do not refer to the small parts of what’s being described.) And, second, “determination” cannot add any causes to what the lower level provides. If it did add anything, there would be something that happened that was inconsistent with the constraints imposed by the laws that apply to the behavior of the small parts (the transistors, capacitors, and so on).

Newsome agrees that everything that happens during the execution of MS Word is consistent with the laws of operation of the small parts of the computer on which it is running. It is also clear that what happens can be described at a higher level at which useful regularities can be expressed. For example, a certain series of keystrokes always results in highlighting a portion of text. A following press of the ‘delete’ key removes the highlighted text; a following click on the paperclip does something else, and so on.
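The point can be made with something far smaller than Microsoft Word. Below is a minimal toy sketch (invented for illustration; it is nothing like Word’s actual code): at the low level there is nothing but a list of characters mutated cell by cell, yet the regularity “select a span, then ‘delete’ removes it” is naturally stated at the higher level:

```python
class Buffer:
    """A toy text buffer: low-level state is just a list of characters."""

    def __init__(self, text: str) -> None:
        self.cells = list(text)            # the 'small parts'
        self.sel_start = self.sel_end = 0

    def select(self, start: int, end: int) -> None:
        # Higher-level description: these 'keystrokes' highlight a span.
        self.sel_start, self.sel_end = start, end

    def delete(self) -> None:
        # Higher-level regularity: 'delete' removes whatever is highlighted.
        del self.cells[self.sel_start:self.sel_end]
        self.sel_end = self.sel_start

buf = Buffer("hello world")
buf.select(5, 11)           # highlight " world"
buf.delete()
print("".join(buf.cells))   # prints: hello
```

Every run of this program is exhausted by the cell-by-cell mutations; the higher-level description earns its keep by expressing the regularity compactly, not by adding causal power.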

The moral that these facts illustrate is this: A thing whose small parts operate according to ordinary physical laws can have regularities describable at a higher level, provided its small parts are organized in the proper way. There is thus no conflict between holding (1) that everything in a brain happens as a result of its small physical parts (e.g., neurons, synapses, glial cells, neurotransmitters) operating according to physical laws and (2) that there is a higher level description of what brains provide that explains how we normally perceive accurately and often reason correctly. Of course, atoms and molecules that are *not* organized in a very special way do not lead to accurate perceptions and reasonable conclusions. But that does not show that properly organized systems of atoms and molecules cannot conform their outputs to evidence and logic.

Because (1) and (2) are consistent, Haldane was wrong to think that we could not have good reasons for our beliefs (including those about atoms and brains) if our beliefs are caused by the motions of atoms in our brains. (The quotation does not say that many of these motions are ultimately caused by inputs to our senses, but Haldane surely would not have denied that.)

“Free will” is used by many thinkers in many senses – that’s why I avoid using that term, except when I write about the views of others who do use it. One of its senses requires that there be departure from causation. In *that* sense of the term, it should be evident that “free will” is something we do *not* want when we are doing science. What we want is that our beliefs about what is in the world should be caused – namely, caused by the things that we believe are there. If our beliefs about the world were cut loose from being caused by what is in the world, we could only expect to have erroneous beliefs about the world.

[Newsome’s essay appears in Nancey Murphy, George F. R. Ellis, and Timothy O’Connor, eds., Downward Causation and the Neurobiology of Free Will (Berlin: Springer-Verlag, 2009), pp. 53-62.]

