Facts and Values

December 1, 2010

Sam Harris’ The Moral Landscape: How Science Can Determine Human Values is a book I admire for attending directly to our dependence on our brains and working out the consequences of that dependence – consequences for how we think about ourselves and our mental capacities and, of course, how we should think about our values.

Harris has achieved a style that is highly readable while giving full recognition and fair treatment to the complexities of the topics he considers. These range from the universality of moral judgments to issues regarding the interpretation of brain scans. (His discussion of the latter on pp. 220-222 is invaluable.)

Harris’ subtitle announces a contentious claim. I believe that in arguing for this claim, Harris goes too far. However, I also think he doesn’t need this excess to sustain the main point of the book. I’ll explain both points.

The excessive claim is that values are facts. That means: statements that something is good or bad, right or wrong, ought to be done or ought not to be done, are factual statements discoverable by scientific means.

There is a long tradition that asserts otherwise. Here are two expressions of the view that Harris is opposing.

      No description of the way the world is (facts) can tell us how we ought to behave (morality). – David Hume’s view, summarized on Harris’ p. 10.

      Science is about facts, not norms; it might tell us how we are, but it couldn’t tell us what is wrong with how we are. – Jerry Fodor’s view, quoted on Harris’ p. 11.

Harris, in contrast, describes a scientific account of human values as “one that places them squarely within the web of influences that link states of the world and states of the human brain” (p. 13). He says the divide between facts and values is “illusory” in several senses. He holds that “Whatever can be known about maximizing the well-being of conscious creatures – which is, I will argue, the only thing we can reasonably value – must at some point translate into facts about brains and their interaction with the world at large” (p. 11).

The argument behind Hume’s view, which prevents me from agreeing with Harris here, rests on the intuitive idea that your argument can’t be a good one if your conclusion asserts something about a topic you didn’t mention in your premises. For example, if you haven’t mentioned hyenas in your premises, you can’t draw justified conclusions about them. Even if you’ve said a lot about African mammals, you’re not entitled to conclude anything about hyenas unless you have a premise that says hyenas are African mammals.

And similarly, you can’t give a cogent reason for a value judgment – a judgment that something is good, or right, or ought to be done (or their opposites) – unless your premises say something about what is good, or right, or ought to be done (or their opposites).

That would be no obstacle if some value judgments could be known to be true by observation. But there is no sense organ for observing values, and no observational science of values.

So, value judgments that we can support must be supported by arguments, and if these are good arguments they must already assume some premise about values. That chain of supporting arguments cannot go on forever. Ultimately, therefore, if we have reasons for any of our values, we must also have some value judgments that are not supported either by reasons or by observation. But science rests on observation and reasoning. So some of our value judgments are not supportable by scientific means.

But Harris does not need to disagree with this point. The rest of what he says in The Moral Landscape fits very well with accepting both Hume’s fact/value distinction and the reason just given for it. All we need to do is to distinguish the basic values that constitute well-being from less basic values that are believed – rightly or wrongly – to promote well-being.

The key point is already visible in the last remark quoted from the book. Suppose we agree on what constitutes the “well-being of conscious creatures” and that in actual fact we agree in positively valuing this well-being. Then we can turn to our sciences to tell us about better and worse ways to get it. There will be genuine, scientifically discoverable facts about what promotes well-being and what interferes with it.

This role of science is not in conflict with the view that scientific methods are not suited to tell us whether, for example, “Chronic hunger is bad” is true. Sociological science can, of course, tell us whether avoidance of hunger is valued, i.e., whether people agree that chronic hunger is bad. But Harris would be among the first to insist that finding out the facts about what people agree to is not the same thing as finding out what is true.

The right and wrong answers to moral questions that Harris wants science to provide can be found, provided we have wide agreement on what we most basically value. It may seem that we don’t agree in basic values, because moral questions are notorious for provoking disagreement. The response that is implicit in Harris’ book is this: We do in fact agree on many aspects of what constitutes well-being. Then we can use our sciences to distinguish between successful and unsuccessful ways of getting it. One result of applying our sciences is learning that some of our less basic moral judgments are wrong – objectively wrong. To say that a moral rule is wrong means: If people follow that moral rule, they will frustrate the achievement of the most basic values that constitute well-being.

Harris makes a plausible case for the widespread acceptance of his idea of well-being, and I predict that most readers will agree with him. Near the end of The Moral Landscape, he raises several concerns about the possibility that what constitutes well-being might be different for different people, or different for people in different cultures. Here is his reply:

We have all evolved from common ancestors and are, therefore, far more similar than we are different; brains and primary human emotions clearly transcend culture, and they are unquestionably influenced by states of the world . . . . No one, to my knowledge, believes that there is so much variance in the requisites of human well-being as to make the above concerns seem plausible.

In sum, a structure that would fit all of Harris’s statements except those where he explicitly rejects the fact/value distinction is this:

(1) There are important basic values (e.g., avoidance of hunger, cold, pain and other miseries, enjoying social relations, satisfying curiosity) that humans share. That we do share these values is a sociologically establishable fact, but the truth of the statements that such things are valuable is not something science can underwrite.

(2) Science can help us achieve more of what we basically value (and avoid more of what we basically disvalue).

(3) Many moral judgments (both those held by individuals and those built into social or religious institutions) are not expressions of basic values, but are instead intermediate level rules. These rules may or may not actually conduce to our well-being. Science can provide good, objective reasons for adoption of genuinely helpful rules and practices, and good, objective reasons for abandoning rules and practices that conflict with satisfying our basic values.

Harris seems to think that the fact/value distinction has to be rejected if we are going to be able to use science to help achieve well-being. This is not so. My drift here has been that we can accept all three of these key points while recognizing that the truth of “Chronic hunger is bad” (for example) is not something that scientific methods are relevant to establishing. The power of this truth comes not from scientific credentials, but from the fact that we all share this value judgment. And the strength of The Moral Landscape lies in its applying what we have learned from science to the evaluation of actually held intermediate level moral beliefs.

[The book under discussion is Sam Harris, The Moral Landscape: How Science Can Determine Human Values (New York: Free Press, 2010).]


Inner Speech and “Thoughts” (II)

November 21, 2010

This post concerns a second picture that’s found in the article “ ‘What the . . .?’ The role of inner speech in conscious thought” by Fernando Martínez-Manrique and Agustín Vicente. This picture is one that’s shared by these authors as well as by those they criticize. It’s the idea that inner speech ‘brings thoughts to consciousness’, ‘makes thoughts conscious’, or enables us to ‘monitor our cognitive processes’. These phrases suggest that the meanings of sentences we produce in inner speech are somehow already there in us before we inwardly say them.

It is important to note that these already-present meanings need not be supposed to be in a natural language format. But the picture that ‘bringing to consciousness’ suggests is that the meanings of what’s going to be said in inner speech are already there in some format or other, and need only to be put into linguistic form in our inner speech.

This picture contrasts with what I’ve suggested in Your Brain and You. The view there is that a bit of speech – whether inner or overt – can perfectly well be the very first time that a cognitive product occurs that has the meaning of what is said in the speech. On this view, inner speech need not correspond to, or ‘match’, or bring to consciousness, something that already has its meaning.

Of course, it may correspond to something already formulated. This happens, e.g., when we recite in inner speech a poem that we have memorized. The point is that there need not be any such precursor, in any format. The argument, in very brief summary form, is that there has to be some time at which some meaning is first represented in us – if that were not so, we’d have to suppose, absurdly, that everything we are ever going to think is already fully formulated in some way in us. But if there has to be a first coming together of ideas to make a meaningful assertion somewhere, it may be that an episode of inner (or overt) speech is that very first coming together.

Martínez-Manrique and Vicente have a very plausible argument for their picture and against mine. Inner speech, they correctly note, is often fragmentary, but we generally feel that we know what we mean. On their view, this feeling would be explained by our having a thought that is already there, that is given only a fragmentary expression. The fragmentary expression is good enough, because we have a more complete thought in mind.

I think, however, that what we actually have in our consciousness, besides the fragmentary inner speech, is only a sense of confidence that we are able to go on to fill in what was not actually stated in our inner speech. And, usually, our confidence is justified –  we are able to go on in a coherent and useful way. But this ability does not require that there is already a fully formulated representation of a complete thought somewhere in us. It requires only that we are (usually, though not always) able to go on more or less successfully. And going on successfully requires only that our cognitive processes continue to operate in an organized way. We cannot deny that they can do this – they have to be able to do it if we are to cope with living. It adds needless complexity to suppose that they have to do it twice, once in non-linguistic format and then in linguistic re-format.

Let’s illustrate the difference in views with one of Martínez-Manrique’s and Vicente’s examples. Perhaps I say, in inner speech, only “The meeting!” If I reflect on this later, I will be inclined to feel that I knew which meeting I meant, and when it is (or was) to take place, even though I didn’t say anything about that in my inner speech. It seems ever so natural to think that these other, unstated matters are already there “in thought” and that if I go on to say more about the details of which meeting I meant, I am linguistically formulating matters that were previously put together in some way.

But all that’s required for me to know what I mean is that I am well enough organized to go on appropriately. For example, I might start hurrying to get to the meeting, or, if it’s already over, I might start preparing apologies to appropriate individuals, or start rehearsing excuses. If I feel that I can go on appropriately, and in fact do so, it will be tempting to think I had all the particulars fully formed in mind when I exclaimed to myself “The meeting!”. But that hypothesis, however natural, seems unnecessary. A “just in time” organization that provides appropriate details when they become relevant will lead to the same results, with a simpler set of assumptions.

A “just in time” view also seems to be a good account of our occasional failures. Suppose I have a regular weekly meeting, but for special reasons it’s been canceled this week, and I’ve made other plans. It’s possible that ten minutes after the usual time, I’ll suddenly think “The meeting!” and have a surge of anxiety. Then, almost immediately “Oh no! Canceled. Whew!” It’s hard to imagine that the second part of this was “already there”; it would seem to be a case where the eruption of the cancellation into consciousness just is the inner saying together with relief from anxiety. These later occurrences are products of our cognitive processes, of course, but they need not be conceived as a reformatting of something that those processes have already produced.

[The article referred to is Fernando Martínez-Manrique and Agustín Vicente, “ ‘What the . . .?’ The role of inner speech in conscious thought”, The Journal of Consciousness Studies 17(9-10):141-167 (2010).

There are several interesting posts about inner speech on Eric Schwitzgebel’s web site: go to http://schwitzsplintersinnerspeech.blogspot.com .

Besides what’s in Your Brain and You, I’ve discussed inner speech in my “Thoughts Without Distinctive, Non-Imagistic Phenomenology”, Philosophy and Phenomenological Research, 70:534-561 (2005) and “A Frugal View of Cognitive Phenomenology”, soon to appear in T. Bayne and M. Montague, eds., Cognitive Phenomenology (Oxford: Oxford University Press). ]


Inner Speech and “Thoughts” (I)

November 16, 2010

Fernando Martínez-Manrique and Agustín Vicente have an interesting article in a recent issue of The Journal of Consciousness Studies. (No link here because it’s a subscription journal. Full reference appears below.) The title is “ ‘What the . . .?’ The role of inner speech in conscious thought”. By “inner speech” they mean the silent ‘talking to ourselves’ that most people report as something they do quite a lot of.

Major sections of this article are devoted to raising problems for the views of other writers on inner speech, but I’m not going to go into those. Nor am I going to try to summarize all of the informative points made in this rich and well-considered article. Instead, I’m going to confine myself to comments on a picture that is popular in discussions of brains and minds. (A “picture”, in philosophy, is a metaphor for a metaphor.) A follow-up post will comment on another popular picture that’s suggested by this article.

The picture I have in mind today is that of “broadcasting”. In the background of this picture is the assumption that there are many processes that go on unconsciously in our brains, and that proceed at the same time and largely independently of each other. Against this background, consciousness is often thought of as having a special role – namely, that of broadcasting information to all parts of (or, processes in) the brain. What gets into consciousness is thereafter available at many places in the brain and thus may influence many brain processes.

The broadcasting idea seems plausible. After all, if something gets into consciousness, I know about it. If I think of myself as a unified self, it seems that what I know should be available to my whole self, and therefore to any process that’s relevant to my thinking that may be going on in me.

Martínez-Manrique and Vicente, however, make a striking observation that should lead us to think carefully about this “broadcasting” picture. Inner speech is, evidently, linguistic – it’s composed of words and their order makes a difference. (“John loves Jane” is different from “Jane loves John” whether it’s said inwardly or out loud.) So, if inner speech were “broadcast”, what is said in it could have a useful effect only on a system that works on linguistic inputs – a language system that could understand the words as words and their order as making a distinctive contribution. Whatever effect a “broadcast” bit of inner speech might have on other kinds of systems could not be an effect that made use of the linguistically encoded information.

This point seems to have quite general application. If any neural event has informational content of any kind, and it causes some effect in other neural events, those latter neural events will “receive” that information only if they are structured so that they can use it. Otherwise, there may still be an effect, but it wouldn’t be one that depends on the informational content. The voice that shatters glass does so because of its pitch and loudness – not because of the meaning of the word being sung, even if that word happens to be “shatter”.
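The point can be made concrete with a small toy sketch. This is my own illustration, not anything from the article (and certainly not a model of the brain): the same string is “broadcast” to two hypothetical receivers, but only the one structured to treat words and their order as significant can respond differently to “John loves Jane” and “Jane loves John”.

```python
# A toy illustration: one signal is "broadcast" to several receivers, but only
# a receiver built to parse words and word order can use the linguistically
# encoded information. (My own sketch; the receivers are invented.)

def broadcast(signal, receivers):
    """Send the same signal to every receiver and collect the responses."""
    return {name: receive(signal) for name, receive in receivers.items()}

def language_receiver(signal):
    # Structured so that words and their order matter.
    words = signal.split()
    if len(words) == 3 and words[1] == "loves":
        return f"{words[0]} is the lover, {words[2]} is the beloved"
    return "unrecognized form"

def loudness_receiver(signal):
    # Responds only to a non-linguistic property (here, sheer length);
    # word order makes no difference to its response.
    return "strong response" if len(signal) > 20 else "weak response"

receivers = {"language system": language_receiver,
             "loudness detector": loudness_receiver}

print(broadcast("John loves Jane", receivers))
print(broadcast("Jane loves John", receivers))
# Only the language system's response differs between the two broadcasts.
```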

Sometimes we learn something on one occasion and then recall it and put it to use in some different context. Cases like this may suggest that once we have been conscious of some fact, it is then generally available for further use. But it would be risky to draw much of a conclusion from such happy occasions. That’s because we do not know how many occasions there may have been where something we learned would have been helpful, but it did not come to mind.

Sometimes we have evidence for unhappy cases of this kind. For example, after losing some game, a friend might ask why we didn’t make a certain move. The best answer we can give might be “I just didn’t think of it. I know the rules, so I would have known I could make that move if I’d thought of it, but it just never crossed my mind”. That would be a case of something I knew that was not available to my cognitive processes when it was needed. But while we know there are some such cases, it would take an ingenious experiment to figure out how often that kind of failure happens. If there had been no friend there to ask why we didn’t make a certain move, we would likely never have realized that we’d failed to use something that in some sense we knew, and that might become available for use on some other occasion.

A radio or TV broadcast is received by many receivers that are designed to process the incoming signal in a way that preserves information. And the message is received by listeners, each of whom can understand it. We won’t be making progress in understanding how our minds work if we populate our brain with a lot of understanders. (After all, understanding is something we want our cognitive science to explain.)  The brain events that cause our consciousness undoubtedly have many other effects, in many parts of the brain. But it may be more helpful to think of these simply as neural effects rather than as receptions of broadcast messages.

 [The article referred to is Fernando Martínez-Manrique and Agustín Vicente, “ ‘What the . . .?’ The role of inner speech in conscious thought”, The Journal of Consciousness Studies 17(9-10):141-167 (2010).

There are several interesting posts about inner speech on Eric Schwitzgebel’s web site: go to http://schwitzsplintersinnerspeech.blogspot.com .

I discuss inner speech in Your Brain and You. There’s more about it in my “Thoughts Without Distinctive, Non-Imagistic Phenomenology”, Philosophy and Phenomenological Research, 70:534-561 (2005) and “A Frugal View of Cognitive Phenomenology”, soon to appear in T. Bayne and M. Montague, eds., Cognitive Phenomenology (Oxford: Oxford University Press). ]


Canine Self-Control?

November 7, 2010

An article by Wray Herbert in Scientific American Mind (11/2/10) reports two related experiments with dogs, and reflects on self-control.

The dogs in the experiments by Holly Miller and colleagues at the University of Kentucky were all familiar with a toy (Tug-a-Jug) that contains treats that can be seen inside a clear cylinder. Normally, the dogs could manipulate the toy and obtain the treats. The toys used at the key point in each experiment had been altered so that they could not be opened.

The dogs were paired according to training history and then randomly assigned to one of two groups. In the experiments, owners of dogs in one of the groups commanded them to sit and then stay, then left the room. Owners of dogs in the other group placed them in a cage. If the owner of a commanded dog had to revisit the room to reissue the command, the owner of the paired caged dog visited the room at the same time interval. Dogs stayed or were caged for 10 minutes.

The interesting result in experiment 1 was that the caged dogs spent, on average, significantly more time than the commanded dogs in trying to open the toy.

In experiment 2, the dogs in each group were divided into two subgroups. Half got a sugar drink before being allowed to attempt to retrieve treats from the altered toy, while the other half got an artificially sweetened drink.

The commanded dogs that got the sugar then performed like the caged dogs in experiment 1. The commanded dogs that got the artificially sweetened drink did not: they gave up much more quickly. These results parallel those of several studies of “ego depletion” in human beings.

What first caught my attention in Herbert’s article was the following:

“These findings suggest that self-control may not be a crowning psychological achievement of human evolution and indeed may have nothing to do with self-awareness. It may simply be biology – and beastly biology at that. These are humbling results . . . .”

This passage got me to thinking about what could possibly be going on in the commanded dogs’ minds during the 10 minutes of staying. If they were people, I think they would be talking to themselves, something along the lines of “Got to wait. Boss says so. Boss won’t like it if I don’t wait for permission to move.” And so forth. There might be an exploration of the boss’s possible reasons, or the legitimacy of the boss’s instructions.

Dogs, of course, don’t have language, so they can’t be running a commentary of this kind. But there’s no reason to suppose they don’t have imagery, and I’m willing to speculate that they do. Perhaps they can have images of running around or exploring their surroundings. But they’ve been well trained. Perhaps they also have images of Master’s frowns or harsh words if they move, or an image of Master’s smiles and good play after a new command that allows movement.

Such imagery would evidently not amount to a narrative of the canine self. But it would have to be a kind of self-awareness, albeit a minimal one. In the first case, the image could not be of just some dog or other moving about – it would have to be an image of its own moving. And images of Master’s smiles or frowns would have to be images of Master’s frowning or smiling while looking at it, not just of some master looking at some dog or other.

I think we can get a sense of this minimal kind of self-awareness by imagining ourselves doing something. That is not like imagining watching some person or other doing the same thing, and not even like imagining watching someone who looks just like ourselves doing it.

I’m also willing to speculate that there is a feeling of anxiety or tension in the commanded dogs. Images of moving, and of not moving and Master’s good play may alternate. It seems possible there might even be a muscular oscillation in which there is almost a movement interleaved with inhibitions of movement.

The interest of these speculations is not so much whether they are true, but whether, if they were true, we would be willing to say that self-control “may simply be biology”. Part of me wants to say “Well, of course it’s biology! After all, whether the dogs move or not depends on whether their muscles contract or not; and that depends on the state of motor neuron activations, which depends on the state of neural activations in their brains. And that’s all biological activity.”

Maybe it’s the “simply” that seems not quite right. If a dog’s mind were as I’ve imagined it, it would have a psychological parallel to its biological goings on, i.e., a series of images and feelings of tension that represented the merits and demerits of moving.

There is also an interesting question about the idea of “self-control”. This is raised by my belief that most of us would be willing to say that the owners of the commanded dogs who stayed for 10 minutes had good control of their dogs. Are the dogs exerting self-control, or are their owners exerting control?

I’ll suggest this resolution: It’s both. The owners have control because they aim to have their dogs stay, and (because they’ve spent the necessary time on training) can get that to happen by issuing a command. The dogs have control if they aim to earn Master’s smiles (or avoid Master’s frowns) and their behavior actually concords with that aim. – Of course, to accept this resolution, one has to be prepared to allow that dogs can have aims of this kind. (Miller and colleagues do seem to accept this; in fact they go quite far in this direction: “The ability to coordinate rule-based memories and current behavior in a goal-directed way is pervasive across species” (p. 537).)

I was also intrigued by Herbert’s concluding paragraph:

“So perhaps humans are not unique . . . . It appears that the hallmark sense of human identity – our selfhood – is not a prerequisite for self-discipline. Whatever it is that makes us go to the gym and save for college is fueled by the same brain mechanisms that enable our hounds to sacrifice their own impulses and obey.”

If dogs can have images of themselves doing something, and these are different from images of other dogs doing the same kinds of thing, then some minimal sense of self-awareness may still be required for self-control. It is also important to follow Miller and colleagues in identifying the common “fuel” as glucose. That’s what restores the energy that seems to be depleted by the tension between what dogs (or people) would like to do now and their longer-term aims, or by effort spent on solving a difficult problem.

If we focus on the glucose as the fuel, I think we won’t feel humbled by the commonality that Miller and colleagues have found between us and dogs. For the commonality of the influence of glucose leaves it open that there are many “brain mechanisms” that we have and dogs do not have. There are, for example, brain mechanisms that produce our inner speech, which may contain statements of reasons for going to the gym, and these same mechanisms may also causally contribute to our actually going there. It would seem very difficult to represent such reasons by sequences of images, however complex or lengthy.

[Herbert’s article is “Dog Tired: What Mutts Can Teach Us about Self-Control”, available at http://www.scientificamerican.com/article.cfm?id=dog-tired&sc=CAT_MB_20101103 . The article being reported on is Miller, H. C., Pattison, K. F., DeWall, C. N., Rayburn-Reeves, R. and Zentall, T. R. (April, 2010) “Self-Control Without a ‘Self’? Common Self-Control Processes in Humans and Dogs”, Psychological Science 21(4):534-538.]


Top-Down Control?

October 20, 2010

In a recent paper, distinguished neuroscientist Chris D. Frith calls attention to a simple but arresting point: “there are no brain areas that have only outputs and no inputs”. Instead, every area that provides its output to other areas also receives inputs from other areas (as well as feedback from areas to which it sends output).

For example, if we act, the motor neurons that drive our muscles fire. Areas in which these motor neurons lie receive input from areas that lie a little farther forward in the brain. These more forward areas receive input from an area still farther forward (the prefrontal cortex), which receives input from many other areas, and so on. We never come to an outputter whose activity is not conditioned by inputs from elsewhere.

The context for this point will be clear from the article’s title: “Free Will and Top-Down Control in the Brain”. Frith contrasts top-down control with bottom-up control. The “bottom” is sensory inputs and in bottom-up control, action is driven by sensory inputs. Reflexes would be the clearest sort of case: if your knee is tapped in the right way, your lower leg will move as a direct result of the tap.

In contrast, Frith takes top-down control to occur when goals or plans are involved in actions. A key point is that goals or plans do not depend directly on what we are immediately sensing. Which food you purchase for dinner may well depend on the quality of what you see in the grocery store, but your goal to get some food did not depend on what you were seeing or hearing when you went to the store.

Your goal to get food did, however, depend on conditions somewhere in your brain, and if there is no area that is solely an outputter, the goals you have are the result of contributions from many brain areas. In Frith’s thinking, a true “top level” would be a brain area that affects other brain areas, but is not affected by other areas. The significance of the fact that all brain areas receive inputs from other areas is that there is no “top level” in this sense. There is no unaffected effector from which our goals emanate.
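Frith’s point can be restated in graph terms: think of brain areas as nodes and “sends output to” as directed edges; a true top level would then be a node with outgoing edges but no incoming ones. Here is a minimal sketch of that check – the areas and connections are invented for illustration, not real anatomy.

```python
# Toy connectivity graph: each "area" maps to the areas it sends output to.
# The areas and edges here are invented for illustration, not real anatomy.
sends_to = {
    "prefrontal": {"premotor"},
    "premotor":   {"motor", "prefrontal"},   # feedback to prefrontal
    "motor":      {"premotor", "thalamus"},  # feedback edges
    "sensory":    {"prefrontal", "premotor"},
    "thalamus":   {"sensory", "prefrontal"},
}

def unaffected_effectors(graph):
    """Nodes that send output to others but receive input from no one."""
    receives_input = {target for targets in graph.values() for target in targets}
    return [node for node, targets in graph.items()
            if targets and node not in receives_input]

print(unaffected_effectors(sends_to))   # [] -- no node is a pure "top level"
```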

Frith sees this fact as a problem for locating free will in an individual. His view seems to be that free will requires a top level that is only a top level, i.e., is not affected by anything else. “Nothing must control the controller.” Since there is no top level of this kind in the brain, we cannot find a physiological area in an individual person that provides free will.

One might draw the conclusion that there is no such thing as “free will” if this term is understood to require a top level that has no inputs. That would, of course, leave open the possibility of offering some other conception of “free will” that would not imply a requirement that we know is not satisfied.

Interestingly, that is not the conclusion that Frith draws. Instead, he considers experiments on free actions. Typical tasks in these experiments include moving one’s finger whenever one wishes to do so; or moving the right or left index finger, whichever one wishes, in response to a signal; or generating a series of “random” digits. Frith notes that results in such experiments depend on a social interaction between participants and experimenters – the latter give instructions, and the participants cooperate in agreeing to try to follow them. So, he says, “The top-down constraints that permit acts of will come from outside the individual brain. . . . If we are to understand the neural basis of free will, we must take into account the brain mechanisms that allow minds to interact”.

This does not seem to be a happy solution to the problem as Frith sets it up. That’s because social interactions plainly do not provide a “top level” in the sense of something that gives outputs but receives no inputs. Participants and experimenters are themselves subject to many social influences; there are no people who give outputs to others but receive no inputs from others.

It seems that a better conclusion to draw from Frith’s reflections is that there is no “free will” in the sense of an outputter that has no inputs. If there is such a thing as “free will” at all, there must be some other way of conceiving what it amounts to.

[The paper from which I’ve quoted is Chris D. Frith, “Free Will and Top-Down Control in the Brain”, in Murphy, N., Ellis, G. F. R., and O’Connor, T., Downward Causation and the Neurobiology of Free Will (Berlin: Springer-Verlag, 2009), pp. 199-209. Frith attributes the simple point with which this post begins to another distinguished neuroscientist, Semir Zeki.]


Does a Scientific View of Ourselves Undercut Science?

October 10, 2010

I’ve been reading “Human Freedom and ‘Emergence’ ” by the Stanford neurobiologist William T. Newsome. Newsome’s leading question is “What are we to make of human freedom when, from a scientific point of view, all forms of behavior are increasingly seen as the causal products of cellular interactions within the central nervous system . . . ?”

Newsome is particularly concerned with a point he quotes from a 1927 work of J. B. S. Haldane: “If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose my beliefs are true . . . and hence I have no reason for supposing my brain to be composed of atoms.” Newsome takes this point to suggest that a consistent scientist must make room for free will. But he recognizes that a consistent scientist must make such room without supposing there are exceptions to what science shows us about the laws that apply to the motions of atoms. These two demands seem to be in considerable tension.

Newsome seeks to resolve this tension by distinguishing between “constraint” and “determination”. His example is MS Word. Newsome says the operation of this program is constrained by the operations of transistors, resistors, capacitors, and power supplies of the computer that’s executing the program. This means that everything that happens while the program is running depends on how these items work. And the way they work is completely accounted for by their physical properties and the laws of nature that relate those properties. But Newsome also says that “the most incisive understanding of Microsoft Word lies at the higher level of organization of the software.” The behavior of computers “is determined at a higher level of organization – the software – not by the laws of physics or the principles of electronic circuitry.”

In my view, this is an interesting example, because it actually shows us how to undercut Haldane’s point and resolve Newsome’s worry without having to make the puzzling distinction between constraint and determination.

Why do I say the distinction is puzzling? Because “determination” is a term that suggests causation. (That is, for example, what Haldane means by “determined” in the sentence Newsome quotes.) But, first, that’s not what Newsome means by this term: “determination”, according to Newsome’s explanation, only means that there are higher level descriptions that can be used to express useful regularities. (Descriptions “at a higher level” do not refer to the small parts of what’s being described.) And, second, “determination” cannot add any causes to what the lower level provides. If it did add anything, there would be something that happened that was inconsistent with the constraints imposed by the laws that apply to the behavior of the small parts (the transistors, capacitors, and so on).

Newsome agrees that everything that happens during the execution of MS Word is consistent with the laws of operation of the small parts of the computer on which it is running. It is also clear that what happens can be described at a higher level at which useful regularities can be expressed. For example, a certain series of keystrokes always results in highlighting a portion of text. A following press of the ‘delete’ key removes the highlighted text; a following click on the paperclip does something else, and so on.
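The two levels can be seen side by side in a toy sketch of my own (a few lines of Python, not Newsome’s example and not how MS Word is actually built). At the low level there is nothing but operations on a list of characters and a pair of indices; at the higher level there is a reliable regularity: selecting a stretch of text and pressing delete removes exactly that stretch.

```python
# A toy "editor". Low level: a list of characters and two indices, changed by
# simple operations. (My illustration only; not how any real editor works.)
class TinyEditor:
    def __init__(self, text):
        self.chars = list(text)      # low-level state: individual characters
        self.selection = (0, 0)      # low-level state: a pair of indices

    def select(self, start, end):
        self.selection = (start, end)

    def press_delete(self):
        start, end = self.selection
        self.chars[start:end] = []   # remove the characters between the indices
        self.selection = (start, start)

    def text(self):
        return "".join(self.chars)

# Higher-level regularity: whatever the document says, select-then-delete
# removes the selected text. Every run of that regularity is nothing over and
# above the low-level operations on characters and indices.
doc = TinyEditor("the quick brown fox")
doc.select(0, 4)                     # highlight "the "
doc.press_delete()
print(doc.text())                    # "quick brown fox"
```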

The moral that these facts illustrate is this: A thing whose small parts operate according to ordinary physical laws can have regularities describable at a higher level, provided its small parts are organized in the proper way. There is thus no conflict between holding (1) that everything in a brain happens as a result of its small physical parts (e.g., neurons, synapses, glial cells, neurotransmitters) operating according to physical laws and (2) that there is a higher level description of what brains do that explains how we normally perceive accurately and often reason correctly. Of course, atoms and molecules that are *not* organized in a very special way do not lead to accurate perceptions and reasonable conclusions. But that does not show that properly organized systems of atoms and molecules cannot conform their outputs to evidence and logic.

Because (1) and (2) are consistent, Haldane was wrong to think that we could not have good reasons for our beliefs (including those about atoms and brains) if our beliefs are caused by the motions of atoms in our brains. (The quotation does not say that many of these motions are ultimately caused by inputs to our senses, but Haldane surely would not have denied that.)

“Free will” is used by many thinkers in many senses – that’s why I avoid using that term, except when I write about the views of others who do use it. One of its senses requires that there be departure from causation. In *that* sense of the term, it should be evident that “free will” is something we do *not* want when we are doing science. What we want is that our beliefs about what is in the world should be caused – namely, caused by the things that we believe are there. If our beliefs about the world were cut loose from being caused by what is in the world, we could only expect to have erroneous beliefs about the world.

[Newsome’s essay appears in Nancey Murphy, George F. R. Ellis, and Timothy O’Connor, eds., Downward Causation and the Neurobiology of Free Will (Berlin: Springer-Verlag, 2009), pp. 53-62.]


Do We Control Our Daydreams?

September 20, 2010

I’ve been reading Paul Bloom’s How Pleasure Works (New York & London: W. W. Norton, 2010), and ran across the arresting statement that “in a daydream you have perfect control” (p. 200).

Readers of Your Brain and You will recognize that this claim goes against the grain of chapter 5. But quite independently of the reasons given there, daydreaming seems a poor candidate for something we control. After all, daydreaming is supposed to be relaxing and pleasant. When we’re daydreaming we are not trying to do much of anything – we are taking a break from our effortful projects. In daydreaming, as Bloom also says (p. 198), our “minds are wandering”. But wandering – as we might do, for example, in a park – is exactly not trying to get anywhere in particular. If we are not aiming at some particular result, or series of actions, or series of mental images, it seems odd to think of ourselves as in control of what images come to mind. 

The puzzle I want to address today, however, is that what Bloom says seems initially plausible, even to me. Why should that be? Why should it seem natural to say we are in control of our daydreams when, on reflection, that does not seem to be so?

Part of the explanation is that Bloom describes daydreaming as involving “the creation of imaginary worlds” and portrays us as designers, casting directors, and screenwriters of these worlds and of the “imaginary beings [that we create] to populate” them (p. 198). But these descriptions actually apply to creators of fiction, i.e., writers of stories, novels, and plays. Such writers are not daydreaming: they are trying to do something, namely, to write a story that will be dramatic, convey a moral, tell us something about ourselves, and so on. They may have many ideas cross their minds, and they exercise control when they reject most of them as not contributing to the drama or atmosphere at which they are aiming. But trying to write a good story is not what we’re doing when we are daydreaming, or letting our minds wander.

A deeper clue comes from a contrast that Bloom draws between normal daydreaming and some cases of schizophrenia, “in which this other-self creation is involuntary and the victim of the disease believes that these selves are actual external agents such as demons, or aliens, or the CIA” (p. 198).

This contrast seems real. But “involuntary” does not seem to be the best description of it, since what occurs to us in normal daydreaming does not seem rightly described as “voluntary”. “Voluntary” actions are actions you consider beforehand and decide to do. But we do not set about trying to bring certain images to mind when we daydream – we relax and they come to us unbidden.

A better description of the contrast is that when we daydream, we have a palpable sense that the images we entertain are ours. This is not control, but something more like ownership. We do not control what comes into our minds, but we do have a sense that we are actively picturing to ourselves, and this is not just like passively perceiving something in the world outside our bodies.

This kind of active involvement seems similar to what happens in our inner speech. When we talk to ourselves, we have auditory imagery that is like hearing what we say to ourselves. But we also have a palpable sense that we are saying something to ourselves, and not merely hearing something being said. To lose this sense would be to “hear voices” – which would be disturbing, and a sign of illness.

The resolution of my puzzlement, then, is this. In daydreaming, we have a sense of active involvement in picturing to ourselves. We feel that our images are our images, something we produce. In many other cases, when we produce something, we have control over the character of what gets produced. So, our active involvement in picturing to ourselves is easy to confuse with control. But “producing” our images in this sense is not the same thing as controlling which images are popping up in our mind’s eye. Projectionists at your local theater are actively involved in producing the images on the screen, but they do not control the character of those images.


Interpreting Brain Scans

September 8, 2010

In Your Brain and You, I have a brief section on the need for caution in interpreting the “lighted up” regions commonly featured in reports of brain scan studies. I’ve been reading an article that’s very informative on this matter, and since it is somewhat densely written and available only in a subscription journal, it seems worthwhile to summarize one or two of its points here.

The article is by Carrie Figdor at the University of Iowa. The title is “Neuroscience and Multiple Realization of Cognitive Functions” and it appears in Philosophy of Science, vol. 77, July 2010, pp. 419-456. The points I’m going to draw from this article are not its central thesis; they come up in the course of defending the main thesis against an objection.

An important piece of background is that many studies use “subtractive logic”. Briefly, scans are taken during the performance of several tasks that are highly similar, except that one requires a cognitive operation (the focus of interest) that is not required by the others. Activation in the other tasks is then subtracted from the task that requires the operation of focal interest. Regions whose activation levels remain significantly different after the subtraction are taken to be specially associated with the focal cognitive task.

Figdor calls attention to the fact that, since the differences of activation are small, results from several participants need to be collected, and then these results are averaged. This method has the consequence that the activation that shows up after averaging may represent regions that are smaller than those that are required to perform the focal task. That is because performance of the focal task may rely on many regions, not all of which are the same in different participants. In that case, subtractive logic plus averaging would show only regions that are common to several participants, and these may be fewer than are required for the task in any single participant.
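A rough numerical sketch may make the worry vivid. The numbers below are invented – they are not Figdor’s data or any real study’s: each participant performs the task using one region everyone shares plus one further region that differs from person to person, and after averaging only the shared region survives a threshold, even though no participant could have done the task with that region alone.

```python
# Invented task-minus-control activations for three participants (toy numbers,
# not data from Figdor's article or any real fMRI study).
regions = ["A", "B", "C", "D"]
participants = [
    {"A": 1.0, "B": 1.0, "C": 0.0, "D": 0.0},   # relies on A and B
    {"A": 1.0, "B": 0.0, "C": 1.0, "D": 0.0},   # relies on A and C
    {"A": 1.0, "B": 0.0, "C": 0.0, "D": 1.0},   # relies on A and D
]

averaged = {r: sum(p[r] for p in participants) / len(participants) for r in regions}
print(averaged)    # A: 1.0; B, C, D: about 0.33 each

threshold = 0.5    # level needed to "light up" in the group result
group_map = [r for r, value in averaged.items() if value > threshold]
print(group_map)   # ['A'] -- smaller than what any single participant needed
```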

An analogy may help here. I’ll venture one – it’s my own, not Figdor’s, so if it has some unintended misleading feature, that’s my fault, not hers. Consider a record of office workers assigned the same task (call it T1), which they all successfully complete. But they have different methods. One is more inclined to talk with colleagues, another does more internet searching, one does a lot of calculation, another uses heuristics to estimate values. These activities may wash out in averaging, even though each participant could not have completed T1 without doing his or her particular subset of them.

Now, perhaps they all send a request to archives for a file that contains a key fact, and they can do many other tasks that are similar in many ways to T1 without consulting with the archive department. Then the request to archives will show up distinctively in an averaged record of work on T1. But that request would not be “how the task is done” – it would be only a common part of several ways of doing it.

Discovering the location of an operation that is required for a task would still be interesting, even if it were not sufficient for the task of focal interest. But subtractive logic plus averaging may not even reveal a region whose activation is required. In terms of the analogy, perhaps one worker is able to infer what’s in the archive file from other documents, and never puts in a request for it. This would lower the average value for the request somewhat, but if it were only one worker who did this, the archive request would still “light up” as distinctive of the task, even though it is not required to perform it. Figdor’s article shows an instructive pair of images that were generated from an fMRI study. One is the averaged result, for a certain layer, of the 12 participants, and the other is the image from just one participant who contributed to this average. The difference between these images is dramatic. 

Figdor describes research that uses more sophisticated variants of the subtractive logic paradigm, and that holds promise of avoiding the difficulties to which I am calling attention. The lesson to be drawn is emphatically not that results from brain scanning have nothing to teach us. It is, instead, that more caution is needed in interpreting these results than is often found in journalistic reports of this work.


Prospectus

August 25, 2010

Your Brain and You — the book — is now available on Amazon.  Blog items here will be on related topics.

Academic papers in neuroscience, psychology and even philosophy are usually written in language that is different from the common sense terms in which we ask questions that matter to us. So, there is always some work needed to figure out how the results of a study relate to the questions that are on our minds. That’s the kind of work I’ll be trying to do here.

Different thinkers often have different views about what a piece of research means for us — how it relates to questions that naturally arise as we think about ourselves and our brains. I’ll give reasons for my take on work I discuss, and sometimes reasons against other interpretations of the significance of research for understanding ourselves.

