Zombie Neuroscience

October 14, 2014

In an opinion piece in the New York Times Sunday Review (October 12, 2014, p. 12), Michael Graziano asks “Are We Really Conscious?” His answer is that we are probably not conscious. If his theory is right, our belief in awareness is merely a “distorted account” of attention, which is a reality that consists of “the enhancing of some [neural] signals at the expense of others”.

This distorted account develops in all of us, and seems to us to be almost impossible to deny. But beliefs that the Earth is the center of the universe, that our cognitive capacities required a special creation, and that white light is light that is purified of all colors, have seemed quite natural and compelling, yet have turned out to be wrong. We should be skeptical of our intuitive belief that we are conscious.

In short, Graziano is saying that your impression that you are conscious is likely a false belief. “When we introspect and seem to find that ghostly thing – awareness, consciousness, the way green looks or pain feels – our cognitive machinery is accessing internal models and those models are providing information that is wrong.”

One might well wonder what is supposed to be “ghostly” about the experience of green you think you have when you look at, say, an unripe banana, or a pain that you might believe occurs when you miss a nail and hit your thumb with a hammer. But, of course, if you are already convinced that there are no such things, then you must think that their apparent presence is merely the holding of false beliefs. If you then try to say what these false beliefs are beliefs about, you will be hard pressed to produce anything but ghosts. There are, of course, neural events that are necessary for causing these allegedly false beliefs about the way bananas look or pains – but these beliefs are not beliefs about those neural events, nor are they beliefs about any neural events at all. (People believed that unripe bananas looked green and that they had pains long before anyone had any belief whatsoever about neural events.)

Graziano’s positive story about awareness is that it is a caricature: “a cartoonish reconstruction of attention” (where, recall, attention is enhancement of some signals at the expense of others). This description raises a puzzle as to what the difference is between a cartoonish reconstruction of a signal enhancement that is caused by light reflected from an unripe banana, and a cartoonish reconstruction of a signal enhancement caused by a blow to your thumb. But perhaps this puzzle can be resolved in this way: The banana causes enhancement of one set of signals, the blow to the thumb causes enhancement of a different set of signals, and which false belief you acquire depends on which set of signals is enhanced.

A problem remains, however. Your beliefs that you are experiencing green or that you are in pain are certainly not beliefs about your signal enhancements. They are not beliefs about wavelengths or neural activations caused by blows to your thumb (though, of course, wavelengths and neural activations are among the causes of your having beliefs about what colors or pains you are experiencing). There is nothing else relevant here that Graziano recognizes as real. Your false beliefs are beliefs about nothing that is real.

We can, of course, have beliefs about things that are not real – for example, unicorns, conspiracies that never took place, profits that will in fact never materialize. In all such cases, however, we can build the non-existent targets of the beliefs by imaginatively combining things that do exist. For example, we have seen horses and animals with horns, so we can build a horse with a horn in our thoughts by imaginative combination.

But green and pain are not like unicorns. They have no parts that are not themselves colors or feelings. There are no Xs and Ys that are not themselves colors or feelings, such that we can build green or pain in our imagination by putting together Xs and Ys. So, if we were to accept Graziano’s dismissal of color experiences and pains as unreal, we would have to allow that we can have beliefs about things that neither exist, nor can be imaginatively constructed. We have, however, no account of how there could be such a belief. The words “way green things look” and “pain” could not so much as mean anything, if we suppose that there are no actual examples to which these words apply, and no way of giving them meaning by imaginative construction.

Graziano invokes impressive authorities – Copernicus, Darwin, and Newton – in support of skepticism about intuitions that once seemed incontestable. (See list in the second paragraph above.) He presents his theory as coming from his “lab at Princeton”.

The view he proposes, however, is not a result supported by scientific investigation. It is supported by the other authorities to which he appeals – Patricia Churchland and Daniel Dennett. These writers are philosophers who offer to solve the notoriously difficult mind-body (or, consciousness-brain) problem by the simple expedient of cutting off consciousness. Voilà. No more problem.

But it is not good philosophy to affirm a view that commits one to there being beliefs of a kind for which one can give no account.

It is important to understand that resisting the dismissal of consciousness is fully compatible with affirming that there are indeed neural causes for our behavior. Hammer blows to the thumb cause neural signals, which cause reflexive withdrawals. Somewhat later, the interaction of these signals with neurons in our brains causes behaviors such as swearing and taking painkillers. But hammer blows to the thumb also cause pains. It is indeed difficult to understand how or why pains should result from neural events that such blows cause. But it is not a solution to this problem to dismiss the pains as unrealities. Nor is it true that science teaches us that we ought to deny consciousness.


The Cambridge Declaration on Consciousness

August 24, 2012

On July 7, 2012, a “prominent international group” of brain scientists issued The Cambridge Declaration on Consciousness. The full document has four paragraphs of justification, leading to the declaration itself, which follows.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess the neurological substrates.”

Back in the ’90s I published a paper under the title “Some Nonhuman Animals Can Have Pains in a Morally Relevant Sense”. (In case you’re wondering, that view had been denied by Peter Carruthers in a paper in a top-tier journal.) So, not surprisingly, I am quite sympathetic to the sense of this declaration.

I also approve of the declaration’s prioritizing of neurological similarities over behavior. The philosophy textbook presentation of the supposedly best reason for thinking that other people have minds goes like this:

1. When I behave in certain ways, I have accompanying thoughts and feelings.

2. Other people behave in ways similar to mine. Therefore, very probably,

3. Other people have accompanying thoughts and feelings that are similar to mine.

This argument is often criticized as very weak. Part of my paper’s argument was that we have a much better reason for thinking our fellows have minds, namely:

1. Stimulation of my sense organs (e.g., being stuck with a pin) causes me to have sensations (e.g., a pain).

2. Other people are constructed very much like me. Therefore, very probably,

3. Stimulation of other people in similar ways causes them to have sensations similar to mine.

If one approaches the matter in this second way, it is natural to extend the argument to nonhuman animals to the extent that they are found to be constructed like us. This is the main line of approach in the Cambridge Declaration (although some of the lead-up paragraphs also sound like the first argument).

In sum, I am inclined to accept the sense of the Cambridge Declaration, and to agree that the evidence and reasoning presented make its stand a reasonable one.

But still, there is something peculiar about this Declaration, even aside from its being unusual for academic conferences to issue position statements. The question is, Why? Just what is odd about it?

One of the Declaration’s authors, Christof Koch, recently gave an interview on the radio. (The link is to the written transcript.) In it, he characterizes fMRI scans as a “very blunt” instrument. The point is that the smallest region that can be resolved by an fMRI scan contains about half a million neurons, some of which may be firing quite actively while others are hardly firing at all. So, our scanning techniques do not tell us what neural firing patterns occur, but only where there are some highly active neurons.

Ignorance of neural patterns is relevant here. Another point that Koch makes in the interview is that a large share of our neurons – about three quarters of all we have – are in the cerebellum. Damage to this part of the brain disrupts smooth and finely tuned movements, such as are required for dancing, rock climbing and speech, but has little or no effect on consciousness.

So, it is not just many neurons’ being active, or there being a complex system of neural activations of some kind or other that brings about consciousness. It is some particular kind of complexity, some particular kind of pattern in neural activations.

I am optimistic. I think that some day we will figure out just what kind of patterned activity in neurons causes consciousness. But it is clear that we do not now know what kind of neural activity is required.

The peculiarity of the Cambridge Declaration, then, is that it seems to be getting ahead of our actual evidence, yet it was signed by members of a group who must be in the best position to be acutely aware of that fact. Of course, ‘not appear[ing] to preclude’ consciousness in nonhuman animals is a very weak and guarded formulation. The remainder of the declaration, however, is more positively committal.

The best kind of argument for consciousness in nonhuman animals would go like this:

1. Neural activity patterns of type X cause consciousness in us.

2. Certain nonhuman animals have neural activity patterns of type X. Therefore, very likely,

3. Those nonhuman animals have consciousness.

Since we do not now know how to fill in the “X”, we cannot now give this best kind of argument. The signers of the Declaration must know this.

[The radio interviewer is Steve Paulson, and the date is August 22, 2012. The paper of mine referred to above is in Biology and Philosophy (1997), v.12:51-71. Peter Carruthers’ paper is “Brute Experience”, The Journal of Philosophy (1989), v.86:258-269.]


Responsibility and Brains

August 2, 2012

In a thoughtful recent article, John Monterosso and Barry Schwartz rightly point out that whatever action we may take, it is always true that our brains “made us do it”. They are concerned that recognition of this fact may undermine our practice of holding people responsible for what they do, and say that “It’s important that we don’t succumb to the allure of neuroscientific explanations and let everyone off the hook”.

The concern arises from some of their experiments, in which participants read descriptions of harmful actions and were provided with background on those who had done them. The background was given either (a) in psychological terms (for example, a history of childhood abuse) or (b) in terms of brain anomalies (for example, imbalance of neurotransmitters).

The striking result was that participants’ views about responsibility for the harmful actions depended on which kind of terms were used in giving them the background. Those who got the background in psychological terms were likely to regard the actions as intentional and as reflecting the character of the persons who had done them. Those who got the background in terms of brain facts were more likely to view the actions as “automatic” and only weakly related to the performers’ true character.

The authors of the article describe this difference as “naive dualism” – the belief that actions are brought about either by intentions or by physical laws that govern our brains, and that responsibility accrues to the first but not the second. Naive dualism unfortunately ignores the fact that intentions must be realized in neural form – must be brain events – if they are to result in the muscle contractions that produce our actions.

They recommend a better question for thinking about responsibility and the background conditions of our actions. Whether the background condition is given as a brain fact, or a psychological property, or a certain kind of history, we should ask about the strength of the correlation between that condition and the performance of harmful actions. If most of those who have background condition X act harmfully, their level of responsibility may be low, regardless of which kind of background fact X is. If most of those with background condition X do not act badly, then having X should not be seen as diminishing responsibility – again, regardless of which kind of background condition X may be.

This way of judging responsibility is compatible with recognizing that we all have an interest in the preservation of safety. Even if bad behavior can hardly be avoided by some people, and their responsibility is low, it may be perfectly legitimate to take steps to prevent them from inflicting damage on others.

I’ll add here a speculation on a mechanism by which citing brain facts may lead us to assign people less responsibility than we should. Many reports of brain facts emphasize the role of some part of the brain. But we do not think of people as their parts: Jones is not his insula or his amygdala, Smith is not her frontal lobes or her neurotransmitters. So, some reports of background conditions in terms of brain facts may lead us to think of actions as the result of people’s parts, and thus not as the actions of the whole person.

A corrective to this kind of mistake is to bear in mind that our encouragements of good behavior and our threats of punishment are addressed to whole persons. Whole persons generally do know what is expected of them, and in most cases knowledge of these expectations offsets deficiencies that may occur in some of our parts. Our brains are organized systems, and the effects of their parts can become visible only when the whole system is mobilized toward carrying out an action.

[The article referred to is John Monterosso and Barry Schwartz, “Did Your Brain Make You Do It?”, The New York Times, Sunday, July 29, 2012, Sunday Review section, p. 12.]


Science and Free Will

May 9, 2012

At last month’s “Toward a Science of Consciousness” conference in Tucson, Pomona College researcher Eve Isham reported on several studies under the title “Saving Free Will From Science”. These studies cast doubt on conclusions that are often drawn from a series of famous studies carried out by Benjamin Libet.

Participants in Libet’s experiments wore a device on their head that measures electrical activity at points under the scalp. They were instructed to make a small movement (e.g., a flick of the wrist) whenever they felt like doing so. They watched a clock-like device, in which a dot rotates around the edge of a circle, completing a revolution about every two and a half seconds. There are marks and numbers around the edge of the circle. Their task was not only to make a movement when they felt like it; they were also to take note of where the dot was when they (a) formed the intention to make their movement, and (b) actually moved. They were asked to report these two times right after they made their movement.

Let’s call these times Int and Mov. What Libet found was that participants’ electrical activity showed a telltale rise a short time (roughly, a half second) before the reported time Int. This telltale marker is called a “readiness potential”, usually abbreviated as RP. (There are complicated corrections that must be made for the time it takes for signals to travel along neurons; the interval between RP and Int is what remains after these corrections have been taken into account.)

The key points are that RP comes before Int, and that Int is the moment when the intention to move first becomes conscious. The conclusion that many have drawn is that the process that is going to result in a movement starts unconsciously before a person becomes aware of an intention to move. So, the intention comes too late to be the real cause of the movement. But if people’s intentions to move are not the real causes of their movements, then they don’t have free will.

(Libet tried to avoid this conclusion by holding that there was enough time for a person to insert a “veto” and stop the movement. This has earned him a quip: he undercuts free will, but allows for free won’t. Few have found this view attractive.)

Critics of Libet’s work have raised many questions about the experimental design, and I have long regarded these experiments as resting on assumptions that seem difficult to pin down with sufficient accuracy. Isham’s presentation significantly deepened these doubts.

Although I have read many papers on Libet’s work, I had never seen the clock in motion. Isham showed a video, and as I watched, I imagined myself as a participant in Libet’s experiments. I flicked my wrist a few times, and thought of what I would have reported as the times of my intention to move, and my actual movement.

I found that it was extremely difficult to try to estimate two times. In fact, I found it hopeless, and soon gave up, settling for focusing on trying to get an accurate estimate of the time of my intention to move.

But even this simpler task was difficult, and I had no sense of confidence that I could locate the time of my intention more accurately than about an eighth of a revolution (that’s the distance of about eight minutes around the circumference of a regular clock face). When trying to do the task, the dot seemed to be moving very fast.
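The arithmetic behind this worry is easy to make explicit. Here is a minimal Python sketch (entirely my own construction, not from Isham's talk) converting an angular reading error into a timing error, for clocks of various speeds. The one-eighth-of-a-revolution figure is my own self-estimate from above; the clock periods are illustrative assumptions:

```python
# Convert an angular reading error on a Libet-style clock into a timing error.
# Assumption: the viewer can localize the dot only to within 1/8 of a revolution.

def timing_error_ms(period_s, angular_error_rev=1/8):
    """Timing uncertainty (ms) implied by an angular reading error,
    for a clock whose dot completes one revolution in period_s seconds."""
    return angular_error_rev * period_s * 1000

# Illustrative clock periods in seconds per revolution.
for period in (2.56, 5.0, 10.0):
    err = timing_error_ms(period)
    print(f"period {period:>5.2f} s -> reading error ~{err:.0f} ms")
```

With a roughly 2.5-second revolution, a one-eighth-revolution reading error comes to about 320 ms, a large fraction of the roughly half-second interval between RP and Int. Of course, a slower clock may let the viewer localize the dot to a finer angular fraction; the point is only that any fixed angular resolution sets a timing floor that scales with the clock period.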

It’s natural to wonder whether accuracy might be improved by using a clock where the dot was not traveling so fast. That’s one of the variations on Libet’s setup that Isham tried. And here is the decisive result: When she did that, the estimates of the time of intention were earlier. RP was still a little bit earlier than the reported Int time, but the interval was very significantly reduced.

Isham reported several other variations on Libet’s design, all of which led to the same general result: the estimates of Int depend on clock speed and several other factors that shouldn’t make a difference, but do. These results offer strong support for her conclusion that the time of intention is not consciously accessible to us, and that we cannot use Libet-style experiments to undercut the view that our intentions cause our actions.

Readers of Your Brain and You, or some other posts on this blog, may recall that I am sympathetic to the conclusions that are often drawn from Libet’s work. So, I don’t think Isham has saved free will from science. But her work gives us more reason than we have previously had for not basing anti-free-will conclusions on Libet-style investigations.

[The abstract of Isham’s conference paper is available here.]


What’s New in Consciousness?

April 18, 2012

I am just back from the four-day “Toward a Science of Consciousness” conference in Tucson. I heard 32 papers on a wide variety of topics and I’m trying to tell myself what I’ve learned about consciousness. Today, I’ll focus on the most fundamental of the difficulties in this area, known to aficionados as “the Hard Problem of consciousness”. The following four statements explain the problem in a way that leaves open many different kinds of response to it.

1. Data concerning our sensations and feelings correlate with data concerning the firings of neurons in our brains.

There is overwhelming evidence for this statement. Strokes in the back of the head, or severe injuries there, cause loss of vision. The extent of the area in which one can no longer see is well correlated with the extent of the damage. Pin pricks normally hurt; but if injury or drugs prevent the neural signals they cause from reaching the brain, you won’t feel a thing. Direct electrical stimulation of some brain locations produces experiences. And so on.

2. What neurons do is to fire or not fire.

Neurons can fire faster or slower. There can be bursts of rapid firing, separated by intervals of relative quiescence, and there can be patterns of such bursts. With 100 billion or so neurons, that allows an enormous range of possible combinations of neural activities. We know of no other aspects of neural activity that are relevant to what we are experiencing.

3. We experience a large number of qualities – the colors, the tastes, the smells, the sounds, feelings like warmth, itchiness, nausea, and pain; and feelings such as jealousy, anger, joy, and remorse.

4. The qualities noted in 3. seem quite different from the neural firing patterns in 2, and from any complex physical property.

The Hard Problem is this: What is the relation between our experiences and their qualities, and our neural firing patterns? How can we explain why 1. is true?

There are two fundamental responses to the Hard Problem, and many ways of developing each of them. They are:

Physicalism. Everything is physical. Experiences are the same things as neural events of some particular kind. (It is not claimed that we know what particular kind of neural event is the same as any particular kind of experience.) The explanation of 1. is that the data concerning experiences and neural events are correlated, since experiences and neural events are just the same thing. It’s like the explanation of why accurate data about Samuel Clemens’ whereabouts would be perfectly correlated with accurate data about Mark Twain’s whereabouts.

Dualism. Experiences are not identical to neural events. 1. is true either because neural events cause experiences, or because some events have both neural properties and other properties that cannot be discovered by the methods of science.

Today, the dominant view (about two to one, by my unscientific estimate) is physicalism. The reason is suggested by my descriptions. Dualists evidently have to say how neural events can cause experiences, or explain the relation between the properties known to science and the properties not known to science. Physicalists have no such task: if there is just one thing under two names, there is no “relation” or “connection” to be explained.

But physicalism has another task, namely, to explain how 4. can be true. According to physicalism, blueness = some complex physical property X, the feeling of nausea = some complex physical property Y, and so on. How could these pairs of items even seem to be different, if they were really just the same?

Of course, it will not do to say that blue is the way a physical property X appears to us, the feeling of nausea is the way a physical property Y appears to us, and so on. That would just introduce ways things appear. But then we would just have to ask how physical properties are related to their ways of appearing. They do not appear as what (according to physicalism) they really are; i.e., they do not appear as complex physical properties. So how, according to physicalism, could it happen that X can appear as blue, but not as complex physical property X, if blue = X?

The new developments reported at the conference that were of most interest to me were attempts by physicalists to deal with this last question. Here are brief summaries of two key ideas.

A. To be feeling something is nothing other than to record and adjust action in the following ways. (i) Recognize dependence of changes of input to one’s sensory devices upon movement of one’s own body. (ii) Recognize changes of input to one’s sensory devices from sources that do not depend on one’s own body (and distinguish these from the changes in (i)). (iii) Process some parts of input more intensively than others. (When we do this, it is called attending to some aspects of our situation more than others.)

We understand how these features could be instantiated in a robot; so we understand how we could make a robot – a purely physical thing – that feels.
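As a very rough illustration of how features (i)–(iii) could be instantiated in a machine, here is a toy Python agent of my own construction (not from the conference talk): it predicts the sensory consequences of its own movements via an "efference copy", treats input it cannot so predict as externally caused, and weights the unexplained part more heavily:

```python
# Toy agent illustrating (i)-(iii): distinguishing self-caused from external
# input, and weighting the external part. All numbers are illustrative only.

class ToyAgent:
    def __init__(self):
        self.position = 0

    def move(self, step):
        """(i) A movement, plus an 'efference copy' predicting its sensory effect."""
        self.position += step
        return {"predicted_shift": step}  # what the move should do to input

    def sense(self, observed_shift, prediction):
        """(ii) Input not explained by one's own movement counts as external."""
        external = observed_shift - prediction["predicted_shift"]
        # (iii) Process the unexplained (external) part more intensively.
        weights = {"self_caused": 0.2, "external": 0.8}
        salience = abs(external) * weights["external"]
        return {"external_component": external, "salience": salience}

agent = ToyAgent()
pred = agent.move(3)           # agent shifts its own "sensor" by 3 units
report = agent.sense(5, pred)  # but the input shifted by 5 units
print(report)                  # 2 units unexplained -> attributed to the world
```

Whether a machine running this kind of bookkeeping would thereby feel anything is, of course, exactly what is at issue between idea A and its critics.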

B. What is in the world is all physical. Experienced qualities like blue and the feeling of nausea are not in the world – they are its “form”, not its “content”. So, there is no question of “relating” experienced qualities to what is in the world – in fact, it is misleading to speak of “experienced qualities” at all, since that phrase suggests (falsely, on this view) they are something that is in the world.

It’s time for disclosure: I am a dualist. Not surprisingly, I didn’t find either of these efforts to offer a good solution to what I see as the key problem for physicalism. I’ve done my best to represent A. and B. fairly, but you should, of course, remember that what you’re getting here is what a dualist has been able to make of physicalists’ efforts.


How Does Your Food Taste?

March 26, 2012

One question I’ve thought about, without much success, is what exactly is different about our neural firings when we have different sensations. Imagine, for example, looking at a ripe, red tomato on a white tablecloth. Now imagine everything the same, except that you’ve replaced the tomato with an unripe green one of the same size. It’s a sure bet that something different is happening in the back of your head in these two cases. We know some parts of the story about this kind of difference, but it would be nice to have a full and detailed accounting.

I thought I might get some enlightenment about this question by looking up what’s known about tastes. I suspected that this might be a somewhat simpler case, because I knew that despite the complexity of the lovely tastes that emerge from my wife’s exquisite cooking, they are all combinations of just five basic tastes – sweet, salty, sour, bitter, and umami.

Or, then again, maybe not.

The article that causes my doubts was published by David V. Smith and colleagues in 2000, but I was unaware of it until recently. The Wikipedia entry for “Taste” is organized around the above five “basic” tastes, so perhaps others have also missed this article.

The key problem about taste arises from the fact that taste buds are generally sensitive to many different chemicals. They differ in the degree to which different substances will raise their activation, but they will show some amount of increased activation for many different components of the foods we eat.

This fact supports the view that the cause of a taste is a set of relative degrees of activation across many taste buds. Moreover, we cannot think of each taste bud as devoted to one of the five tastes, and the pattern of relative degrees of activation as a pattern of five kinds of response. Instead, there is a pattern of greater and lesser activations of cells, each of which responds in one degree or another to many kinds of inputs.

A natural question is how the five-basic-taste theory has seemed so good for so long. According to Smith and colleagues, the answer lies in a certain methodological practice. This method involves identifying different kinds of taste buds by their “best response”. That means, for example, that a cell will be classified as a “sweet” cell if its activation is raised more by glucose than by other inputs such as salt or quinine.

Such classifications tend to obscure the point that a so-called “sweet” cell will also be activated by things that aren’t sweet. They may be highly activated by other substances, just not as much so as by glucose.

An even more important point is that the results of this method depend on one’s choice of substances used in comparing relative activations. Smith and colleagues identify another set of chemicals that are quite different from the usual ones, but that, using the same methodology, would give a different set of “basic” tastes. This result casts doubt on the utility of the “best response” methodology for identifying basic tastes.
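The dependence of “best response” labels on the chosen stimulus panel is easy to illustrate with a few lines of Python. The response magnitudes below are invented for the example, not data from Smith and colleagues:

```python
# Illustration: "best response" cell labels depend on which stimuli you test.
# All response magnitudes are invented for the example.

responses = {
    # cell: firing increase for each test substance (arbitrary units)
    "cell_A": {"glucose": 9, "NaCl": 7, "quinine": 2, "citric_acid": 3},
    "cell_B": {"glucose": 6, "NaCl": 8, "quinine": 5, "citric_acid": 4},
}

def best_response_label(cell, panel):
    """Classify a cell by whichever panel substance excites it most."""
    return max(panel, key=lambda s: responses[cell][s])

panel_1 = ["glucose", "NaCl", "quinine", "citric_acid"]
panel_2 = ["NaCl", "quinine", "citric_acid"]  # a different test set

print(best_response_label("cell_A", panel_1))  # "glucose" -> a "sweet" cell
print(best_response_label("cell_A", panel_2))  # "NaCl"    -> now a "salt" cell
```

Notice also that the “sweet” cell_A responds almost as strongly to salt (7) as to glucose (9): the winner-take-all label throws away most of the response pattern.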

More generally, it suggests, at least to me, that the story about the neural activations that underlie the differences among our sensations will turn out to be very complex, and will involve patterns of activity across a large number of neural cells.

Bon appetit!

[ David V. Smith, Steven J. St. John, John D. Boughter, Jr., (2000) “Neuronal cell types and taste quality coding”, Physiology and Behavior 69:77-85. ]


Be Creative! Have a Drink!

March 5, 2012

In a recently published experiment, researchers administered a task to a group of intoxicated participants and a group of sober controls. The task is thought to involve creativity, and the intoxicated participants did better than the controls.

The task was the Remote Associates Test (RAT). Each item in the test consists of three words, e.g., Peach, Arm, and Tar, and the task is to find a fourth word that will make a good two-word phrase with each of the given words. Perhaps the first word immediately suggests something – in this example, perhaps Tree – but that doesn’t make sense with the other two, and one had better look elsewhere. If one finds it difficult to let go of this first thought, it will take longer to find a good answer than if one can be open to further gifts from one’s brain, and let other, less closely (more remotely) associated words appear in one’s consciousness. Sooner, later, or perhaps never, the good candidate, Pit, may arrive. Participants responded as soon as they thought they had a good answer, which they then provided. If they had not responded by the end of 1 minute, they were asked to guess, and were then given the next item.
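The search the task requires can be caricatured in a few lines of Python. The tiny compound-phrase list below is hand-made for the example (it is not the actual RAT scoring material); a toy solver just looks for a candidate that pairs with all three cue words:

```python
# Toy Remote Associates solver. The phrase list is hand-made and tiny; the
# real test relies on participants' own associative knowledge, not a lookup.

KNOWN_PHRASES = {
    ("peach", "tree"), ("peach", "pit"), ("arm", "pit"), ("arm", "chair"),
    ("tar", "pit"), ("tar", "paper"), ("fire", "pit"),
}

def solve_rat(cues, candidates):
    """Return the candidates that form a known phrase with every cue word."""
    return [c for c in candidates
            if all((cue, c) in KNOWN_PHRASES or (c, cue) in KNOWN_PHRASES
                   for cue in cues)]

print(solve_rat(["peach", "arm", "tar"], ["tree", "chair", "pit", "paper"]))
# only "pit" pairs with all three cues
```

The brute-force search makes the structure of the task plain: "tree" fits one cue and must be abandoned, and only a more remote associate satisfies all three, which is just the letting-go the researchers' explanation turns on.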

The intoxicated participants got that way through consuming a vodka and cranberry juice drink, which resulted in an average blood alcohol content of .075. (The range was from .059 to .091. For comparison, the legal limit for driving in the US is .08.)

On average, the intoxicated group solved more problems than the sober controls. They also reached their solutions faster. Further, for each correct solution, participants were asked to rate the degree to which they had reached their solution by strategically searching, versus by “insight” (an Aha! moment). The intoxicated participants were more likely than the sober controls to attribute their successes to insight.

The researchers offer an explanation of these results. In brief, the account is this. Sobriety is good for maintaining attention, staying focused on goals, remembering where one is in a calculation, keeping track of one’s assumptions, and so on. But what’s needed for high performance on the RAT is the relaxing of attention, easy letting go of ideas that don’t pan out, and openness to letting one’s network of associations work without hindrance. A bit of alcohol helps reduce focused attention, so it helps with being open to letting one’s network operate without inhibition. The authors point out that this explanation is consilient with a number of other studies, including some that examined effects of frontal lobe damage, and others that showed an advantage for those who had an opportunity to sleep between exposure to a problem type and attempts to provide solutions.

Of course, creativity requires the knowledge necessary to recognize a useful idea when it arises. (In our illustration, for example, participants have to be able to recognize that Pit fits with each of the given words.) But this experiment suggests that there is another part in a creative process, a period in which unconscious processes do their work, combining materials that are already present, but have not previously been brought together in just the way that is necessary for creative success.

I think I have often profited from following the advice, “Sleep on it”, so perhaps I should worry about confirmation bias (the tendency to give more weight than one should to evidence that agrees with what one already accepts, and to discount conflicting information). But I’m glad to have this study, identified by the authors as the first empirical demonstration of alcohol’s effects on creative problem solving.

[Andrew F. Jarosz, Gregory J. H. Colflesh, and Jennifer Wiley (2012) “Uncorking the muse: Alcohol intoxication facilitates creative problem solving”, Consciousness and Cognition 21:487-493. ]