The Cambridge Declaration on Consciousness

August 24, 2012

On July 7, 2012, a “prominent international group” of brain scientists issued The Cambridge Declaration on Consciousness. The full document has four paragraphs of justification, leading to the declaration itself, which follows.

We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess the neurological substrates.”

Back in the 1990s I published a paper under the title “Some Nonhuman Animals Can Have Pains in a Morally Relevant Sense”. (In case you’re wondering, that view had been denied by Peter Carruthers in a paper in a top-tier journal.) So, not surprisingly, I am quite sympathetic to the sense of this declaration.

I also approve of the declaration’s prioritizing of neurological similarities over behavior. The philosophy textbook presentation of the supposedly best reason for thinking that other people have minds goes like this:

1. When I behave in certain ways, I have accompanying thoughts and feelings.

2. Other people behave in ways similar to me. Therefore, very probably,

3. Other people have accompanying thoughts and feelings that are similar to mine.

This argument is often criticized as very weak. Part of my paper’s argument was that we have a much better reason for thinking our fellows have minds, namely:

1. Stimulation of my sense organs (e.g., being stuck with a pin) causes me to have sensations (e.g., a pain).

2. Other people are constructed very much like me. Therefore, very probably,

3. Stimulation of other people in similar ways causes them to have sensations similar to mine.

If one approaches the matter in this second way, it is natural to extend the argument to nonhuman animals to the extent that they are found to be constructed like us. This is the main line of approach in the Cambridge Declaration (although some of the lead-up paragraphs also sound like the first argument).

In sum, I am inclined to accept the sense of the Cambridge Declaration, and to agree that the evidence and reasoning presented make its stand a reasonable one.

But still, there is something peculiar about this Declaration, even aside from its being unusual for academic conferences to issue position statements. The question is, Why? Just what is odd about it?

One of the Declaration’s authors, Christof Koch, recently gave an interview on the radio. (The link is to the written transcript.) In it, he characterizes fMRI scans as a “very blunt” instrument. The point is that the smallest region that can be resolved by an fMRI scan contains about half a million neurons, some of which may be firing quite actively while others are hardly firing at all. So, our scanning techniques do not tell us what neural firing patterns occur, but only where there are some highly active neurons.
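To make the bluntness concrete, here is a back-of-envelope calculation that arrives at a figure of roughly the size Koch cites. The voxel size and neuron density are my illustrative assumptions, not numbers from the interview.

```python
# Rough arithmetic behind the "half a million neurons per voxel" point.
# Assumed figures (my own, for illustration): an fMRI voxel about 3 mm
# on a side, and a cortical density of roughly 20,000 neurons per mm^3.

voxel_edge_mm = 3.0                    # assumed voxel edge length
neurons_per_mm3 = 20_000               # assumed cortical neuron density

voxel_volume_mm3 = voxel_edge_mm ** 3  # 27 mm^3
neurons_per_voxel = voxel_volume_mm3 * neurons_per_mm3

print(f"{neurons_per_voxel:,.0f} neurons per voxel")  # ~540,000
```

Half a million neurons can realize an enormous variety of firing patterns while presenting the very same overall activation level to the scanner.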

Ignorance of neural patterns is relevant here. Another point that Koch makes in the interview is that a large share of our neurons – about three quarters of all we have – sit in the cerebellum. Damage to this part of the brain disrupts smooth and finely tuned movements, such as are required for dancing, rock climbing, and speech, but has little or no effect on consciousness.

So, it is not just many neurons’ being active, or there being a complex system of neural activations of some kind or other that brings about consciousness. It is some particular kind of complexity, some particular kind of pattern in neural activations.

I am optimistic. I think that some day we will figure out just what kind of patterned activity in neurons causes consciousness. But it is clear that we do not now know what kind of neural activity is required.

The peculiarity of the Cambridge Declaration, then, is that it seems to be getting ahead of our actual evidence, yet it was signed by members of a group who must be in the best position to be acutely aware of that fact. Of course, ‘not appear[ing] to preclude’ consciousness in nonhuman animals is a very weak and guarded formulation. The remainder of the declaration, however, is more positively committal.

The best kind of argument for consciousness in nonhuman animals would go like this:

1. Neural activity patterns of type X cause consciousness in us.

2. Certain nonhuman animals have neural activity patterns of type X. Therefore, very likely,

3. Those nonhuman animals have consciousness.

Since we do not now know how to fill in the “X”, we cannot now give this best kind of argument. The signers of the Declaration must know this.

[The radio interviewer is Steve Paulson, and the date is August 22, 2012. The paper of mine referred to above is in Biology and Philosophy (1997), v.12:51-71. Peter Carruthers’ paper is “Brute Experience” in The Journal of Philosophy, (1989) v.86:258-269.]


Gazzaniga’s Modules

January 3, 2012

I’ve been reading Michael Gazzaniga’s 2009 Gifford Lectures, now published as Who’s In Charge? Free Will and the Science of the Brain. I can’t say that I think he’s untied the knot of the free will problem, but the book contains some interesting observations about split brain patients, brain scans, and modules. Most of this post is about modules, but Gazzaniga’s remarks about brain scans deserve top billing.

These remarks come in the last and most useful chapter, which focuses on problems in bringing neuroscience into court. Gazzaniga provides a long list of such problems, and anyone who is interested in neuroscience and the law should certainly read it.

The general theme is this. Extracting statistically significant conclusions from brain scans is an extremely complex business. One thing that has to be done to support meaningful statements about the relation between brain regions and our mental abilities is to average scans across multiple individuals. This kind of averaging is part of what is used to generate the familiar pictures of brain regions “lighting up”.

But in any court proceeding, the question is about the brain of one individual only. Brain scans of normal, law-abiding individuals often differ quite noticeably from averages of scans of people doing the same task. So, inferences from an individual’s showing a difference from average in a brain scan to conclusions about that individual’s abilities, proclivities, or degree of responsibility are extremely difficult and risky.

The individual differences in brain scans show that our brains are not standard-issue machines. Brains that are wired differently can lead to actions that have the same practical result across a wide variety of circumstances. This implies that there is a limit to how specialized each part of our brains can be.

But what about modularity? Don’t we have to think of ourselves as composed of relatively small sub-networks that are dedicated to their given tasks?

Here is where things get interesting; for in Gazzaniga’s book, there seem to be two concepts of “module” at work, although the distinction between them is not clearly drawn.

The first arises out of some observations that have been known for a long time, but are not often referred to. (They’re on pp. 32-33 of Gazzaniga’s book.) One of these is that while our brains are 2.75 times larger than those of chimpanzees, we have only 1.25 times more neurons. So, on average, our neurons are more distant from each other. What fills the “extra” space is connections among neurons; but if the same degree of connectivity among neurons were maintained over the extra distance, there would have to be many more miles of connecting lines (axons) than there actually are. So, in us, the degree of connectivity is, on average, less than that in chimps. There are still groups of close-lying neural cells that are richly connected, but the connections of one group to another are sometimes relatively sparse. We have thus arrived, by a sort of physiological derivation, at modules.
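The arithmetic can be made explicit. The sketch below uses only the two ratios just cited; the cube-root spacing model is my own simplification, not anything in Gazzaniga’s book.

```python
# A crude check on the wiring argument, using the two cited ratios:
# human brains ~2.75x the volume of chimp brains, with ~1.25x the
# neurons. Assumption (mine): mean spacing between neurons scales as
# the cube root of the volume available per neuron.

volume_ratio = 2.75
neuron_ratio = 1.25

# Volume available per neuron grows by this factor:
volume_per_neuron = volume_ratio / neuron_ratio   # 2.2

# Mean distance between neighboring neurons scales as the cube root:
spacing_ratio = volume_per_neuron ** (1 / 3)      # ~1.30

# If every neuron kept the same number of connections over these
# longer distances, total axon length would scale roughly as
# (number of neurons) x (mean connection length):
wiring_ratio = neuron_ratio * spacing_ratio       # ~1.63

print(f"spacing: {spacing_ratio:.2f}x, wiring needed: {wiring_ratio:.2f}x")
```

On this toy model, preserving chimp-level connectivity would require over one and a half times the axonal wiring; since we don’t have it, long-range connectivity must be sparser, which is just the modular picture.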

It must be noticed, however, that this explanation of the existence of modules does not say anything about what kind of functions the small, relatively well connected groups might be performing. So, this explanation does not contribute any reason for supposing that there are “modules” for anything so complex as – to take a famous case – detecting people who cheat on social rules. There is good evidence that we have a well developed ability to detect cheaters, and that this ability fails to extend to similar problems that are not phrased in terms of social rules. But it is another question whether there is one brain part that is dedicated to this task, or whether we can do it because we have several small groups of neurons, each of which does a less complicated task, and whose combined result enables us to detect cheaters with ease.

Modularity reaches its apogee when Gazzaniga introduces the “interpreter module”. The job of this item is to rationalize what all the other modules are doing. It is the module that is supposed to provide our ability to make up a narrative that will present us – to others, and to ourselves – as reasonable actors, planning in accord with our established desires and beliefs, and carrying out what we have told ourselves we intend to do.

According to the interpreter module story, we can see this inveterate rationalizer at work in many ways. It reveals itself in startlingly clear ways in cases of damaged brains. Some of the patients of interest here are split brain patients; others have lost various abilities due to stroke or accidents. Some parts of their brains receive less than the normal amount of input from other parts. Their interpreter modules have incomplete information, and the stories they concoct about what their owners are doing and why are sometimes quite bizarre.

But people with intact brains can be shown to be doing the same sort of rationalizing. For example, Nisbett and Wilson (1977) had people watch a movie. For one group, the circumstances were normal; for another, the circumstances were the same except for a noisy power saw in the hall outside. Participants were asked to rate several aspects of the movie, such as interest, and likelihood of affecting other viewers. Then they were asked whether the noise had affected their ratings. In fact, there was no significant difference in the ratings between the non-distracted group and the group exposed to the noise. But a majority of those in the latter group believed that the noise had affected their ratings.

While false beliefs about our own mental processes are well established, I am suspicious of the “interpreter” story. The interpreter is called a “module” and is supposed to explain how we bring off the interesting task of stitching together all that goes on, and all that we do, into a coherent (or, at least, coherent sounding) narrative. But we might be doing any of very many things, under many different circumstances. To make a coherent story, we must remain consistent, or nearly so, with our beliefs about how the physical and social worlds operate. We must anticipate the likely reactions of others to what we say we are doing. So, according to the interpreter module story, this “module” must have access to an enormous body of input from other modules, and be able to process it into (at least the simulacrum of) a coherent story.

To me, that sounds an awful lot like an homunculus – a little person embodied in a relatively tightly interconnected sub-network, that takes in a lot of inputs and reasons to a decently plausible output that gets expressed through the linguistic system. That image does nothing to explain how we generate coherent, or approximately coherent, narratives; it just gives a name to a mystery.

It would be better to say that we have an ability to rationalize – to give a more or less coherent verbal narrative – over a great many circumstances and actions. Our brains enable us to do this. Our brains have parts with various distances between them; somehow, the combined interactions of all these parts results in linguistic output that passes, most of the time, as coherent. We wish we understood how this combined interaction manages to result in concerted speech and action over extended periods of time; but, as yet, we don’t.

[Michael S. Gazzaniga, Who’s In Charge? Free Will and the Science of the Brain, The Gifford Lectures for 2009 (New York: Harper Collins, 2011). Nisbett, R. E. & Wilson, T. D. (1977) “Telling More Than We Can Know: Verbal Reports on Mental Processes”, Psychological Review 84:231-259. This paper describes many other cases of false belief about our mental processes. The physiological comparison between humans and chimpanzees, and its significance, are referenced to Shariff, G. A. (1953) “Cell counts in the primate cerebral cortex”, Journal of Comparative Neurology 98:381-400; Deacon, T. W. (1990) “Rethinking mammalian brain evolution”, American Zoologist 30:629-705; and Ringo, J. L. (1991) “Neuronal interconnection as a function of brain size”, Brain, Behavior and Evolution 38:1-6.]


Thinking About Modules

November 21, 2011

In a recent Wall Street Journal review article, Raymond Tallis expresses dissatisfaction with what he calls “biologism” – the view that nothing fundamental separates humanity from animality. Biologism is described as having two “cardinal manifestations”.

The first is that the mind is the brain, or its activity. This view is held to have the consequence that one of the most powerful ways to understand ourselves is through scanning the brain’s activities.

The second manifestation of biologism is the claim that “Darwinism explains not only how the organism Homo sapiens came into being (as, of course, it does) but also what motivates people and shapes their day-to-day behavior”.

Tallis suggests that putting these ideas together leads to the following view. The brain evolved under natural selection, the mind is the (activities of the) brain, our behavior depends on the mind/brain, therefore the mind and our behavior can be explained by evolution. A further implication is claimed, namely, that “The mind is a cluster of apps or modules securing the replication of genes that are expressed in our bodies”. Studying the mind can be broken down into studying (by brain scans) the operation of these modules.

Tallis laments the wide acceptance of this way of looking at ourselves. He affirms that brain activity is a necessary condition of all of our consciousness, but holds that “many aspects of everyday human consciousness elude neural reduction”.

But how could aspects of our consciousness elude neural reduction, if everything in our consciousness depends on the workings of the brain? Tallis answers: “For we belong to a boundless, infinitely elaborated community of minds that has been forged out of a trillion cognitive handshakes over hundreds of thousands of years. . . . Because it is a community of minds, it cannot be inspected by looking at the activity of the solitary brain.”

This statement, however, is not an answer to the question of how aspects of our consciousness can elude neural reduction. It explains, instead, why we cannot understand facts about societies by looking at a solitary brain, and why we cannot reconstruct the evolutionary history of our species by looking at the brain of one individual. But the question about elusiveness of neural reduction concerns the consciousness of individuals. It’s about how individual minds work, and what gives rise to each person’s behavior.

Aside from rare cases of feral children, individuals grow up in societies. Even so, their motivations and behavior depend on their individual brains. Individuals must have some kind of representation of societal facts and norms in their own brains, if those brains are to produce behaviors that are socially appropriate and successful. At present, alas, we do not understand what form those representations take, nor how they are able to contribute, jointly with other representations, to intelligent behavior. But the question of how the individual mind works is a clear one, and the search for an answer is one of the most exciting inquiries of our time.

Despite my dissatisfaction with Tallis’s account, I am sympathetic to some of his doubts about reduction of motivation and behavior to the operations of modules. The true source of the problem, however, is not our attention to the mind/brain of solitary individuals.

The real problem is, instead, uncritical acceptance of modules. The modular way of looking at things does not follow from Tallis’s two cardinal manifestations. They say, in sum, that whatever we think and whatever we do depends on the activities of a brain that developed under principles of Darwinian evolution. They do not say one word about modules. They do not imply any theory of how the evolved brain does what it does.

These remarks are in no way a denial of modules, and in some cases there is very good reason to accept them. But, even accepting that there are many modules, it does not follow that for any given motivation or behavior, X, there is a module that is dedicated to providing X – i.e., that functions to provide X and does not do anything else. Moreover, it is clear that our evolved brain allows for learning. If we learn two things, they may turn out to be related; and if we can recognize a relation among items that had to be learned in the first place, there cannot be a dedicated module for recognizing that relation, since evolution could not have anticipated exactly that relation.

Caution about introducing modules for specific mental or behavioral features that may interest us is compatible with supposing not only that there are many modules, but even that the operation of several modules is required for everything we do. That’s because a plurality of modules carries with it the possibility of variability in how they are connected. Such variability may depend on genetic differences, developmental differences, and/or differences in learning. In any case of combined action of several modules, therefore, there will be no simple relation between a motivation or a behavior and a single module, nor any simple relation between a motivation or behavior and a collection of modules.

So, even granting Tallis’s two cardinal manifestations and a commitment to extensively modular brain organization, we cannot expect any simple relation to hold between some ability that interests us and the operation of a module dedicated to that ability. I therefore agree with Tallis that we should be suspicious of facile “discoveries” of a module for X, where X may be, e.g., an economic behavior or an aesthetic reaction. But I think that the complexities that lie behind this suspicion are to be found in the complexity of the workings of individual brains. Our social relations with others provide distinctive material for us to think about, but they will not explain how we do our thinking about them.

[Raymond Tallis, “Rethinking Thinking”, The Wall Street Journal for November 12-13, 2011, pp. C5 and C8. Readers of _Your Brain and You_ will be familiar with reasons for regarding sensations as effects of, rather than the same thing as, neural activities; but this kind of non-reducibility is not relevant to the issues discussed in this post. They will also be aware of reasons for saying that we do not presently understand how individual minds work.]


Interpreting Brain Scans

September 8, 2010

In Your Brain and You, I have a brief section on the need for caution in interpreting the “lighted up” regions commonly featured in reports of brain scan studies. I’ve been reading an article that’s very informative on this matter, and since it is somewhat densely written and available only in a subscription journal, it seems worthwhile to summarize one or two of its points here.

The article is by Carrie Figdor at the University of Iowa. The title is “Neuroscience and the Multiple Realization of Cognitive Functions” and it appears in Philosophy of Science, vol. 77, July 2010, pp. 419-456. The points I’m going to draw from this article are not its central thesis; they come up in the course of defending the main thesis against an objection.

An important piece of background is that many studies use “subtractive logic”. Briefly, scans are taken during the performance of several tasks that are highly similar, except that one requires a cognitive operation (the focus of interest) that is not required by the others. Activation in the other tasks is then subtracted from the task that requires the operation of focal interest. Regions whose activation levels remain significantly different after the subtraction are taken to be specially associated with the focal cognitive task.
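For concreteness, here is a minimal sketch of subtractive logic on toy data. The array sizes, noise levels, threshold, and variable names are invented for illustration; none of them come from Figdor’s article.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16)                           # a toy 2-D "slice" of voxels

# Baseline activation plus measurement noise for each condition.
control = 1.0 + rng.normal(0.0, 0.1, shape)  # control task
focal = 1.0 + rng.normal(0.0, 0.1, shape)    # same task plus the focal operation
focal[4:8, 4:8] += 0.5                       # extra activation in one region

difference = focal - control                 # the "subtraction"
active = difference > 0.3                    # arbitrary stand-in for a
                                             # significance threshold
print(f"{active.sum()} voxels survive the subtraction")
```

Voxels that survive the subtraction are the ones taken to be specially associated with the focal cognitive task.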

Figdor calls attention to the fact that, since the differences of activation are small, results from several participants need to be collected, and then these results are averaged. This method has the consequence that the activation that shows up after averaging may represent regions that are smaller than those that are required to perform the focal task. That is because performance of the focal task may rely on many regions, not all of which are the same in different participants. In that case, subtractive logic plus averaging would show only regions that are common to several participants, and these may be fewer than are required for the task in any single participant.
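The worry can be simulated directly. In this toy sketch (my own, like the office analogy that follows), every simulated participant activates one region common to all plus one idiosyncratic region, and only the common region survives the group average.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (16, 16)
n_subjects = 12

maps = []
for s in range(n_subjects):
    diff = rng.normal(0.0, 0.1, shape)  # post-subtraction noise
    diff[4:8, 4:8] += 0.5               # region common to everyone
    r, c = rng.integers(0, 12, size=2)  # idiosyncratic region, varying
    diff[r:r+4, c:c+4] += 0.5           # by participant
    maps.append(diff)

group = np.mean(maps, axis=0)           # the averaged activation map
print("voxels above 0.3 in one subject:", (maps[0] > 0.3).sum())
print("voxels above 0.3 in the average:", (group > 0.3).sum())
```

Each idiosyncratic region contributes only about 0.5/12 to the average, so it falls below threshold, even though every participant needed some such region to do the task.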

An analogy may help here. I’ll venture one – it’s my own, not Figdor’s, so if it has some unintended misleading feature, that’s my fault, not hers. Consider a record of office workers assigned the same task (call it T1), which they all successfully complete. But they have different methods. One is more inclined to talk with colleagues, another does more internet searching, one does a lot of calculation, another uses heuristics to estimate values. These activities may wash out in averaging, even though each participant could not have completed T1 without doing his or her particular subset of them.

Now, perhaps they all send a request to archives for a file that contains a key fact, and they can do many other tasks that are similar in many ways to T1 without consulting with the archive department. Then the request to archives will show up distinctively in an averaged record of work on T1. But that request would not be “how the task is done” – it would be only a common part of several ways of doing it.

Discovering the location of an operation that is required for a task would still be interesting, even if it were not sufficient for the task of focal interest. But subtractive logic plus averaging may not even reveal a region whose activation is required. In terms of the analogy, perhaps one worker is able to infer what’s in the archive file from other documents, and never puts in a request for it. This would lower the average value for the request somewhat, but if it were only one worker who did this, the archive request would still “light up” as distinctive of the task, even though it is not required to perform it. Figdor’s article shows an instructive pair of images that were generated from an fMRI study. One is the averaged result, for a certain layer, of the 12 participants, and the other is the image from just one participant who contributed to this average. The difference between these images is dramatic. 

Figdor describes research that uses more sophisticated variants of the subtractive logic paradigm, and that holds promise of avoiding the difficulties to which I am calling attention. The lesson to be drawn is emphatically not that results from brain scanning have nothing to teach us. It is, instead, that more caution is needed in interpreting these results than is often found in journalistic reports of this work.

